IGF 2019 – Day 2 – Raum II – WS #36 Data-Driven Democracy: Ensuring Values in the Internet Age

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR: Is everyone good? Can you hear me? Esteemed guests in the auditorium, online participants, dear panelists, it's my great pleasure to welcome you all to today's workshop and panel discussion on the topic Data-Driven Democracy: Ensuring Values in the Internet Age. My name is Tobias. I'm a researcher at Helmut Schmidt University in Hamburg, where my research deals with patterns of collaborative value creation, and I'm a member of a German institution called the Young Forum Technical Sciences, JF Tech. The Young Forum is also represented today by other members on the panel, my peers Elke and Carmen. I'm not going to facilitate the panel discussion alone but together with Sissy-Ve Basmer-Birkenfeld, who's in the back, scientific director as well as a researcher at the Helmut Schmidt University in Hamburg, and with (Inaudible) to my left, a cybersecurity architect. Both will support me in facilitating the discussion, not only with you but also with the online community.

Digital applications based on algorithms and the analysis of big data support our everyday lives today. However, data can be and has been misused. Today humans have to adapt to technology, but in the long term technology ought to be adapted to users and society. So this workshop proposes a viable shift from technology-centric to human-centered development of technology, which is also one of the goals of JF Tech. Some of the key questions, which you have hopefully seen on the screen in the back already, are: Who holds the data necessary for democratic decision-making? How can we support digital sovereignty based on democratic values? And what influence do filter bubbles and algorithms have on our social co-existence?

So together with the panel of experts, we will discuss the above and hopefully more questions and consider different positions to analyze the actual influence of AI, algorithms and filter bubbles on our society. We want to support a dynamic presentation and discussion of diverse points of view by interacting with you, the auditorium, and also with the online community. We will reserve 60 minutes of the panel for the participation of all of you and the exchange with the auditorium, both online and on site. So if you have a question, please go to one of the microphones, state your name and your background, and then raise your question. We will try to integrate questions from the audience during the whole session. But within the next 20 minutes, I have the great pleasure to introduce our panelists. Or better, I will give them the opportunity to introduce themselves. Please keep in mind, panelists, not to exceed three minutes for each individual introduction and brief statement, so that we can dive into the discussion with the audience right after.

Okay. First, to my far right, Dr. Matthias Kettemann. Matthias is head of the research program Regulatory Structures and the Emergence of Rules in Online Spaces at the Institute for Media Research, and he's also interim professor at the University of Heidelberg. Matthias, please tell us more about your work and answer maybe the following question: What does data governance mean to you?

>> MATTHIAS KETTEMANN: Hi.  Thank you very much.  I really hate to be on the far right in any situation ‑‑

[ Laughter ]

‑‑ especially here. So let me move towards the middle of society in regulating data. I have worked on rules for quite some time, rules in online spaces.

What I found out was that many of the questions on how to best create rules are very much dependent on data sources, on questions of data management and data governance, on at least three levels: on a societal level, on an organizational level, and on the level of individuals. It is important to note at the outset that we do not have a one-size-fits-all solution to data governance. There are certain key standards on how to use data in a human rights-sensitive way without endangering societal cohesion; these have developed from the principles of the Organisation for Economic Co-operation and Development to the data principles of the Council of Europe to the data minimization rules contained in key normative instruments on both sides of the Atlantic.

However, I feel that we are still not there yet in understanding conceptually the key challenge of data. There are so many misunderstandings regarding who should own data, and whether owning is even a concept worth considering; the recent Data Ethics Commission in Germany, for instance, said that data ownership is a concept that shouldn't be used at all, and that we'd rather talk about common usage rights. And there is the question of how data use and data protection interact with the larger questions of realizing societal goals. Sometimes it makes much sense to demand data so that states can make better decisions. But the question I have come across very often is: who holds the data necessary to make good and sustainable normative decisions? And very often data is held by private parties.

So what I see as the key challenge for data governance is to develop an approach on how to socialize data access. I don't think we should force companies, for instance, to open up all algorithms, but it would be an important added value to ensure that something along the lines of freedom of information law, which has been applied with much success against states for the last two or three decades, is developed for companies. And the good news is that most of them aren't even opposed to that. When you talk, for instance, to representatives of Google, they are rather open towards giving access to the data they have, for instance geographical data, because they also see that there's an added value in sharing. And I think if data governance is to do good, it has to develop added value for societies and for individuals as well. Thank you.

>> MODERATOR: Our next panelist is Professor Dr. Elke Greifeneder. She may provide us insights from human information behavior research and user experience design. She's deputy director of the Berlin School of Library and Information Science and head of the research group Information Behavior. Elke, please introduce us to your work and answer maybe the following question: What does data-driven mean, and, it's two questions, why do you think technology has to adapt to people and not the other way around?

>> ELKE GREIFENEDER: Why do I get two questions? And I only have three minutes. Thank you very much for having me here. I am a professor of library and information science, so I'm an unusual panel member here. But I come from a background where we have centuries of experience with a lot of data, and it was not data that we just collected, but data that we wanted to be able to find again and reuse. So we need to talk about structured data. When we talk about data-driven democracy, we have to talk about structured data, because otherwise we will not find it again.

Technology needs to adapt to users; I don't think it works the other way around. My expertise is information behavior, helping people use and interact with information. And while we believe, and like to believe, that we just have to develop the best working system, the users will in the end use it how they want to. So when we talk about data-driven democracy, we have to keep in mind that even the best system might not be used like the developers want it to be.

>> MODERATOR: Thank you. Our next panelist is Gustavo Paiva. Gustavo is a member of the Brazilian Bar Association's commission on digital law and IT. His main topic of research, because he's also a researcher, is anonymity on the Internet and its protection under constitutional law. Gustavo, please tell us what kind of political or legal norms would have to be shaped in future Internet governance, in your opinion.

>> GUSTAVO PAIVA: Yes. Hello. My name is Gustavo. I'm here thanks to the Youth Summit. And my answer to this question is somewhat straightforward. I think we have to discuss the way data relates to platforms and AI. I have a short case I would just like to mention here. It is about a Senator from my state. His name is Senator Styvenson Valentim, and he was a victim, during his campaign, of a series of computer-generated disinformation. And, of course, this kind of AI-based technology is built on massive volumes of data. So this, of course, impacts our democracy directly. I think it should be a priority in the IGF and other forums in the next few years to discuss how data, AI and platforms interact.

>> MODERATOR: Thank you for this great example. Our next panelist is Dr. Nadine Abdalla. Nadine delivers thoughts from the field of political transformation and youth movements and on how basic political principles might influence the debate. She's an assistant professor of sociology at the American University in Cairo. Nadine, tell us about your work and please answer the following question: Is social media the right channel to support democratic processes around the world?

>> NADINE ABDALLA: Okay, thank you. So I'm working basically on social movements and approaches to democracy and social and political transformation, which is what we have been witnessing lately. As for the question, I will offer a contradictory statement, or rather a paradox, that I would like to raise here in the debate. Social media has, of course, supported uprisings. It was a significant tool for mobilization during the Arab uprising. It has also helped a lot of movements to mobilize, like in Spain, like Black Lives Matter in the U.S., and during the Arab uprising as well. But when it comes to building a democracy, when it comes to building consensus, social media didn't appear to fare well. We can see clusters of like-minded people that have formed all over the world. This was experienced in Egypt after 2011, where there was more mobilization of fear than of consensus via social media, because of the formation of clusters of like-minded people who interact only among themselves. In Syria this was witnessed as well, where we have seen the building of narratives of fear and violence and also sectarianism via social media. So at last I would say: yes, it is supporting mobilization; yes, it is supporting a channel for grievances. However, when the time comes to build a real democracy based on institutional and conventional politics, we see a certain paradox, namely the formation of clusters of like-minded people, the algorithms of social media, and so on and so forth. And in this case it's not so helpful for mobilizing consensus via social media. Thank you.

>> MODERATOR: Thank you for your insights, Nadine. The next panelist is Dr. Carmen Givy. Carmen is a principal scientist at Honda Research Europe, and she's an expert on topics such as IT security, governance, data and communication. So Carmen, we have been talking about rules, norms, regulations so far, and social media. Do you think technological innovation may also help to overcome the problems already stated?

>> CARMEN: Yeah, thank you, Tobias. I must say that prior to working for Honda I was also a professor of technology for social networks, and I have a long background in researching new technologies that might influence how societies interact. My perspective is that technology is still one driving motor of how those interactions take place. Because you have Twitter and Facebook, you can interact that way. And that's why it's interesting to see what newer technologies might be out there that could help us solve the problems we have nowadays. There are some solutions, for example, that allow you to process and analyze data without having the data: topics like privacy-preserving computing based on encryption, secret sharing and differential privacy. The main idea is that before sending out your data, you encrypt it or you somehow distort it in a way that the receiver cannot see it, cannot interpret it, but can still do valuable and reasonable calculations on it, for example to create statistics out of it. So your individual data is not recognizable again, but the data of the community, the statistics that you get out of it, is reasonable. Another point is how communication in social networks takes place, because, like Nadine said, it has an impact on how democratic movements evolve. There are also research fields analyzing what we can do better. One big challenge that we analyze at the institute for Internet democracy is that you always have this linear view of the argumentation: if a bad argument is repeated 100 times, it overruns the good ones. If you had another visualization of the arguments and the facts and the points that are discussed, maybe in a mind map (there are other options to visualize such a conversation), it would help people to identify also the rarely made good arguments that might change the discussion. So technology, and research into new technologies that allow privacy-preserving computing and that support democratic discussions on the Internet, would help us to overcome maybe the challenges that we have nowadays. Thank you.
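To make the privacy-preserving idea Carmen describes concrete, here is a minimal editorial sketch of one of the techniques she names, differential privacy: individual values are bounded, and calibrated Laplace noise is added to the aggregate so the released statistic stays useful while no single person's data is recognizable. The salary figures and the epsilon parameter are invented for illustration; this is not any specific production system.

```python
import random

def dp_average(values, epsilon=1.0, lo=0.0, hi=100_000.0):
    """Return a differentially private average of bounded values.

    Each value is clipped to [lo, hi]; changing one person's record
    then shifts the mean by at most (hi - lo) / n, so Laplace noise
    with scale (hi - lo) / (epsilon * n) yields epsilon-differential
    privacy for the released average. Illustrative sketch only.
    """
    n = len(values)
    clipped = [min(max(v, lo), hi) for v in values]
    true_mean = sum(clipped) / n
    scale = (hi - lo) / (epsilon * n)
    # The difference of two Exp(1) draws is a standard Laplace sample.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_mean + noise

# Invented data: a region's salaries, queried without exposing anyone.
salaries = [random.uniform(25_000, 90_000) for _ in range(10_000)]
print(f"noisy average salary: {dp_average(salaries, epsilon=0.5):,.2f}")
```

With 10,000 participants the noise is tiny relative to the mean, while the same query over a handful of people would be dominated by noise; that trade-off is exactly what protects the individual.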

>> MODERATOR: Thanks. Our next panelist is Jessica Berlin. Jessica is a security and foreign policy expert turned sustainable business and development innovator. She is the founder and managing director of CoStruct and advises governments, companies, multilaterals and nonprofit organizations on strategy. Jessica, what is your experience from a practical perspective? Do you tackle those questions in your work, and how important are those questions in different regions of the world?

>> JESSICA BERLIN: Thanks for that. So as Tobias said, I am not an academic. I'm not a researcher. Rather, I am a practitioner who is constantly asking the question: how do we leverage and connect the resources and infrastructure of the public and private sector to solve global challenges? And in this space, to answer your question about what this means across the world, it means that context is everything. When we talk about data-driven democracy and what it is going to take to build an inclusive, sustainable digital infrastructure to enable an equitable society, that means a different thing in Germany than it does in China, than it does in the U.S., than it does in Sierra Leone, Zimbabwe and Brazil. Because who your government is, and who is holding the veto cards that determine what happens in the regulatory environment and who gets to own data and use it for what ends, completely and fundamentally changes how you want to build solutions and strategies for this. So coming from the broadly stated western world, we are having different debates about this than activists and digital innovators in countries with less robust democratic institutions. This is an issue that needs to be addressed uniquely and context-based in each region. And as a starting point in these discussions, regardless of whether we're policymakers, in the private sector or in academia, we have to recognize: who are we answering this question for? When we talk about ensuring values, whose values, and who decides? How do those values differ from place to place, and how must that inform our strategy?

>> MODERATOR: Thank you, Jessica. So after introducing the panelists, I will give the floor to you now, the audience. And I'm looking to Sissy: do we already have incoming questions from the online community, too? Once again, I want to repeat: if you have a question, please stand up and go to one of the microphones. And before raising your question, please introduce yourself briefly with your name and where you're from. While you're figuring out your questions ‑‑

>> SISSY‑VE BASMER‑BIRKENFELD: We did a little poll to see who is participating in the online community, and we have 50% from the Technical Community as well as 3% from Civil Society and 17% from the private sector. Please send us your questions. Last call.

>> MODERATOR: Okay. Thank you, Sissy. Do we already have a question from the online community? No, none so far. So we'll come back to the audience in the room. Otherwise I would raise my first question. Please use one of the microphones.

>> AUDIENCE: Can you hear anything? I'm working with the German Development Cooperation. It's a question for Jessica and all the others who feel... The contextualization that you said is so important, I mean, I completely agree. And also, I have to say, all the panelists that we've seen so far, they are very white. There is not so much contextualization on the panels, maybe. But what I wanted to ask is, translating it into practical terms, what would be a way to go about it? Because the more options you have, the less manageable it becomes.

>> JESSICA BERLIN: Yeah, that's an excellent question. In a single word: ask. Inclusion. You need to ask the people you are ostensibly designing for, right? And if you don't know who that is, ask. Find out. Country by country, sector by sector, context by context, ask yourself: who is already active in this space? Who do I know is active in this space? Talk to those people and find out who you don't know. What do I not know that I don't know? Because when we're coming from large multilateral or bilateral institutions, on this big global macro policy design level, we often don't know what we don't know, especially when we're talking about grass-roots innovation or digital communities, where there's just such a gap between the culture of those organizations and the culture of the large institutions. So find people who build bridges between those spaces and find out who you should be talking to that you haven't. And I see this so many times, because in my function through CoStruct, building bridges is what I do. You have a partner from a major institution and they don't know the key local players working on the issue, even though they've been in the country for months and months, or even more than a year. And so those conversations aren't being had. People are not being included who are actually key to the process. So even the fact that you asked that question is already an excellent sign. And encourage your colleagues and other partners to do the same.

>> MODERATOR: Okay.  One more answer from Matthias.

>> MATTHIAS KETTEMANN: I should comment on that. I think that the potential of data for development is really untapped until now. If you talk to development experts in a lot of ministries, they perhaps do not grasp exactly how they can use data, especially open-source data, to make their development policies more efficient and more effective. I've come across so many great examples where people from local communities used, for instance, mapping to show where the sewers were, to put themselves literally on the map. Because we should never forget that when we talk about data minimization and the importance of who has access to data, this is coming from a very privileged position, you know? We have so much data that it's almost dangerous if others know too much about us. But there is also a huge part of the world that doesn't produce data, which is nonexistent in the datafied sphere. Simply put, if you are not visible to your state, you will not receive a license. You will not receive a birth certificate. You will never receive money. In certain societies you won't be able to board a bus, perhaps, in a couple of years. So producing some data can also be extremely beneficial for development. We have to keep that in mind. Data really is nothing good or bad; it is always and ever what we do with the data. And especially in development policies, I think we have to critically think about how to use data in a better way.

>> MODERATOR: Thank you.  Another comment from Gustavo.

>> GUSTAVO PAIVA: Matthias's comment earlier in the session, I think it came at the best possible time. Just a few hours ago in the main session, we had Brian Fishman, a Facebook representative, and he commented that quite often Facebook is very excited and eager to share their datasets with people. They even have an initiative, I think it's called Social Science One, that works exactly on that. But they don't know if they can, and lots of it falls into gray areas of the GDPR. So I think the socialization of data really is important, so that more grass-roots projects, more students and initiatives can enjoy this kind of training data. And there is a little bit of my personal reality that I would like to share in this regard.

I teach quite often in the northeast of Brazil, and there's a little bit of context here. In the southeast of Brazil, there is a very intense debate about data protection. We just recently approved our general data protection law. But in the northeast, the perspective is very different. Even in college, even in well-educated groups, people don't actually care too much about data protection. In fact, if you frame it in any other way, for example facial recognition, people are quite eager to have facial recognition for security purposes. There was quite a bit of approval when I talked about it in classes. They would much rather have security and facial recognition than greater control over their data. So I think this relates to the point about cultural context: each place has its own approach to data. And by the way, I'm not implying that we should pick security over data protection. I'm really trying to express that there are these differences in perspective.

>> MODERATOR: Thank you, Gustavo.  There were other questions in the room.

>> AUDIENCE: Michael Whitinger from Democracy Without Borders. Isn't there a tension between context on the one hand and the need for a global identity on the other, and how should we proceed if we think that a global identity is necessary? I think there are many hundreds of millions of people who don't have an identity, and it's a big issue that they get one. But this maybe needs to go beyond context-related identity proofs to a global perspective, in order to solve this identity issue for every world citizen.

>> MODERATOR: Thank you for the question.  I would like to ask Matthias to answer this question.

>> MATTHIAS KETTEMANN: A brief comment on that. I totally agree that it might have very positive effects to conceive of such a global identity. However, there are also, especially I think in certain societies, huge issues with the idea of a universal database. And our history, I think, has shown us that databases are never, ever safe. So we would first have to think really hard about the technology to be used. There are alternatives. You know, there's this great project called what3words, if I'm not mistaken, which allows you to localize yourself anywhere on the globe using just three words. They have a database of words, and you can pinpoint every square meter in the world with three of those words. That is an alternative to traditional geographic location and could be used by people who are disenfranchised by, for instance, not living on streets which are mapped. So we still, I think, need to conceive of such a notion of digital identity before we can proceed.
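The project Matthias refers to keeps its actual word-to-square mapping proprietary, but the underlying encoding idea is simple: number the grid cells of the globe, then write each cell number in base len(wordlist) as a triple of words. Here is a toy editorial sketch in Python; the ten-word list and the coarse 10-degree cells are invented for illustration, whereas real services use word lists of tens of thousands of entries for squares of a few meters.

```python
# Toy three-word geocoder illustrating the mixed-radix encoding idea only.
WORDS = ["apple", "brook", "cedar", "dune", "ember",
         "fjord", "grove", "heron", "inlet", "jade"]  # hypothetical list

CELL_DEG = 10             # coarse 10-degree cells: 18 rows x 36 columns
COLS = 360 // CELL_DEG    # 648 cells in total, fewer than len(WORDS)**3

def encode(lat: float, lon: float) -> str:
    """Map a coordinate to its cell's three-word address."""
    row = int((lat + 90) // CELL_DEG)
    col = int((lon + 180) // CELL_DEG)
    idx = row * COLS + col    # unique cell number
    b = len(WORDS)
    return ".".join(WORDS[d] for d in (idx // b**2, (idx // b) % b, idx % b))

def decode(address: str) -> tuple[float, float]:
    """Recover the cell's south-west corner from a three-word address."""
    d1, d2, d3 = (WORDS.index(w) for w in address.split("."))
    idx = (d1 * len(WORDS) + d2) * len(WORDS) + d3
    row, col = divmod(idx, COLS)
    return (row * CELL_DEG - 90, col * CELL_DEG - 180)

print(encode(52.52, 13.40))        # Berlin's cell: fjord.cedar.dune
print(decode("fjord.cedar.dune"))  # (50, 10), the cell's corner
```

Shrinking the cells only changes the constants: with a 40,000-word list, 40,000**3 is about 6.4 x 10**13 triples, enough to cover the Earth's surface at a resolution of a few meters.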

>> MODERATOR: Good.  Any more comments?  Other questions?  No more questions so far.  So I would like, then, to get back to the interesting example from Gustavo from Brazil.  Tell us more about this.

>> GUSTAVO PAIVA: I want to try to convey this from the start. Back then he was a candidate. His name is Styvenson Valentim. He began a campaign with a very unique twist: it was aiming for austerity, so it was a low-budget campaign. And Styvenson was famous in the state for being a police officer with a lot of integrity. He played a key role in applying the law on driving under the influence of alcohol. So he was a famous local figure, and he got elected based on that.

Now, he was also a victim of computer-generated disinformation campaigns at scale. And this eventually led him and his team towards trying to figure out whether this was based on AI, AI built, of course, on massive amounts of data. And that's how they started discussing ideas for an AI regulation. I, as a concerned citizen and as someone from the same state whose experience is relevant to this case, volunteered. I approached him. I wrote a report commenting on the bill's flaws, and now we are discussing it. But what is essential to this discussion is the use of data for AI, or let's say the misuse of data for something that is inherently against democracy: data used to generate industrial-scale, computer-generated, plausible-sounding gibberish, disinformation. This is affecting our democracies. And it is having a feedback effect on our regulation and our legislators, who are struggling to frame the situation. And this may lead us to a future of reduced development, because if our laws are being created based on these experiences, we may be endangering development and innovation in AI and in other fields.

So I think that this relation between data and new technologies and how they can harm our democracies, we really need to look into this with a lot of care.

>> MODERATOR: Thank you. On both of my sides here we have two good examples. Nadine goes in the same direction, so that we have both the benefits of the technologies and social media channels, and the misuse of technologies, in both cases. So maybe you can give us an even better insight into your work and also explain the two poles a bit more.

>> NADINE ABDALLA: So, yeah, exactly. In a lot of cases, social media was very helpful in pushing for democratization. This was already obvious in Europe for social movements that were pushing for it, but also in the Arab uprising, where people were using social media as a tool for channeling grievances and mobilization and also for the promotion of narratives against authoritarian regimes. But as time went on, there is the problem that more and more clusters are being formed, and more and more like-minded people are grouping together through social media. And in a period when you are trying to build consensus, which is something that is very important in democracy, this is not happening, because through clusters it's only the mobilization of fear and of certain narratives that is built.

But this is not the only thing. Another thing is that since the Internet is so easy to use, sometimes you can use it as a tool for collective action, as a tool to be an influencer on Twitter or Facebook. But then you forget that democracy actually requires building structured alternatives that are sustainable full time. And this happened in Egypt. After 2011, it was so easy to use Facebook and Twitter and so on as tools for influencing people. But then you forget that the time will come for the ballot box. And the ballot box demands organization and sustainable structure. In this case, those people who built structures in the real world, not in the virtual world, gained at the ballot box. And this already happened in 2012, when anti-democratic forces won the elections. So this is one example, but there are others. I'll keep them for the discussion.

>> MODERATOR: So there is a question from the auditorium.

>> AUDIENCE: So actually, more of a comment concerning the regulation of social media and so on. My name is (Inaudible), professor of legal informatics and also a member of the young forum. My concern is that if you try to regulate influence in social media, we could try AI regulation, but I think AI is not the problem. You can also hire 20 or 30 students for a couple of dollars per hour who do basically the same thing that we criticize about the AI. So maybe we should try to regulate one-sided influence, but that's a restriction of freedom of speech. So actually, the more you think about it, the harder it gets. My problem is that we don't really have a simple solution in that field, and I guess the whole community has to continue working on trying to better understand the problem and regulation approaches as well.

>> MODERATOR: Thank you for the comment. Before we come to another question from the online community, I would ask Gustavo or Matthias: he just said there is no simple solution in terms of regulation. But what are the first ideas to address the problem?

>> MATTHIAS KETTEMANN: Well, I actually agree that it is quite likely too early for any country to have a comprehensive AI regulation. I think we are working towards that. But I think the actual first step is a national AI strategy. Because the thing is, AI is estimated to generate $15 trillion in the next ten years. That is a fact that we cannot ignore. So for development, it is absolutely necessary that we create an environment that fosters innovation. But we don't want to leave it absolutely vulnerable to abuses. So in a way, I agree that it is far too early to really implement AI regulation, and the first step, I think, would be a national AI strategy.

And about 15 countries in the world are working on that, so we can perhaps start discussing this in comparative ways.

>> GUSTAVO PAIVA: I totally agree; I just said what you just said, so that's fine. But perhaps to add a bit of nuance: I think we should, first of all, stop talking about AI and start talking about automated decision systems, and differentiate between automated and human decision-making systems. We need both of them to interact effectively to ensure a civil rights-sensitive online discussion sphere. For some kinds of content, pictures, especially sexual exploitation of children, companies have very effective tools. For other kinds of content, especially jokes or hate speech with a national dimension that invokes prejudices, the automated decision-making systems are not able to do that right now; the semantic power is still far off. So we need to consider those two elements in tandem. We have not come to a final solution, but that's perfectly fine. There have always been hard cases on freedom of expression in an offline context too, so we will always grapple with these issues.

>> ELKE GREIFENEDER: May I add to that? I think we're talking too much about regulation and not enough about users. And I think we also have to differentiate the users. Yes, we can regulate social media; then those who really want to harm will just find another place that is outside the regulation. They are quicker than we can think. And then we have to talk about all the others, the ordinary users. When we talk about data-driven democracy, what we know from research is that users are not stupid, and they are damn lazy. Just think about how many times you have clicked to the second page of your Google results. I'm not making a poll here; it would not end well for you. And we know that Facebook and Google allow privacy settings. It's just that we're so lazy that we don't use them.

And what we haven't understood so far is why we are so lazy. So what we have to do is talk about privacy by design. We have to talk about a design approach that's not forcing us to make those decisions, because we know we're dealing with lazy people, and then we have to build systems that accommodate that. Otherwise we don't have to talk about democracy.

>> JESSICA BERLIN: One comment as well about national strategies. Who makes national regulatory strategies? Policymakers. Who are arguably the most clueless people you will ever meet about digital innovation? Policymakers. So a very concrete thing needs to happen, and this is true globally, in any country, in any government. We need to embed technical experts from the outside, from the private sector, from academia, into the ministries and agencies responsible for making these national digital strategies, so that it's not just: oh, we had a meeting with people who do AI and machine learning, so now I know that that's a thing. That's not enough. There needs to be embedding between the public, private and academic sectors in this space, so that the people who are building these systems understand the regulatory and policy realities that they're building in, and likewise for the policymakers, even if they will never fully understand, to have a higher level of sophistication than currently exists.

>> MODERATOR: Thank you.  I would like to take your question, and afterwards we will have a question from the online community.  Is it only one?

>> SISSY‑VE BASMER‑BIRKENFELD: Yes.

>> Hello. Thank you for the discussion and this nice panel. I am Christina. I just wanted to challenge a bit the point of Elke, because actually laziness is one point, but you have privacy information on the pages we use, and even if you start looking into it, it's not laziness: it's pages, pages, pages of wrong information. If you file freedom of information requests, you don't get the answers either, whether from the government, policymakers or from the companies, because they know how to avoid whatever they want to avoid. It's not laziness by users alone. There's no usability or practical redress mechanism. There's no feasible transparency. Transparency has to have different layers. Explainability has to have different layers. And I think it's not on the users to just say, okay, I have to read all these cookie notices or whatever is offered to me as the transparency tool that formally complies with the regulation that is in place now.

>> MODERATOR: Thank you for the question. I have another question for you. So you said it's transparency that is missing. But do you have an idea what might be a solution for the problem you are stating? Do we have already a suggestion for that?

[ Laughter ]

It seemed to me that you have a suggestion.

>> I think somebody mentioned that in the beginning. Take the freedom of information requests that we have in all countries and make them real, because they don't work. Governments pay a lot of money to challenge them and to not answer them. So make a practical redress process possible. You could have existing institutions involved to provide or check things for you. You could use initiatives, like in Germany there's (Inaudible), or My Data, or whatever, to facilitate these requests. Just to share some ideas: a layered responsibility to actually implement oversight and rights implementation.

>> MODERATOR: Thank you. So this brings us back to the question of who owns the data, or, as you stated earlier, what was the term? Not only owning it? Matthias, do you have any comments on the question, again, or should we proceed?

>> ELKE GREIFENEDER: Can I just comment? I mean, obviously, it's a panel, so I need to be a little bit provocative. There are many levels to that. But what you just said: these initiatives exist, and still it's not working. Why is it not working? Why are we not taking it up? Why are we not requesting documentation of the data that is collected? I mean, open data is great. But as long as I have no idea who collected that data, who did what with that data, who interpreted a field in a very interesting way? And I can give you a very concrete example.

I think three, four weeks ago I went to the World Health Summit here in Berlin, and I talked to someone who collected health data in India. And she said she had a lot of problems, because one of the requested fields was the question: how many children do you have? And the answers were one boy, two boys, two boys, one boy. Hmm, that's a lot of boys in that country. And then she started talking to someone, and at some point it turned out: oh, and yes, my daughter just went to university. Your daughter? You never talked about your daughter. Well, she's not what counts. And that's really very, very oversimplified now, but in the database afterwards, it's just a number. And we have to account for that. So, laziness: we have to find a way to make us all more engaged. Just pushing the responsibility to someone else will not work; saying the policymakers will take care of it will not work, and saying the companies should just take care of it will not work. And I don't know, I don't have the solution. We have to find a way that we all get more engaged. Otherwise this will not end well.

>> MODERATOR: Are there technical solutions, Carmen?

>> CARMEN: I would also like to say some words on that, because as a computer scientist, what always puzzles me is that there's lots of discussion about "I don't like this, I don't like that," about how Facebook is to be used and what effects there are. But what is missing is a counter-design. What should the usability look like that allows you to check what data is stored? How should the cookie notices on the websites that you visit be presented to you differently? So what would you like to have? As a computer scientist, I'm always very happy if you come up with a design document, this is what I would like to have, and then it can be implemented. Because in the end, everything is doable, everything is programmable. And if you don't like it, then either find somebody who can do it or describe what you would like to have. So coming back to the point about usability and laziness: of course, there are different interests involved. On the one side, the companies would like to collect the data, and still they have to provide you the option to opt out. And if you don't opt out, it's still your choice. So how would you like to have that choice implemented, if not that way? My impression is that some creativity is needed to really gather all these requirements and all the cases you would like to have covered: for people who don't want to have their data collected, for people who are oblivious to it, who don't care. How would you like to have it all in one big technological solution? And if that's doable, then it's easy to program.

>> JESSICA BERLIN: As a non‑techie, can I ask you a follow‑up on that?  Would it be possible to make it device based so that on your device, anything you access ‑‑

>> CARMEN: Yes. That's very easy, very simple. I'm not sure if you have sometimes set up websites; it's very easy if you want to have a different presentation of the content. If you visit a web page on your smartphone, then typically some of the menus are hidden. That's fluid design. And you can differentiate based on the browser. There are lots of options. But as a programmer of (Inaudible) in a very big sense, you don't have all use cases in mind, and if somebody's unhappy about how it is to be used, you don't get that feedback. And actually, what I would also like to bring in: if we are unhappy about the data collection of some big companies, there are also lots of alternatives that provide the same functionality. It's not just Facebook; you can also use other social media sites if you want. That's a choice nobody can take from you.

>> MODERATOR: Thank you.  Let's continue in the auditorium.  What kind of question do we have there?

>> Okay. There is a question from Molly, from an NGO for women. What is the definition and the usefulness of data governance and data (Inaudible), and how can it be useful, or can it be used for blackmailing on the Internet?

>> MODERATOR: Matthias, it is a question I can forward to you, the first one.

>> MATTHIAS KETTEMANN: As a lawyer, I love abstract questions. But you are, of course, right that every concept matters. We have to talk about data governance just as we have to talk about global governance. In global governance, we decide on which rules we want to develop to distribute rights and goods in a fair and equal way. In data governance, we have to discuss how to develop rules and how to ensure that data is produced, collected, processed, distributed and harnessed in a way that we as a global society can agree on. That's, of course, very abstract. But, you know, this is what notions are about.

Data sovereignty, on the other hand, is a very fluid idea: that you, either as a state or as an individual, are able to use the data resources that you produce and that you need to take decisions, in a way that is not dependent on other entities. So, for instance, it would be a violation of the concept of data sovereignty if you had no possibility to get your data back from companies whose services you use. Data sovereignty, I think, is one of the key notions for the future, even though we don't know yet, of course, quite how it's going to work out; there are no doubt rights which you can trace to data sovereignty. But I believe that this idea of re-establishing sovereign decision-making on how to deal with your data is important. And that's why I'm not quite sure that using the word "lazy" is always the best choice. I think people are just people. We use heuristics to make decisions; the world is complex and we have really limited time. Therefore, I think it's better to think about how we can nudge people towards the right decisions, the best example being, of course, privacy opt-ins versus privacy opt-outs.

>> MODERATOR: Thank you.  Gustavo.

>> GUSTAVO PAIVA: Well, I don't think it's good to frame it as user laziness, because platforms are designed with goals in mind. If Facebook wants people to share more data, they don't have to force you. They can deploy design elements and change the user interface to incentivize people towards that. And the reality of the matter is that the majority of people in the world aren't knowledgeable about data and Internet governance, about government surveillance and surveillance capitalism. They aren't informed about that. And if the platform, by default, makes it easier to spread data and create data, then most people will do that. It's the same thing with cognitive biases: the human brain can be tricked in a variety of different ways, and that's what marketing does, too. So I don't think it's really useful to talk about it as laziness, because it isn't so much that people are failing to study and understand the situation; it's that design elements can be used to trick them.

And if some of you are familiar, we have Lawrence Lessig's "code is law" theory, which says that human behavior can be influenced by law, by norms (that is, social norms), by market forces, and by architecture, architecture being the elements of the world, either created or found. So code is also law. We can design websites, we can design the Internet, to steer people's behaviors. So it is not so much laziness as it is intentional design by platforms. Earlier this year in Brazil we had the Brazilian IGF, and there was a main session in which platform responsibility was a topic. After that discussion, we were talking about what could be the values of a platform regulation. And I was thinking: how can we discuss the design features of a platform? Meanwhile, other people were talking about sovereignty. So we are still really trying to figure out what values we want for this.

And I think this really interacts with the point about privacy by design, this idea of trying to make it so that, by default, we aren't giving out so much data. So I think this all ties closely together.

>> MODERATOR: Thank you.  Do we have another question?  Thank you.

>> SISSY‑VE BASMER‑BIRKENFELD: This one is from a person in telecommunications. We see an acceleration of AI and IoT across the globe, and the makers of those work tirelessly to sell both to government and to private agencies. It becomes a big scare with governments because in most cases they are not able to weigh the actual data transparency on all these devices. Can there be an effort to at least formulate a certain baseline and have the solutions combined globally first, so that they can be adopted or localized in each region? And would that undertaking be possible?

>> MODERATOR: Can you repeat the question, please?

>> SISSY‑VE BASMER‑BIRKENFELD: Not a problem. There is an acceleration of IoT and AI across the globe, and the makers of these platforms or technologies work tirelessly to sell to both government and private agencies. It becomes a big scare with governments because in most cases they are not able to weigh the actual data transparency on all those devices. Can there be an effort to at least formulate a certain baseline and have the solutions combined globally first, so they can be localized for each region? Is that something that's possible?

>> JESSICA BERLIN: It's not quite clear to me, a baseline of what?

>> SISSY‑VE BASMER‑BIRKENFELD: I think based on governance, data governance?  Compliance?

>> GUSTAVO PAIVA: I personally don't think so. You have two points. The first is AI, which is mostly software, where you just code, and there you don't have any restrictions at all; you can just program as you wish. The second is the hardware producers of Internet of Things devices. In the first case, I don't think it's possible that there will be a guideline for all AI programs as to what restrictions or rules they should follow, because neither the programming language nor any further restrictions can be enforced. You can still do whatever you want with the data you have, in your private basement or wherever you program.

And for the second part, which devices will be brought out? I mean, the intention is to sell those. There are some restrictions that apply, especially regarding the frequencies on which you are allowed to transmit data, so as not to interfere with other transmissions on those frequencies. But other than that, I think it's also not really regulated, and the hardware producers can come up with whatever devices they like.

>> JESSICA BERLIN: I agree, and to follow up on that: it brings us back to the original point around context. Every country is going to have its priorities and its context and use cases. So companies can't realistically be expected to make a common baseline on their own, and governments certainly would never agree. There's also the issue of the fact that, as you said, to put it in other words, companies are creating ahead of the curve of regulation. You can't regulate something that's only going to come onto the market two weeks from now. So I don't think it's practicable in reality.

>> MODERATOR: We have another comment from Gustavo and then Matthias.

>> GUSTAVO PAIVA: I think Jessica hit the nail on the head: disruption is a business model, and businesses can actively try to stay ahead of regulation and capture a market before a regulation hits. So that is a point. I also think that many governments in the world don't really have an interest in this common idea, because really it is about making technology for your own reality and for your own national industry and so on. It could also raise some questions about security and centralization; maybe we don't want country A, B or C to have such a central role in AI. It is also good to keep in mind that AI is a highly dynamic technology. It has existed for quite a while, decades now, but it goes through winters and then periods of rapid development. So it really is an unpredictable technology; maybe in ten years it will be completely different from now. Much of the debate we have today about AI is more specifically about machine learning, and we are still struggling with the implications. So even if it were desirable and possible, I don't think the world and the countries are even ready to have this discussion of a minimum standard yet.

>> MODERATOR: Matthias, short comment on it?

>> MATTHIAS KETTEMANN: I feel like I have to disagree a bit because we have a lot of minimum standards.  They're called fundamental rights.

And we don't need to reinvent them, you know. So when we say we don't have minimum standards, that just means, well, we haven't quite clearly established how exactly certain kinds of sectoral use of artificial intelligence can be done in a way that does not endanger large datasets, for instance AI in hospitals or AI in military technology. So I think we should be careful not to convey the impression that we are entering a no man's land of regulation. We have laws. We have standards. We have soft law standards. So we're not entering an unknown world. It's a thing that comes back all the time in Internet-related law discussions: we don't need to reinvent everything. So first of all, don't believe that there's no rule just because a technology is new. And it is also not at all certain that you can't regulate for the future. It's difficult, of course. But just think about the General Data Protection Regulation and its right of access to the logic of a decision by an automated decision-making system, which was much debated. Such a rule, which hasn't yet been tested very often in front of courts, is a good example of how you can provide technology-neutral, future-oriented regulation. And the GDPR is a success story. The California bill is basically a copy of it.

>> MODERATOR: Okay.  There was a question in the audience.  Please.

>> AUDIENCE: Hi. My name is Hanata and I'm from Brazil. Data is not necessarily related to technology, even though we don't usually think about paper when we talk about data and data-driven democracy. So I'd like to take a step back and ask if you have any opinion on how we could harmonize data-driven democracy with those people who do not yet have access to the Internet and social media, especially in developing countries. Thank you.

>> JESSICA BERLIN: Brilliant question. I'll have a go; I was going to bring this up later, so thanks for that. Data-driven democracy is not just about Facebook. How do we use data to make our societies, for example, more inclusive and equitable? When we talk about democracy: democracy is incompatible with the gross economic inequalities we see today. That's in part why so many democracies all across the world are struggling, and we're feeling it, you know, even here in Germany. Our democratic institutions and even people's belief in the democratic system are being shaken to their core, because we are not solving people's challenges.

The democratic states and systems have proven themselves not capable of creating an economic environment and a free market environment that actually helps lift all citizens equally and fairly.

So data can and should also be used to identify more effectively where and how we need to improve public service delivery, for example. And this means not only governments and public services, but also companies understanding how to create products and services, go-to-market strategies and customer engagement strategies that work for everyone, that reach all customers. So whether we're talking about customers for companies or citizens for governments, we can be using data to make our countries and our economies more inclusive. In this way, creating a data-driven democracy means also using data to reach people who are offline, who are marginalized, who are maybe even data-skeptic and don't want to be online. That's fine. But they still need good healthcare. They still need access to decent, affordable fresh groceries.

They still need access to school and day care and all of this stuff.  So this, I think, is really the low‑hanging fruit when we talk about data‑driven democracy.  Let's learn to use our data to solve analog problems as well and in doing so strengthen our democracies.

>> MODERATOR: So thank you. We are now already in the second part of the discussion, thinking of measures for how to realize a data-driven democracy. So let's continue.

>> CARMEN: Yes, I also have an example there of how this can be done in a privacy-preserving way. If you talk about data for democratic processes, let's say you would like to know the average salary of the people in specific regions. You want to get the statistics, but actually, you don't want to know the individual data of the people. There's an example which I would like to refer to that took place in Boston. The city council wanted to find out, from I think 900 to 1,000 companies there, what the gender pay gap among the employees is; that's a prominent example. They did not want to know the individual salary of person X, but they wanted to have the statistics. So, in collaboration with Boston University, they used secure multiparty computation, which basically is a technology that allows you to submit somehow encrypted, somehow distorted data to several computing parties, which individually cannot recognize the content, but together can create statistics out of that data. And then you can really see the differences among the different genders in salaries and other characteristics.

So what I would like to point out is that there is really technology available if you would like to gather data in a statistical form. You don't have to have a view on the individuals, so you cannot trace them. And that really also improves the acceptance of such data gathering, because the individuals are not in any way endangered.
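A minimal editorial sketch of the additive secret sharing at the heart of such multiparty computations, assuming the simplest honest-but-curious setting: the company totals and the three computing parties below are invented for illustration, and real deployments like the Boston study add key management and protections against malicious parties.

```python
import random

MOD = 2**61 - 1      # arithmetic is done modulo a large prime
N_PARTIES = 3

def share(secret: int) -> list[int]:
    """Split a secret into N_PARTIES additive shares modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(N_PARTIES - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

# Each company secret-shares its payroll total instead of revealing it.
company_totals = [4_200_000, 9_750_000, 1_300_000]   # invented figures
all_shares = [share(total) for total in company_totals]

# Party i only ever sees the i-th share from each company ...
party_sums = [sum(s[i] for s in all_shares) % MOD for i in range(N_PARTIES)]

# ... and only recombining the parties' local sums reveals the aggregate.
aggregate = sum(party_sums) % MOD
print(aggregate == sum(company_totals))   # True
```

No single party ever holds anything but uniformly random-looking numbers, yet the recombined sums equal the true aggregate, which is exactly the property Carmen describes.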

>> MODERATOR: Thank you for this insight. And on technology developments, we have two more questions from the online community.

>> SISSY‑VE BASMER‑BIRKENFELD: This is one question from someone in sustainability. How do countries and communities across the world marry and find a consensus on values that will apply and be inclusive to the community as a whole? And who determines the ultimate values?

>> If I may.

>> MODERATOR: This comes back to the question of whether there are global values and rights. I think this is what Matthias already answered earlier, but maybe you have a comment on that again.

>> MATTHIAS KETTEMANN: Yeah. I mean, it's the basic question that asks us to consider what values we want to ensure within our societies. Again, we don't need to reinvent everything. We have values. They are enshrined in constitutions, but we need to make sure they are still responsive to the challenges of online communication. One approach that brought nuance to the discussion on values was, for example, the presentation on Monday of the Contract for the Web, which provides a new kind of social contract that illustrates quite clearly the demands on states, on people and on companies, and their differentiated but mutually dependent roles in ensuring values, also in the next generation of Internet governance.

But just one more thing. I really do believe that, just as we shouldn't argue that there are no laws that apply or that we need to reinvent everything, we also should be really careful in criticizing the foundations of democratic decision-making. So I'm really not so happy if we say on this panel that the democratic system is being shaken to the core. No, it is not. This kind of talk is sort of, you know, playing the pipe of the populists who want people to believe that. They want people to not have trust in the system. They want people to believe that democracy is failing you, that it's not responsive to you, that it's American companies that rule everything. So please be careful. Democracy is such an important goal; a lot of countries are striving to ensure democracy. So yes, it has its problems, but please let's not throw the baby out with the bath water.

>> JESSICA BERLIN: Hooray, our first real controversy on the panel. I would say, to use your words, please be careful. How do you say it in English?

[ Speaking non-English language ]

The apathy, the apathy that right now is creeping into mainstream society in the strongest democracies in the world.  That is the first serious crack in the wall.

>> MODERATOR: (Inaudible) Fridays for Future?

>> JESSICA BERLIN: Yes, I have. But the fact that there is Fridays for Future does not mean that mainstream society is there. Fridays for Future is, firstly, very inspiring, and saying that democracy is being shaken to its core does not mean that nobody is doing anything. We care. They care. That's fantastic, but it's not enough. When we see Fridays for Future and Extinction Rebellion, et cetera, this gets good press in The Guardian or what have you, but who's reading it? Who's actually showing up? It's the same people. We are in our bubbles, our culture bubbles, our online community bubbles. The average citizen across Europe or North America, the average voter, is not showing up to Fridays for Future. These two realities can co-exist: the fact that these youth-led movements are appearing does not negate the fact that our democratic institutions are being challenged in a way unprecedented in our generation, or since the end of the Cold War.

>> MATTHIAS KETTEMANN: If you're saying two realities can co-exist, you are doing fake news. Two realities cannot exist; different interpretations of one reality can exist. And I totally admit that democracy has its challenges.

>> JESSICA BERLIN: I think that's more of a semantic distinction than anything.  The reality is complex.

>> MODERATOR: One more comment from Carmen, and then I want to take a quick poll: I will ask you to raise your hands for whether you support the one position or the other. Is democracy endangered by the problems we have discussed, or not?

>> I wanted to pick up the question and also your comments. The question was whether we can have a consensus on values, and I think this is exactly what is now taking place on the panel: maybe there are different values, and I'm not sure whether a consensus is possible. The same applies to new technologies, to how we gather people's opinions, to how democratic discourse can take place. At the core, you can have full transparency about the arguments and the facts, and people will still come to different decisions, whether they are pro or contra. And this all relates to the values they hold. Some prefer security over freedom. Some think the environment is more important than their personal lifestyle. This is something that is difficult to bring to a consensus. So I think we can have not just two realities but very different values in how we perceive the world and in what we find valuable to live for.

>> NADINE ABDALLA: My position is in the middle: as long as people are engaged and fighting for their values, there is hope for a certain position to prevail over the other. So it all depends on who is engaged to fight for which value. Thank you.

>> MODERATOR: There is another question from the online community.

>> SISSY‑VE BASMER‑BIRKENFELD: Thank you very much. We have a question from the technical community. She mentioned that technology must meet up with society, and the online participant wants more clarity on that. How can that be done? Because I would think that it's the other way around, that society must meet up with technology. Being in the Fourth Industrial Revolution, society needs to adjust to this ongoing, inevitable change. Coming from a country that faces a lot of challenges in terms of progression and technology: how do we protect and store data so that it may influence the wider channels? Thank you.

>> ELKE GREIFENEDER: When I say users do not adapt to technology, it means that, yes, we have to build a system, but we always have to keep in mind that users will use it as it fits them. A very practical example is dietary apps. There are a lot of studies on people who suffer from anorexia, for whom there are few apps designed to help. So what do they use? They use dietary apps to try to overcome anorexia, which means that every time they gain weight, which is good, the system tells them that's bad. But they're still using it; they take out the pieces that fit them. So to come back to the question: we shouldn't just throw a system out there and say, here it is, great. We talked a lot about user-centered design, and I think the term that is gaining ground is co-design. In classic user studies you invite the users to give feedback, and then you say hello, good-bye, users, thank you, now we have finished developing our product. Co-design, by contrast, is a longer process where you keep being in contact with users and keep monitoring what actually happens: how do they use it, and how might we need to adapt? Does that answer the question?

>> MODERATOR: You were talking about co-design, or participative design and development of technologies. My question is: how can we have a participatory approach in terms of regulation? This is what the conference is all about, and we are here, but what are the practical steps, Matthias and Gustavo, to get there? What are the next steps?

>> GUSTAVO PAIVA: I think we can always talk about participative drafting of bills of law, but it is maybe not as powerful as we want. It really is a difficult question how to do it. There are many ways we could try, of course; that's what multistakeholderism is for. I've had very productive discussions here with platforms on how to improve them and how to deal with these issues. Matthias, do you have any other insights?

>> MATTHIAS KETTEMANN: I think you really hit the nail on the head here. We first have to ask the right questions, and what we are doing here is really great. Then we have to go back to the toolbox of regulation. Again, we don't need to reinvent the wheel; we can go back to the regulatory tools we have, but they should be informed by new insights into how humans interact with technology. We haven't talked about affordances yet: what do products make us do, and how can design shape what products make us do? Think of the pull to enter something into a blank box when the program asks you, so what did you do today. These are aspects we need to take into account when drafting those rules, and those rules need to be very smart, which is a challenge, because societies are getting progressively more complex and parliaments have to keep up. So yes, sometimes policymakers are not, let's say, the most technically versed; they cannot all be, because that's not their specific role. But parliamentarians have gathered here for the first time at the IGF, from across the world. I've talked to parliamentarians from Ghana, from the U.S., from Kazakhstan. So it's really great that they are here, and I think we're going in the right direction. Perhaps that's just my sunny Austrian nature.

>> I also have a comment on how co-design can take place. The first option is when companies come up with technology: they run user studies and evaluate how it is perceived. The second option is open-source movements, where users can also come up with their own ideas. That's happening all the time; there are always new projects popping up. Most will not be successful, but there is progress going on. And if co-design is desired, it may be good to join in on GitHub, or wherever these open-source programmers put their code, because they also ask for comments and discussion. So a very practical proposal is to join such open-source projects, even without being a programmer, just to give feedback and suggestions: what should the design look like, which functions should be included? Sometimes a programmer is very happy to get such insights and feedback.

>> MODERATOR: Thank you.  Jessica, I would like to ask you if you have any idea of how to solve this problem from your practical perspective.

>> JESSICA BERLIN: Yeah. Touching on what you just said about engaging the users in the design process, this links back to previous comments around inclusion: reaching out to communities that are not already in online fora giving feedback on digital tools, but going to rural areas, for example, or to older communities, elderly citizens, and engaging with them to see how our products, services and technologies can help solve their problems. Understanding the user, and understanding their context, is at the core. And I think this is how, as the question was formulated, you help bring society to the technology rather than the other way around. When you can show someone that a new technology doesn't bite but is actually going to solve a problem for them and make their life easier, that incentivizes engagement and use. So you need to ask yourself: what problem am I trying to solve? And when you've identified that, ask yourself: who's not in the room and should be? Who have we not spoken with that we should? And then design accordingly.

And in the long term, whether it's on the product development side, the regulatory side, or the customer/citizen engagement side, one concrete recommendation is that we need to create fora that bring all three together: the citizen/customer, the product developers and creators, and the regulatory advisers and policymakers, so they can speak with each other to solve common challenges and identify new opportunities. There are many different ways this could be structured, but I think there needs to be a bit more motivation and proactive engagement, perhaps on the public sector side, to create those fora: to say, hey, we in the ministry of X are in the process of improving our strategies and policies around digital issues in this sector, and then actively reaching out to citizens and companies and bringing them together to gather that feedback.

>> MODERATOR: Thank you.  There's another question from the audience, please.

>> AUDIENCE: Hello. I am (Inaudible) from Brazil. I just wanted to comment on what you said, because I also came from a rural area. And the thing is, as you say, these people can also be included in technology, for example when it comes to agriculture: using technology can increase sustainability. So how do you think we can reach these communities, the people who are working in agriculture, students and so on? And how can we as a society work with these local communities and take the discussion abroad? Because what I see, as someone who came from such a place, is that our educational system, which is so traditional, does not allow these people to actually engage and to understand how they can improve, with technology, the work they are already doing. That should be it. Thank you.

>> JESSICA BERLIN: Yeah, absolutely. It's about informing, and also, to a certain extent, incentivizing. The agriculture example is fantastic. I once worked on a rural development project in Afghanistan, and this would be true whether in Afghanistan or in Germany: farmers are stubborn. Farmers do what they know. Hey, my family's been doing it this way for five generations, why should I change? What good is some fancy new type of seed, not to mention some digital technology? I don't want to mess with a system that I know works for me. So, taking that sector as an example, you have to not just tell but show. Show results. Demonstrate results. That involves engagement of Civil Society. It requires subsidies and incentives for people to use the technology, and demo plots or what have you, to show: hey, this will make your life easier, increase your crop productivity and reduce your input costs, and so on. That's key.

Or if we're talking about education and school systems, with teachers, et cetera, being stuck in their ways, that's where the incentives have to get a little bit harder, and where you also create disincentives to not changing. If they just want to keep doing what they know and don't want to be fussed, that's where the government has to step up and enforce: if this is the new standard we're using in our school system, then you have to do this, or, as an extreme case, you'll lose your job. But you also have to make it easy, or as easy as possible, for people to adapt. You're not just saying, hey, here's a new system, now you have to use it; you're giving trainings and workshops and gathering their feedback, rather than dumping something supply-driven in their laps. Make sure they're part of the process from the beginning, the teachers, the principals, the school administrators and the students: what are your current haves, needs and wants? What's working, and how can we co-create using these digital tools? Because if people are part of the process from the beginning, it doesn't feel like, okay, do this now; it feels like, hey, we're co-creating this new system, they have at least heard my feedback, and I know what to expect and what's coming. Yeah, thank you for that question.

>> MODERATOR: Thank you, Jessica. That was the last question from the audience. Before the session comes to an end, I want to invite all of you panelists to give a short final statement, not more than the 30 seconds that are left for each of you. It could be a call to action, a summary, or some insight you gathered today. So please, Matthias, start.

>> MATTHIAS KETTEMANN: We don't need new rules.  We need new data and better data about how to best use data for the societal good.

>> NADINE ABDALLA: We need to find a way to make the users, in a more general way, feel that they are part of the process, that they can have a say in decisions and that they're not just providers of data.

>> GUSTAVO PAIVA: I'll use my statement to answer directly. We should exchange experiences with research centers like, for example, the Berkman Center, because they have very interesting methodologies that we could put to good use to reach out to these populations and then bring that information back to developers. So I would really suggest improving and learning more about research methodologies.

>> ELKE GREIFENEDER: Why don't we think more about transparency? For example, about the sponsors of content on social media, since this has a big effect on the transparency and good functioning of democracy.

>> TAMIRACE FAKHOURY: More systems that allow us to keep the data private, so that the data subjects are happy, while statistics can still be calculated from it, so that the users are happy as well.

>> JESSICA BERLIN: I would like to call for the creation of an inclusion certification for digital products. Just as we can certify an agricultural product as organic or fair trade, we should have some kind of distinction for digital products and services that says: this product was created for all potential users, regardless of gender, ethnicity, skin color, age, mobility, et cetera. I believe this is a new distinction that we need to introduce.

>> MODERATOR: Dear panelists, thank you so much for these final statements, which are also the bottom line for me. I thank all of you for your expertise, and I thank the audience here and online for the questions and the fruitful discussions. I thank Sissy for her work, especially because she has been the mastermind behind this workshop today. Thank you very much, and I hope to see you again.

[ Applause ]