IGF 2019 – Day 1 – Saal Europa – WS-218 Deliberating Governance Approaches To Disinformation

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MAX SENGES:  Use the mics so the transcription and online participants can follow as well.  A very warm welcome from our side.  This is actually the third time that we are at the IGF thinking about how to bring the voice of the people to this discussion, how to make multi‑stakeholder governance more informed by democratic values and practices and bringing in the voice of the people.

At this time we're talking about disinformation and disinformation policies in particular, and yeah, I'm really looking forward to the discussion.  As I said, if the people want to move to the front, we'll definitely spend most of the time thinking about the questions regarding bringing in the voice of the people, using this deliberative method and, of course, also on the subject of disinformation and what works and what doesn't work in that area.

So in terms of flow, it's going to be sort of, you know, us sharing and giving an overview and setting the scene for the discussion point number 4 where we're going to spend most of the time.  We start with a very brief intro about deliberative polling and the benefits that we hope and think and try to prove it can bring to the multi‑stakeholder governance model.

Then we walk through the briefing materials and present some first results from a survey we have done.  We're really looking for your feedback and your observations on how to make these things better.  That's really the core part of the discussion about the disinformation and the method itself.

Megan, do you want to complement in any shape or form?

>> MEGAN:  I think we're good so far.

>> MAX SENGES:  Let's go through.  We have invited contributions from a number of colleagues to add different perspectives to this.  We have Vidushi Marda right next to me from Article 19.  We have John Wiseman, who I have not seen yet.  We have Titi Akinsanmi and Antoine Vergne, from a really interesting organization bringing collaborative approaches to Internet Governance as well.  Mattias Kettemann has a bad cold, so he cannot be with us.  We do have Dylan Sparks from the Luminate Group, which is part of the bigger Omidyar Group.  Dylan is right over there next to Antoine.

So let's jump into the subject matter.  So maybe to start and explain what deliberative polling is in the first place.  It's a method that has been developed in variations in various places around the world.  We're particularly working with Jim Fishkin and Alice Siu from the Stanford Center for Deliberative Democracy.

The way it works is you choose a topic, and it can be anything from disinformation to encryption to access to privacy to gay marriage or voting rights.  You ask people, ideally a representative sample, a poll question and understand where they stand.  Then you invite them to deliberate in small groups amongst each other and ask questions to experts and really understand the trade‑offs of the different options on the table.  Then at the end of the day, you give them the same survey again, and obviously, then you have various outputs from that.

I think maybe most interestingly, though all of them are interesting in their own right, the polling results that you get from the second poll can be considered what people think and want after they have really thought through all the pros and cons.  So that might be a result that really informs multi‑stakeholder governance decisions and discussions in an interesting way.

The second thing is the delta between the poll in the morning and the poll in the afternoon or evening.  If you see that there is a lot of movement, that topic is probably not ripe for a bigger democratic decision, because people need to understand it better in order to take an informed decision.

Then thirdly, and maybe most interesting for this community, I think: in a fast‑moving policy space like Internet Governance, to have reference materials in the public domain, peer‑reviewed and approved over time, can be a very, very valuable resource.  In fact, a group of stakeholders has banded together this year at Intgofviki.org to prepare materials like that in an open manner, where everybody can add views and edit it.

We have some academic oversight, I would say, in that process, but it's really emergent, and you're more than welcome to join.
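
Though no code was shown in the session, a minimal sketch of the first two outputs Max describes, the post-deliberation results and the pre/post delta, might look like the following.  The question names and response values are invented for illustration and are not the actual IGF survey data.

```python
# Minimal sketch (not the CDD's actual tooling): computing the opinion shift
# ("delta") between a pre-deliberation poll and a post-deliberation poll.
# Question names and 0-10 responses below are illustrative only.
from statistics import mean

pre_poll = {
    "remove_illegal_content": [7, 5, 8, 6, 4],
    "remove_hateful_content": [3, 6, 2, 5, 4],
}
post_poll = {
    "remove_illegal_content": [8, 6, 8, 7, 5],
    "remove_hateful_content": [5, 6, 4, 6, 5],
}

for question in pre_poll:
    before = mean(pre_poll[question])
    after = mean(post_poll[question])
    delta = after - before
    # A large delta suggests opinions were still forming, i.e. the topic may
    # not yet be "ripe" for a broader decision, as described above.
    print(f"{question}: {before:.1f} -> {after:.1f} (delta {delta:+.1f})")
```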

On a deeper level, the benefits of this method could be to move the dialogue beyond general consensus statements, which often just paper over differences; to confront the trade‑offs and the pros and cons of specific proposals rather than consolidating them into short statements; and also to clarify where movement is possible and really find those places where you can find common ground between different options.

So as mentioned, we have been at the IGF twice before.  In 2015 we had a full trial where we had about 300 people take the online survey, and then 60 people at a day zero event at the IGF to deliberate.  It was super interesting.  There were a lot of learnings, including how difficult it is in a multi‑stakeholder setup like this for government representatives to participate in something like that.  We have now come to the conclusion that we have to ask everybody to participate as a citizen rather than in their stakeholder group.  That's one thing that's interesting to discuss possibly later.

In seven out of 13 policy options, the opinions ‑‑ the polls changed really significantly, so that gives you the delta that shows you how much movement there is, and the surveys indicated a very strong knowledge gain, so it's not just the deliberation.  It's also really the learning that people take home in terms of media literacy and being able to participate in these conversations.  Overall, people really enjoyed themselves.

It's a good experience, and we'll see that later also in our surveys from this time: people do think that they want to be listened to and that they have something to say, and I think that's very encouraging.  But you need to find the right formats to actually make that possible and to have an organized conversation.

Then a year later in 2016 interesting things happened.  The clicker doesn't seem to work.  In 2016 we came back and we had an even ‑‑ or a much more difficult subject.  To be honest, we debated and deliberated how to give access to the next billion people and how to improve access in 2015 because we thought, okay, let's start with something that's not super controversial.

Next time we talked about encryption governance, which is a very controversial question.  To what degree states should have the ability to ask companies to include back doors, to what degree encryption needs to be governed and controlled by governments.

Really interesting feedback.  What we did then is a little similar to what we do today.  We went through the briefing materials and really got the multi‑stakeholders to contribute their views and help improve the materials, and we made those available after the session and spread them for use for the community.

With that, I pass it over to Megan, who is going to give us an overview of the briefing materials that we prepared on disinformation.  We can share those after the session, and there should be a good number of copies at the tables.  So if you want to come to the table and discuss with us, that has the benefit of you having access to the materials.  It's not necessary, though.

We have designed the session in a way to have the conversation without you having the materials right in front of you.

>> MEGAN:  Thank you very much.  So I'm going to do an overview of the briefing materials, and as Max said, one of the things we'd love to hear is whether you think things are missing or need to be elaborated more.  The goal of these materials originally was to outline some of the key policies, or aspects of policies, either proposed or implemented by governments within the EU, and then clearly articulate arguments in favor of or against them.

I want to emphasize a couple of aspects of this: these are only government policies.  One huge aspect of disinformation is platform policies, and there's a whole range of actions that Civil Society actors might engage in.  The goal of these was to think about government policy specifically, in part in order to narrow the scope a little bit.

Why did we choose the EU?  First of all, at the time that we originally started this project, countries within the EU were really the main countries that had explicit proposals on disinformation on the table or already passed.  Now that's not as true.  Europe has still been one of the leaders in actually passing regulation, government regulation, in this space, and many of the policies that have been passed around the world have sort of mirrored or drawn on what has been done in the European Union.  Again, in part to narrow the scope, we focused within Europe.  That is not to say that only European policies are important or only government policies are important, just that that's the scope of these materials.

So we focused on disinformation, although you'll see in the briefing materials we asked some questions about policies not explicitly about disinformation but sort of related, with the potential to impact on it.  We defined disinformation as the deliberate creation and spread of information to deceive and mislead in order to promote vested interests, using the speed, scale and technologies of the open web, and then you see here, because this is from the materials themselves, as well as related policies about user‑generated content.  Those are the policies that are within the scope here.

>> MAX SENGES:  Misinformation, fake news, these are all different concepts, and it's important to look for the nuance there.  That's why we'd be interested if you have feedback on the definition that we chose and/or pointers to places where we can negotiate what the different concepts mean exactly.  I think it makes an enormous difference to be clear what you're talking about.

>> MEGAN:  You see later in the presentation we actually talk about it as content moderation.  So, you know, there's a lot of ways in which to sort of think about this framing.

So just to give a brief summary, it's in the materials if you have them, but to give an overview, we sort of divide these policies into three kinds of categories.  The first is self‑regulatory and whole society approaches, so that's one category.  These are approaches that don't attempt to regulate content directly, right?  These are approaches about online content but not directly regulating content itself.

This includes things like the EU self‑regulatory codes of practice and also whole society approaches like those taken in Sweden and Finland, which often don't include explicit regulations about online content but have a suite of approaches that try to tackle issues related to disinformation in the context of increased civic education, increased cooperation across ministries and increased cooperation with the media and broader societal approaches.

The second category of approaches are approaches that attempt to directly regulate online content.  Here you think about laws like Germany's NetzDG law where it's a law that directs the way that content online is going to be governed.

Finally, new proposals like those from the United Kingdom and France that propose entirely new regulatory regimes: the creation of new regulators or the adaptation of existing regulators in substantial ways in order to approach the sort of challenges around online content.  So we outline what those policies look like, and then we pull out a few key policies.  What you see in the next section in the materials, if you're sort of following along, is that we go through and evaluate the pros and cons of different policy proposals.

The things that are included, if you have it in front of you, and if you don't have it, share with a neighbor.  Sorry, we had 70 copies, but obviously, that wasn't quite enough.  So some of the things that are included there, and you'll see this in the survey results, too, which go through them in detail: requirements for platforms to remove certain kinds of content, increased funding for civic education and digital literacy, the creation of new regulatory regimes, and the concept of a duty of care, which is what is proposed in the Online Harms White Paper.

We outline what these are and go through and try in a balanced way to make arguments in favor and arguments in opposition to each of these potential policy approaches.

Do you have something else you want to add there?  Okay.  What we're going to show you now is really initial results from a content moderation survey we did with some IGF participants.  After this event, we will have a full deliberative process, but we did the initial survey, and we have those initial results.

Now, this is a small sample size, so we're not showing this as a definitive outcome of what people think about these issues, but in part to show you what the survey looks like.  There are a couple of interesting things to pull out and talk about in the further discussion.

Do you want to start?

>> MAX SENGES:  Sure.  So the sample is really a good representation of the international scope, we got people from all over the world, but 31 is obviously not a number that means the data is quotable.  It's more of a means to start to discuss where people are trending, whether the questions are the right questions, whether the framing is correct.

So we always went with the scale from 1 to 10.

>> MEGAN:  Zero to 10.

>> MAX SENGES:  Zero to 10.  Sorry.  In this case we wanted to see whether people oppose or support certain statements, and from the first set we saw that people show pretty strong support, 5.7 is not, you know, incredibly strong but pretty strong, for saying, yes, we want platforms to react to complaints and remove content without an additional law or court order.

>> MEGAN:  Just to say, obviously, for those of you familiar, this is relevant to the German NetzDG model, where you remove content within 24 hours after receiving a complaint.  What is interesting here that you can see ‑‑ is it okay that I go on?

>> MAX SENGES:  Yeah.

>> MEGAN:  What's interesting is newer proposals toyed with the idea of legally requiring the removal of hateful content.  It's unclear in some proposals whether that only includes content illegal under national law or not.  What we see here is kind of a difference, right?

People in the survey, and I don't think this is super surprising, but it's kind of interesting to think about, were much more supportive of platforms being legally required to remove content that is already illegal than they are of platforms being required to remove hateful content.  We somewhat intentionally did not define hateful content in the survey, partially because it's a survey and needs to be short.  Also, I think it's not super well‑defined in some proposals, so that's how we sort of proceeded.

>> MAX SENGES:  It is a term that's been around, but that's never been defined concretely.  So that, I guess, is one of the reasons why people do not support it as much.

Same set, just the second half of those.  Should all international courts have the right to order that content be removed globally?  To be honest, I was surprised.  A good amount of people do think there's merit to that.  Interesting to hear what our contributors and the participants more generally think.  Then there's a new concept about the duty of care.  Megan, do you want to elaborate on it?

>> MEGAN:  Sure.  I'll say it's still the case that people more oppose than support the global removal here.  So the duty of care, we ‑‑ this is actually ‑‑ this question was quite long in part because we did feel we did need to define duty of care here.  This was on a zero to 10 scale where zero means that there is ‑‑ I can't remember now.  This is why ‑‑ it's quite long.  Zero is you have an obligation to take care, right?  And 10 is that you did not have such an ‑‑ I mean, that platforms don't have that kind of obligation.

You can see that people are more ‑‑ so this is the one where zero is not "disagree."  It was a slightly different scale, but the scale was explained to the participants.  Here what you can see is people leaning more in the direction that platforms do have a duty of care.  I think that this description may contribute to that, because we do say that the second part of this is they don't have that duty even if it results in mental or physical harm, which I think is part of what is implied in duty of care.  You see that more in the direction of being in favor.

Then I'll pull out one thing here that I think is really interesting given what we said before with the same group of people.  Here, on whether platforms have a responsibility to do something, rather than being legally required, people are in favor of them removing hateful content.  Illegal content is still slightly higher, but it's interesting to think about the difference between believing a platform has a responsibility to do something and believing that it should be legally required to do it as a regulation.

We do see ‑‑ this is quite substantial.  So on the legal requirement question, people were more opposed than they were in favor.  In the responsibility question here, we see people are more in favor than they are opposed.  So we thought that was ‑‑ that, to me, is probably the most interesting single pull‑out of this small sort of pilot survey results.

>> MAX SENGES:  One thing we should have mentioned when I talked about the learnings that we had from earlier IGF research is that we actually targeted the IGF community in particular, meaning that we used participant lists of current and earlier sessions.  So these are people who are part of an Internet Governance community and have been to IGFs.  That's why, even though we tried to write everything to be accessible to normal people on the street, I do think that, you know, nuances like the one that Megan just reported are not lost on people who are actually thinking about Internet policy in this space.

So two more that are more on people's mindsets and how they see themselves and how they are participating in an experiment like that.  I'd mentioned this earlier.  People do really feel that they have opinions and that they're worth listening to.

They do feel also that they are listened to and can make themselves heard, as you can see in the first questions.  They do think that it is meaningful to participate.  I'm interested, Antoine, in your view.  I think that is different from what people would say if you went to the more general population.

Last but not least, the two that we wanted to pull out here are that people, or the IGF community, are actually willing to accept other opinions, and they see benefit in the exchange.  They are looking to find compromise.

Not as strongly as you would hope to see, but I still would believe that if you went out on ‑‑ if you compared it to the more general population that that number would be lower.

One thing to note about that, and we didn't talk about this at the top: part of the reason to do this with the IGF is to get an informed group to work with, the optimal high‑information group, and then compare it to other groups.  I think one thing is that people in this community may be more firm in their beliefs about these things than people in other communities, because they know more as well.

So that may make it so that you feel less willing to compromise because you feel more certain that you're right.  That may be a reason why this is slightly lower than I would have hoped for from IGF folks, I have to say.  Okay.

>> MAX SENGES:  Apologies for this kind of long info‑sharing block in the beginning.  We really want to spend most of our time and we still have most of our time to deliberate and get the different perspectives in.  If that's okay with everyone, I would take note for people who want to come in, if you'd just give me a signal, and, of course, our invited contributors are more than welcome to contribute as well.

If you could please state your name and however you want to explain your affiliation.

>> MEGAN:  We'll go straight to you.  We forgot to say at the top.  We have to end 15 minutes earlier than in the schedule because the secretary‑general will be in this room next, and they need to come in and do security.  So we're going to get kicked out at 12:45, I believe.  So just ‑‑ we still have 45 minutes, but just so everyone is aware or maybe a little earlier.  Until we get kicked out, we're going to talk.

>> RICARDO CAMPOS:  I'll make it really quick.  My name is Ricardo Campos.  I'm Brazilian and working in Frankfurt as an assistant at a law faculty.  I work in (?) and for one year I worked in Brazil concerning misinformation.

Our problem at that time was: should we copy the German legal statute, or are we facing another problem?  Then we saw that the problem in Brazil's election was misinformation.  It was not a kind of individual misinformation; we saw that there was a kind of illegal industry behind it.

Then we drafted a statute to face this problem.  So we made a kind of compliance system for the platforms, and the platforms should inform about the accounts that are sending an above‑average number of messages per day.  With that, we hope to track the industry behind it.  Then we do not have the problem, like in Germany, that the platforms need to remove content, and so we don't have the problem of freedom of expression.  Thank you.
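
As a rough illustration of the kind of compliance signal Ricardo describes, flagging accounts whose message volume is far above the typical level rather than judging individual pieces of content, a sketch might look like this.  The account data and the cutoff multiplier are assumptions made for the example, not part of the Brazilian proposal's actual text.

```python
# Illustrative sketch only: flag accounts sending far more messages per day
# than is typical, the kind of signal the speaker describes for separating
# "industrial" from individual misinformation. Data and cutoff are invented.
from statistics import median

messages_per_day = {
    "account_a": 12,
    "account_b": 9,
    "account_c": 480,   # bulk sender
    "account_d": 15,
    "account_e": 650,   # bulk sender
}

typical = median(messages_per_day.values())
threshold = 10 * typical  # cutoff multiplier is an assumption for illustration

flagged = {acct: n for acct, n in messages_per_day.items() if n > threshold}
print(f"typical={typical}, threshold={threshold}, flagged={flagged}")
```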

>> MAX SENGES:  Am I understanding correctly you're proposing and working on an additional solution that you consider more balanced and adequate to tackle the problem?

>> RICARDO CAMPOS:  Yes, to make a differentiation between the individual misinformation and the industrial misinformation.  With that you don't have the problem of freedom of expression and so on.

>> MAX SENGES:  Okay.  Thank you.  I guess if people are interested in that option, they can chat with you later.  You also changed the subject somewhat to misinformation.

I want to note that.  So let's try to be clear about the differences, and thank you for that contribution.  You wanted to come in next.

>> VIDUSHI MARDA:  Thank you.  I work with Article 19, and most of my focus is usually on platforms and the technical infrastructure that underpins them.  I think the survey had a number of really interesting findings; they're nonintuitive, like you both mentioned.  One of the things that is missing is talking about the business model that underlies content on a platform, especially when we talk about disinformation.  I think it's a deliberate attempt, according to your own definition, and I think the missing part of the puzzle is that these systems are technically made in a way that encourages hateful speech or shocking speech.

The difference is between saying these companies should take down content and saying the business models need to be changed so it isn't so easy to manipulate content.  They are two different parts of the puzzle.  I will be interested in seeing, you know, do we have a consensus on the underlying business model or the incentives that come with the technical infrastructure that gives us content?

>> MAX SENGES:  Thank you, Vidushi, that's an interesting point.  Like many of you, I wear various hats.  I do work for Google in my day job, and I discuss these things from one perspective.

Today I'm here as a neutral researcher and moderator of the topic, so I'd love to see if Titi wants to come in to generate some dynamic on the panel; she also indicated she wanted to come in.

>> TITI AKINSANMI:  Thank you very much.  Good afternoon.  For those not in the room earlier, I have a day job as well with Google but wear the hat of a Berkman Klein fellow at Harvard Law School.  For me, yes, there is a need to actually rethink or re‑evaluate the new business models that we're beginning to see that enable disinformation.

Thank you for picking up on that.  It's not misinformation.  It's disinformation, which means that there is a deliberate attempt; the whole modus operandi is to ensure that facts become non‑facts or alternative facts.  The ability to rethink is one of the things we need to see a bit more: the new proposals, the existing business models, in which ways do they need to be adjusted to be able to respond to this major threat?

On the second level is this.  Take a step back and look at how historically disinformation has been addressed, going back to Lenin's time through to looking at dictatorial spaces with a deliberate attempt by state or other actors to actually shape people's thoughts to serve their own positions.

On the third level is taking not a step backwards but viewing this from the viewpoint of the end user.  I would love to see more of that.  From the end user's point of view, what makes this disinformation possible, and what political context enables this disinformation even more significantly?

Then I'm going to apologize.  I will probably need to step out in a bit because I have to be at the African Union meeting.  This is the IGF.  They expect you to clone yourself, and it doesn't usually work.

>> MAX SENGES:  Thank you so much.  Anybody else want to comment?  Do I have you on that particular point.  It's interesting.  Is it on the same point?

We could agree on one model.  If you raise your fist, I know you want to come in directly on this point.  If you raise your finger, I know you want to come in in general and I'll add you to the list.  I'll add you to the list.  I added you to the list.

>> SPEAKER:  I think he's making a fist and a finger.

>> MAX SENGES:  A fist and finger?  That's not fair.  You definitely go first because you were first in the queue.

>> AUDIENCE:  Thank you very much.  I'm a member of parliament in Ghana, and so I look at it from a policy perspective.  Before I became a member of parliament, I was a communications specialist to the immediate past president.  So I have worked directly with disinformation in the last elections.

Looking at the proposals that you have put out here and the points that were made by Titi and the previous speaker, you realize that disinformation is a deliberate attempt.  It is systemic.  The business models of the online platforms support the disinformation because the allowance of bots to actually do the propagation of disinformation is something that the business models have not done enough to combat.  So you realize that using the solution or the approach of self‑regulatory code of practice may not necessarily fix the problem.  You need to have a combination of a number of approaches.

We need to have a conversation with the platforms and say, what better self‑regulation frameworks will you put in place?  I believe that there needs to be proactive law that seeks to criminalize the actions that are supported.  That's where the UK's new intervention comes into play, where you put a duty of care.  Regular businesses have a duty of care to the people who patronize their businesses.  The platforms need that same duty of care: if disinformation is a deliberate attempt to distort the facts, that is illegal by any legal jurisprudence.  It's illegal.  They have a duty of care to deal with it in a timely manner.

In Africa we have a challenge where many companies have a presence, but they're not fully incorporated in the country where the operations are happening.  Trying to get legal jurisdiction to take down the data and content becomes a challenge.  So I was in a previous session where we spoke about jurisdiction, territoriality and cross‑border cooperation.  We need a framework that looks at a multiplicity of approaches, and possibly have another survey outside of the IGF confines.  I would be interested in what your findings would be.

>> MAX SENGES:  Thank you, sir, very much.  A number of interesting points.  I had one immediate reaction and I'm sure Antoine will come in later and tell us about a survey that is actually happening internationally and will happen on a grander scale with normal citizens.

I think he had a finger and a fist.  We should allow him to come in and introduce himself.

>> ANDREW BRIDGES:  I'm Andrew Bridges, and I'm a lawyer in Silicon Valley.  I'd like to build on a couple of things people said.

Answering some of the questions posed, I do think there were significant gaps in the survey.  The survey was asking people to comment on other persons' and other parties' responsibilities.  It asked people to comment on governments' responsibilities and platforms' responsibilities, but not about the respondents' own responsibilities.

>> MAX SENGES:  Very good point.

>> ANDREW BRIDGES:  To the last speaker, a duty of care on platforms, does that mean that somebody who participates by clicking like or share, who participates in the disinformation propagation has a duty of care and should be thrown in jail for clicking like or share?  That may be the consequence of imposing a duty of care on the participants, because these are online communities that are facilitated by platforms.

So we need to talk about the role of the audience as regenerators and propagators of the information.  We also talk about content moderation, which is a euphemism.  When I post a comment to Facebook, I don't think of it as content.  I think of it as my expression and viewpoint.  When somebody moderates it by blocking it, that's not being moderate.  I perceive that as censorship.

When we talk about content moderation and don't recognize that that is censorship of expression, it sounds very bland.  The last thing I want to point out is the question about should platforms have an obligation to take down illegal content?  Easy question to answer.  What's not easy is who determines that it's illegal?  Do the platforms have a responsibility to censor something because somebody has been accused, not found, to be illegal?  Do we require takedowns based on mere accusations?

Are we turning platforms into star chambers with their own private rules of adjudication, outsourced adjudications of the rule of law from governments?  Whom do we expect to determine and declare the illegality on which the platforms should act?

These are major fundamental questions, and my fear with the survey is it's dealing with a sort of current trending terms without looking at the political, decisional, legal infrastructure that's necessary to be analyzed for thorough evaluation of this.  Thank you.

>> MAX SENGES:  Andrew, thank you so much.  There were a bunch of very, very strong points in there, I think.

That said, we struggled immensely with cutting this topic, which really goes to the heart of freedom of expression and human dignity.  You could go in so many different directions, and Megan really did an amazing job, I think, leading the write‑up of this after so many different drafts where we were trying to include this and that.  You want to present something that fits on 12 pages.

Actually, Jim Fishkin says that.

>> JAMES FISHKIN:  It's constructive criticism.

>> MAX SENGES:  I've seen you, and the lady over here, if you could introduce yourself.

>> AUDIENCE:  Yes.  Good morning.  I'm Charlotte and I'm with the Council of Europe.  On this discussion here, I very much agree with points raised.

I think that we have to, indeed, look at the business model and have to also maybe focus our attention not only on the content itself where we indeed get backed into is this covered by freedom of expression, is it not?  We must look at the sources, and there is an obligation in platforms to assess what the sources are.  Are these sources that have been known to propagate disinformation and have been already been flagged by users as having the fake news.  The council of Europe is working on several certification mechanisms for media outlets where a ranking of media outlets in terms of what their credibility is really and what their contribution to quality news sets up.  The difficulty is, of course, to assess quality, but we all agree there's some sort of public interest criterion there.

>> MAX SENGES:  Can you come a little closer to the microphone?

>> AUDIENCE:  Focus on the source of misinformation and not only on the content.  Thank you.
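
As a toy illustration of this source-level approach, checking an item's publisher against known credibility ratings rather than judging the content itself, one might sketch something like the following.  The outlets, ratings and threshold are invented for the example; this is not the Council of Europe's actual certification mechanism.

```python
# Minimal illustration of source-level flagging: rate the publisher, not the
# individual piece of content. All ratings and the cutoff are assumptions.
source_credibility = {
    "example-news.org": 0.9,      # established outlet, assumed rating
    "known-hoax-site.net": 0.1,   # repeatedly flagged by users, assumed rating
}

def needs_review(url: str, threshold: float = 0.5) -> bool:
    domain = url.split("/")[2]
    # Unknown sources default to a neutral score rather than being penalized.
    return source_credibility.get(domain, 0.5) < threshold

print(needs_review("https://known-hoax-site.net/story"))  # True
print(needs_review("https://example-news.org/report"))    # False
```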

>> MAX SENGES:  Thank you.  Really interesting.  I'd love to follow‑up, if I may.  I do agree this is more of an infrastructure improvement, which I think is a very good path to go down.  Just because it has come up several times, you can decide which hat I'm wearing.  It's really me believing this.

I think it's a little bit too easy to say, you know, we need another business model, because this is about user‑generated content in general, and then there are platforms that manage to monetize it and platforms that are completely open.  This is a question of how do you allow speech online as Andrew pointed out earlier?

I'm happy to discuss how business model can evolve and to hear ideas, but I think, you know, just to say, well, the business model supports this kind of scandal and, you know, excitement and all of that, I think in the case of YouTube, that was a goal, to maximize watch time.

It's not ‑‑ like, that's the easy thing to address.  You don't maximize for watch time but for quality.  That's not really the business model.  That's the model of engagement and what you want on the platform.

I think there are some things coming together here.  We had a gentleman here on the right.

>> AUDIENCE:  Hello.  I'm (?), here with the Youth IGF Summit.  We produced a few statements, which I think are relevant here.  I won't read them out loud because our time is very short.  They relate to artificial intelligence and platform transparency.  I think these could be used as points for the next poll.

Specifically, I have a few things that might be productive.  Those would be: do people approve of using AI and algorithms to detect and remove content on platforms?  Should these algorithms be transparent?  That is something people might have an opinion on.  Then there's the practice of not removing content but making it less visible; it's less extreme than removal, but people might still not accept it.

Oh, there's a final point.  If there's human oversight for detection and removal, does it make it more acceptable?  These are the points.

>> MAX SENGES:  Again, I think there is a lot of knowledge in the room.  Great perspectives.  Thanks for coming in from the young point of view.  You guys are really growing up with this, while we still remember the telephone that you had to pick up.

So thanks very much, and I'm looking forward to seeing if that triggers some more debate.  Titi wanted to come in one more time before her meeting.

>> TITI AKINSANMI:  Looking at duty of care as it relates to user‑generated content and freedom of expression.  When I say new business models, it's not necessarily speaking to the existing platforms, but the fact that the fine art of disinformation is a business model in itself.

That's something that we as of yet have not addressed and I would like to see it addressed.  One, ensuring that we actually break down duty of care from the perspective of the end user, the person putting that content out or consuming it.  Then also ensuring that that conversation is not happening in isolation to your point around making them respond based on their role as well.  Thank you.

Apologies.  I have to go.

>> MAX SENGES:  Thank you so much.  More people have to go to the African Union.  Thank you for joining.  Interesting point earlier.

>> AUDIENCE:  Excuse me.  Can I ask something also?

>> MAX SENGES:  All the way in the back.  Sorry.  I must have missed you.  Come up and introduce yourself.

>> AUDIENCE:  (?) with the Russian Federation.  I want to talk about censorship on the Facebook platform.  I think the biggest problem is who decides what is misinformation and disinformation and what is fake news.  We have to demand that Facebook, Google, Twitter publish their rules.

We have to see the stop list of words you cannot use if you don't want to be banned.  We cannot find all those words which are forbidden to use.  So I think this is a problem.

We have to fight that censorship, because now their moderators decide what is hate speech, which content is harmful, and they have banned a lot of people with millions of subscribers.

>> MAX SENGES:  Thank you very much.  I'm adding a cultural dimension, I think, which is that what some people find offensive in the U.S. might not be offensive in Russia or Europe.  It's a very difficult problem.  That's why we have such a lively conversation.

I see you wanted to come in directly on that point.  Please introduce yourself.

>> BERIN SZOKA:  Berin Szoka, TechFreedom.  Having a detailed list of what you're not allowed to say is how one circumvents the rules in place to prevent disinformation.  It is not coincidental that that comment was just made by a Russian.  I'm sorry.  They engaged in a deliberate attack against the United States and other western democracies by spreading disinformation.  They made a business out of it.

They're doing it in the service of the Kremlin, and they want more transparency that would make it easier to spread disinformation on platforms.  You should consider who that message is coming from, and remember that a certain degree of opacity in how content moderation is dealt with online is essential to combating the spread of misinformation, especially around elections.

>> MAX SENGES:  Thank you very much for that point of view.

>> AUDIENCE:  I'm speaking about not Russians but, for example, the citizen of the United States called Alex Jones.  He was banned everywhere on each platform, and he's not a Kremlin agent.

>> AUDIENCE:  Oh, really?

>> SPEAKER:  Maybe I can suggest that there's a component of the transparency conversation that is maybe slightly different.  I agree with you that a complete list of words, this is not the way to go about it.  Perhaps I do think there's an argument that there is a place for transparency ‑‑ for an increase in transparency in the process, and that's quite different from increasing the transparency of like the exact mechanisms in a way that makes it gamable.

Yeah, I think ‑‑ you wanted to say something in relation to this.

>> MAX SENGES:  We're heating up the conversation a bit.  I was actually going to suggest that we also go a bit more into the deliberative democracy bit, but now it's a really good conversation and a lot of people want to come in.  If the gentleman from Russia is okay with being a representative of that view, that's fine.  I want to make sure that it's clear he's representing himself as far as I understand, and this should not become personal.  That said, I see several people who want to come in.

If you could introduce yourself first, and then I add you on the list.  I saw Vidushi also wanted to come in.  Over to you.

>> AUDIENCE:  Hello.  My name is (?) from Japan.  My question is, how much did you explain to participants the individual concepts such as illegal or hateful?  Because I think those words are quite vague: some people consider something hateful, but other people might not.

Maybe piracy is illegal but maybe (?) is not.  Also, I'm very concerned that there's a danger of labeling (?) as disinformation attempts.  I'm not sure how much the participants understand the vagueness of those concepts.  How can I say it?

I'm not fully convinced by your briefing materials, because the participants might misunderstand some of the concepts.  I want to know how you make sure that the participants understand.

People say the devil is in the details, so how much do participants understand the concepts.  Thank you.

>> VIDUSHI MARDA:  I'm responding to this because it's about the materials.  To the extent possible, we used the language from the laws themselves that we drew things from.

Obviously, on the survey, I'll say about hateful content that we went back and forth about whether to include a definition of hateful content there.  When we looked into the laws that have actually used that concept, of which there are a couple, they're not always very clear about the meaning.  So in evaluating what people think about them, we were slightly more vague.

Maybe that's not the right choice, but that was sort of the motivation behind that choice.  For other things, in the materials for when the deliberation happens, we tried as much as possible to mirror the language from whatever policy or white paper we were drawing on, in part because, since this is focused on government regulations and how governments might respond, we were trying to use the language that the governments used.

I'm open to the idea that there should be a different approach than the one we've taken so far, but that's how we did it practically speaking.

>> MAX SENGES:  Thank you so much.  The gentleman with the suit wanted to come in.

>> AUDIENCE:  Thank you.  Steve with NetChoice.  I think the discussion about business model reform is not constructive if conducted at the high level of attacking a business model that has been a two‑sided market since the beginning of radio, newspapers and magazines, where one side is to attract an audience.  With the audience there, the second is to attract advertisers who want to reach that audience.  That's been the model since television, radio and broadcast have been around.

Attacking that model itself as promoting disinformation is probably not going to be productive.  It would make more sense to better understand what the business is doing when it's maximizing profits and serving that model.  I think Max made the point about maximizing quality versus watch time.  Neither is actually what the company does.

It will maximize the ability to keep the audience engaged, not have them repelled by content and ads, and make sure that advertisers feel safe spending their money putting ads on the platform and not next to obscene material.  To marry the two sides, a platform will come up with community standards that try to strike a balance between both sides and adjust that over time.

I think if you ask for transparency of algorithms, you'll be really bored.  The algorithms that determine what shows up next in my feed or the next video on YouTube are based on associations with things my friends have liked.  If I watch this video, the next one I see would be one that others who watched this video also liked.

So it's about sequencing feeds that are designed to keep the audience engaged, but if they end up repulsing the audience, the advertisers will go away.  So I don't understand the notion of how to reform business models, and I would invite those advocates who believe the business models are broken to give us some more information as to what you desire.  Thank you.
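
A toy sketch of the co-engagement sequencing Steve describes ("people who watched this video also liked that one") might look like this.  The watch histories and the simple co-occurrence count are illustrative assumptions, not any platform's actual recommendation algorithm.

```python
# Toy illustration of "others who watched this also watched that" sequencing.
# Histories are invented; the logic is a bare-bones co-occurrence count.
from collections import Counter
from typing import Optional

watch_histories = [
    ["video_a", "video_b", "video_c"],
    ["video_a", "video_b"],
    ["video_a", "video_d"],
    ["video_b", "video_c"],
]

def next_video(current: str) -> Optional[str]:
    # Count which other videos co-occur most often with the current one.
    co_watched = Counter()
    for history in watch_histories:
        if current in history:
            co_watched.update(v for v in history if v != current)
    return co_watched.most_common(1)[0][0] if co_watched else None

print(next_video("video_a"))  # most co-watched with video_a (here: video_b)
```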

>> MAX SENGES:  Thank you.  Really interesting point.  I do think it's right that the quality of content and of engagement determines what value you get as an advertiser.  So the market incentives at least are right, but that doesn't mean the content dynamic isn't problematic and doesn't need to evolve.

I'm with you personally on the possibility to evolve rather than to change the business model.  Vidushi, you wanted to come in?

>> VIDUSHI MARDA:  I'm happy to do that.

>> MAX SENGES:  Antoine hasn't come in yet.  I waited for you because this is a change of tone: we're talking more about democracy and deliberation and bringing citizens into these debates.  So I'd suggest we move there for a little bit, but in the meantime let the other topic develop in your mind, and you're welcome to come back to it later.

>> ANTOINE VERGNE:  I used the fist because I knew we would be kicked out, and I wanted to give an alternative.  We have worked on citizen participation for over 20 years.  In 2018, 2020 and 2022 we are working on engaging ordinary citizens on Internet Governance and the future of the Internet.

This year we have done a similar approach in five countries around the world with around 300 randomly selected citizens across all the countries.  These countries were Germany, Japan, Brazil and the refugee camp at the border between Bangladesh and Myanmar.  We had groups of citizens meeting for a full day of discussion and deliberation based on information, so the same approach.

We had three big topics: digital identity, disinformation and governance of the Internet.  The groups were 50% women and 50% men.  We're happy about that diversity, also in terms of connectivity: 20% of the participants had no Internet access, and in terms of occupation it was a very diverse group, with something around 20 farmers and peasants as well as white‑collar workers.  We had a very broad representation in that sense.  The exercise we asked them to do on disinformation was the following.

We first asked them to reflect on where they get their information and how they rate that information.

>> MAX SENGES:  We have to keep it two minutes, Antoine.

>> ANTOINE VERGNE:  Their exposure to disinformation versus the global exposure.  That's a topic we had before, asking people to position themselves in this field, and that was one question, in relation to how they feel the global exposure is.  That was quite interesting.

We asked them to work on tools to tackle disinformation and the model of governance for that.  On the tools, education came first, and education was also partly about me myself, what I have to do to tackle disinformation, so the role of the individual.

Then came technical tools, algorithms and system downgrades, and third came regulations.  For the people, regulation was not the priority, but in the survey, when we asked who should take care of this, they had strong support for co‑decision.  They wanted to have co‑decision between all stakeholders.  At the end we asked: if someone were to have the last word, who should that be?  States came first, followed by private companies.

That was an interesting answer on governance.  I wanted to react to something you said about people enjoying going through that process.  They have no idea about the topics.  They come in, they discuss it with fellow citizens, and they go out having enjoyed it.  This has a transformative role, and I think it needs to be applied at the IGF.

This is what happens, you start discussing and get to the common ground.  That's very, very important.  That's why we do that.  Thank you.

>> MAX SENGES:  Thank you, Antoine.  For those who might not know this, IGF has another means to bring in the voice of the people.  The German government has invited parliamentarians from around the world to come here.  For the first time we have 120 parliamentarians from all over participating in our conversations.

I think that's a really, really good thing.  Internet Governance is something we all experience on a daily basis.

Vidushi wanted to come in.

>> VIDUSHI MARDA:  Thursday at 4:00 we have an open forum where we present the results.  If you want to hear more about the results, it's Thursday at 4:00.

I wanted to pick up on a couple of things that were said in the discussion that risk oversimplifying very complicated problems.  The first is around censorship and free speech, which I believe you brought up.  I don't believe free speech is an absolute right.  It is a right with reasonable restrictions, which must be provided by law, be necessary and have a legitimate aim, by which you restrict speech.  I think the current problem is that when you have community standards or a private company determining what is legitimate speech or not, there's no clarity in the process.

A big example is when Mark Zuckerberg went before the Senate and said any speech that makes other people uncomfortable will be taken down.  Uncomfortable speech is legitimate speech and is protected by freedom of expression laws around the world.

Understanding or rather reckoning with the texture that comes with legitimate speech that we may not like but needs to be kept up versus illegitimate speech restricted by a procedure established by law internationally that is beyond any one body.

The second thing about the business model to quickly respond, I don't think that at least ‑‑ I don't want to speak for the other speakers who aren't here anymore, but I wasn't saying we need a new business model because that's problematic without a viable solution.  I agree that isn't constructive at all.

I think questioning the business model is crucial, and as Steve said a little earlier, if we're just going to say show us the algorithm, we won't get anything.  No human will make sense of it.

Understanding that these systems are socio‑technical systems, understanding why the system was designed, who it was designed by, what it's optimizing for, what weights that particular system is responding to, is really important.  I will end with this example.  I think that's what Facebook didn't do in Myanmar, where we saw the genocide happen.  A system was thought of as a technical system in isolation without considering the nuances of the social context.

I think the business model there was to optimize for something in an efficient way when it should have been more deliberate.  We can talk more.  I don't want to take up too much time.  I hope that kind of answers your question.

>> MAX SENGES:  Thank you very much.  Dylan, you wanted to come in.  Please introduce yourself.

>> DYLAN SPARKS:  I'm Dylan Sparks and I work at Luminate.  I want to build on the points Vidushi made, which I think are important.  I think context is very important with the proposals.  The harmful or illegal content model Germany adopted is probably going to be quite specific to that legal system and that country's history, in the same way that the duty of care model in the UK will have particular appeal because of precedent, which might not exist in other countries.

I think it's important to remember that this isn't the first time mass communication platforms have been regulated, starting with the telegraph and newspapers and what can or cannot be said on live news or radio.  Sometimes with the Internet and digital, everything seems new, but in many ways we have precedent for the ways we make sure they serve productive functions in societies and specifically in democracies.  But I really thought the briefing materials, even for people like us who work in this field, were a good reminder of the different models and options and the pitfalls of each of them.

I think it just kind of reinforced ideas that many of us have, that this is not one size fits all.

It's not a global solution either, especially with legal systems.  Much happens case‑by‑case.  I think the point raised by the MP from Ghana is an important one, because I think there are inequities with access to platforms.

Some states have much more influence in trying to push for things like political ad transparency.  If those MPs took something to Facebook, it would obviously get a different response than British or Canadian or American MPs would.  I think all of these issues are important to think about as we move forward.

>> MAX SENGES:  So thank you very much.  I want to invite everybody who is interested in the subject matter to check out the website where the materials will be available to consider and to evolve as a public good basically.  You know, just to respond very quickly to the point about politicians and different legal systems having different access to companies, that's why I do think we need an international solution.

You know, it cannot be nation by nation, culture by culture.  What I think makes much more sense is to consider each online domain a place.  When you go into a hip‑hop club, a certain language is expected.  When you go into a classical opera, a certain language is expected.

I think we should not ‑‑ in Saudi Arabia it's different than in London, et cetera.  So I think we have to do both.  We have to accept this plurality and this diversity of place as well as an international common infrastructure to discuss these things.  This should provoke some reactions.

I have you, you and you.

>> SPEAKER:  I have a very quick response, and then we'll go.  One thing to keep in mind is NetzDG or these laws may be place‑specific, but they also get copied.  Thinking about them in isolation risks us missing some of the spillover, and they often get copied in places where the context is quite different and we have to think about that.

>> MAX SENGES:  I think that all of you want to come in directly on this point.  I invite you from the back, you were first to come to the front to the microphone so we can hear you.

>> SPEAKER:  We have three minutes I'm told.  Please be very quick.  We have a couple other people.

>> AUDIENCE:  I'm Carmen.  I don't represent any organization, but my background is in ( indiscernible )  I want to add a quick comment and a question at the end on a working level.  I feel like the discussion here is very much high level, and there are points regarding the platforms: how would they have the knowledge or authority to decide whether content is harmful or illegal?  There have been points made on the business models of the platforms.  I wanted to bring a very working‑level point regarding what the role of the press would be.

We're in a multi‑stakeholder dialogue here.  I know that some platforms already have news partnerships with different journalists and media when it comes to disinformation.  You are selling a car ‑‑

>> MAX SENGES:  We want to give time to other speakers.

>> AUDIENCE:  Right, to make it very quick.  The platform companies are not journalists, which means they can't do the journalistic work of fact finding and fact checking of videos: do we have the full video, who shot whom first, the beginning or the end?  So my question is, how would you consider inviting journalists or the press to take on the work that tech companies just can't do?

>> MAX SENGES:  Fascinating question.

>> AUDIENCE:  Will that pose another problem to the journalism industry, basically increasing the reliance on platforms?

>> MAX SENGES:  I hope not.  I hope journalism goes the way it does.  Let's go to the other two speakers to hear from as many as possible.

>> AUDIENCE:  I'm from the American University in Cairo and Berlin.  First, I want to thank you for all the valuable inputs, and second, my question is: okay, what if authoritarian regimes or governments use regulations on the Internet to put more oppression or restrictions on freedom of speech?  How can we protect ourselves from them?  Thank you.

>> MAX SENGES:  Very good question.

>> COLE QUINN:  Cole Quinn from Microsoft.  I'm wondering what your plans are for the outputs of the study?

>> MAX SENGES:  As said, the three outputs we are seeking are to understand what an informed community would prefer, to see the delta between the before and after, and to have the briefing materials, which we have so far.  The next two steps are to come.  Everybody is more than welcome to come up to us after this session and collaborate.  I think it takes a village to get this right, and the more people look at the materials and spread the word, the better it is.

>> MEGAN:  I'm certain we're going to be kicked out now.  I want to thank you for coming and participating in such a lively discussion.  It was great to have all of you here.

( Applause )