The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>> The Council of Europe has been active since the beginning: there is the Budapest Convention, which has over 60 states party to it, and also the 1981 convention on the rights of individuals with regard to automatic processing of personal data. So we are trying to work with that kind of global sphere in mind, and we are now working in a number of areas, including artificial intelligence.
>> We have the Digital Media Research Centre, where we study the internet.
>> I'll give you the framing of the session and then I'll hand it over to Nicholas, who will explain how we would like to achieve what we are going to discuss here. As you know, the title of the session is "AI will solve all problems, but can it?" When we first started thinking about it, we were very much aware of the presence of illegal and harmful content online, and of ongoing private and public initiatives to try to tackle it, which address many different sets of problems, from child pornography and exploitation to hate speech. A couple of months ago, we were referring to Mark Zuckerberg's testimony and how he talked about AI being the solution to our problems.
Just today, a partnership between France and Facebook was announced, looking at how Facebook tackles hate speech, and we have a little bit of an understanding of it. The speech yesterday also opened the door a little to intermediary liability and platform neutrality conversations, which I think are very relevant, and we are very happy they can inform this session this morning.
So I see all of these initiatives turning into legislative proposals quite quickly. To name one, the European Commission is working on a regulation to tackle terrorist content online. And on the other side of the equation, many states are working on artificial intelligence strategies and proposals. And here I have to improvise a little bit: available here in hard copies, but also online, we did a human rights comparison of all the member states' strategies on artificial intelligence, and also some regional ones.
That's the framing, content-wise.
The question here, what we would like to achieve, is to work out how artificial intelligence and similar technologies can or cannot solve content moderation from a human rights perspective. And I'll hand it over to you.
>> First, I want to again echo the gratitude for all of you coming, not only on a Tuesday morning, but with a willingness to participate in a session that is a little bit different. One of the things that really struck us was a frustration with panels where we only really get the perspective of a few preselected people at the front.
That's a shame given the expertise on these topics in the room, so we'd like to focus on the end result: we're going to produce from this session a crowd-sourced level of confidence that AI can address the three particular content issues we're going to be talking about in a way that respects human rights.
In order to do that, we really implore you, and beg of you, your active participation. We're working a little bit against the architecture of the room, which is not well set up for small group discussions. But the basic plan, and I'm again asking for your patience and your innovation here, is that we're going to divide up into three separate groups.
One group on hate speech, one group on disinformation, and one on terrorist and extremist content. Within each group, we'd like to spend half an hour to 40 minutes, essentially, coming up with a list of the pros and cons of why we think AI may or may not be able to address some of these content issues.
What I want to really stress here is that we want to use the expertise in the room to develop not only an answer, but a nuanced picture of what machine learning and other technologies are likely to be useful for, and any sort of risks they pose. I'm going to ask you also, within your groups, not just to focus on the content of the particular issue. We want each group to provide, in an abridged debate format, a two-minute report on the pros you've come up with and a two-minute report on the negatives or threats you've come up with in that particular domain.
Even if we have to play devil's advocate a little bit, we want to try to capture the breadth of the concerns, and we'll be recording and documenting this, so that we can present it back to the group. We will then have a voting session, whose results we will also capture, to identify the spread of confidence that we have in these three particular domains.
We think this is likely to be a useful outcome, better than just a normal report from a panel session: a reflection of the sense in the room right now of how likely we are to be able to respond to the challenges we heard in the keynote address last night with the current state of AI technologies. And hopefully that will inform a more nuanced policy debate. Having said that, it's now ten past 9:00. We're going to break into three groups, starting on the left here with hate speech.
>> The middle?
>> Yes, I guess on the left-hand side, my left. Over there on the far wall, more towards the back, the group on disinformation. And over here, with this particular group, if you want to put up your hand, and over there, the group on terrorist and extremist content. So thank you for your participation, and let's try to see whether we can come up with a good list and outlook.
Please get up and move. One more point before you do, I was just reminded: out of this, we'd like you to pick two rapporteurs, people able to report back to the entire group, in two minutes each.
(Many speaking at the same time)
>> Can you hear me? Very good.
We're going to, actually, because of the room and the noise level, have all rapporteurs speak into a microphone. We're going to start with the hate speech group and go through to the extremist content group. You'll have two minutes for the pro argument and two minutes for the con argument, and then we'd like to open the floor for a brief five-minute question and answer session with the group. Then we'll go to the next group. The plan is to have about thirty minutes of this.
Great, I will gladly hand the microphone to one of the two rapporteurs from the hate speech group.
>> AUDIENCE MEMBER: Hello, I'm going to enumerate the negative issues, which are basically six key points. Sorry, they are the positive ones.
[ Laughter ]
I have all of them here. The positives: number one, AI will identify hate speech faster and on a much larger scale than human-based moderation. AI can help identify content related to hateful speech that is much more complex. AI can help to filter content. Then, society can provide input in terms of how to characterise hate speech, and AI can be used to train and educate young people around the risks of hate speech. And this one is problematic: AI can help to reduce the number of people needed to detect hate speech, which is good for companies but can potentially lead to higher levels of unemployment.
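As a rough illustration of that first point about speed and scale, the sketch below shows the kind of text classifier such moderation pipelines are typically built on, assuming a scikit-learn-style setup; the example posts, labels and review threshold are invented for illustration and are nothing like a production system.

```python
# Minimal sketch of an automated hate-speech flagger (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 0 = acceptable, 1 = hateful.
train_texts = [
    "hope you all have a lovely day",
    "people from that group should be driven out of the country",
]
train_labels = [0, 1]

# Word and bigram features plus a linear classifier: crude, but it scores
# posts at machine speed rather than human-moderator speed.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

def flag_for_review(post: str, threshold: float = 0.8) -> bool:
    """Route a post to human moderators if the predicted probability is high."""
    prob_hateful = model.predict_proba([post])[0, 1]
    return prob_hateful >= threshold
```

The only point the sketch makes is throughput; the definitional and contextual problems raised throughout the session remain.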
>> So those were the pros?
>> The negatives were slightly more focused around the three different phases that we identified in the process. The first one, as mentioned, was around just identifying what is potentially hate speech; that was the one pro. The two layers that follow are assessing and flagging whether something is potentially hate speech. To some extent that could be done, but we should have some human involvement until the machines become better at language processing; that's a maybe-negative. And then we said there would probably be a need for human involvement.
Beyond that second phase in the process, because the definition of hate speech evolves, the AI will probably need to evolve as well. And we had a question about whether humans will actually be able to do this proportionality test better than machines.
Another negative we talked about was whether platforms will start adopting strict liability thresholds, and how we weigh these sorts of questions around the thresholds of the balancing exercise: whether machines are able to actually manage that sort of proportionality test, or whether humans can do it better. And then, again, there is the kind of content that human moderators are exposed to when they have to evaluate it.
We had a broader philosophical question around the transparency of companies and why we are not holding them more accountable to show how these things work, and then a constructive recommendation around the need to involve broader society in defining hate speech and designing these systems. And something I forgot to mention at the beginning was the need to step back and look at the starting point, and at how different platforms are at very different levels when it comes to using AI to identify hate speech.
>> Anybody else from the group that would like to add something?
>> The last point I raised was that after content has been flagged by a human or otherwise, AI might be able to build case files on perpetrators, so as to be able to refer them to law enforcement; it is more about follow-up and liability issues.
>> So, enforcement benefits of AI. Any questions for the group? Anything that was maybe not considered, in your view?
>> I didn't hear anything about, let me phrase it, culturally dependent algorithms. What is hate speech in one country may not be hate speech in another part of the world.
>> I think we tried to, and I might have skipped over it, but we talked about the definition of hate speech and how the standard of hate speech keeps on evolving. Part of that depends on the region and the sort of language, and even humans at the moment aren't always able to identify it within one country.
Anybody else? Yes, please?
>> AUDIENCE MEMBER: I really would hate to leave this particular group without a better understanding of how the group defined hate speech. I'm really puzzling over it because, even though I come from the United States where we have a robust freedom of expression, freedom of speech, I still personally am very conservative, and I think I would know hate speech if I saw it. But I don't know how it would be generally defined within my country or within the world.
So I'd appreciate some helpful hints.
>> We felt it was somewhat out of our (inaudible). Even though our group was terrific and the discussion was vibrant, if we had been able to answer that question in 30 minutes, we would have been a terrific group indeed. We did take up, I think, some of the balancing questions around how we answer that. To your point, it's a question that we actually can't answer at the global level; it has to be answered within particular communities.
So it's not quite possible to say what hate speech is internationally. Also, the definition that we arrive at balances a number of human rights: it balances the rights of freedom of expression and access to information against the rights to privacy, human dignity and non-discrimination.
So we want to achieve an information environment where people are able to access the content they want without it being buried in a sea of filth. But we also want, you know, the content that people are putting out there to be available without being over-censored by the platforms.
It's not really an answer from a content perspective, but more, kind of, the concerns we have identified around how the definition should be drawn. And we did note from both of our presenters that there are interesting opportunities for companies and platforms to involve civil society groups at the local, regional, national and international levels in arriving at and realising those definitions.
>> Usually, one important parameter is actually looking at the victim, the victim's point of view. That's usually where you have to start when you look at hate speech, and then you have different parameters. But you have to focus on the victim's subjective side, and that can be difficult if it's not a single person; it could be a whole community. But that might be a good starting point.
Thank you, I think that about closes it for this group.
>> Yeah, so I think this idea is a bit crazy, but what I was thinking is: if you have classifiers and models that can classify groups with high prediction accuracy, it becomes very clear that when you have the (static on the audio) you have a higher tendency, or you are more susceptible, to hate speech; you have a sensitivity to hate speech. That is the idea I was insisting on here.
>> Thank you. And I'm going to abuse my position for a moment and mention the courts, which have not been mentioned at all, and the identification of illegal content. What is crucial in most European legislative frameworks is whether there is an incitement element to the speech, whether we are inciting people to a crime. That's an important element in deciding whether this is content that deserves to be taken down, that should be identified as illegal, or not. But thank you very much to this group. And you can take your pick in terms of the order of rapporteurs speaking. Thank you.
>> Hello, I'm reporting on the negative aspects of AI deciding what is misinformation and what is not. Basically, the first thing we came up with is that AI cannot judge context, cultural or otherwise: an ironic or satirical article might quickly be labelled as misinformation, even though it is not. Languages are still a huge barrier for AI, so cross-referencing articles in different languages would quickly become impossible.
AI is still not very good at translating from one language to another. Another big problem is that AI cannot self-reflect: if it wrongly labels something as misinformation, it cannot fix that mistake.
Also, AI will probably label new ideas as misinformation. It uses data to learn, so new ideas and new ways of thinking would be something it's not familiar with. And in essence, truth is very subjective and unstable: everybody has their own truth, and AI cannot account for that.
The last point raised was group profiling and privacy, classifying people. If anybody from my group would like to add to this last point, please do; I didn't quite get the gist of it completely.
>> The point I made was in the field of education. AI also has the potential to cluster children by profiling. We talk about profiling when we are talking about adults, but when we are talking about children, I think the high risk is of clustering them, and by that the ultimate risk is predetermining their rights, in a way.
>> So on the itemisation point: even if you remove the person who is quoted and just state the words, it's easy to know who the person is. So we thought about standardising the data in terms of the speech and in terms of, like, what issue it addresses.
>> So we were looking at the positives, and I have two or three points. The first one is sort of the most important point: what is the alternative? We are entering a time in history where there is going to be too much information to be dealt with by humans. There's no way we can have enough humans sitting behind the screens. There is basically no alternative; it is something we're going to have to embrace in terms of disinformation.
We took the view in our group that humans will still come into play; we're not going to have a system that's completely AI, where AI is making the call. Humans should, and always would, be part of this loop.
The second point we had is that AI provides a lot of things that humans can't: breadth, depth and speed. Just the amount of data. We were talking about YouTube and how many hours and hours of video would have to be watched; it's just not possible for humans to watch that amount of video.
But a point that was really interesting in our group was looking not just at the content of the article, but at where it's coming from and who it was published by. What is the point? What is the context? That's not something a human sitting behind the computer would really be able to see if they have, you know, ten seconds to say whether this is misinformation or not. But AI could; it could look into the depth of the article. And then the last one was speed, of course.
With all of these things around disinformation, getting it taken down fast makes such a big difference because of how many views and how many shares it gets. You can counter disinformation, but most of the time that proves not to be effective at all; the counter-message is not seen and shared as widely. And that gets back to this idea of depth.
Something AI could do is find out who is sharing this and who the retweets are coming from. If you can trace the depth of the article, you can find the networks and the origins of the information, which will be important. As we were saying, a lot of the information is shared on our Facebook feeds by people we know, our friends, not people deliberately trying to spread disinformation; they're just people who thought the meme was funny or something like that. So you could use AI to go back to the origins and take down the people who are creating the content, and not the people who are merely sharing it.
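As a purely hypothetical sketch of that "go back to the origins" idea, the snippet below walks a small share graph from any post back to the account that first published the content; the data structure and field names are invented and do not correspond to any platform's real API.

```python
# Hypothetical share graph: each post maps to the post it was shared from
# (None for an original upload). Data and names are invented for illustration.
shared_from = {
    "post_d": "post_c",
    "post_c": "post_b",
    "post_b": "post_a",
    "post_a": None,  # the original upload
}
author_of = {
    "post_a": "origin_account",  # the account that created the content
    "post_b": "friend_1",
    "post_c": "friend_2",
    "post_d": "friend_3",
}

def find_origin(post_id: str) -> str:
    """Walk back through the share chain to the account that created the content."""
    while shared_from.get(post_id) is not None:
        post_id = shared_from[post_id]
    return author_of[post_id]

print(find_origin("post_d"))  # -> "origin_account", not the friends who re-shared it
```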
And the final point: AI may not need to delete information at all. One option is the sorting algorithms and what you see on your feed. Is this the most emotionally charged content that's going to get the most likes or retweets, or is this the most credible content? AI could be used to promote content that is more credible, find sources and people who have higher trustworthiness, and rank that content higher in the feeds instead of the content that is most likeable.
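A minimal sketch of that last idea, re-ranking a feed by source credibility rather than raw engagement; the posts, scores and the 0.7/0.3 weighting are invented purely to illustrate the trade-off being described.

```python
# Hypothetical feed re-ranking that favours source credibility over engagement.
posts = [
    {"id": 1, "engagement": 0.95, "source_credibility": 0.20},  # viral but dubious
    {"id": 2, "engagement": 0.40, "source_credibility": 0.90},  # quieter but credible
]

def rank_score(post: dict, w_cred: float = 0.7, w_eng: float = 0.3) -> float:
    """Blend credibility and engagement; weighting credibility higher demotes junk."""
    return w_cred * post["source_credibility"] + w_eng * post["engagement"]

feed = sorted(posts, key=rank_score, reverse=True)
print([p["id"] for p in feed])  # -> [2, 1]: the credible post is promoted
```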
Right?
>> Any question from the group?
>> AUDIENCE MEMBER: I thought your point about not just looking at the content, but looking at the track record of the people posting was important. Did you have any specific cases where that was being done? Any examples we can point to and say, yeah, that's working?
>> So, I don't have much background in AI, but in disinformation I know this is what's being done by humans right now. If we take the same processes that are being done by humans right now and use AI to scale them, then it could be a possible solution. In the recent case where Twitter took down accounts from Saudi Arabia, those accounts were posting some political content and some non-political content, but it was by looking at their track record that you were able to say these are false accounts.
>> Thank you. Just a brief question, a follow-up on that interesting point about the promotion, or demotion perhaps, of less trustworthy speech. Did you get around to discussing how those systems are exploited by really quite sophisticated actors who are spreading disinformation? I'm thinking particularly of one of the recent reports from Data & Society showing that the algorithms we built to sort content were not really resilient enough to tackle systematic exploitation by people who are really keen on spreading disinformation and hate. I'm wondering whether you talked about that other side of those content curation algorithms.
>> It's not like you reach a static end. It's a continual arms race. And you're going to have to evolve it and make it better.
>> Great. If there's no one else, we're going to come to the last group, please. Thank you very much to this group.
>> All right. So our group was on extremist content. We didn't do the best job of keeping to the pros-and-cons format, but we had a good discussion around some of the issues with extremist content, which I think speaks to the challenges of AI in dealing with it. In many ways, we had very similar issues and challenges to the hate speech group.
One of the issues is kind of definitional, once you get away from the worst cases of extremist content. We talked a bit about things like beheading videos. What is extremist content? Is it the degree of violence, or that it causes harm to somebody, or that it goes against the values of a particular country? AI would have challenges in identifying the degree of what we would classify as extremist content, and where that bleeds into hate speech and even disinformation.
We also talked about some of the jurisdictional issues that brings up. Not only are the laws of each country different, but the policy and philosophical concerns of each country are very different, and there's a bit of an EU/US divide, I guess, sometimes around what ought to stay up and what ought to come down. So, again, with the beheading videos: some people would be of the view that where content helps to inform people about things, it should stay up in some circumstances, and other people would think that content should come down in every circumstance. AI tends to lend itself to hard-line programming and not so much to that nuance.
And we also had concerns around transparency and where the AI and algorithms sit within companies, and who is making the decisions about what stays up and what comes down.
>> I work for Cloudflare, and we protect about 12 million websites from DDoS attacks.
And some of our customers are pretty strange and extreme people. You may have heard about the case with the Daily Stormer, where we actually terminated a customer for their speech, although, truth be told, we also terminated them because they were doing something that looked like libel and fraud and other things.
But in general, we don't want the hackers and the people launching DDoS attacks to be deciding what speech is allowed online, so we do protect a lot of different players. I'm very glad to be part of this discussion.
We had a very multistakeholder group: industry, government, a lot of academics, international organizations. The one thing we were missing is that we did not have any Asian representation, and looking around the room, I don't think we have been covering that very well. Our group only had two Americans, so that perspective didn't get all that much attention.
But I did pick this group, and I agree it's good not to have American voices drowning out everybody else. I picked this group because I thought it was going to be a lot easier than the other two groups. It wasn't. The reasons I have problems with just relying on artificial intelligence as the magic solution have already been mentioned: it's almost impossible to figure out the intent of the person posting the content.
Three years ago, we got attacked mercilessly by a group that thought we were ‑‑ well, we were told: look at this website, it's terrible, it's got a celebration of jihadist attacks. It turns out it was a Kurdish website, and it was showing the atrocities committed against the Kurdish people.
Exactly the opposite of what these people thought it was. Likewise, several of you have mentioned the difficulty of deciding what's legal and what's not legal and how that varies from place to place. We run a global network; the whole reason we have our network is to provide content to everyone. So it gets very hard for people trying to do that to subdivide the internet. The other point is that machine learning has a particular role to play, which is what you said: we don't have to just look at the content and try to decide what's good and what's bad, what's allowed and what's not. We can try to identify who is behind these sources.
>> So I mentioned in our discussion a wonderful site, which everybody should write down: makeadverbsgreatagain.com. It's even simpler than the Google interface. You type in the handle of the Twitter account that you think might be a troll, and it gives you a rating from zero to ten on the likelihood that its behaviour indicates it's a troll.
The reason I think that's a good example to share is that it's not about having the websites and the online service providers decide; it's about each of us being informed about what we're seeing and dismissing the stuff that we don't want to see. In the future, I think it's going to be on the online content companies to help give individuals better tools that allow them to make their own rules.
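As a purely hypothetical illustration of the kind of zero-to-ten behavioural rating being described (this is not how the site mentioned above actually works), such a scorer might combine a few account-level signals; the features, weights and thresholds below are invented.

```python
# Hypothetical troll-likelihood score from 0 to 10 based on account behaviour.
# Features, weights and thresholds are invented; not any real service's method.
def troll_score(account: dict) -> float:
    score = 0.0
    if account["account_age_days"] < 90:
        score += 2.5  # very new accounts are a little more suspicious
    if account["posts_per_day"] > 50:
        score += 3.0  # superhuman posting rates
    if account["retweet_ratio"] > 0.9:
        score += 2.5  # almost never posts original content
    if not account["has_profile_photo"]:
        score += 2.0
    return min(score, 10.0)

suspect = {"account_age_days": 30, "posts_per_day": 120,
           "retweet_ratio": 0.95, "has_profile_photo": False}
print(troll_score(suspect))  # -> 10.0: behaviour alone flags the account
```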
We do that at the network layer: we allow our customers to decide which countries they don't want to see traffic from if they've been getting a lot of attacks and malware from particular countries. But the thing that came out in our discussion, and this is probably the most important conclusion, was that it's not whether AI can do magic or not; it's whether we ask AI, crowdsourcing, little data and other tools to do the right thing.
And I think there's a lot of concern that we've been asking the technology to do impossible things. So let's design the system properly. Let's have lawmakers who understand what can and cannot be done, and not believe there's one technology that's going to be our magic bullet. So thank you very much; we had a great discussion. If you want the full list of case studies, we'll put that together in a summary. We spent about half of our time talking about case studies as a way to illuminate this issue.
>> MODERATOR: Thank you very much. And we have our question right here.
>> AUDIENCE MEMBER: Thank you very much for mentioning intent, because it made me think about the proposal for the terrorist content regulation in the EU. It is an interesting point that they left out the word "intent" from the proposal, as opposed to what's in the terrorism directive. And of course, because of that lack of intent, everyone reads the text as an encouragement to use content monitoring tools for detecting that content. I haven't heard any of the groups address the privacy concerns very specifically, and we know that some of the companies are already doing this for child exploitation, for instance. I wondered if this came up in any of your discussions, because I heard it here but not too much elsewhere.
>> There were several times when people said, hey, it matters why it's there, for example showing the bad speech in order to debunk it. If the AI cleans up all of the bad speech, the people doing that bad speech will go out to their followers and say: look, what we're saying is so dangerous that Google and the American government and the EU are all suppressing us. And that is a very powerful way to build your cult. I think that's ‑‑ thank you for mentioning that.
The other thing I did not mention, and we should have done this at the start: we did about three or four polls. Thumbs up, AI is working; thumbs down, it's not; or sideways. Time after time, our polls came out one quarter up, one quarter down, one quarter sideways, and one quarter confused. And that is, I think, where we are in this debate.
>> MODERATOR: Okay, thank you. Before we come to our group thumbs up or down: are there any other questions, one more question to this group, from the plenary? Or do you have a question to another group? Yes, sorry.
The point is now for someone to ask a question to you. Yes, please.
>> AUDIENCE MEMBER: This is probably a question for the first group that reported out. May I ask it? I'm puzzling over cyberbullying. In the United States, we've had situations where young people have committed suicide because of bullying on social media. The parents may go to the other parents and say, stop it, if indeed the child speaks up to say anything, or go to a school system that is recalcitrant to get involved, and court suits occur after the poor child has committed suicide, and the courts are really struggling with this.
Is there any role for technology in addressing cyberbullying? Or is this a situation where it's just one-off and the victim has to deal with it, or not? I don't know, but I mean it sincerely as a question to this entire group.
>> I think, if I understood correctly, it would have to be more transparent who bears the responsibility for what. I mean, sometimes it might not be enough to remedy bad speech with more good speech. Perhaps we should have more transparent regulation as to where the responsibility should be placed.
And that's a bit unclear today, perhaps. We have to be clearer about who is in charge and who is responsible. The machines, perhaps, will be able to help with that, but the accountability does rest with the persons in charge of the system that might generate the hate speech, for instance.
As an example.
>> Thank you, just a quick response. I hadn't thought about this, but I can imagine that there are probably fairly familiar patterns of behaviour in terms of cyberbullying that a machine might detect, so there might be potential there to harness pattern recognition tools that would help to identify it. But when you're talking about minors in particular, it is important to remember that the state has a more benevolent capacity to protect children, because they are not fully autonomous, and so there is a stronger case for state intervention in those kinds of situations, because minors cannot protect themselves. I raise that as something to think about in terms of governance.
>> Thank you. And I'm sorry, we have to stop this here. Cyberbullying is a specific form of crime, really; it's difficult to group it as hate speech, I think. We are now handing over to Yen for our general vote.
>> You've all been very flexible today, but you're not off the hook yet. We said thumbs up, thumbs down, confused: we actually want to do that right now as well, as a vote of confidence in AI's ability to solve our problems, and we would like you to engage in one final form of gymnastics in here.
You don't have to get up on your tables, but we want to do the following. Do you think AI can solve hate speech and be respectful of human rights at the same time? We want to ask the same thing about disinformation, and the same thing about extremist content. So if you ‑‑
>> Is this 2018 or 2035?
>> Immediate future.
>> So what we want you to do is line up along this wall: zero percent confidence being that corner, 100% confidence being that corner. We will take a photo. We will not share it; it's just for internal reporting purposes. Can AI solve hate speech? Please ‑‑