IGF 2017 - Day 2 - Room XII - WS215 Selective Persecution and the Mob: Hate and Religion Online

 

The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> GAYATRI KHANDHADAI:  Good afternoon, everyone.  I hope you brought your lunch with you.  It's against the rules to eat in the UN, but if you're clever about it, you can get away with it.  You have to smuggle it into the room and eat and drink very carefully.  I hope the staff are not listening to me.  I'm the stand-in moderator for this workshop on Selective Persecution and the Mob: Hate and Religion Online.  My colleague could not come; she did not get her passport and visa matters sorted out in time.  So I'm here for her.  I'm a member of staff, and I live and work from South Africa. 

I want to start by briefly introducing our panel and saying a little about the topic we are trying to address.  On that topic, I think we're all aware that hate speech online is spreading at an alarming rate, and it has become a topic of concern for activists, for journalists, for secular voices.  It's a concern for policy makers as well, and often the responses to it have taken place in a way that fails to address the problem but reduces the openness and freedom of speech that we actually want to have in societies where we can challenge and criticize hate speech. 

So this is a topic that has been dealt with to some extent before, very much in the human rights community.  It is also a topic that requires a different approach, so the IGF is a very appropriate venue for it.  Just to tell you about our speakers: we'll start off with some research that has been done for my organization by Gia, a researcher based in Malaysia, and she'll talk about this particular problem and the concept of "let the mob do the job".  She'll explain that when she presents the research. 

We then have, just jumping here, on my left, David Kaye, who does not need much introduction, the UN Special Rapporteur on freedom of opinion and expression, talking specifically about what can be done and how we can respond to this. 

And Grace will be joining us soon.  Grace will talk about hate speech online and elections, and how that has impacted mobilization and speech in Kenya. 

And then on the far side we have Carlos from Brazil.  He's the director of the Institute for Technology and Society, and he'll give us a case from Brazil focusing on LGBT issues.  After Carlos and Grace, we'll move on to looking more at the thinking that's taking place on how this matter can be responded to.  We have Schulz, who is replacing Orska, who was originally in the workshop program.  He's the director of the institute in Hamburg, and he'll talk to us about intermediary liability and how we deal with it: do we need to rethink it, or do we think about it later? 

And we have joining us remotely Susan Benesch.  She'll be joining us as, I think, the second-to-last speaker.  Hopefully she's with us.  So welcome, Susan.  It's good to have a remote speaker. 

Susan is a faculty associate of the Berkman Klein Center at Harvard and teaches human rights at the American University School of International Service.  Her work has been really significant: she has introduced the concept of dangerous speech to this discussion, which has actually been quite a refreshing concept.  And a note: when Susan does come on as our remote speaker, you have to put on your earpiece in order to hear her. 

Chen will speak immediately after Gia has presented her research.  She will talk specifically about the Indian experience, and she will also talk about research that she's done.  She's an assistant professor of law at the National Law University, Delhi, and they've done quite in-depth work on the regulation of hate speech in India, and she'll share some of the reflections they've gained from that. 

So on that note.

>> PANELIST:  Good morning, everyone.  I'm really pleased to be here as we're sharing the report, and also to be on the panel with colleagues whose work has been cited in the report.  I draw a lot of findings as well as recommendations from these experts.  Very briefly, the report was carried out for a process covering Malaysia, India, Pakistan.  The context for the report is that we're seeing growing levels of religious intolerance and seeing it reflected and carried out in online spaces.  The threats have moved between online spaces and physical spaces.  Asia hosts several quite devastating cases of individual bloggers, free thinkers, activists, human rights defenders and journalists who have become victims of attacks for their expression. 

So the purpose is to look at how freedom of religion and freedom of expression are severely attacked, and in general how pervasive this is.  The report looks at the forms of expression, often in the context of politics and of critiques of religion, that have attracted attacks. 

And the title of the report is Let the Mob Do the Job.  In the course of the research, we tried not to make it a clickbait title. 

It has actually been demonstrated in a number of countries how reactions against free speech, against critical political and religious views, have been organized: we've seen the use of trolls, the use of groups of people who are against these kinds of critical comments, mobilized online and sometimes mobilized on the streets. 

In all the countries, we've seen how the reactions sometimes come immediately and sometimes are delayed.  We've noted how agents of hatred, or the perpetrators, who are what we would call opinion leaders (the term "agents of hatred" is used quite widely), have actually mobilized these kinds of reactions.  The report looks at the legal frameworks, where the four countries actually share a common tradition of legal systems.  We see similarities in terms of the laws: they draw upon things like offences related to religion that are situated within the penal codes.  The laws on the internet are very much used in order to curb speech, so there's a lot of content regulation in the internet laws.  Laws on sedition are used quite frequently, as are laws on national security, especially in the context of terrorism.  So these are similarities I've noted in the four countries, where you have seen political elites and governments in power resort to the use of these laws in order to stifle expression, particularly where it's related to religion.  So this is the general trend. 

And also, I think what is also important in the study, just to take a step back, is that there has been a sense of, as I said, growing intolerance, often facilitated by nationalism, patriotism or hatred.  With the use of these political frames against religious expression, there has also been a growing trend of non-state actors, supported by state institutions, targeting groups and individuals seen as being critical of the political elites as well as of the dominant religious narratives. 

Of the four countries, three have Islam as a state religion.  We have also seen India, the most established democracy of the four, witnessing the use of religion in order to target critics, journalists, activists and individual citizens.  I think we've seen enough of lynch mobs, for example, in response to criticisms of the ruling political party as well as of the dominant religion.  So this is the context in which we've seen attacks being committed against individuals and against groups, attacks that begin online and then go offline.  It's very difficult to separate the two; I think it's being carried out across all these spaces.  So I'll stop there, just to provide context for the report.  Please check it out online and feel free to ask questions later.

>> PANELIST:  Firstly, Gia, congratulations.  There was a lot that was put out there, and she had just a few months to cover something that is contentious and does not tie together easily.  I cannot begin to describe to you the amount of material that exists just in my country, so to cover that in others is quite a feat.  I see you found a great way to structure it.  I hope everyone can use this report. 

So I've been going through the recommendations.  I would add a recommendation to use this as an ongoing process to engage the global South more broadly.  I think that's really useful, specifically because if you track the institutions within the countries that you've studied, you'll find that they learn a lot from each other, and not all of it is good.  This is part of the conversation that people were having.

The second thing I hope this effort is leading towards is closer national monitoring. 

So if this is going to be a repeated exercise, if we're working towards an index by which these countries have a clear record on dealing with hate speech, both on the freedom of expression side and on the ways in which it results in violence, I think that would be helpful. 

So it's basically to say I'm excited about this report and where it's going. 

The second part of this is that I wanted to discuss our own work and what we've learned about Indian law.  And then finally, I'll tell you a little story about why it's very hard for online platforms to understand hate speech, and give you an example. 

We've been doing a detailed study of hate speech laws.  If you wanted to understand what the standard for hate speech in India is, where would you go?  We have a patchwork of legislation.  It's taken us years to map it, and we find that every time we take a break for two months, something new happens and we have to revise the whole report.

It permits abuse of power: there's no way to tell whether a book that has been seized or banned in India meets the hate speech definition.  The criminal procedure code is a huge issue on that front. 

If you look around for a list of banned books, you'll find that's impossible to get. 

Tax law is also used to restrict hate speech, using an archaic provision of the Customs Act.  So this is just a flavor of the sort of discoveries we've had.  I don't know if there's time for the story, but there's a little slide.  This is how hard it is for online platforms to figure out what is hate speech and what is not.  One of my favorite music places is called Piano Man, where bands perform.  This was an event that they listed early this year.  I looked at it and didn't understand what that word meant, as a privileged upper-class person who hasn't heard a lot of caste abuse in my life.  But that's actually a really offensive term. 

Using that term as abuse is a criminal offense in India.  And as you can see, even an Indian like me had no idea what the word meant; I had to go and find out.  This to me illustrates how hard it is for platforms to understand when something is hate speech.  This, by the way, would qualify as criminal speech under one of the laws in India.  But the people who owned the club didn't understand what it meant.  I didn't understand what it meant.  Sometimes hate speech can be a very local problem to handle (?).  That's it for me.  Thank you.

>> PANELIST:  One of the paintings that were part of the exhibition was this one, and the artist, explaining this work of art, said it's a mix between Jesus and Shiva, and in the arms of this mixed god you have all the things that western civilization is bringing to the world.  So it's a criticism of western values and of consumer culture. 

You might say there could be different reactions to this painting, but the fact is that a number of conservative groups ended up going to this exhibition and creating lots of videos criticizing this work of art and the exhibition as a whole, and by the end of the day they managed to shut down the whole exhibition, creating a debate in Brazil about standards for works of art to be exhibited in public spaces and museums. 

As we could expect, suddenly we got a number of draft bills being discussed in the national congress on religious intolerance and discourse on the internet, and we even had a public hearing last month on this very same issue in the Chamber of Deputies, the house of representatives in Brazil.  This is really remarkable, because Brazil likes to portray itself as, some will say, a racial utopia; some will say it's a country in which being friendly and kind is something that goes without saying for Brazilian people.  But we are seeing that the discourse on the internet and the whole culture of hate speech might be undermining this perception.  The question we pose ourselves is: how do we react to this situation?  And just to conclude, I have two comments.  On the legal standpoint, Brazil has a law concerning intermediary liability, the Brazilian Internet Bill of Rights, the Marco Civil da Internet, which was approved in 2014.  It has a specific provision on intermediary liability, Article 19, which says that providers shall not be held liable for third-party content that they end up hosting. 

They would only be held liable if they failed to comply with a judicial order requesting the takedown of that specific content.  So this provision creates an environment that fosters free speech and does not create incentives for providers to take down specific content simply because they have received a private notification about it.  But of course, this provision has been challenged, and situations like the one I have just mentioned create the environment for new bills of law that try to change this provision of the Brazilian Internet Bill of Rights, creating either a notice-and-takedown regime or incentives for platforms to control speech and language.  I would conclude by saying that hate speech, together with fake news, are now doing this tag team in terms of (?) that might lead to more content control.  We have elections next year.  You can see how those two issues might interplay in creating the perfect scenario for congressmen to push forward a change in the current legislation in Brazil, using those two instruments.  Sometimes it's fake news and misinformation; sometimes it's hate speech and extreme discourse online.  And they are bridging both together as instruments to change this provision in the Brazilian Internet Bill of Rights.  Thank you.

>> PANELIST:  One of the problems we have is the definition of the term hate speech.  That definition is weak.  In a country that has more than 42 ethnic groups, that definition has been unable to pin down exactly what hate speech is all about. 

So it's unclear, and each of these ethnic groups has stereotypes: they stereotype other communities, stereotype other peoples' behaviors, stereotype women.  And because of the cultural elements that underpin some of those stereotypes, it has become challenging to actually legislate on cultural issues and on hate speech. 

Now, during the elections, a lot of hate speech manifested itself in several forms.  One, there was the issue of political speech: we had politicians from the main party and the opposition all hyping up their supporters along ethnic lines. 

I think the culture in Kenya is that people support a leader from their ethnic group.  And the way the leadership grouped themselves, you found certain tribes aligning themselves with the ruling party and certain other groups aligning themselves with the opposition.  And therefore there was a lot of hyping along the lines of "what is ours".  What happened is that politicians on both sides would be accused of hyping up their supporters, and that would then translate onto online platforms.  And what happened is that the national commission that is supposed to deal with hate speech issues was seen to be lenient towards the ruling party. 

And you can understand why, because the chairman is a political appointee of the president.  So it's also a matter of knowing where your bread is buttered.  And so the commission was seen to be lenient towards the ruling party as opposed to the opposition. 

So when it came to prosecution, whenever they wanted to prosecute somebody from the ruling party, they also had to prosecute somebody from the opposition, to try to show that they were balancing.  But of course, there were issues with that.  There was also the issue of ethnicity in interviews on radio and on television, where the interviews were conducted by speakers who wanted to show that they were objective but really were just pushing forward their beliefs along ethnic lines.  And along with that, the issue of defamation came along: you're defending this person or that person.  But really, we have not had a clear case of prosecution where people have been prosecuted for hate speech. 

Now (?) the organization I am associated with did work with the national commission on hate speech, and we produced a report around the elections, on hate speech, though we were calling it dangerous speech, and on fake news.  Some of the issues I'm highlighting are from that report.  But what came out is that there was not a lot of correlation between fake news and dangerous speech.  Fake news was being disseminated but also had a lot of ethnicity and hate in it, and so there were all those challenges.  I don't think I should go on, since I don't have a minute left, but I could respond later.  I'll shut up.

>> PANELIST:  The first point is that Germany has enacted a hate speech and fake news law.  It's called the Network Enforcement Act, and it will come into effect fully in January 2018.  What the government and the parliament did here was to say: we have a specific set of provisions in the criminal code, and we take them and require internet intermediaries, platforms as they're called, to take down content that is obviously illegal (that is the term used in the law) within 24 hours, and other illegal but not obviously illegal content within seven days.  At first glance, it seems to be a good idea to focus on the criminal code and not try to define hate speech in a new law.  Nevertheless, when you look at the German judiciary, it sometimes takes years to determine whether something is illegal or not, for example because interpreting speech is not an easy thing to do.  And what you very often see is that the court at first instance takes another view than the reviewing court at second instance in Germany.  So what does it mean that something is obviously illegal, or even illegal?  It's not really clear what the intermediaries should do there. 

The second critique that is brought forward concerns the tough time constraints for platforms to take down content, which may push them to take down content even when there is no requirement to take it down.  We will see whether that's actually the case or not.  David commented on that with a letter to the German federal government.  And what makes the instrument efficient from the government's point of view is heavy fines, and there is a state agency supervising it, which has problems of its own, because state interference is not impossible, as critics say. 

There is one element in the law that most of the observers agree with, and that is that there's an obligation for platforms to have a local contact person.  I think that's something that is relevant to other countries as well.  I hear the complaint very often that you do not know how to interact with the platforms.  Just yesterday, after a panel, I talked with a representative of an initiative fighting against Islamophobic hate.  She said that they very often have the problem of getting an issue addressed and clarified in a short time.  She told me it was a huge problem.

Last point: I've had the pleasure of working with the Council of Europe over the last two years, and we came up with a draft recommendation on the roles and responsibilities of internet intermediaries.  It's very much about the clarity of the legal basis, the human rights implications of the policies of intermediaries, and adequate remedies.  And I accepted to sit on this panel because I wanted to use it as an advertising break to show that we have this draft recommendation.  Please feel free to download it and tweet it.  Thanks so much.

(Audio fading in and out.)


So I think one fundamental question for the platforms and for us, and it's not an easy question because there may be differences with online hatred, online expression of hate, that vary from offline expression, is: should that incitement standard, which is a standard for governments under human rights law, be the standard that the platforms should be adopting?  I'm not going to answer that, but I think it's an important question for people to be thinking through. 

And the second part of this, which comes through in Gia's report for APC and which I think is important, is that, particularly when we're getting to the stage of hateful expression online, law enforcement should not just monitor it but deal with actual instances that constitute harassment, that may constitute threats, imminent threats of violence; that law enforcement not treat online space as some kind of jurisdiction-free zone and allow real threats to flourish. 

So at least as a kind of baseline issue, I think it's important for those threats ‑‑ and again, this is in Gia's report ‑‑ it's important for law enforcement to treat those threats as real.  And one possibility might be that law enforcement's selective approach to real threats may actually foster hateful content, but also foster the sense that many people have that hate is rampant, certainly online, and that it leads to harms offline.  So I think it's important for law enforcement to take those kinds of steps. 

And then the last set of points I would make is around the private sector.  It's very easy to throw around transparency as a response to this, but it's real.  Susan made this point, and I'm glad she did.  We just know very little about what kind of content is being taken down.  I'd like to see more than just transparency about rules; I'd like to see transparency about the application of the rules.  My sense, from talking to some of the companies, or at least one of the companies, is that they're sort of developing their own version of case law. 

It may not look like the case reporters that we're all used to, if you went to law school and actually practiced law and go and look at cases, but they certainly collect the instances of content that they're taking down.  They certainly use it for the training of their growing legions of content moderators.  And I think that, even if we dealt with some of the privacy implications, you know, by making the cases more hypothetical, if we at least had more of a sense of the kinds of cases, the kinds of content that the platforms are dealing with, then we would at least have a conversation with the platforms and with governments that is based on a fuller sense of information, so we're all operating at the same level ‑‑ like, this is the actual content that we're all talking about.  So I'd like us to move beyond transparency of the rules to transparency of the application of the rules, and not just for the purposes of some vague sense of transparency, but so that the conversation can be: this is what's being taken down, this is what's staying up, this is what we're evaluating and actually deciding doesn't need to be taken down.  And I don't know if we have a very clear sense of how that's operating.  I don't.  Maybe some people in the room might have that sense. 

And then the last point is consistency of application.  Because it does seem, pick your platform, that certain kinds of cases get noticed and the content is taken down, while other content stays up, and it's not really clear why; it seems pretty evident that there's no real consistency in terms of the application of the rules.  So those are just some of my reactions to the presentations that were made, which I think were all very rich and will probably spark some conversation that we could start.

Hate speech is related to the idea of recognition, social recognition.  Recognition is a fundamental concept in society; it's related to how people are regarded as individuals.  Hate speech affects dignity and in doing so attacks recognition.  (Static for audio.) Those platforms are organized by data and algorithms.  One of the phenomena that has been identified is the bubbles (?).  Those dynamics aside (?) strengthening (?) their own convictions, religious convictions.  Instead of exposure and contact to diversity, as we at first believed the internet would provide, it's possible that those platforms are creating environments where the recognition of differences and of diversity is being threatened. 

We should pay attention, because the influence of social networks on the public sphere is increasing, and as we know the public sphere sets the basis for the origin and the exercise of political power.  So I would like someone to comment on the idea that we should pay attention to algorithms and big data and how this is creating social dynamics that were not expected by society or by those companies that set those kinds of technologies to work on their platforms.

Number two, how do the security agencies interpret it in terms of the law?  Right.

And third is a medium that has not come up, which is the internet.  The internet, like she said, is a very powerful medium, which in a matter of seconds or minutes can inflame passions beyond control.  There are no two ways about it.  I have seen it happen in real life, and it can be very, very destructive.  So it is becoming imperative that we now look at ways and means: how do we counter it?  Because any hate speech that can cause physical or mental harm to anyone should not be permitted. 

Now the question comes: in a very inflamed situation, if you shut down the network, you have got another set of people saying you are stopping freedom of expression, freedom of speech.  So it is a dilemma which needs further discussion so we can arrive at some kind of solution to it.  Thanks.

And David, you mentioned it, and Susan also mentioned it: do you think this should be approached as it is at the moment, with generally a platform-by-platform conversation, where a particular interest group, for example one countering gender-based conflicts, has conversations with Facebook, several conversations with Twitter, and so on?  Or do you think we need to have a common platform for conversations with multiple platforms?  Are we looking for a mechanism where we can begin to do it together, or do you think this method of having separate conversations ‑‑ it's almost a multilateral versus a bilateral approach ‑‑ works best? 

There are lots of questions to respond to.  No question was asked of a specific speaker, so whoever would like to respond can start.

Because even online you've seen those who had top painted the speech of the pies, also we need that level of power coalition. 

So I think if you're discussing the algorithms, you need to recognize that these inequalities exist, and that the speech that takes place online, or is affected online, reflects them.  So one issue is the conflation of the different questions, where on the one hand you make points about an individual who is seen to have chosen to be blasphemous, for example, and on the other hand there are the inequalities that exist.  That also makes it difficult to say, okay, social media is doing all of this.  But it's not really social media; you have to take a step back and look at the issues in society that we also have to address.  Thank you.

I also happened to run into a certain interesting piece provided last week, which I'll be happy to share with you after the session if you're interested in reading it.

The fact that they provide extreme freedom of expression means we need to start dealing with the issue of whether there are also limits to freedom of expression.  And human rights also have that (?).  In the last (?) 2012 (?), Facebook was the platform (?), and most citizens are engaged on Facebook.  Then there are issues of (?) that it can be controlled, what you're seeing.  And in 2018, everybody will be on WhatsApp.  So there are multiple groups, hundreds, on WhatsApp.  And you can actually tell that some people get stressed out by their communications, because you keep seeing people leaving: someone has left, (?) this group is not allowing us to express ourselves so let's form another one.  And I think, because of the nature and magnitude of technology and social platforms and the way they can unsettle people or let them feel free to engage, it's something that we must also start confronting.

(Static for audio.)

I work in Beirut.  I'm going to try to make my question brief.  One of the (?), one of the possible things that's happening, is that we don't yet have internet regulation.  I think that's something that is not bad.  My main question is: I saw yesterday and today a picture of a priest and a nun, a picture of them having breakfast, and I think it's very interesting to see this, given what happened in Beirut less than a month ago: (?) Facebook account about (?).  He was drunk and apologized, but he was arrested and detained for 15 days. 

I've been trying to talk to the police officer, trying to talk to a few people.  I understand that a lot of people were reporting him, and the judge was really intimidated by these reports, so he actually said, let's put him in detention.  And a lot of (?) were actually suing him, and every one or two days (?) insulting (?), violence, like a lot of things from the penal code.  So how can we move society from being very conservative, full of hate, to a society that actually accepts different norms, different ideas?

And I think he still wants to believe that (?) that he would not allow that unless people share those (?).  But definitely you can always see the effect and see the fear.  That does affect freedom of expression, because there are some people who believe that the chairman can actually arrest them, and they're afraid to post that content.  One of the threats that were made (?) that because people know his phone number (?) and make him (?) so that as he arrests (?) arrested.  So I think it's not sustainable, and again, like I said, I have no hard and fast answer to that, but I think it's something which requires a very honest conversation.  And to say, again like David said, it's (?).

The second one, the notion of the European (?): I think they have other interesting judgments lately, extremely relevant to what we are discussing here right now, especially the positive obligations under Article 8.  And the other (?) critical thing is that privacy and data protection become a kind of super human right and everything else is superseded, which is not the best solution for the thing. 

Last element: I advertised the Council of Europe here; now I have another hat, and that is the UNESCO hat.  What I would like to suggest is picking up some of the recommendations from the (?) and indicators to make that (?) instrument, and I'd like to draw your attention, if it's not (?) for observation already, to the fact that UNESCO has (?) for freedom of the internet, and that's open until March, I think, and if we could link those instruments, I think that would be great (?).  And when I can have that, we'll do that differently.  Thank you.

So Mohammed for president, maybe.  I don't know.  But to put it another way, just engaging is really critical, and engaging at the most local levels, where we actually talk to people and understand their experience. 

The point was made earlier about a mob; I think it was in the title.  You know, talking to people who get caught up in mobs and understanding the relationship between the platforms and expression and how it connects to the evils ‑‑ and by mob, I think you're referring to both online and offline.  I think we need to understand that connection better, certainly before it's regulated and certainly before we get to the place of shutting down networks to deal with those things.  And then the last thing, and I'm sure we're over time: I do think the bilateral approach, going company by company, is relatively limited.  I think people need to broaden that out, and I think it needs to be approached that way, although this is very hard and companies will push back.  But certainly in an environment where regulation is coming, I think it's important for the companies to be thinking more about how they can think generally.  In a context where we've already got, in the U.S. and (?), things like the GNI, that kind of response is (?).