The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>> YIK CHAN CHIN: We'll start in another two minutes, just waiting for the clock to strike 3:00. Okay. Welcome to the workshop. As you know, this is a merged session combining two workshops. The workshop title is about misinformation, responsibility, and mistrust. First of all, I would like to introduce the structure of our workshop so you know what to expect. First, we'll have a short introduction of each speaker. I will give a presentation of about three minutes on behalf of one of our speakers, Mr. Wang Shu, because he couldn't make it to the IGF due to visa issues. We have prepared three questions for our speakers.
Then we'll open the floor to invite your comments and any questions. We will get responses from our seven speakers, and then we'll wrap up the panel. We'll start with the introduction of each speaker. First of all, we would like to introduce Minna. Maybe Minna can say a little bit about herself.
>> MINNA HOROWITZ: Thank you very much. My name is Minna Horowitz. I am a researcher at the University of Helsinki, and (?) communication rights in the digital era. I'm also the advocacy expert at the Central European University that just moved to Vienna, as some of us might know, from Budapest. I also teach in New York.
>> YIK CHAN CHIN: Okay.
>> ISAAC RUTENBERG: Good afternoon, my name is Isaac Rutenberg. I'm a lecturer in intellectual property and information technology law in Nairobi, Kenya.
>> AMRITA CHOUDHURY: Good afternoon. I'm the co-organizer of this workshop. My name is Amrita Choudhury. I'm from India. We work on research and policy, and we have been involved at the ground level in building capacity, especially amongst youth, on digital literacy. We are doing research on whether digital illiteracy has any effect on people's perception of misinformation, because India is having a lot of issues due to misinformation being spread.
>> YIK CHAN CHIN: Michael.
>> MICHAEL ILISHEBO: Good afternoon, everyone. My name is Michael Ilishebo. I'm from Zambia. I work for the Zambia Police Service as a law enforcement officer. My specialization is cybersecurity, and I'm involved in internet governance activities. Basically, my coming here is to speak on behalf of governments in Africa on misinformation, trust, and responsibility. Although I may not represent an actual country or single government, my views will be those of governments in Africa generally. Thank you.
>> WALID AL-SAQAF: I'm a professor in Stockholm, Sweden. My specialty is mostly in media technology, internet studies, internet governance, and journalism. This intersection between technology and journalism is where I am particularly interested in disinformation and fact checking. I also have a technical degree in computer engineering, so I can bring both academic and technical capacities. Thank you.
>> YIK CHAN CHIN: And Ansgar, please.
>> ANSGAR KOENE: I'm a senior research fellow at the Digital Economy Research Institute. For the last couple of years, together with the University of Oxford, we've been running a couple of research projects looking at how young people interact with the internet, particularly with algorithmically mediated services: questions about how recommender systems and similar things impact young people's experiences. I chair a working group to develop an algorithmic bias considerations standard. More recently, I am the global AI ethics and regulatory leader at EY.
>> YIK CHAN CHIN: Professor Xie?
>> YONGJIANG XIE: Hello, good afternoon, everyone. I come from the Beijing University of Posts and Telecommunications. My research focuses on cyber law, especially personal information and data security. Thanks.
>> YIK CHAN CHIN: Okay. As everybody can see, we have a diverse panel, and the speakers come from different backgrounds: computer science, social science, media and communication, and the government sector. The aim of our workshop is to identify the impact of disinformation and fake news at both the national and the individual level, to look at what steps or measures have already been taken to refute this kind of information, and to explore new ideas or possible solutions to move us forward.
I will open the panel by giving, as I said, a short presentation on behalf of the deputy chief editor Wang Shu. He couldn't make it due to visa issues.
So, first of all, most of you already know about Weibo, which is the largest social media platform in China. They have more than 200 million users every day, and those users release 160 million messages per day. Besides text, they also have different functions like live broadcasting, paid quizzes, photos, and other applications.
So how do they refute rumors on Weibo? Weibo is different from WeChat, because WeChat is more closed; it's like a private network, while the microblog Weibo is an open platform, and rumors spread widely on it. So they launched a platform for refuting rumors. They established an official rumor-refuting account, which can collect rumors reported from the public side and also push the refuting messages to subscribers and the audience. The views of this topic are close to six billion. The second mechanism is what they call labeling. For rumors that are already identified, they will label the post as a rumor. The message will not be deleted immediately, but it will carry the rumor label and become a message for refuting the rumor.
They also give special privileges to some trusted or professional bodies. If these professional organizations or trusted media companies identify rumors, they are given more privilege to label them.
The third mechanism they use is a credit system. This one is a bit controversial, because they also launched a user credit system for rumor propagators. If a user has released a rumor once or twice, the penalty will be a reduction of their score, which identifies them as a rumor propagator. When the score is reduced below sixty points, the user will be restricted from posting part of the content. They will also give some users corresponding reminders and warnings.
If you look at the scale of the process: every day they receive between 2,000 and 3,500 reports about rumor information. They call this false information, which includes disinformation and fake news, and handling it requires processing capacity. They refute between about 200 and 205 rumors every day, so that's the range of how much disinformation is labeled and processed daily.
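To make that credit mechanism concrete, here is a minimal sketch in Python of how such a score-and-restrict scheme could work. The starting score and penalty size are invented placeholders; only the idea of deducting points per confirmed rumor and restricting low-score accounts comes from the presentation.

    # Illustrative sketch of a rumor-propagator credit system; the
    # numbers are placeholders, not Weibo's actual parameters.
    class UserCredit:
        START_SCORE = 80
        RESTRICT_BELOW = 60
        PENALTY = 10

        def __init__(self):
            self.score = self.START_SCORE

        def confirmed_rumor(self):
            """Deduct points when one of the user's posts is labeled a rumor."""
            self.score = max(0, self.score - self.PENALTY)

        @property
        def restricted(self):
            """Below the threshold, the user may only post some content types."""
            return self.score < self.RESTRICT_BELOW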
So this is a brief introduction to the mechanisms used by Weibo for labeling and identifying rumors and how they deal with them. If you have any further questions, you can either email Mr. Wang Shu or ask me later. He said I can answer some of the questions on his behalf. Thank you.
So this is his email address.
So now I would like to open the panel discussion. The first question I would like to ask all the speakers is: what is the reason for the proliferation of rumors, disinformation, and fake news in different countries or regions and on different platforms? I know there are different reasons, but there are some similarities as well. Can you give us a short introduction to the reasons for the proliferation of rumors? Thank you.
>> MINNA HOROWITZ: So I come from Finland, and Finland is often hailed as the poster child of a country that can actually fight against rumors, disinformation, trolling, and so on and so forth. I'm somebody who has been studying communication rights, and I come from the perspective that when we talk about misinformation and trust, we also talk about social trust and democratic participation, not only technological issues. I just want to get that out there. Sorry I am so old fashioned, but this is what I wanted to say.
I often think about this issue: why is misinformation different in different contexts? Of course, this is because different contexts have different vulnerabilities. I often think about it at the macro, meso, and micro levels. Macro vulnerabilities: if a society is going through a lot of political and economic turmoil, of course there are more societal possibilities and opportunities for misinformation. Then we have the meso level of media systems. I'm saying media systems, not only the internet and platforms but also the media. And then within those contexts there are also micro-level vulnerabilities of those individuals or groups who might be targeted or might not be media literate. As a social scientist, I want to set the stage this way: I think this is how we can think about the reasons why different forms of misinformation and distrust take different shapes in different contexts, even in a global era.
>> YIK CHAN CHIN: Maybe you have some ideas?
>> ISAAC RUTENBERG: Sure. Just to mention, I'm sitting in for my colleague Arthur Gwagwa, who didn't make it because of visa reasons as well. I do teach in a law school, but I think my comments won't be terribly legal at this point. In Kenya, I think you have to look at the platform that is used in order to understand the motivation for and the proliferation of fake information, and that is that in Kenya the most used form of social media is by far WhatsApp. There is Facebook, there is Twitter, there are other social media platforms and Instagram, but WhatsApp is used by the vast majority of the population to spread information.
The problem with that is that you receive WhatsApp messages almost exclusively from people that you know. Because of that, I think it's very easy for information that is not true to be perceived as true much more readily, as opposed to being blasted on social media where you don't necessarily know the source. People assume that when they receive a WhatsApp message, it's probably going to be true or there is some truth to it.
The other issue arises in a culture or a society where some of the craziest things you can think of are quite believable. The reports that I receive on my WhatsApp, I can tell they're false, but I also see that the average person who doesn't study these things might actually believe that the government is capable of doing something like this, or that some private company is out to do this sort of thing. So in a culture where rule of law is always an issue, I think it's even more difficult to distinguish fake news from real news. That's something that adds to the rampant proliferation of these WhatsApp messages.
>> YIK CHAN CHIN: Okay. I would like to go to Amrita, because I know that in India WhatsApp is very important in playing these kinds of functions.
>> AMRITA CHOUDHURY: Misinformation or fake news has always been there. The only thing is that technology is aggravating it; the spread is faster. As I said, most of the messages which you receive on communication devices are from people whom you trust, so you kind of believe them.
For example, in the 21 lynching cases which happened in India, the message was spread through communication devices, and in most of the cases people thought they were doing a social good. The message was that there is a child kidnapper who is out to kidnap children, and that's how people spread it. They thought they were doing a social good. Unfortunately it was causing harm to someone or leading to someone's death.
What we have seen through our work at the grassroots level is that the trust factor is very difficult for people to understand, especially when digital literacy or basic literacy is an issue and language is an issue. Most Indians are not English speaking. We have the second largest number of internet users in the world. We have a population which is coming onto smartphones and using the internet because it is currently the cheapest internet in the world.
So those are certain issues. As was mentioned, there is the current situation, the political turmoil, and there are actors who are actually pushing information with certain objectives. That is scary.
>> YIK CHAN CHIN: Maybe Professor Xie: there are rule of law issues in China too. I would like to know whether there is any impact of the legal system as well.
>> YONGJIANG XIE: As we know, the decentralization of the internet means that everyone is a journalist and everyone is an editor. This has led to a flood of disinformation and fake news. According to one survey, 48% of respondents said that they had believed a story which turned out to be fake, and only 41% think that the average person can tell fake news apart. This directly undermines people's trust in the internet. As we know, there are more than 800 million internet users in China. For a piece of fake news, in some countries maybe only a few people believe it, but in China, if even 10% believe it, that is a huge number of people.
So fake news may have a bigger market in China. Someone can grab the attention of the users, maybe to become the (Chinese) --
>> YIK CHAN CHIN: -- the celebrity on the internet. Okay. Can we move to Michael, because you have experience from Zambia?
>> MICHAEL ILISHEBO: From Africa actually. It's Africa. It's not country‑based.
>> (Off Microphone)
>> YIK CHAN CHIN: Thank you.
>> MICHAEL ILISHEBO: Basically looking at ‑‑
>> (Off Microphone).
>> YIK CHAN CHIN: Sorry?
>> MICHAEL ILISHEBO: It's fine. It's fine. Basically, looking at fake news or misinformation or whatever fancy name we try to call it, it has always been there. But initially it was coming from government propaganda, because government had the tools of information dissemination. The tools we have right now, which include the internet and social media, are a new form of dissemination which has given citizens the power of freedom of expression.
Personally, I see three reasons why there is a proliferation of misinformation online. The first one is the lack of legal structure. In Africa, basically, there are very few countries, if any, that have laws pertaining to the control and flow of information, meaning laws that provide penalties for spreading fake news. So the first thing is that there is a lack of laws.
The second one is the inertia in terms of reaction to statements by most governments. Most governments are too slow to respond to stories that are flying all over our social media platforms. I'll give you an example. WhatsApp is not like Facebook. On Facebook, if I post something everyone else deems to be fake, believe you me, the comments I'm going to see will dispel that statement. But on WhatsApp, because it's a one-on-one conversation, if it's not in a group, you receive it, as my brother said, from a trusted source. So people out there just feel that they've read something, believe it is true, and forward it. The moment they push the send button, it moves within seconds.
The third one is on the side of the private sector, meaning these companies that actually hold the infrastructure and the tools for communication. I'll give an example of WhatsApp. WhatsApp now has a policy which controls how many times one can forward a message. As of last week, as far as I know, in a single 24-hour period one was only allowed to forward 20 messages. Beyond 20, you cannot forward. I hope they limit it below five; I don't know about India. Generally in Africa it's 20 times you can forward a message. Basically that will limit the flow of misinformation.
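To illustrate the forward cap just described, here is a minimal sketch in Python of a per-user rolling 24-hour forward limit. The function and storage are invented for the example; only the cap of 20 forwards (5 in some markets) comes from the remarks above.

    from collections import defaultdict
    import time

    FORWARD_LIMIT = 20               # cap described above; 5 in some markets
    WINDOW_SECONDS = 24 * 60 * 60    # rolling 24-hour window

    forward_log = defaultdict(list)  # user_id -> timestamps of recent forwards

    def may_forward(user_id, now=None):
        """Record and allow a forward only if the user is under the cap."""
        now = time.time() if now is None else now
        recent = [t for t in forward_log[user_id] if now - t < WINDOW_SECONDS]
        if len(recent) >= FORWARD_LIMIT:
            forward_log[user_id] = recent
            return False
        recent.append(now)
        forward_log[user_id] = recent
        return True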
Looking at all three reasons I've given, it all comes down to the level at which we humans now have access to all the tools that previously were not in our hands. Some spread misinformation for political reasons, some for racial reasons; there are various reasons why people send misinformation. Some know the information they are sending is wrong. Others do it for fun. Others do it to seem up to date with technology. Basically, what they're doing is wrong. Among the various causes are the availability of the tools and the inability of governments to respond and enact laws that will stop these vices by penalizing people who spread fake news.
>> YIK CHAN CHIN: Thank you very much. Now, Walid, I know you have a background in both regions. Maybe you can talk from both experiences.
>> WALID AL-SAQAF: I may not look like a typical Swede; I come from Yemen originally, and the Middle East is where I've been doing substantial research on how disinformation and misinformation spread. But I would like to draw comparisons and commonalities with the spread of information anywhere in the world. Driving attention is basically the main cause: how do I drive attention? In the past, when you had regular print journalism products, a newspaper headline would sit alongside the body text, so the body would represent the article. Nowadays on the internet, to have the person click, you need to incentivize the person, because it's a two-step process.
There's Marshall McLuhan's popular phrase, "the medium is the message." Now the internet has become the message. The way the internet is composed, the web, forces us to think in new ways about how we incentivize the public to click on something. That's driving disinformation, because it generates more opportunity to create income, sometimes illegal forms of income, and eventually, as my good friend mentioned, there aren't many consequences to it.
Accountability is missing online. So there are a number of factors that play into the hands of those who spread this information. Solutions need to come not only from a government top-down approach but from a bottom-up approach, looking at media literacy, number one. A lot of individuals may not understand that what they received could be fake.
Another thing is also using professionals such as journalists. It's really unfortunate that sometimes journalists themselves fall into the trap of spreading false information.
I'll give you a typical example that happens during election campaigns, when you have various political candidates promoting certain information as if it were correct. Journalists promote this information and realize, maybe too late, that this information was actually false, so they are now propagating the misinformation. So fact checking is necessary at the first stage, in which you get raw information and ensure that whatever you provide online or offline is accurate.
So I would like to emphasize that, and make sure that we understand that the technology also allows us to fact check quicker, if you have the skills to do so.
>> YIK CHAN CHIN: Thank you. You anticipated my second question. We'll share experiences first before we turn to the second question. So thank you.
>> ANSGAR KOENE: I'm based in the UK. There are some elections going on at the moment. Pretty much from the very beginning, when these elections started, more or less the first couple of stories that we started to hear were various political parties accusing each other of spreading misinformation. An interesting aspect of this is the kind of defense we heard for some of this misinformation: they were saying, yes, we slightly doctored this video, but we did it to make it more fun. We made it more attention grabbing, because the core of it is supposedly still true. And by the way, we have a different copy of this video on our site where you can see the real one.
What it highlights is the way in which making the message something that goes viral is a driving factor behind some of these things. An outrageous message is something that people will want to share more, so you're going to want to exaggerate things additionally. Now, this, of course, isn't new. The UK is famous for tabloid newspapers, which have always been doing this. But now you see more of a blurring of where the information is coming from; it comes from friend circles. If we look at interactions between young people online, they feel strong pressure due to the gamification of the system. You are constantly being pushed to try to share things, to get the message out there before another friend in the circle gets it out there. Whoever spreads the message first gets a kind of credit within the group, which lowers the critical assessment before sharing something. That is a contributing factor that effectively comes in due to design decisions built into social media, because the platforms want people to be sharing things in order to stay on the platform.
>> YIK CHAN CHIN: Thank you. Our second question is about the measures that have been adopted in different regions or countries. The measures could be, as some speakers pointed out, capacity building or platform measures. I would like to invite all the speakers, if you have --
>> AMRITA CHOUDHURY: I would like to add: when we're talking about what measures or best practices have been taken, perhaps the speakers would want to elaborate on what could be good, as in what measures they think should be encouraged more when talking about best practices. I think that would help.
>> YIK CHAN CHIN: Yes. If you could also propose, in your opinion, the best practice for the future. Please, Minna.
>> MINNA HOROWITZ: If you've been following the conversations on misinformation and combatting misinformation, what's often said about Finland, which I currently research, is that we combine press freedom and media literacy, media literacy from school onwards. But I would like to add that this is of course not a foolproof formula, because we do need multi-stakeholder collaboration, several organizations and institutions collaborating. In many Nordic countries, as in Sweden, public service media have been doing media literacy training as part of their remit as public service institutions. Now they've taken on misinformation media literacy education, also documentaries and such, talking about these issues.
But here is what we also see in a country that ranks top in all these indexes. We've done some longitudinal research on Finnish people, and what we unfortunately see, in both survey and qualitative research, is that Finns, each and every one of them, think of themselves as very media literate. They are very sure they can tell apart false or fake news and propaganda from the real thing. But what they do not trust are platforms, legacy media institutions, and one another.
I think for us to try and start to tackle these problems, we also have to tackle it from the sociological perspective and understand how people truly experience distrust, trust, and its different forms.
>> YIK CHAN CHIN: So what legal measures have been adopted across countries?
>> ISAAC RUTENBERG: So at my law school we've been aggregating the technology and ICT laws from across the continent, the 55 countries in Africa, and looking at the different laws that exist. In Kenya we have had a law that makes the dissemination of fake information illegal for almost 20 years now. In fact, last year we passed another law that is specific to cybercrimes and says that dissemination of false information over the internet or over a computer network is also illegal. It was already illegal, and now it's even more illegal, perhaps.
So the laws are there. Across Africa we see that there are many countries that have these laws. I'm not sure about the implementation, though. I don't know of any examples of private individuals taking anyone to court over the dissemination of fake news or disinformation. Typically the government will do it, and only in very extreme cases, or particularly where the information is against the government.
Just making up stories, making up misinformation, is something that is illegal, but the implementation is hard.
The only thing I would add to that, though, is that it's not just a cybercrime issue. It's also an unfair competition issue. There can be a private, market-based solution saying, perhaps your dissemination is wrong and you're discouraging competition, that sort of thing.
Lastly, we do have a lot of people doing research in this area. I'll mention that a lady from Nigeria won the UNESCO L'Oreal science award, and her PhD dissertation was on detecting misinformation with deep learning models and algorithms. Research is definitely being done. I think it's a challenge of implementation.
>> YIK CHAN CHIN: Professor Xie is a lawyer.
>> YONGJIANG XIE: To regulate fake news there are legal measures and there are technology measures. But on the internet, I think the laws are not the most important measure to control fake news. I think maybe technology will be the best way to regulate fake news, because, as we know, most of the data is held by the companies, and they have the data and the technology to deal with the fake news. There is so much fake news on the internet that it's hard for government to control it with administrative measures.
So technology such as AI can deal with fake news more efficiently.
>> YIK CHAN CHIN: Okay. Thank you, Professor Xie. Michael, you have an enforcement background. How do you feel about the enforcement of the law, whether it's good or effective?
>> MICHAEL ILISHEBO: Basically, following what my brother said on the Kenyan experience: we have laws. In Zambia we had this law called publication of false news, which has now been struck from the penal code. If you look at it, those are more like offline laws, because it says publication of false news.
It is hard to define publication in the era of social media and the internet in the same way as traditional publication, in the sense that to publish you had to go to a publishing company, or you had to approach a broadcasting company to do it on your behalf. Basically that law has been outlawed. I'm speaking from the Zambian experience: if you published false news, you were in for it.
Unfortunately, some of these laws have now been sneaked into the cybercrime laws because of the nature of the technology currently in use. And most of these laws, whether punitive or semi-punitive, attract the attention of civil society, not so much of government. Speaking from the enforcement side, inasmuch as you're trying to bring sanity and accountability to cyberspace, there will be others feeling like you're trampling on their rights. From the law enforcement point of view, it's not easy. If somebody is defamed on Facebook, you'll see the disinformation, but you'll need a data preservation order for Facebook to preserve that information so that on the day you go to court you can present it in the court of law. If you don't get a preservation order, the person who spread false information about you can easily delete it, because they have access to post and delete; it can be a matter of seconds for them to delete it. When you go to court or the police station, we don't have the tools to determine what was posted 15 minutes ago and has been deleted. Each one has control of what they post. It's a very difficult fight.
Of course, in the absence of laws, nothing can be done.
>> YIK CHAN CHIN: Thank you. Does anyone want to add anything? I know some of you come from a computer science background. Amrita, do you want to add -- we'll come back to fact checking and algorithms.
>> WALID AL-SAQAF: Well, I'm not a lawyer myself, but from a technical perspective I can imagine there might be some solutions. We were at a session yesterday of the dynamic coalition on blockchain technology. Not many are in favor of blockchain these days, but one interesting aspect of blockchain is preserving data, making it impossible to delete data. That might be problematic in some cases where you have privacy issues, but the technology is there. There are methods by which, for example, if you were tracking a particular thing and needed to store the data in a way that is immutable, you have that option.
It's difficult to see to it that this becomes the general practice. My point here is that if there are strong incentives to do something with technology, you can do it. The underlying question is: is it feasible, is it possible to do by the mainstream? How much training do you need, how much capacity building? All those questions are an additional hindrance.
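To make the preservation point concrete: the property Walid describes, tamper-evident storage, can be sketched with a simple hash chain, which is the core ingredient of a blockchain. This is an illustrative toy under that assumption, not any platform's actual evidence system.

    import hashlib
    import json
    import time

    def record_post(chain, post_text):
        """Append a record whose hash covers the previous record's hash."""
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        record = {
            "timestamp": time.time(),
            "post_sha256": hashlib.sha256(post_text.encode()).hexdigest(),
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        chain.append(record)
        return record

    def verify_chain(chain):
        """Recompute every link; deleting or editing any record breaks it."""
        prev = "0" * 64
        for rec in chain:
            body = {k: v for k, v in rec.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev_hash"] != prev or digest != rec["hash"]:
                return False
            prev = rec["hash"]
        return True

With such a record, a deleted defamatory post can still be proven to have existed, which is the preservation-order problem Michael raised.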
>> YIK CHAN CHIN: Do you want to come back to the law, or talk about AI and blockchain?
>> ANSGAR KOENE: I'll try to bridge them. Briefly, in the UK context there isn't really any direct conversation about putting in laws to block misinformation. The focus for potential new legislation has been more on questions of hate speech. There is a white paper on that, which hasn't moved forward because of other political debates going on.
However, there is a new piece of legislation that is coming into effect as part of the Data Protection Act of 2018 which is specific to the rights and protections of young people: the age appropriate design code. What this does is limit the kinds of information that platforms are able to collect on young people, which is anybody below 18, and the ways in which they are able to use it for targeting.
Now, this doesn't directly address the misinformation question, but it does potentially change the dynamics of how the information flows, because it will affect the way in which the recommendation systems, the news feeds, and those things act.
If we think about possible technical solutions to these problems, because that is often the direction in which things go: when the political sphere doesn't really see how to address the issue, the first step is to turn to the technology companies and say, please make it go away. Use your platforms with some kind of filters or something like that to make sure that misinformation is either removed as quickly as possible or is blocked from coming online to begin with.
This is quite challenging. One thing to keep in mind is that misinformation in the form of text is somewhat different from images and videos; they are two different kinds of challenges. For instance, in the case of text, yes, we can use natural language processing to do analysis of the text. However, we need to keep in mind that with natural language processing, machines don't actually understand the text; it's basically statistics of text.
So if somebody uses the same kind of pattern of writing as past misinformation, that is something that can be picked up. But if it's a similar kind of writing that is actually a piece of text commenting on the falseness of the previous version, the system will have difficulty making that distinction.
That's one of the reasons why, for a lot of crucial decision making, it's still important to refer to fact checkers, to human judgment in these matters. However, there is a potential to use automated systems to do a sort of first-level removal of things that are very obviously misinformation, such as repostings of previous items that have already been flagged, which can at least reduce the flow, the quantity of this kind of information going around.
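As a sketch of that first-level filter, assuming scikit-learn is available: flag near-verbatim reposts of text that human fact checkers have already labeled, and leave anything ambiguous to human judgment. The example posts and the 0.8 threshold are invented for illustration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Posts that human fact checkers have already flagged (invented examples).
    flagged_posts = [
        "Child kidnappers are operating in the area, forward to every parent!",
        "Drinking warm salt water cures the new virus, doctors confirm.",
    ]

    def looks_like_known_rumor(new_post, threshold=0.8):
        """True if new_post is nearly identical to an already-flagged post."""
        vectors = TfidfVectorizer().fit_transform(flagged_posts + [new_post])
        sims = cosine_similarity(vectors[-1], vectors[:-1])
        return bool(sims.max() >= threshold)

Note the limitation Ansgar describes: a debunking article shares most of its words with the rumor it debunks, so a bag-of-words score like this cannot tell the two apart on its own.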
>> YIK CHAN CHIN: Just a real quick point about deep fakes: are they also a serious threat, because they are difficult to detect with technology and algorithms?
>> ANSGAR KOENE: Deep fakes have been attracting a lot of attention because of their novelty and wow factor. Current deep fakes are certainly not the main kind of issue at play. The quality of deep fakes at the moment is still not that excellent, so you can train systems to look for the kinds of artifacts they will find in deep fakes.
Rather, the more frequent occurrence of misinformation is things such as old images that get reshared and relabeled as a new incident, mislabeling the kind of event, shifting things geographically, so you get an image that was in reality from 2007 or so in Iraq getting labeled as a current event in Syria. These are the frequent kinds of things that are happening, and they can be detectable by looking back into libraries of past things that have been uploaded. But they are more difficult to detect from the point of view of whether they have technical aberrations in them, because they are effectively real images; they're just displaced.
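That look-back can be sketched with a perceptual hash, which survives recompression and resizing because it summarizes the image's coarse brightness pattern rather than its bytes. This is a minimal average-hash toy, assuming only the Pillow library; production systems use more robust hashes.

    from PIL import Image

    def average_hash(path):
        """64-bit hash: each bit says whether a pixel beats the 8x8 average."""
        img = Image.open(path).convert("L").resize((8, 8))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > avg else 0)
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    def is_recycled(path, library_hashes, max_distance=5):
        """Match an upload against hashes of previously seen images."""
        h = average_hash(path)
        return any(hamming(h, k) <= max_distance for k in library_hashes)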
>> YIK CHAN CHIN: Thank you. I think another measure is capacity building. Amrita, you have something?
>> AMRITA CHOUDHURY: I would like to share something, though it's not exactly a best practice. Before the 2019 elections in India, there was a lot of concern about the social media and communication platforms being used or abused, but surprisingly we found the technology companies complying to a certain extent, or being more responsible: limiting forwards to five, and restricting re-adding, so that if someone exits from a group in WhatsApp, for example, there are steps so that the person has to be asked before they can be attached to the group again.
Similarly, the government was also doing a lot of capacity building amongst people, to look at news before reacting. In terms of curated online content and even media, the curated content providers came together to have some kind of self-regulation so that they can check that fake news is not moving through their platforms. So those were certain self-regulation steps which industry itself tried to take. Obviously, companies like Facebook and others put advertisements in newspapers and other media so people can at least be aware before reading such news.
It was actually a good move which helped, but more needs to be done. That is one best practice which we saw. I also wanted to add that we had a similar kind of discussion in the Internet Governance Forum where something interesting was raised: there is a trust deficit. If someone wants to regulate through regulations, the intent needs to be clear. What is it you want to address? Is it fake news, and do you want to do it in a retrospective manner or a practical manner? Also, what is fake to someone may not be fake to someone else. Those are certain questions when you talk of regulation; the current regulations which are prevalent are either too broad or too narrow. It's not just regulation that can help. Capacity building does help. That's what we've been seeing through our capacity building engagement, especially among young users, because they're ardent users from 13 to 18 across socioeconomic conditions, and they are concerned about the news they receive. Though they forward it, they say they cannot validate it. That's where fact checking, as Walid was mentioning, has to be inculcated. Senior citizens are not checking. For example: "two kidneys are available, please contact this number." A kidney is not a shareable thing, but people are sharing it; educated people are doing this. So digital literacy needs to be done by governments and private industry.
>> YIK CHAN CHIN: I think Minna wants to add something on trust.
>> MINNA HOROWITZ: One capacity building aspect is building the capacity of policy makers, because understanding what a deep fake is or what information disorder means is a big thing. I was surprised that in 2018 the Council of Europe, for whom I did some policy work on this issue, and the EU High-Level Expert Group very quickly convened multi-stakeholder groups and started to think about this, which then resulted in the pushback on regulatory or legal tools in favor of collaboration, self-governance, as you said, and quality journalism and fact checking. But then I would like to ask all the other panelists, and maybe you in the audience also: I've been trying to find information on how fact checking works and what its impact is. Could you follow up on that? And if anybody in the audience has experiences of fact checking, I would love to hear them.
>> YIK CHAN CHIN: Yes, we will go back to the audience later. Walid, you wanted to say something on that.
>> WALID AL-SAQAF: I'm glad this landed on my lap, because this is my cup of tea. We have discussed the process of fact checking over the last few years, and we came to the conclusion that the fact checking process itself is evolving. There used to be very rudimentary steps: there's content, so let's go into the content. The thing is that this is really incomplete.
So we now have a three-stage process, which is good practice; the best approach to fact checking is always to have three stages. First you need to identify the medium that has been used to propagate the message. The medium allows you to understand the capacity of the individual content to be misinformation.
For example, if the medium is a reputable news agency, that's different from the medium being a parody website. Occasionally I've seen journalists who took a tweet that appears to be true, but it turns out to be from a parody website, and they kept going with it. They didn't even fact check it; they thought, this is a medium we trust.
The other thing is that sometimes we have mediums that are multipurpose and user generated, for example a social media platform: it has Reuters and it has The Onion. In this case we go to the next stage and look into the source. Who is the person or the entity behind this information? Then you analyze the source and understand the likelihood of this being misinformation.
Then, if you have cleared these stages, you move on to the actual content. It's not as simple as jumping straight into the content; you have to look at the medium and the source first. We have built a tool that allows our students and practicing journalists to analyze these separately, using various methods. If you're analyzing a medium, you can look at the information about the medium, when it was started, who is behind it. For a source, you can look into their social media profile; you can look into their history. For content, you can do reverse image searches using Google Images and others. This is a more methodological approach that we're trying to get students and journalists to learn. It's not easy, but it's necessary if we're to improve the fact checking process.
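The ordering Walid describes, medium first, then source, then content, can be sketched as a simple triage function. The domain lists and verdict strings below are invented placeholders, not the tool his team built.

    # Illustrative triage following the medium -> source -> content order.
    PARODY_MEDIA = {"theonion.com"}
    REPUTABLE_MEDIA = {"reuters.com", "apnews.com"}

    def triage(medium_domain, source_handle, known_bad_sources):
        # Stage 1: the medium that carried the message.
        if medium_domain in PARODY_MEDIA:
            return "parody medium: satire, not news; stop here"
        # Stage 2: the entity behind the message.
        if source_handle in known_bad_sources:
            return "source with a misinformation history: priority check"
        if medium_domain in REPUTABLE_MEDIA:
            return "reputable medium: lower risk, still verify the content"
        # Stage 3: only now examine the content itself
        # (reverse image search, quote verification, and so on).
        return "unknown medium and source: full content check required"

Only claims that clear the first two stages reach the expensive content-level work, which reflects Walid's point that one should not jump straight into the content.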
>> YIK CHAN CHIN: How about fact checking in China? Is there any mechanism for fact checking in China, Professor Xie?
>> YONGJIANG XIE: There are a lot of users in China. Maybe they just know the internet but have no idea about the true source of the information. Internet users should check the truth of the information, but it's hard for Chinese users to do that.
You have to judge the information from different points, so it's hard to check the true source of the information. I think the government and the sources have the obligation to give the truth to the public, and the platforms also have the obligation to deal with fake news.
>> YIK CHAN CHIN: Okay. This brings us to our next question, which is about the role of the government. Panelists, what do you think of the role of the government in refuting misinformation or fake news? There is some concern about whether it's reliable to give this power to the government, and there are also concerns about freedom of expression and privacy. Can anybody please comment on the role of the government?
>> ISAAC RUTENBERG: Sure. I'll tell you what I think the role of the government is not going to be. In Kenya we just introduced a law that would require all bloggers to be registered with the government, along with a lot of social media monitoring; WhatsApp group members and group leaders also have to be registered, and the group leaders are responsible for all the content in their group. I think that is not the way the government should be involved in regulating this.
I'll talk later in my closing statement about best practices, and I think that that is overreaching, way beyond best practice.
>> AMRITA CHOUDHURY: I feel governments have a responsibility, because if there is an issue, every citizen goes back to the government. If you look at private companies, they have their interests, which is valid; they have their business interests in mind. If there is an issue, the last resort a person turns to is the government. But governments should not be the only actors deciding things, because there are other players who have to implement or go by those regulations. In India we have the draft intermediary guidelines, which are meant for the social media companies. Unfortunately, intermediaries in India encompass cyber cafes or anyone publishing and transmitting information; they're trying to narrow that.
There was also a particular clause which requires pre-checking of content, which is a concern for us: you should not be monitoring content before it passes through the pipe. So governments, I would say, have a role, but the other actors in this entire ecosystem have a role too, and they need to be at the table when decisions are made or policies are framed or implemented. It's not a one-zero game; it's a sharing and going-together game.
>> YIK CHAN CHIN: I asked about the government's role and the accountability of the government. If they want to enforce this kind of check or responsibility, how do we hold them accountable?
>> MICHAEL ILISHEBO: On the accountability of government: if they come up with a law, that law must be agreed upon by the general citizens. That law must not be one-sided. Just last week, or the other week, there was a vote at the UN where Russia, China, North Korea, Venezuela, and a group of countries were trying to push a bill within the UN framework on the cybercrime aspect which somehow contravenes the Budapest Convention. Take time to read about that vote; there will be another vote in the next two weeks or so. Basically, if you look at it, it's another way of stifling freedom of expression.
Just to come back to my point: despite government having the mandate to make laws, it also has a duty to bring everybody on board, as my sister said here, to sit down in a multi-stakeholder manner where you discuss these laws inclusively and agree on all contentious issues before they're enacted. Governments should adopt a deliberate policy to sensitize the general citizens, to tell them the dangers and the fact checking measures: if you receive something and it doesn't sound right, you can go and check with trusted sources. But, again, you say trusted sources. Who are these trusted sources? What is trusted for me might not be for him, because the agenda the BBC is pushing is not my agenda; a site could be trusted for her but not for him.
Basically, in terms of fact checking, which source can be trusted? That in itself leaves everything not balancing up. Because if you say fake news, what determines what fake news is? We don't ask that question. Governments mostly react during election times; there you will see them threatening action because there's something at stake. Beyond elections they go to sleep.
Beyond fake news and other problems, some of this misinformation is coming from the business sector, trying to cripple the products of a competitor. As a result, we find ourselves as the tools and agents of spreading this disinformation. We've all received messages saying don't use this product because it will cause cancer, but we don't question the source of this information.
Government should ensure that the general citizens are taught, and have explained to them, what constitutes fake news. Thank you.
>> YIK CHAN CHIN: Okay.
>> AMRITA CHOUDHURY: I think we need to ask the audience -- before that, we have a question from the remote participants. So, Nadira, I would like you to --
(Overlapping Speakers)
>> AMRITA CHOUDHURY: One is if they know of any best practices. Also if there are any comments and ‑‑
>> NADIRA AL-ARAJ: I will come to those later, because there is a question directly about the comment from Kenya: how successful was registering and verifying bloggers, and did it help the Kenyan government in stopping misinformation? I just want to add something from my research on misinformation, because a similar practice was introduced in the United Arab Emirates, where bloggers are fined if they spread misinformation.
So the floor is yours.
>> ISAAC RUTENBERG: Yeah. Well, fortunately, from my perspective, it was a draft only and was never actually implemented or voted into law. It was in fact roundly criticized by civil society and pretty much everybody outside of government, so much so that the government withdrew it recently and said, we're not going to do it like this; we'll go back to the drawing board.
I don't know if this has been done in Uganda; I'm not sure if it's been done there. A lot of our social media laws seem to be modelled after Uganda's, so I wouldn't be surprised if that's where it originally propagated from.
>> NADIRA AL-ARAJ: But it's been done in the United Arab Emirates, by the way.
>> AMRITA CHOUDHURY: Does anyone have comments or questions? You can raise your hand.
>> AUDIENCE: Thank you.
>> AMRITA CHOUDHURY: Introduce yourself.
>> AUDIENCE: My name is David Christopher, from a global network of free expression organizations. My question builds on the discussion around laws that are designed to tackle fake news, and also a bit on the use of AI for content takedowns. There's a huge amount of concern out there within the free expression community about the spread of these laws, with more and more countries bringing in laws along the lines of the draft one in Kenya. A report by Freedom House, which is one of our members, cited over 17 examples of different countries bringing in these types of laws.
Clearly there's a lot of overreach there. I think sometimes these laws have been brought in with benign intentions, even though they're poorly drafted; generally the aim is to tackle disinformation. But in many other cases the aim is to stifle criticism, stifle dissent, basically clamp down on freedom of expression. So my question is: I would love to hear the thoughts of the panel on what safeguards are needed in order to protect freedom of expression and to ensure that legitimate speech isn't taken down, especially by AI content takedown mechanisms, given that our current technology is not clever enough to really understand the content. Thank you.
>> AMRITA CHOUDHURY: We'll take three questions.
>> YIK CHAN CHIN: Thank you.
>> AUDIENCE: Somewhat building on that, my name is Malcolm Hatti. The short version of my question is so old I could ask it in Latin: who watches the watchmen? The longer version: the speaker said the risk of fake news and rumors was particularly serious because people found it plausible to believe terrible things of the government and terrible things of what corporations might be aiming to do.
That prompts me to wonder: is it more dangerous to allow people to believe such terrible things of the government and the companies, with the consequences that might flow from that? Or is it more dangerous to allow the government and such companies, which might well plausibly be believed to be acting in such a way, to have the power to prevent people from knowing such things?
Now, the real answer here is probably that we're looking for a bit more nuance than these binaries. So my more subtle question is: should we be looking to construct a mechanism to suppress fake news, disinformation, rumors, and all these things, and then seek forms of mitigations and safeguards, such as the previous questioner asked? Or should we instead not implement such controls on information, but seek to mitigate the negative effects of misinformation, fake news, and rumors?
>> AMRITA CHOUDHURY: Thank you. Quick two minutes.
>> AUDIENCE: Thank you, my name is Satish, from India. Thank you for an interesting discussion. I have a comment on best practice. Two years back, Jimmy Wales, the founder of Wikipedia, launched something called WikiTribune. That was taken forward one more step with WT:Social, a Twitter-like platform. This is specifically targeted at fake news. It uses human-curated sourcing of articles. It's supposed to be a major step forward in the fight against fake news. Just for your comments. Thank you.
>> AMRITA CHOUDHURY: Thank you. And the last question from the lady, and then the panel can answer.
>> YIK CHAN CHIN: Last question.
>> AUDIENCE: Hello. Good afternoon. My name is Samha. I'm from Tanzania. There have been a lot of East African policies brought up, and as a fellow East African, I care. As we Africans who are here know, we're used to people spreading misinformation about us. So I feel like sometimes there is a natural skepticism or a natural mistrust when information comes concerning our government or our people. So my question is about the common African, or the common East African, who isn't part of the media, because obviously with the blogger laws in Kenya, and they're doing that similarly in Tanzania as well, you have to actually pay to be a blogger or an influencer; it's not just a job you can enter into. Do you think the common African cares about misinformation, given that natural skepticism of, well, that's just another lie that I already know is a lie?
>> YIK CHAN CHIN: Yeah. Please. Do you want to give a response first, and then Michael?
>> ISAAC RUTENBERG: I'll answer the last question first. Such a fantastic question. One of the things that we realized in doing the research is that the average Kenyan, at least, and I can speak of the average Kenyan but maybe not the average African, let me not say doesn't care whether it's fake news or not, but would rather have restrictions on speech than the opposite, which is anyone being able to say anything about anyone they want, which leads to ethnic tensions and real problems, like physical problems.
So I think when you approach these topics from the more free speech, human rights kind of perspective, you miss that nuance, and you don't actually get the support of the general public when you're trying to advocate for such things.
Yeah. I'll let my colleague ‑‑
>> YIK CHAN CHIN: Yeah, please, Michael.
>> MICHAEL ILISHEBO: Just to answer her question of whether the average Kenyan or African cares about fake news: let's go back twenty-five years, to the genocide in Rwanda, and assume the genocide happened now, when everyone has a tool to check whether the news being spread is fake or not. Back then, in 1994, the misinformation was played through radio. With propaganda news here and there, almost a million people lost their lives.
Now imagine the same thing happening today, where images are imported from Nigeria, of Al-Shabaab or whatever you call them, and linked to people in your area, or presented as the people your government has hired to strike at those whose views disagree with those in government. It will affect you. Misinformation affects each and every one of us, directly or indirectly. We may not actually feel the effect now, but in our daily lives, in one way or another, it will affect us. Basically, to answer your question: the average African must actually take the responsibility to ensure that they're able to filter fake news and make the right decision not to forward what they've received, based on these questions: what's the point of me forwarding this? What am I going to gain? Am I just being a tool or carrier of news that is going to bring tribal hate speech or anything that will bring division within the country? Basically, every African must care. Information is and has always been important to the way we live; without that care, anything goes.
>> YIK CHAN CHIN: Walid, please.
>> WALID AL-SAQAF: A wonderful set of questions. Let me address a couple of them, such as whether it's a good idea to mitigate or to deal with the effects of misinformation. I find this very thought provoking, because on both sides, you ideally don't want to have disinformation, but now that you have it, perhaps it's a matter of treatment rather than prevention.
I'll give you an example from a study done on the French elections in 2017. It was a matter of understanding the main reaction of the public, especially regarding the right-wing candidate: what happened when journalists fact checked, showed that something was a lie or disinformation, and presented this information to the voters. Guess what happened? They still didn't believe the fact checks. They still believed, okay, my gut feeling tells me this is the candidate for me, and disregarded the fact checks themselves. Later on, it became clear that over time politicians were able to use this characteristic of the echo chamber, the persuasion effect of speaking to your beliefs, confirmation bias, and they used that effectively through creative storytelling.
The way you tell the story may actually defeat the question of whether it's factual or not. You can tell a fairy tale, and if it appears to you very beautiful and genuine, even if it's totally fake, it can still guide the way you vote. That, unfortunately, is the reality. So again, the appropriate question to ask is: would it even matter if you fact checked? I think I've run out of time.
>> YIK CHAN CHIN: Professor Xie, I know there is a similar situation in China, where, for example, people tend to believe the rumors as truth and treat the truth as a rumor. Is that right?
>> YONGJIANG XIE: As I mentioned, in China, for example, in my own family, my uncle or my aunts, who are more than 60 years old, always receive a lot of fake news and rumors. They usually send the information to me to check whether it's true or false, for example, that the government will give more money to retired persons. I always tell them that such information is easy to recognize as false news: if the government were going to give you money, they would announce the policy to the public, not let it come from someone else's post on the internet.
It is very important for internet users to know that there is a lot of fake news on the internet. So the government should give true information to the public, and the platforms should also control the spread of fake news on their platforms. Yeah.
>> YIK CHAN CHIN: Thank you.
>> ANSGAR KOENE: Could I briefly answer the question about AI and the potential for overzealous use of these systems? As I mentioned, they can be useful, but they are severely limited, and there's a large gray area where it's best not to let an AI be the final arbiter. One of the key issues is what the incentives are for the platform using it. Is it, as in the case of a lot of copyright takedowns, that the platform will feel, when in doubt, take it down, and in the worst case the user will upload it again?
So one direction to think about is the need for transparency about what has been taken down: a cache of content taken down that can be reviewed, and will be reviewed, with a public methodology, by journalists and by government, not just one or the other. And there should be consequences for any company that is taking down too many things that shouldn't have been taken down.
So it's about making sure that the balance of incentives is right and isn't completely skewed towards let's just keep taking everything down, or, on the other side, just leave everything up.
>> AUDIENCE: Hello, my name is Steven Wright, and I'm with the Canadian public service. I have a question about digital media literacy. It is sometimes touted as a silver bullet: perhaps we would not even need to get into all these complex issues about freedom of expression, human rights, and regulation if we just had really good digital media literacy.
So I'm wondering to what extent the panel believes that's true, whether solid digital literacy programming and curricula can actually solve this problem, and what good literacy practice looks like. And if you could talk about some best practices that you're aware of, because that's something we're trying to tackle on the Canadian front.
>> AUDIENCE: Hi, I'm Simone, also from Canada, though I'm at Charles University in Prague. I was just curious whether any of your countries have experience with targeting the content creators instead of the people sharing, since sharing seems to be a problem in every country in the world, and whether there are any best practices that have worked against those creators?
>> YIK CHAN CHIN: I think Amrita wants to answer the question.
>> AMRITA CHOUDHURY: So when you're talking about digital literacy, it's not a silver bullet. I'll give you an analogy. For your health you are told certain good things to do, for example walk and eat a sensible amount of food, so that you lead a long and healthy life; even so, you may still fall ill and need doctors and medicines. In the digital world there are best practices an individual can follow if you teach them how to practice them. But at the end of the day, you also need the people providing those services to be a bit more responsible, and the government that looks after things to protect your interests, not only its own. Digital and media literacy do help make individuals conscious of what their rights are and what they are signing away when they click 'I accept' to take up a service. And if they have not accepted certain things but their data is still being shared, they know whom to reach out to. Unfortunately, most people today don't know how their data is used. We get free services, but we don't know what we're exchanging for them. People need to be aware. They need to know when their rights are being abused.
That's why digital literacy needs to be there. If I'm conscious and it's my decision and my call, I have to face the music. If I don't know and someone is taking me for a ride, that's not my fault. So it is important, but no one can shrug and say digital literacy is the only thing. No, it's one of the things.
>> AUDIENCE: (Off Microphone)
>> MINNA HOROWITZ: Just a quick response: I wrote a policy document about public service media and misinformation that includes media literacy examples from around Europe. I'll give you the link.
>> ISAAC RUTENBERG: I'll address this; it was going to be my closing statement, but I'll use it here anyway. I'll cite the New York Times in two seemingly contradictory ways. The New York Times is the poster child for how good media ought to be done: a lot of fact checking, in-depth investigation of stories before publishing, et cetera. Well, people in my generation and older came to rely on such media houses as the ultimate, definitive source of information. 'All the news that's fit to print,' fit meaning they have really looked into it; you can rely on and trust this information.
So I'm in several WhatsApp groups, and I've noticed that the older someone is, the more likely they are to believe what comes through the WhatsApp group. I think that is because they've been trained and conditioned to believe that if it's published, it's probably true. We don't have any young people in this audience. I've got ‑‑
(Laughter)
>> ISAAC RUTENBERG: I think if you asked anybody under 30 ‑‑ I know that if you asked people under 30, they would say fake news is curious and interesting but not really much of an issue, because they've grown up with it. They know how to go and check multiple sources. They know how to deal with things that are not true. They're not fooled by most of this stuff. That's just the way they've grown up dealing with these things.
>> AMRITA CHOUDHURY: But in developing nations, like the country I come from, in India young people are confused. They really do not know where they need to check and how they can validate whether what they see is even right. In the last two months we kept finding people who told us, 'we don't know how to validate whether it's true or false.' That's the response we get.
>> ISAAC RUTENBERG: I think that shows the granularity of the situation. In Kenya, the young people I interact with don't have this problem.
>> AUDIENCE: (Off Microphone)
>> AUDIENCE: Regarding refuting fake news, I want to bring up the case of the Middle East, where much of the fake news quotes phrases from the Koran, which has an emotional effect. It's hard to refute. I'm not sure whether Walid has done fact checking on that kind of content, if you have an answer on it.
>> AMRITA CHOUDHURY: Religion is a dicey issue these days. You have to be quick.
>> WALID AL‑SAQAF: We have a popular Arabic fact checking website, and its section on religion is one of the most popular, because religion attracts so many rumors, given how extensively it is exploited. So, yes, it's possible to get this done through a systematic fact checking method. It also plays into what Amrita said: there are many who simply believe something because it aligns with their own religious thoughts. It's a psychological issue around confirmation bias: if it looks almost true, I will believe it even if I'm not sure.
>> AMRITA CHOUDHURY: No more questions, though there is a lot more discussion we could have. A quick one-liner from all the panelists: what is on your wish list, the ideal way to deal with misinformation?
>> YIK CHAN CHIN: Shall we go to the online questions? Yes. So I would like to ask each of our speakers to summarize the way we move forward in one sentence. Thank you.
>> MINNA HOROWITZ: I think the comments from my colleagues have shown what we really need to do. We need to understand different contexts and different age groups. We live in a global platform society, but at the same time we still have our national, regional, and local contexts. We need to understand those better if we want to solve this problem, not rely on technology alone.
>> ISAAC RUTENBERG: One sentence. Recently we have really put technology majors on a pedestal. I think we need to go back and say, wait a minute, the social sciences and humanities, history, English, anthropology, political science, all of those majors, we need to make those sexy again. We need to make them desirable, because those are the majors where people learn how to interrogate these things. If we have more of those people in society, at some point we will have a critical mass of people who are skeptical of this sort of information.
>> MICHAEL ILISHEBO: There was a question about whether we should go after the source of the fake news or the spreader of the fake news, which I feel was not answered. Basically, it's both. If you know what you're transmitting is fake and you go ahead anyway, thinking you won't be held responsible because you are not the original source, you are wrong. You read what you receive, then you decide to transmit it to another person, through Facebook or any other media platform; that simply means you're as good as the source, and the measures in place for the source should apply to you as well. The moment we all start asking ourselves, should I transmit this, things change, because the next person who receives it will tag you as the source. Thank you.
>> WALID AL‑SAQAF: I would simply like to endorse the message of a multidisciplinary, multifaceted approach. No one size fits all. We come from different backgrounds and different countries, which is why we believe strongly in collaborating among ourselves, and between technologists and academics; only then will we find long-term solutions.
>> ANSGAR KOENE: I think there's a place for technology to help reduce the size of the issue, but one of the main things to look at, I think, is the impact that a particular kind of misinformation from a particular kind of source has. So, basically, think about authority figures: misinformation from persons in authority, and the consequences that should follow when authority figures abuse their messaging. Fake news from a politician should weigh differently than fake news from your uncle. Fake news from someone seen as a representative of the medical profession should weigh differently than fake news from someone who is not. Similarly with an accredited journalist, and so forth.
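As a purely illustrative aside, the weighting idea above could be expressed as a simple scoring rule. This is a hypothetical sketch; the source categories, weights, and the reach-based formula are invented for the example, not something proposed by the panel:

    # Hypothetical scoring rule: the same false claim counts for more when it
    # comes from an authority figure with wide reach than from a private person.
    AUTHORITY_WEIGHT = {
        "politician": 5.0,            # elected officials speak with public authority
        "medical_professional": 5.0,  # trusted on health claims
        "accredited_journalist": 4.0, # trusted as a professional news source
        "private_individual": 1.0,    # e.g. your uncle forwarding a rumor
    }

    def misinformation_impact(source_type: str, audience_reach: int) -> float:
        """Score the potential harm of a false claim by who said it
        and how many people saw it."""
        weight = AUTHORITY_WEIGHT.get(source_type, 1.0)  # unknown source: no bonus
        return weight * audience_reach

    # The same rumor scores very differently depending on the source:
    # misinformation_impact("politician", 100_000)     -> 500000.0
    # misinformation_impact("private_individual", 200) -> 200.0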
>> YONGJANG XIE: I think the law is the last resort for dealing with fake news. I hope technology can help identify fake news where it appears. And I think it is very important for the government to educate the public, starting from childhood, on how to tell real information from fake news.
>> YIK CHAN CHIN: Thank you to the speakers. So we finish here. Just a summary: there must be a multi-stakeholder, multidisciplinary approach, and the issue is a complicated one. We will produce a report on this panel and upload it to the IGF web page.
>> AMRITA CHOUDHURY: In case you have any comments, you can always send them to us. We welcome your comments. Thank you so much for being here. Thank you.
(Applause)
>> YIK CHAN CHIN: Thank you to all the speakers. Thank you.
(Applause)
(Session concluded at 1627 Local Time)