The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
>> VIKTORS MAKAROVS: Good afternoon. My name is Viktors Makarovs. I'm the special envoy on digital affairs at the Ministry of Foreign Affairs of Latvia. And welcome to this discussion on combatting disinformation without resorting to online censorship. If you look at the picture behind my back it says censor, but, of course, what we wanted to say is censorship. So, combatting disinformation without resorting to online censorship. Those of you who have followed the discussions here at the IGF will, I think, recognize that this has been one of the prominent topics. And indeed, disinformation threatens public safety, security and the democratic stability of nations. It has been exacerbated by the COVID‑19 pandemic. But there is also a recognition of the need to address this issue while upholding Human Rights, especially freedom of expression.
There is a tendency for implicit or even explicit online censorship to be used as a way, seemingly, to address real or presumed threats posed by disinformation. And a particular challenge here is that technology companies have failed to build sufficient free speech safeguards into the policies and procedures they use to combat disinformation on their platforms. The Special Rapporteur on the right to freedom of opinion and expression, who has participated in the IGF remotely this time, said that the responses by states and companies to disinformation "have been problematic, inadequate and detrimental to Human Rights". So the idea at the core of this session is simple: disinformation can and should be addressed without resorting to censorship or Internet shutdowns. The real question is how we do it.
To discuss this issue today, we have an excellent panel that is partially present here and partially online with us. I'll quickly present the speakers at this table before I introduce the question. Next to me is Mr. Allan Cheboi, who is a senior investigations manager at the NGO Code for Africa. Next to him is Mr. Rihards Bambals, who heads the strategic communications coordination department at the State Chancellery of Latvia. We have two speakers joining us online, and I trust they are already ready. Mr. Lutz Guellner, who is head of the strategic communications, task forces and information analysis division, will be joining us from Brussels. And Ms. Anna Oosterlinck, who heads one of the research arms of the international nongovernmental organization Article 19. We also have a prerecorded message from the Under‑Secretary‑General of the United Nations for Global Communications, Ms. Melissa Fleming. She was not able to attend in person but was kind enough to record a message for us.
Now, the way we want to do it is like this: we take the first, I would say main, question, and I would ask each participant to give their views on the issue within a time frame of about five, max six minutes. Then we have the next speaker, and then we take the next question. We will always start with the speakers who are present in the room and then go over to those online, and at the end of the first round we will hear the message from Melissa Fleming.
So the first question we asked our participants to address is this: what is the state of the art, the most relevant way at the moment to conceptualize disinformation? How do we understand it in a way that's relevant for practice, for practical policy making, for actually addressing this issue in real life?
For example, what are the implications of distinguishing between misinformation, disinformation, and information manipulation? So this is a theoretical question, but it still has some very real implications when we have to think of ways to address the challenge so that we also preserve and safeguard freedom of expression online.
So how do we do it? How do we conceptualize disinformation so that freedom of expression doesn't suffer? Allan Cheboi, you are the first to speak.
>> ALLAN CHEBOI: Yeah, thank you so much. So just to do a quick introduction, as mentioned, my name is Allan Cheboi. I represent a civic tech organization called Code for Africa. It is a pan‑African organization currently operating in 23 African countries, and its primary goal is to provide factual and actionable information for citizens to make their own decisions. It is in that context that we have been primarily involved in researching disinformation and misinformation on the continent for about the last three to four years. And for this specific question I would like to illustrate it using an example that I'm going to showcase on the screen. It is good to walk back so that any layman in the room can understand that disinformation is part of a larger ecosystem of terms. As you can see, there is a red circle and a green circle. To contextualize it: misinformation is false information that you see on digital platforms, but the person who is sharing that information has no ill intention to harm the recipient of that information. Disinformation, however ‑‑ the information in the intersection between the grey and red on the screen ‑‑ is information that is false in the first place, but the person sharing it has an ill intention to deceive or to influence the decisions of the recipient of that information.
So it is important to have that in mind, because how you approach researching and countering these two elements is very, very different. Malinformation, on the other hand, can be true information. It doesn't have to be false. But the information is shared so that the person that information is about is actually going to be harmed. So when you are talking about leaks, when you are talking about someone being harassed, hate speech, all that is categorized as malinformation. But I think there is one thing that people forget when we talk about disinformation, and this is what we refer to as information manipulation, or, in other terms, what some people refer to as influence operations. It is a really huge component of this because it primarily falls under the disinformation umbrella, on a larger global scale. Information manipulation was historically used by militaries or Governments to influence.
And it is important to highlight that recently, because of the adoption of social media and technological advancement, even normal citizens ‑‑ someone seated in their own home ‑‑ can influence the decisions of millions of people in another country or another jurisdiction. They can actually use false information to do that. Right? And I'll give a quick example of how that specifically happens, and I'll use examples from the African continent. As I mentioned, we have been looking at disinformation on the African continent for a number of years. We started picking up targeted information operations originating from different jurisdictions, different other continents, trying to influence decisions in different African countries. And these are usually very high budget operations. When you talk about local disinformation or a domestic influence operation, those are the ones targeting elections ‑‑ a political candidate using disinformation against an opponent to win an election. So it is important to know that even in that domestic sphere there is a lot of investment.
But there are also global political situations where different countries try to influence decisions in other countries, and the investment there, the budgets that are used, are really massive. So we need to also be curious about what happens. And I use this specific example that we saw recently in the DRC, where we started seeing a lot of coordinated disinformation campaigns targeting MONUSCO, which is the United Nations peacekeeping mission there. The campaigns are tailored against the peacekeeping operation. And it led to the deaths of so many people, because basically people see things online for such a long time that they end up believing them, and when that happens, at the end of the day they are influenced to act in a particular way. So it is really, really important.
And I will also just show one more thing: a quick video that we identified on the Internet. The reason I'm showing this video is that it comes from a global political situation, where we saw a video start trending on the West African side. It is very high budget. It is actually an animation that had been produced showing, as you can see, zombies eating normal individuals in the Central African Republic, creating the perception that this is what is happening in the Central African Republic. The President calls on foreign Governments to intervene. As you can see in this video, these foreign forces come in with jets and they kill all the zombies. The President thanks them for the intervention, and the citizens are very happy. But at the end of the day, what is presented is that we have these angelic saviors who can come and help you. It is an influence type of campaign that we need to be contextualizing. Yeah.
>> VIKTORS MAKAROVS: Thank you. It is a very interesting illustration. But very quickly, a follow‑up question to this. So there are different types; you operate with misinformation, disinformation, and malinformation. Would you choose methods to address these that are different, or are they the same? Very quick answer, please.
>> ALLAN CHEBOI: Very different. Misinformation is dealt with using fact checking because it is content based: is it false? Is it true? That's what you are looking at. But for disinformation, the approach that we use at Code for Africa is that we have a network of data analysts and data scientists who actually audit social media platforms and the content on these platforms to identify content that we believe to be concerning or to be shared in a coordinated way. As I mentioned, disinformation has an intention behind it. There is a massive investment behind it. And to identify it, we need to go a little bit deeper and look at the drivers of that disinformation content: what are the narratives and tactics being used by disinformation operatives?
>> VIKTORS MAKAROVS: So we are looking at identifying and analyzing as the first step to any practical solution. Perhaps we can look at the other steps. Moving from the Civil Society perspective, we can now go over to the perspective of a national Government that will probably deal with all the types of phenomena that were just mentioned here. What's your conceptual framework? And based on that conceptual framework, what are your tools, as the Governmental team so to say, to address this?
>> RIHARDS BAMBALS: Thank you. Well, I will give both a short answer and a bit more elaborated answer as well. The short answer is that we have taken the definition of disinformation from the European Union, especially from the European action plan adopted a few years ago. According to the plan, disinformation is false or misleading content disseminated with the purpose to mislead or to gain political benefit. So the emphasis, as you can see and as also explained by the previous speaker, is on the motivation and intention to do harm, and to do it on purpose.
To do it intentionally. For the longer answer, I want to approach this question more broadly and speak about the meaning of disinformation on one side, and why Latvia knows a thing or two about this phenomenon. Yesterday at the opening of the IGF, the UN Secretary‑General in his address outlined two key challenges we all have to tackle in the 21st Century, namely climate change and security. However, I would argue that we are also facing a global manmade disinformation disaster. Yes, you heard it right. Disinformation, in our opinion, is a manmade disaster on a global scale.
Essentially, every disaster needs two elements to qualify as such an event. To be called a disaster, first it needs a hazardous event or pressure. And second, it needs vulnerable people, or a society affected by that event or pressure. And disinformation meets both these criteria. The WHO two years ago called this phenomenon an infodemic. In the context of the COVID‑19 pandemic, authoritarian regimes, including Russia and China, have exploited it ‑‑ and similarly, since Russia's full scale war on Ukraine, disinformation has been used as a weapon of war in many different ways.
The Kremlin's back channels serve as weapons of mass destruction. And we see societies that are vulnerable and not protected enough, due to a lack of safeguards not introduced fast enough or responsibly enough from the social media platform side. Our emotions serve as catalysts and work against us. So we have pressure points on one side and vulnerable societies on the other, and we are facing a global manmade disaster called disinformation. Now for the more elaborate answer: why should one listen to or learn from Latvia? First and foremost, we are a small country from the north of Europe and we don't have any hidden agenda. We come here to share the knowledge we have accumulated over the years. And we are a nation that has chosen to champion a few causes, especially when it comes to media freedom and the security of the information space. We have also lived next to Russia for centuries, and especially since the restoration of our independence in the 1990s we have seen the full spectrum of disinformation and have been targeted by different manipulation methods. We know how to recognize them, and we know how to counter them. We want to share this with other countries that are ready to listen and learn how to protect themselves against this information disaster, and we have used every multilateral forum to champion and share our knowledge, including the United Nations. And it should come as no surprise that last year, in 2021, Latvia, together with Australia and Jamaica, championed and spearheaded a UN resolution on countering disinformation. We have used other formats as well, and we have built a Centre of Excellence on strategic communications in our capital, as well as centres of excellence on protecting media freedom and supporting media throughout the region, among other formats. Nationally, we have chosen a holistic approach based on a whole‑of‑Government and whole‑of‑society strategy.
My team does this work in the central Government, at the Government office. And what kind of functions do we have?
We provide monitoring for Government decision makers, so they can see the full spectrum of the information space, including potentially malicious behavior. We also coordinate Government communications. We do a lot of capacity building. We do strategic crisis communication during a crisis. And a lot of international collaboration and cooperation, including with the tech platforms.
However, we understood quite early on that a single unit, a single team, will not be able to stop all the billions of dollars being spent each year by the Kremlin, and even more by China, on spreading disinformation, and we will never be able to compete with the human power coming from the other side.
Therefore, we need to choose a different strategy: not just to react and debunk everything, but to be proactive. And we have to build our strategy on empowerment and on giving agency ‑‑ giving agency to other Government institutions, to municipalities, but also to society at large. So our national strategy and approach to information space security is built on three pillars. All three pillars are equally important, and the whole system is only as strong as its weakest link. Effective Government communication is one pillar. The second pillar is quality independent journalism and media, and the third pillar is the societal level, which in my opinion is the most important one. So we are building a lot of capacity. We are investing in media and information literacy, and we have a lot of know‑how on how to do it. I will stop here. But I'm also willing to share how we have applied this framework this year, for example, in the context of Russia's war in Ukraine since the 24th of February. Thank you.
>> VIKTORS MAKAROVS: Thank you. It was a little bit over the allotted time, but I think we already got part of the answer to the second question that follows: how do we do it. We can come back to that ‑‑ a whole arsenal of different tools. From Allan we heard: monitor and analyze. Fact checking, I understand, is your contribution as Civil Society. So we haven't used the censorship tool, which is great. Now we can go to our online speakers. And I would like to go to Lutz Guellner first of all, because the EU indeed has recognized that disinformation is a great challenge. It has been developing tools. There are plans and documents that we have already mentioned. But at the same time the EU has always been very clear that addressing disinformation must be done within the framework of Human Rights and fundamental rights, including freedom of expression.
So Lutz, what's the framework that you work with every day? And how do you use it in this rights‑compliant manner? Maybe you can tell us what your limits are, the things that you do not do as the EU, because you have to stay within this rights‑compliant framework.
>> LUTZ GUELLNER: Thank you very much. And very nice to speak to you and sorry that I cannot be there in person.
But I think the key question that we need to answer is exactly the one that Allan put on the table: what exactly is the problem? Because what happens very often is that we design responses to a problem before we have actually said precisely what the problem is. And if you allow me, I would love to share just one slide with you. Can you see it? It's a slide. Let me do this here. Voilà. It shows you basically five things that we need to keep in mind when we speak about disinformation. As Allan already said, there are so many different forms ‑‑ malinformation, misinformation, et cetera ‑‑ and the key thing is what is it that we want to stop. Do we want to stop information pollution in general? I think this would be a very dangerous path. Information pollution exists everywhere, and it is nothing that we can solve with any regulatory or Governmental approaches.
So first, distinguish the different elements from each other. And I would follow to a large degree exactly Allan's categorization: misinformation is unintentional, and therefore a different challenge.
Disinformation already has a very, very clear intention behind it, but I would also like to highlight that disinformation in that sense can be very much focused on economic gains, for example. It is not always only a political enterprise. But we have a third category, and that might be very, very important for us to keep in mind when we speak here about Internet Governance, or about United Nations actions, et cetera: state actors are also very active in this field. State actors, using this tool, using these instruments for their own strategic aims, for their activities.
And what is important here is that what they are doing, what we are seeing every day, is not necessarily the classic perception of constructed and false content. As you see here on the slide, we basically need to look at five different components. The first one is that it needs to be harmful; otherwise we shouldn't care, because nonharmful content would not be a big problem. The next challenge is that it is not illegal. We don't have clear laws against it, and we should not have such laws, for the reasons we are discussing here. There needs to be an element of manipulation in there. Manipulation. And then, exactly as Allan said, the intentional element needs to be there.
And last but not least, there is the element of coordination. Because if it is just a single event, a single element that is happening, then it wouldn't be a problem. So my proposal for our discussion is: let's really distinguish these different things, because the policies flowing from that will need to react to them.
Our approach has been to focus on what we call the ABC model, so not only on the content. Why? Because very often the content in itself is not necessarily false. It is not even verifiably false. A lot of the disinformation and information manipulation that is being used operates with facts. The presentation, the way it is presented, is the disinformation activity, the information manipulation activity. So it is very important that we don't only focus on the content but on the B of the ABC, and that's the behavior: how it is produced, what techniques they are using, what manipulative tactics they are using. That allows us to move away from a focus on content, which is a particularly dangerous path.
So, the ABC model. A ‑‑ I should have also mentioned ‑‑ means you need to understand the actor behind it and his or her intentions. Otherwise you cannot complete the picture.
And let me close by saying that manipulation cannot only happen at the level of content, as I said, but also at the level of, for example, identities. We often don't take enough into account that fake accounts in networks, et cetera, are as important as falsified content. Sometimes these techniques try to amplify specific narratives, try to latch on to debates that already exist in a society, and just by amplifying them they reach their strategic aims. This manipulation of identities is important, and manipulation of reach is important. We have tried to put all this together, focusing on this behavior, and have developed a four‑area approach that I'm happy to come back to in the next question. But for me it is very important again to underline that we should not play, let's say, the role of the police that can give an indication of what can be said and what cannot be said, what is good and what is bad, but rather identify these practices, or what we call the TTPs ‑‑ tactics, techniques and procedures ‑‑ that are being used to manipulate. And that will enable us to move away from the tendency that some members, even in the United Nations system, have taken to enact so‑called fake news laws or disinformation laws that focus on content, that prescribe what is good, what is bad and what cannot be said. That is very often used as a pretext for censorship. The approach that I just laid out would allow us to do this in an objective manner.
It would allow us to tackle a lot of the problems in there. And in the next round of questions I will show you how we do this with different tools.
>> VIKTORS MAKAROVS: Thank you very much. We will come back to the tools in the next question. But I must warn that the next round is going to be very quick, as we are running short of time. We now go over to Anna Oosterlinck to address the same question: the conceptual thinking behind disinformation and how you conceptualize responses that are based on respect for freedom of expression. It is obvious that for an organization called Article 19 this must be a topic that you have an opinion on. So Anna, please, the floor is yours. And please keep it to five minutes. Thank you.
>> ANNA OOSTERLINCK: Thank you so much. With the pleasure of going last, I will have to stick to the time. Thank you for the floor, and thank you to Latvia for organizing this session. My name is Anna Oosterlinck and I speak on behalf of Article 19, which is an international Human Rights organization promoting freedom of expression and related rights.
We have been working a lot on the right to freedom of expression and information vis‑à‑vis disinformation, in particular during the COVID‑19 pandemic. I'm going to try to answer the question in five quick points and then leave a little bit for the second round. As we all know, disinformation, misinformation and propaganda are not new. As an organization we fully recognize that the issue has recently re‑emerged in a digital society and has triggered debates over politics, social media and the exercise of freedom of expression, especially on social media, and that it can cause significant harm. This was clearly evidenced during the COVID‑19 pandemic, with wild claims about alleged remedies and conspiracy theories over its origins.
But the first point I would like to make is that the problem is, of course, not only one of digital technologies. Disinformation must be seen in a wider context, including, one, the reduced pluralism and diversity of the information that we can access online; two, the challenges connected to the digital transformation of the media; and three, the underlying social causes, including economic and social inequalities, leading to mistrust and polarization. All these factors combined ultimately create an environment where disinformation can flourish. Now, the second point I'd like to make is that although these different concepts are being used, we do like to remind everyone, as I'm sure we are all aware, that there is no agreed definition in international or regional Human Rights law. And that's where we would start as an organization called Article 19: what is in international law? So there is no agreed definition. The third point I would like to make is that although we have seen calls to address disinformation intensifying in recent months and years, and particularly during the COVID‑19 pandemic, we would like to point out that the issue has already been addressed to some extent in different legal fields.
Think, for example, of laws on defamation: restrictions on false statements of fact that cause substantial harm to a person's reputation. Or laws on election fraud, or laws on misleading advertising or the sale of certain products, and so forth, just to name a few examples. There is already some protection out there against the harmful effects of disinformation. The fourth point I would like to make is that if states feel they need to restrict freedom of expression, we remind them of Article 19 of the ICCPR and its three‑part test for how freedom of expression may be restricted. What we have found is that restrictive legislation on disinformation typically fails this three‑part test, meaning it does not meet the principles of legality, legitimacy, and necessity and proportionality. We are very concerned about attempts to enact a legal duty of truth. And we don't agree with using the concept of disinformation or related concepts in legislation.
In the interest of time, I want to highlight one of the three principles to show how our concerns play out. As we all know, there is no universally agreed definition. So consider the principle of legality, which means restrictions on free speech must be formulated with sufficient precision for people to foresee the consequences of their actions. Given that disinformation is a very complex problem, we believe that any attempt to define disinformation or capture all its complexities in one catch‑all definition will be inherently broad and vague. I could talk more about the other principles, but I wanted to focus on one. As an organization we don't advocate restricting disinformation through specific legislation, or doing it in isolation. My fifth point is what we do recommend: holistic, positive measures, and I can talk about this more in the next question. But just to say that Special Rapporteur Irene Khan summed up our primary position on disinformation very well: the right to freedom of opinion and expression is not part of the problem, it is the means ‑‑ that would be our answer. The answer to the problem is a range of positive, holistic measures by a range of actors. But I can come back to that in the next question. I thank you.
>> VIKTORS MAKAROVS: Thank you very much. There will be a next round where we will ask the speakers to very quickly name the tools that they believe have to be used. But before we go to the second round, we have a message recorded for us by Ms. Melissa Fleming, to share with us how the United Nations tackles the issue of disinformation, what the most important developments and processes are, and what examples of cooperation within the United Nations framework on addressing disinformation we should look at. We can now go over to the message from Melissa Fleming.
>> Could you please play the video in the back?
>> MELISSA FLEMING: Here at the United Nations we have been monitoring for years how lies are poisoning our societies and how mis‑ and disinformation spread online and cause real harm to our world. We certainly saw it at the height of COVID‑19: when the pandemic hit, all kinds of conspiracy theories emerged, placing public health in grave and imminent danger. We saw claims that the pandemic was a hoax, claims that it was planned to trigger the rise of a new world order, claims about miracle fake cures, or that vaccines are a plot to depopulate the planet.
And we're also seeing this in relation to the climate emergency, where vested interests are financing the deliberate undermining of science, doing it to delay climate action and preparedness. These actors are using tactics that range from direct climate change denial to so‑called woke washing, namely framing climate action as corrupt or elitist, in order to spread doomism or fatalism.
We are also seeing the harmful effects of online disinformation in many conflict situations around the world. Back in 2018 the UN found that disinformation and hate speech spread online played a significant role in stoking horrific atrocities against the Rohingya population. They pushed ordinary citizens to commit unspeakable acts. Similar stories have emerged in many other conflict settings. For example, recently in Ethiopia there were Facebook posts that spread hate and inspired attacks. In Iraq, militant groups are spreading sectarian hate on YouTube and Facebook. And the same dynamics are playing out in Ukraine, where information is also being used as a weapon of war.
Meanwhile, in Ukraine's neighboring countries, we're seeing the spreading of lies about refugees, causing the most vulnerable once again to suffer.
Analyzing these phenomena, we realize that in less than two decades the design flaws of social media platforms have come to turbocharge the real harm inflicted on our world. Social media platforms are hard wired to drive engagement, but this engagement puts profit above civility. They amplify provocative material over facts. They can generate outrage and division and downplay the informed, nuanced debate that we need so desperately for our world. Of course, platforms are also crucial tools for those working to make the world a better place.
For example, in autocratic states they allow people to seek out banned news. In war zones they allow uprooted people to keep in touch. And movements have been born on social media that have improved Human Rights. This is why in several countries digital platforms are being pressed by the authorities to take steps undermining free speech, either by taking down entirely legitimate content or by using upload filters.
The UN has urged the platforms to respond to such demands by standing up for the rights to privacy and free expression, and by reporting on pressures that infringe those rights with full transparency and speed. The Secretary‑General himself has underlined that a human‑centered digital space begins with the protection of free speech, freedom of expression, and the right to online autonomy and privacy.
But free speech is not a free pass, he also noted. In the era of mis‑ and disinformation, free speech is about much more than saying whatever you want online. Free speech is not about being unorthodox for the thrill of it. It stops at hatred. Platforms must face the fact that they are constantly being abused by bad actors and live up to their responsibility to protect human rights and save lives.
For this reason at the United Nations we are constantly engaging with the platforms and advocating that they do their human rights due diligence, but also review their business models against the UN Guiding Principles on Business and Human Rights.
We want platforms to offer a robust framework to reduce the spread of harmful falsehoods and establish mechanisms to remedy them.
Especially in conflict situations, platforms need human moderators to review content in real time, attuned also to the local and regional contexts. Moreover, we want to see platforms adopt policies that limit the monetization of harm.
I want to also say that my social media team works day in and day out to distill trustworthy UN information into accessible posts for our millions of followers, which we share through the social media platforms.
An example of this effort is an initiative called Verified, which we launched together with the social impact agency Purpose in response to the COVID‑19 pandemic. This was to get accurate, life‑saving information out to communities around the world and compete in those very same spaces where disinformation actors were having such an influence on people's decisions.
We are also working to strengthen the capacity of social media users to identify and avoid lies on their own, by promoting media and information literacy and by creating our own teaching tools. Among others, my team has launched two free online digital literacy courses on mis‑ and disinformation, in collaboration with wikiHow and in multiple languages, that are being taken by students all over the world, hopefully improving their ability to spot mis‑ and disinformation and not become part of the spreading problem.
Last but not least, we are advocating to states as well to promote various measures to encourage the free flow of information to enhance media diversity and to support independent public interest media as a means of countering disinformation.
>> VIKTORS MAKAROVS: A clear and strong message from Melissa Fleming speaking on behalf of the United Nations. She also mentioned some things that the UN does to address the issue. Now the second round is where we want to hear from our speakers what the most important instruments are. And I think we can also attach the third question to this one: what in your opinion could be done to help other countries mobilize the knowledge and resources that are needed to address this issue effectively. So what do you do in your domestic area, and what do you think can be done to cooperate better on the topic of disinformation? We go in the same order, starting with you, Allan. Please very concisely.
>> ALLAN CHEBOI: Thank you so much. I want to actually insist on what Anna said, which I think is one very important thing when it comes to the research that we do. Someone mentioned that they want to know the tools we use and how we use them. We do use social media monitoring and digital monitoring tools. But that's just a means to an end, to give you access to the information that is available on the platforms. It doesn't necessarily tell you that this is misinformation. So one clear strategy that everyone needs to know is that the tools are just a means to an end. You need to actually have human intervention in any type of analysis.
One thing is clear: there are malign actors, and we need to be safeguarding these specific spaces where we are having the conversations, like social media and the media. Because this is where our young children, this is where our youth get their information. There is an interesting piece of research that I got recently saying that 40% of young people, Gen Z, go to TikTok first for information. It used to be Google. So we need to be thinking about that. And just one final thing that I wanted to insist on: as Anna said, we should not be regulating disinformation itself. We need to be regulating the ‑‑ because disinformation actually thrives when rights are being abused, right? We need to be looking at regulatory frameworks that address things that go against the rights of individuals in different countries, right? So we need to be customizing these legal frameworks. We need to be customizing the laws to actually include disinformation monitoring as an element, but not monitoring disinformation itself, because people have other intentions around it. Thank you.
>> VIKTORS MAKAROVS: Thank you. And over to you Rihards. Very quickly.
>> RIHARDS BAMBALS: I think I outlined them already in my previous intervention. All three pillars are important. We have this approach based on a whole‑of‑Government, whole‑of‑society approach. Every Government ministry and institution has to get involved, and every member of society has to have a minimal level of awareness of disinformation and of manipulation techniques, and recognize how to counter them, how to report abusive content to the social media platforms themselves, and how to report to the state police and other services. The main strategy behind this is to limit the space of spread, to create vacuum conditions for disinformation. If there is no air or oil for this fire to continue, then the disinformation will not be able to spread and jump from one platform to another, from one interest group to another. We have to turn off the air by working on and strengthening the pillars. With effective Government communication, if everyone works well with their target audiences, there will be no misinformation and no confusion. If we supported independent quality journalism more, then we would not have to deal with the influence of what I pointed to as weapons of mass destruction, which I wouldn't necessarily call media: the ones that are Kremlin state sponsored. And the third is media and information literacy, the societal resilience level, which is key. And we have built a lot for this. We have created, for example, a digital manual, available not only to civil servants but to every citizen of the country, on how to recognize disinformation techniques, what the most often used disinformation narratives against Latvia are on the content side, and how to recognize and counter them with facts.
>> VIKTORS MAKAROVS: Thank you. There is a manual. You can get one but I guess it is in Latvian. It will need to be translated. Lutz, over to you. I know you love toolboxes and tools.
>> LUTZ GUELLNER: I love toolboxes. In principle we did a lot of work in thinking about this, and I'm very happy that it seems to be very much in line with the thinking of the Civil Society community, in particular what Anna has said and also what Allan has said. The first one is situational awareness. We need to have tools to understand what is going on. We need to make it public, we need to expose. We need to get access to data in this field. It is crucial to have all this, and to be able to analyze it. Because seeing is very often already a good antidote, if you know what is going on and how these techniques are being used. The second point is maybe the obvious one, the one that is very often mentioned but is difficult to do. It has many different elements. It is what we call building societal resilience. It can be fact checking initiatives. It can be support for a more diverse, competitive and highly professional media system. It can be support for media literacy initiatives, et cetera. The list is very, very long. And of course it always depends on the mix for the specific situation. But there I'm also very eager to underline that we work with a lot of UN countries, a lot of UN members, to help either the Governments or Civil Societies, et cetera, to develop these skills.
That would be the second box. The third box is something that Ms. Melissa Fleming mentioned: we need to think about regulation. Not of the content, but regulation of behavior, for example, of Internet companies or of social media platforms. The European Union has put in place a law on this. It is called the Digital Services Act. And it does not regulate the content. It does not say what can or cannot be said on these platforms. We turned it around. We looked at the risk that the very prominent position these platforms have, and the discussions that are happening on them, bear for society. And we put a risk management and risk mitigation approach in there. So there are clear rules also for the platforms in this area.
But I really have to underline, and that brings me to the fourth box, that regulating the platforms will not solve the issue; it will be only one of the elements. And that's why we also need to look into broader issues in our UN family. For example, we are clearly thinking about rules for state actors in this field. We should think about this. What is responsible state behavior in this field? Is a deliberate strategy to manipulate another country by using disinformation, information manipulation or technical means a legitimate way of conducting international relations? And you see how I put this rhetorical question out. The last point, very, very important, is that we do not regulate what can be said and who says what, et cetera. That would be the wrong way. We want to protect the freedom of speech from malign interference, from manipulation, from outside manipulation. This approach will safeguard the freedom of speech, not restrict it. Thank you.
>> VIKTORS MAKAROVS: Thank you very much. We will eat a little bit into the break time between the sessions. So we can go to Anna for a very brief response on the tools, and then we will try to take at least one question from the audience and one online.
>> ANNA OOSTERLINCK: You can imagine I have a lot to say on this question. But we believe that holistic and positive measures, firmly grounded in the right to freedom of expression and other Human Rights, of course, are the best solution. So I want to give six suggestions for states. This is not an exhaustive list, but in the interest of time I will try to keep it as brief as possible. Six clear ideas. One, ensure a diverse, free and independent media environment, in particular through clear regulatory frameworks that ensure self‑governance and independence for the media and broadcasting sector. We also need strong protections online and offline for journalists and media workers. We know that media can facilitate the free flow of information. Second, implement comprehensive right‑to‑information laws, including by complying with the principle of maximum disclosure of information and releasing information of public interest. And, of course, Governments should not be spreading disinformation themselves.
Third, ensure connectivity to an accessible, free, open, reliable and secure Internet, the topic of this Forum; the digital divide remains a huge barrier ‑‑ four, invest in digital media and information literacy, as mentioned by others. Five, adopt positive policy measures to combat online hate speech in line with Human Rights Council Resolution 16/18, the Rabat Plan of Action and all relevant Human Rights standards. And six, work with companies to ensure they respect Human Rights. In terms of companies, digital companies, and in particular the dominant social media platforms, have to be a key part of the solution. Article 19 believes that addressing disinformation on social media platforms must be considered, as Melissa Fleming said, in the larger context of their business models and also in terms of the deficiencies in their content moderation practices.
But, and we are seeing this increasingly happening already, social media platforms can utilize a variety of flexible responses that do comply. Rather than banning users or deleting inaccurate content, they can modify algorithms or affix labels or warnings. They need to articulate clear policies in line with Human Rights. They also need to ensure minimum due process guarantees and full transparency. And in terms of states, states need to promote oversight of social media platforms by independent multi‑stakeholder institutions, which in our view offers the best solution, adaptable to the ever changing context of online communication, rather than trying to regulate the content that should be restricted.
>> VIKTORS MAKAROVS: Thank you. I think we have to stop here to be able to take one question. I can see one over there. Please, sir. There will be a mic coming to you, I hope.
>> My name is Rob Plumber. I'm from Finland. I wanted to ask: would it be possible for the UN or these other organizations that are concerned about misinformation to help create a course, maybe internationally, for schools to educate children on media criticism?
>> VIKTORS MAKAROVS: Good question. Who would take it? So how ‑‑ yeah. Maybe it is a good idea. Please yeah. We'll take three questions and then we will ask speakers to respond.
>> This is Alysha from Access Now. I would be interested if we put aside the issue of content moderation of disinformation and misinformation online and look maybe a bit deeper into content recommender systems and how content is being curated, where we see that that's one of the main reasons disinformation is being amplified. But my question is connected to another tool that is currently being introduced in a number of legislations, including in the EU, and that is the so‑called media exemption. A number of media are often bad actors ‑‑
>> VIKTORS MAKAROVS: Please phrase your question.
>> My question is about the efficiency of the media exemption, whereby the online platforms do not moderate media content, which receives certain privileged treatment.
>> VIKTORS MAKAROVS: Anyone who wants to reflect on the media exemption, welcome. Please your question.
>> Thank you so much. Starting from February 24th, we have faced an enormous amount of ‑‑
>> VIKTORS MAKAROVS: Excuse me. First of all, this is a session for questions.
>> Yeah, I had a question.
>> VIKTORS MAKAROVS: Phrase the question.
>> A global platform, instead of keeping neutrality, chose one side of the conflict, was helping to deliver these fakes and disinformation, and was banned in Russia. Do you think that platforms must be neutral in conflicts, or do they have a right to choose a side? Thank you.
>> VIKTORS MAKAROVS: That's an interesting question. Shall we start with you Allan? Would you like to reflect on any of those questions? Please do it within one minute, max.
>> ALLAN CHEBOI: So I will start with the first question and maybe respond also to the last question. About the course: there is a lot of content available out there. We need to have a unifying organization to collect all this information on how to counter mis‑ and disinformation, so that we can create a curriculum that can be seeded across different countries. I do agree with you on that. And I would like to hear from my counterpart. On the last question, I think basically that is why the platforms need to counter disinformation. They should not have a preference for any type of information. I do agree with you that platforms need to be strategic enough to look at disinformation as not being specific to specific actors; it is actually just about fighting disinformation itself. But state actors need to be held accountable whenever that happens.
>> VIKTORS MAKAROVS: Teaching about disinformation; the media exemption, if you have a comment on that; and should the platforms be neutral?
>> RIHARDS BAMBALS: I will leave that to the EU panelist from the EEAS. On media and information literacy, I think implementation is key. We already have so many national and international initiatives. We don't always have to reinvent the bicycle from scratch; it is about who rides the bicycle, and just riding it. Implementation is key. On the platforms, it is a challenging question. I mean, we could devote a whole session to this. Platforms are not media themselves, so they don't necessarily have to follow the traditional media rules.
>> VIKTORS MAKAROVS: Important comment. We go very quickly to our remote speakers. Lutz, so teaching about disinformation and media literacy, if you have something to say about the media exemption and should platforms be neutral?
>> LUTZ GUELLNER: Very, very briefly. Media literacy is key. We should not centralize it at the level of the United Nations; there is no one size fits all. It needs to be tailored to specific audiences, et cetera. So I think that's something we need to push. There should also be support, a kind of pool of material, et cetera. But please, not one single and only approach to it, because that would be quite challenging, I think.
Second point, about the media exemption: it would even take some time to explain what it is. But we do not have the media exemption in our Digital Services Act, even after wide discussions. There are some arguments in favor and some against. Those that are against, and I'm also in this camp, recognize that a lot of media are actually not really media. They are used as instruments, for example, of other states; look at RT, for example, which has been used as an instrument to support exactly what is happening in the Russian war against Ukraine. Should platforms be neutral or not? They need to comply with the law. But, of course, that can become very, very complicated for the platforms at the moment when they become subject to very problematic laws in some of the countries they are operating in. For me, the key principle is the core element, and that is upholding freedom of speech. But, of course, as everyone said, freedom of speech is not a free pass for everything, and there we are very, very close to what the colleague from Article 19 said beforehand.
>> VIKTORS MAKAROVS: Thank you very much. And really, the next session is starting now. So you have 30 seconds to respond to what you believe is most important in this discussion.
>> ANNA OOSTERLINCK: Okay. As quickly as I can. One, of course, as I said, we talked about the literacy aspect; various initiatives exist, and more can be done. Second, I'm going to park that; I can't do it in 30 seconds. Third, I would like to refer to the recent report from the Special Rapporteur, which has a lot of very good recommendations in terms of how to deal with disinformation in the context of war, and I would like to refer to these excellent recommendations. Thank you.
>> VIKTORS MAKAROVS: Now we are really short on time. We have to stop here. I would like to thank our speakers for the excellent ideas, and thank the audience for being here, and I hope we can return to the topic again. Thank you very much.