IGF 2024 - Day 0 - Plenary - High-Level Session 1: Navigating the Misinformation Maze - Strategic Cooperation for a Trusted Digital Future

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> BARBARA CARFAGNA: Hello, everybody. I'm proud to share the Italian perspective. Now cyberspace has become a tool used not only to obtain information or cause physical harm, but also to influence public opinion, including by terrorists and criminals.

(No English Translation on Audio.)

We will welcome Esam AlWagait, Director of the National Information Centre, Saudi Data and AI Authority. Welcome.

Then Mrs. Deemah Alyahya, Secretary General of the Digital Cooperation Organization.  Welcome.

His Excellency Mr. Mohammed Ali AlQaed, Chief Executive, Information and eGovernment Authority.

Assistant Secretary General Ms. Natalia Gherman, welcome.

Mr. Pearse O'Donohue, Director for the Future Networks Directorate of Digital Governance, European Commission.  Welcome.

Mr. Khaled Mansour, member of Meta Oversight Board.  Welcome.

Okay.

We can start with Ms. Deemah Alyahya: what are the most prevalent digital sources contributing to the spread of misinformation today, and how has the landscape evolved in recent years? Maybe we can use that mic.

>> DEEMAH ALYAHYA: Thank you very much. It's great to start the morning with a subject so important and so profound at this point in time, as we celebrate new innovations and the progression of emerging technologies, but also look at how we can safeguard the use of the Internet.

We've seen that the Internet has opened amazing opportunities for human prosperity, from increasing productivity to improving quality of life. And we do see that platforms, like social media platforms, have been tools to help in finding jobs as well as education. But then we come to a very big issue, which is the harmful part of the Internet and of using social media platforms: misinformation and the harmful use of information.

What is also very alarming is that, for us to benefit more from AI, for instance, these algorithms are trained on online data, and if that data is false or fake news, that is a very big challenge in terms of the sources AI learns from.

And this is why, when you ask me that question, we truly see that you cannot pinpoint one institution, entity, or person responsible for this issue. It is a collective responsibility, from Governments to the Private Sector, from innovators to human capital, as well as civil society.

And therefore we have to look at this challenge with a collective eye and a united force for collaboration. This is why, in the DCO, we have created a facility that brings Governments together with innovators and civil society to think together, to co-create and co-design initiatives and a way forward to reduce these kinds of issues.

>> BARBARA CARFAGNA: Thank you. Mr. Khaled Mansour, what are the sources that contribute to the spread of misinformation today?

>> KHALED MANSOUR: Don't run to the translators; I will speak in English. It was a week ago today that the Assad regime fell in Syria, and there came a flood of people coming out of prisons and a flood of images on social media: jubilation, happiness, mothers embracing sons and daughters in many cases. But in parallel there was also a flood of lies, rumors, and what we call misinformation.

And many of our friends, colleagues, politicians and journalists swallowed this gullibly, because the main sources of the digital spread of misinformation are as old as humanity.

Since people started to communicate there have been lies, ignorance, and deception, willful and non-willful; there has been self-interest, exaggeration, biases, et cetera.

What has changed in the last 15 to 20 years is the exacerbation and acceleration of this trend. Access: all of us are glued to our smartphones, from Bangladesh to Mexico. The flood: there is a flood of information all of the time; all of us wake up in the morning and scrolling is the first thing we do and the last thing we do at night. Meta alone, on whose Oversight Board I serve, has three to four billion users.

This means three to four billion pieces of content. Everything is immediate. A long time ago in this country, if you wrote a poem bad-mouthing somebody, a month later he would write a poem in return.

Or if you wrote an Op-Ed in The New York Times, a week later there would be an Op-Ed in response. Now it's immediate. People don't wait to check whether the information they are using is misinformation or not. And technology, AI technology, makes it much easier to make misinformation look far more believable.

And finally, there are coordinated campaigns by Governments and by corporations, for nefarious purposes.

So it is important to know what the sources of misinformation are if we are to address it, and address it well. And I think all of us, in various ways, individuals, Governments, corporations, are implicated in this. And one of the main victims that we don't really speak about, it's not just that we are deceived by misinformation: over time, the worst, enduring effect is the undermining of public trust in the information we receive from social media platforms.

Everything appears and becomes fake news.

Misinformation, as all of us know, was catapulted into our public debates in 2016 because of the elections in the U.S., and then for all of us because of COVID and the high level of misinformation we all experienced.

There was harm, perceived harm to the elections as well as harm to health. But there is another very important concern that we don't pay attention to, which is the lack of accurate information and good media all around since then.

While you are trying to fight misinformation, and as I conclude, you have to do this while avoiding censorship in repressive environments and while avoiding exacerbating violence, because I started with the flood of happy images from Syria, and misinformation kills. Spreading misinformation in times of conflict, from Myanmar to Sudan to Syria, can be murderous.

It's very important to recognize that one of the very reasons for the spread of misinformation is lack of access to accurate information, what is currently called information integrity. Information integrity is in trouble. Accurate information from credible media sources has declined or faced tough times due to economic reasons. Users also face mounting challenges from the flood we spoke about, because there is a persistent need to cultivate critical thinking; all those politicians who believed the misinformation coming out of Syria need to cultivate their ability to read, understand and analyze media much better. Thank you.

>> BARBARA CARFAGNA: Thank you. So as we can see, and as I told you in the introduction, it is as if the attacks that were once aimed at infrastructure are now aimed at our minds. So it's very important to understand this new situation we are entering now that Generative AI has introduced techniques so fast, so fast, that we can hardly get a hold on them.

So Mr. Mohammed Ali AlQaed, I ask you the question about the most prevalent digital sources.

>> MOHAMMED ALI ALQAED: Sure. So when we talk about misinformation, we have to understand two things: first, how fast it propagates, and second, the coverage or reach of this misinformation. Because let's face it, misinformation has always been there, but before it used to have a limited reach, and it used to propagate much more slowly.

Nowadays, with the Internet and social media, misinformation propagates much faster and almost everywhere. So let me give you some stats. According to a survey by UNESCO, 68% of people say that they get their news from social media.

38% get it from online messaging. And this is a shocking fact: only 20% get it from online media.

So it's obvious that the main source for propagating misinformation is social media. So what are the forms of misinformation? I would like to classify it in two parts.

The first one is intentional misinformation, where you have people or groups actively trying to push misinformation. And nowadays misinformation is not only text, as it used to be. With the help of AI, and there are a lot of tools, you can generate video and audio so realistic that the average person cannot tell that it is actually fake.

So with the combination of AI tools and social media, we have very dangerous ways of spreading misinformation. And information integrity is more important than ever today.

So if you ask me, I would say that the main sources for spreading misinformation are social media and online messaging apps.

>> BARBARA CARFAGNA: We have seen that with Generative AI we can also build vertical Generative AI. Before, through social media, we could profile groups; now, with Generative AI, we can profile one person exactly, with their activities, their orientation, their desires, and also their vulnerabilities. So it's like we have a precision weapon, and the capability to build, very fast, through a bot for example, fake information tailored for that person.

So it's more and more convincing, and this is what the experts call super-persuasion. It is not persuasion as we knew it before, but super-persuasion: crafted just for that one person.

This is, of course, very relevant to terrorist recruitment, and that's why I put the question to Natalia Gherman, so we can chart which are the most prevalent digital sources.

>> NATALIA GHERMAN: Thank you, and good morning, ladies and gentlemen. It is a great pleasure to be here. As Madam Moderator mentioned, I represent the United Nations Counter-Terrorism Committee Executive Directorate, and my office is focused on the spread of terrorist and violent extremist messaging, online and offline.

However, there are great similarities in how misinformation, disinformation and terrorist-related messaging are created, posted and propagated.

And in terms of the changing landscape, I should say that the COVID-19 pandemic led, of course, to a massive rise in people joining cyberspace. The sheer number of people using the Internet and social media now is staggering. And we have also seen an explosion in the number of gaming and social media platforms, messaging systems and online spaces.

So in terms of malicious content online, we in the United Nations highlight that unmoderated spaces are major hubs for misinformation and terrorist content. These are, first of all, social media platforms and messaging systems with deliberately lax content policies; then there are, of course, small platforms lacking the capacity to effectively moderate content, and hidden chat rooms and sites.

I also want to draw your attention to the rise of influencers with millions of followers, who, combined with algorithms pushing content, have flooded social media and messaging services with misinformation and worse content.

So we are in a time when just a handful of people can seed widespread misinformation. There are two trends we in the Counter-Terrorism Executive Directorate have noted while assessing Member States' capacities to prevent and counter terrorism, and they are, ironically, at opposite ends of the technology spectrum.

On the one hand, in dialogue with the Member States, we have seen the ever-increasing use of new technologies like chatbots, Generative AI, and other AI-powered tools to generate and spread terrorist-related content and other malicious messaging.

This has, by the way, led to the creation of credible avatars and deepfake video and audio used for criminal purposes, for the spread of misinformation, and also to incite violence. On the other hand, we have seen an uptick in so-called old-fashioned technologies, like the use of terrorist-operated websites and human support networks to help spread messages to followers across diverse platforms.

These methods very often rely on hiding content in hard-to-find channels and delivering it to selected audiences. In both cases, detecting, tracking and countering the spread of harmful content poses ever-increasing challenges to Governments, to Member States and to all professionals. Thank you.

>> BARBARA CARFAGNA: Thank you. Mr. Mohammed Ali AlQaed, the same question on the most prevalent digital sources?

>> MOHAMMED ALI ALQAED: It's a great pleasure to be here at this fantastic event. I would like to congratulate the Digital Government Authority for organising this and for inviting me to be part of this distinguished panel.

Of course, technological advancements, connectivity on social media platforms, and the speed at which information spreads have changed the behavior of society, and many people who used to go to traditional media for information now use the new channels. I think this is due to the fact that people cannot wait; they want instant news.

I recall once there was news which had just aired, and somebody called me and said, you don't know what's going on? And I said, what's that? He said, this and this happened. When I looked at it, it had been out for maybe 30 minutes. I was 30 minutes behind.

That, I think, together with the reaction of traditional media and Governments to this change in behavior and in the way people deal with news, created this new way of spreading misinformation, because everybody became a producer; new creators, social media activists and many others, became the new producers of information.

Technologies contribute to that: for example, the algorithms used by social media push the information which is more clickable or more sensational to the top, and I think that is one reason for the misinformation. The other thing is that the more people try to distinguish between fake and true information, the more deepfakes and AI create ever more sophisticated content.

And, of course, the encryption of social media platforms makes it more difficult, because you cannot control who is spreading what kind of information; it's between the people themselves. Then there is the societal and psychological side: many people, whenever they receive a piece of information, treat it as the absolute truth, and it's very difficult to negate it. It's easy to convince someone, but much more difficult to change their mind.

And, of course, when messages come from your friends and your network, they are more credible to you, so that's another reason. Then fear and emotion usually spread more widely. And there is the economic side: anything that spreads more is more clickable, so people will try to spread that information to get more followers and sometimes to monetize that content.

So that, in summary, is what happened to change the behavior of society.

>> BARBARA CARFAGNA: The important point here is that once someone is convinced, it's hard to bring them back. So you have to act against misinformation and disinformation beforehand, with different methods, of course, and approach the problem from different points of view. That's why I ask the same question to Mr. Pearse O'Donohue: how do the rules work for this, and how can they change? Technology changes so fast; how can the rules keep up with the speed?

>> PEARSE O'DONOHUE: Thank you, yes.

You have already heard several insightful answers, so I will perhaps complement them by taking a slightly different approach, which is to say that, in identifying the most prevalent sources, it is simply the volume, the number of different sources that exist, that contributes as a whole to the existence and prevalence of misinformation.

And that is an issue to do with everything that has been said about its prevalence, but also its scope, range and speed. The second point is that at some point we do have to make a distinction between misinformation and disinformation. Disinformation consists of targeted untrue statements, facts, even now videos with AI support, intended to actually mislead the public or individuals; in some cases it is so obvious that a lot of users are unaware of the more benign but nevertheless nefarious misinformation which is affecting their choices and their daily lives.

So we do have to accept that the most prevalent sources are the social media platforms, but I would say particularly those platforms which are not sufficiently moderated and do not have sufficient safeguards in place to flag information or, in some cases, prevent it.

One of the roles of the rules is that we have to see what can be done to guide the industry and what can be done to protect the user, while of course allowing the user to choose; that is very important when we come to an open Internet.

If a Government or a regulatory authority decides to step in and decide what is misinformation and what is not, then who moderates the regulator? That becomes an issue. So the rules have to be focused on allowing the individual to choose, while protecting them from disinformation, and on ensuring that the platforms are capable of moderating content, that this is done in an objective way, and that dangerous material is actually flagged to the individual. When it comes to terrorist activity, criminal activity, or activity which puts lives at risk, such as disinformation about vaccines, then of course there is a role for Government to step in, but it should put the onus on the platforms to achieve that rather than directly intervening in the content platforms themselves.

So this is the context in which we have these discussions in the Internet Governance Forum: so that we stop ourselves from going too far, and so that we have an understanding of how it might work.

>> BARBARA CARFAGNA: And what should be the key priorities for Governments when developing policies and regulation to combat misinformation?

>> PEARSE O'DONOHUE: The first thing, in a forum like this, is that Governments must work with the other stakeholders, many of whom are the experts in the running of the Internet, whether it is the technical community, Academia, civil society through their NGOs, and, of course, business. Governments can learn from and work with them.

It will always be more effective if it is done in that multistakeholder way. But what Governments can do, within their roles and legitimate responsibilities, is, as I have said, to ensure that the framework exists: that the providers of the social media platforms know clearly what they must do and have put in place sufficient mechanisms of independent content moderation, but also, in the ultimate cases, which hopefully should be relatively few, that they have the ability to rapidly take down dangerous material, criminal, terrorist or other similar material, for the protection of individuals. There must also be sufficient transparency and redress in the mechanisms that Governments or regions put in place, so that we can learn from any mistakes we make; we are not always going to get it right. Errors or mislabeling must be correctable, and redress must be effective for the individuals, companies or groups who may feel that they have been unnecessarily or unfairly censored. That is an important part.

So, again, independent monitoring is critical: academic and other experts, independent of Government, who can objectively and independently give a view on the functioning of these processes, but also in some cases actually be the experts on the content itself.

>> BARBARA CARFAGNA: The European Union has introduced rules on misinformation in the Digital Services Act. In your opinion, will other countries follow and make similar rules, as happened with the GDPR?

>> PEARSE O'DONOHUE: Thank you. We do think what we have done can be of interest and use to other countries and regions. You mentioned the GDPR, which is a very good example. Perhaps one of the reasons why we can help others to develop their systems is that, while Europe and Europeans are major consumers of the major social platforms, most of those platforms are not actually European-based or of European origin.

So we have had a different objective than some countries. Secondly, we have within Europe very different cultures and very different experiences; even the linguistic element is very important in this area, and that will help others. It's not a unilingual system that we have had to put in place: we need to deal with multiculturalism and differences of culture, religion and ethnic tradition.

So, yes; but in all honesty, as I said before, if one area of what we do isn't quite right and we have to address it, we and others can learn from the mistakes.

Secondly, of course, for the very reason that we want the Internet to be localized, to address different linguistic and ethnic cultures, there will always be a need for some modification and adaptation, for other countries and regions, of the rules that are appropriate for Europe.

We do feel, again, that the IGF and other fora are exactly where we can have these discussions, so that others can learn not from us but with us from the experiences, for example, of the Digital Services Act as we implement it, and find tools that achieve the same objective: an open, safer Internet where there is free access to information, but also a clear barrier to disinformation and misinformation.

>> BARBARA CARFAGNA: Thank you.  Mr. Mohammed Ali AlQaed, what emerging technologies are proving most effective in identifying and mitigating misinformation and how can their adoption be scaled responsibly?

>> MOHAMMED ALI ALQAED: There are many fact-checking tools from Google and many others, and even image and video verification tools. The problem with those tools is that they require the recipient to put in some effort, to go and check each piece of information, which takes time and effort to verify what is going on.

Those tools, of course, use different technologies, but mostly they are moving to AI and machine learning. The legislation and the processes to make them effective are, I think, the most important thing. Because if we cannot identify the source of information, if misinformation spreads around and you cannot trace who started it, that makes it difficult: the harm is there and you cannot find out who is behind that piece of information. And if we can introduce verify-by-design tools that tag the information, not hiding it or blocking it from going out, but tagging it so that at least the user can see the tag and then consider the tools' assessment of that piece of information, I think that might minimize the harm of the data which is going out.

And the Government of India, you know, introduced legislation which mandates that all of the social media platforms identify the source of information. That kind of measure, I think, combined with many others, requires us to work together collectively with society and the stakeholders.
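
A minimal sketch may help make the verify-by-design tagging idea concrete. The Python snippet below attaches a signed tag to content at the source so that later alterations can be detected; the key, names and flow are hypothetical, and real provenance standards such as C2PA are far richer than this.

    # Illustrative "verify by design" sketch: the publisher attaches a
    # signed tag to content at creation, so its origin and integrity can
    # be checked wherever it is re-shared. Key and names are hypothetical.
    import hashlib
    import hmac

    PUBLISHER_KEY = b"hypothetical-secret-key"  # held by the original source

    def tag(content: str) -> str:
        # The tag travels with the content as a machine-readable label.
        return hmac.new(PUBLISHER_KEY, content.encode(), hashlib.sha256).hexdigest()

    def verify(content: str, claimed_tag: str) -> bool:
        # True only if the content is byte-for-byte what the source tagged.
        return hmac.compare_digest(tag(content), claimed_tag)

    article = "The ministry announced new measures today."
    t = tag(article)
    print(verify(article, t))                 # True: unaltered since tagging
    print(verify(article + " (edited)", t))   # False: altered after tagging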

>> BARBARA CARFAGNA: I have seen that you study each technology and try to examine which is the best one for you. What are the emerging technologies that are proving most effective in identifying and mitigating misinformation?

>> ESAM ALWAGAIT: Sure. To fight misinformation, you first of all have to know how to detect fake media or information, and the second part is: what do you do when you detect it? On detecting misinformation, there are a lot of technologies, and we have a case of fighting fire with fire: if AI is used to generate misinformation, then you have AI tools to detect it. For example, we have machine learning and NLP models that can analyze linguistic patterns and detect manipulated text.

There are AI tools to analyze video and audio, looking, for example, at the pitch of the voice or at facial movements, to detect fake AI-generated content. So that's detection; but the most important part is, what do you do then?

And I think that there should be collaboration between tech companies, Governments, Academia and international organisations to come up with innovative regulations to combat misinformation. When I say innovative regulations, I mean the kind of regulation that does not hinder innovation, because we all know that too much legislation can slow down innovation.

And a lack of regulation will allow cases like misinformation. So innovative regulation is the sweet spot: the balance between having regulations and enabling innovation.

In Saudi Arabia we worked with the UN's global AI Advisory Body to create more regulations for ethical and responsible AI. We have also established the International Center for AI Research and Ethics here in Riyadh, advancing these kinds of regulations and enabling ethical AI.

So to combat misinformation, you have the tools, but you need the regulations that enforce the use of these tools.
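
To make the fighting-fire-with-fire detection Mr. AlWagait describes concrete, here is a minimal Python sketch of a linguistic-pattern classifier. The example texts, labels and score are invented for illustration only; production systems rely on much larger models, multimodal signals and human review.

    # Illustrative sketch only: a tiny classifier over linguistic patterns
    # of the kind described above. The training texts and labels are toy
    # data, not a real misinformation dataset.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "SHOCKING: miracle cure THEY don't want you to see!!!",
        "The health ministry reported 120 new cases on Monday.",
        "Share before it's deleted: the election was rigged!!!",
        "The central bank raised interest rates by 0.25 points.",
    ]
    labels = [1, 0, 1, 0]  # 1 = known false/manipulated, 0 = reliable

    # Word n-grams pick up stylistic cues (urgency, all-caps, sensationalism).
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), lowercase=False),
        LogisticRegression(),
    )
    model.fit(texts, labels)

    claim = "BREAKING!!! Secret cure doctors are hiding from you"
    score = model.predict_proba([claim])[0][1]  # probability of class 1
    print(f"misinformation score: {score:.2f}")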

>> BARBARA CARFAGNA: Do you have a system to monitor the behavior of the message, the fake message? Because given the speed, maybe this is the most effective way to stop it, since, as we heard before, once someone is convinced you have lost.

So how do you try to stop the message?

>> ESAM ALWAGAIT: A lot of social media platforms have AI fact-checking tools, so when content comes in, it is flagged based on how accurate it appears; if there is something alarming, it gets an automatic flag. Other platforms, for example, crowdsource this: they allow the online community to flag misinformation and provide context about it. So, as we mentioned, with these tools misinformation can sometimes be stopped even before it spreads.
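
As an illustration of the flagging flow just described, the following Python sketch combines an automatic detector score with crowdsourced reports, and attaches a label, rather than removing the post, when either signal crosses a threshold. The class, function and thresholds are all hypothetical.

    # Illustrative flag-and-label flow: automatic detector score plus
    # community reports; crossing either threshold adds a warning label
    # instead of removing the post. Thresholds are invented.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Post:
        text: str
        detector_score: float      # e.g. from a classifier like the one above
        community_flags: int = 0   # crowdsourced reports from users
        label: Optional[str] = None

    def moderate(post: Post, auto_threshold: float = 0.9,
                 crowd_threshold: int = 5) -> Post:
        # Label rather than remove, so the content stays visible with context.
        if (post.detector_score >= auto_threshold
                or post.community_flags >= crowd_threshold):
            post.label = "Flagged: possibly false or manipulated content"
        return post

    print(moderate(Post("Miracle cure found!", detector_score=0.95)).label)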

>> BARBARA CARFAGNA: Thank you. So, starting from your considerations, how can Governments, companies, media and civil society work together to create a unified strategy for combating misinformation? I address this to Ms. Natalia Gherman.

>> NATALIA GHERMAN: Thank you. I believe that one way Governments, tech companies, media and civil society can work together is to use international mechanisms such as the Internet Governance Forum for that purpose. And we have this week a fantastic opportunity here in Riyadh to put our heads together and develop a unified strategy. Key players must take advantage of focused global or regional events for that purpose. There are many more, and I can give a good example: back in 2022, the United Nations Security Council Counter-Terrorism Committee organized a special meeting in New Delhi, India, that gathered Governments of United Nations Member States, technology companies, civil society, research, Academia and media, all examining the very important issue of the misuse of new technologies for terrorist purposes, and international capacity building, all through the lens of respecting human rights.

And the outcome of that meeting was the Delhi Declaration, which led to the development of the so-called non-binding guiding principles for all United Nations Member States on new payment technologies, unmanned aircraft systems and, of course, information and communication technologies. My office was tasked with developing the drafts of those non-binding principles.

We had to work and collaborate with more than 100 partners from the governmental sector, civil society and Academia, and, of course, we learned a host of good practices, lessons and effective operational measures.

Some of the ideas and suggestions put forward by our partners included ways to counter mis- and disinformation through digital and media literacy campaigns, which remain extremely relevant; teaching critical thinking skills; and building resilience to violent extremist and terrorist messages at all levels of society.

There were also suggestions to develop guidelines for strategic communications and counter-messaging algorithms, as well as cross-platform reporting mechanisms. Similar efforts, both global and more focused fora and platforms, do help to build consensus and trust among relevant stakeholders.

And, of course, our aim should be the development of an operational plan to combat misinformation globally. The United Nations Security Council has taken the lead and highlighted the need to develop public-private partnerships through voluntary cooperation to address the exploitation of information and communication technologies in no fewer than six resolutions on counter-terrorism since 2017.

And in the United Nations we are increasingly consolidating our cooperation with such partners as Tech Against Terrorism, the Christchurch Call, and the industry-led Global Internet Forum to Counter Terrorism. There are many other good examples of public-private partnerships.

So the key actors can draw on the playbook for countering terrorist narratives online set out, as I said, in the relevant Security Council Resolutions and in the comprehensive international framework to counter terrorist narratives. This framework, which the United Nations offers to all Member States in the world, lays out legal and law enforcement measures, cross-sector collaboration and the development of strategic communications, among other things. Thank you.

>> BARBARA CARFAGNA: Thank you very much. Mr. Khaled Mansour, we have also seen that there are protocols we are trying to build, like C2PA or others. Do they work, and in your opinion how can we create them?

>> KHALED MANSOUR: Let me take two steps back. I'm the only panelist who doesn't represent a Government or a multilateral organisation, so I will take a different track on how we can have strategies to counter misinformation. Firstly, I don't think we have to have a unified strategy. We are different actors, all of us. Governments have their own responsibilities. Every actor has its own interests and priorities.

Global regimes are not necessarily the only solution. Global transparency, a venue like this, a forum like this, is a step in the right direction, where we speak to each other but also hold each other accountable, because at the end of the day we come from different frameworks.

Secondly, there are people like us in the Oversight Board, which is a self-regulatory body for Meta's platforms. We are independent, funded by an irrevocable trust from Meta. We can tell Meta to remove content or restore content it has removed, and we give advice. Our guiding principle doesn't start from safety and protection; it starts with freedom of expression. That's a different approach to misinformation.

So for misinformation to be labeled as such or to be removed, there must be very clear and legitimate laws in place to remove it, not ambiguous definitions used to achieve unjustifiable ends and suppress views in the absence of imminent and likely harm. That is the key phrase, because a lot of misinformation is harmless and should be left alone; action against it should be proportional to the likelihood and imminence of harm. That's an important distinction we have to make. I'm not talking about clear criminal activities or organisations, child trafficking; all of that is clear material that should be handled. We have to admit there is a balance to strike between allowing people to express their views freely and protecting users from harm.

Not all misinformation, and I will repeat this again, is harmful. But when misinformation can incite violence, undermine public safety or directly harm individuals, we need to act, and I would claim this is not the majority of pieces of misinformation. There are various ways to handle misinformation; removal is not necessarily the best way to approach it.

For example, earlier this year the Oversight Board told Meta to leave in place content with a manipulated video of President Biden. We advised Meta to stop removing manipulated material, that is, video, audio or text manipulated by AI or otherwise, unless the content clearly violates policies.

Again: pornography, child trafficking, terrorist activities, et cetera, or content that violates human rights. Now, it's important to tell users that content is manipulated, and our advice, our approach, is that Meta should label that content as significantly altered. This is being transparent without having to remove content.

Meta indeed started labeling all AI content that it can detect, using tools like those pointed out earlier, so that the user understands that the video of the President, or of a candidate in a campaign or election, is actually manipulated.

AI is not the challenge. AI exacerbates, speeds up, accelerates, but it's not the challenge. States, corporations and humans like us are the ones who sometimes abuse the system using AI or other tools. Our strategies should be focused not on fighting these tools, but on actually using them, again, as was said, to expose and sometimes even label or remove this harmful content.

>> BARBARA CARFAGNA: Thank you very much. We have just less than nine minutes, so I ask each of you for a quick remark, a final message to leave with our public.

>> ESAM ALWAGAIT: Sure. Misinformation is very dangerous; in cases like COVID-19 it cost people's lives. We need to fight it. We need collaboration between Governments, Academia, the tech companies and the international organisations to come up with proper regulations to combat misinformation.

I would like to reiterate our commitment in Saudi Arabia to work locally and globally to reach such regulations. Thank you.

>> BARBARA CARFAGNA: Thank you.

>> DEEMAH ALYAHYA: I just wanted to build on Natalia's comment on the differences and how cooperation can help. Before this session, thanks to the power of platforms like the IGF and with the support of Saudi Arabia hosting the IGF this year, we conducted a roundtable that brought together our Member States and also states that are not members.

And what we found is that the challenges are the same, but the ways of tackling them are different. In just one sitting together, all shared best practices, which saves time and expedites solving the problem. And by consensus of all members, mandated by our Chair, the Minister from Kuwait, we are going to have another meeting to invite the Private Sector and the social media platforms to be part of that discussion, to create, as His Excellency mentioned, the right regulations and standards that all Member States can adopt by consensus, and that will therefore be respected by the social media platforms.

My message is: let's continue cooperating, and we have to act now, but in a way where we join hands rather than working independently in silos.

>> I just wanted to highlight that not everything about these technologies is negative. Technology has allowed the globe to come closer and to have the same information at the right time, creating common ground between everybody. But usually Governments and society react only to the misinformation that spreads most or has the highest impact.

The problem is the misinformation which goes into smaller groups and smaller societies, which nobody is taking care of, where it can build into deeper beliefs among those people, making things much more difficult and causing bigger problems in the future. That is something we have to address.

The other thing is that countries alone, especially smaller countries, have less influence on the tech companies. This is why I think we have to work together regionally to put in place regulations and mechanisms to combat misinformation.

>> BARBARA CARFAGNA: Thank you.

>> NATALIA GHERMAN: I would like to reiterate that the threat posed by misinformation, but also by terrorist and violent extremist narratives, is rapidly evolving. So should the response by States and all the different stakeholders. The States, of course, have to be technologically agile to understand the nature of the threat and to counter it, but this approach should involve all of society.

It has to be Government, civil society, non-governmental partners, Academia, research, and the Private Sector. Only in this case will we be successful.

I also want to draw our attention to the sometimes unintended consequences of efforts to counter both terrorist narratives and misinformation, particularly when it comes to human rights: freedom of expression, freedom of opinion, and also journalism and privacy.

And human rights cannot be compromised. Solutions to the spread of misinformation and illicit content online must be grounded in a shared commitment to human rights principles. Thank you.

>> BARBARA CARFAGNA: Thank you.

>> If I may be so bold as to talk in terms of the principles we have heard on the panel. The first is the protection of the individual, which we must strive to achieve. The second is the preservation of freedom of speech. There are times when those two first principles can be perceived to be in conflict, but it is particularly the mechanisms, or the weight of the procedures, that we put in place to protect the individual which can, if misused, actually hinder and block freedom of speech.

And blocking freedom of speech, freedom of expression, is itself as harmful to the individual as it is to society. That's why we have to get it right. So the principle of accountability and the principle of transparency are critical to achieving the right balance in tackling misinformation. I do agree that not all of this information can be classified as bad, and that we therefore need a graduated response, and, of course, that we reserve our most direct and intrusive measures for content which clearly supports terrorist, criminal, or other dangerous philosophies.

With that in mind, cooperation is therefore the way of doing things. We do agree that individual countries cannot achieve the same as regions have, and that is part of the reason why the European Union has acted; we are keen to discuss and share, in recognition of diversity, with other regions who may seek to achieve the same objectives.

That brings me to my last principle: it's not just freedom of expression. It is the open Internet, available to all, of such great importance to economies and societies throughout the world, which we are here to seek to preserve and develop. Thank you.

>> BARBARA CARFAGNA: Thank you.

>> PEARSE O'DONOHUE: We spoke about misinformation and how to counter and deal with it. Let me conclude by talking about good information, because supporting good information, accurate media and a credible exchange of information is paramount if we are to counter misinformation's root causes. This should be a major objective for us as Governments, as the tech industry, which is not represented on this panel, and as civil society and content moderators. We have to admit that there is a fine balance, as was pointed out, between respecting and supporting freedom of expression, human rights and accurate information on the one hand, and addressing harmful, and I underline harmful, imminently harmful, misinformation on the other. In striking a good balance between these two overriding objectives lies the challenge we all face, even if we take different roads to reach our objective.

>> BARBARA CARFAGNA: Thank you. We have finished exactly on time, so you are perfect speakers. I will give my conclusion: as we know, we are facing a revolution that is not industrial, it's a human revolution. For the first time as humans, we are acting in the world together with artificial agents. The agents are also organising our lives, and they are acting with us. In this space we are seeing for the first time how they act; we are testing how my Generative AI will talk with her Generative AI and they will organize a meeting together for us.

So this is a huge revolution that we can't face with the tools we had before. That's why we are building a new ecosystem, and it is governance that can lead an ecosystem, not single vertical domains. That's why I thank the Internet Governance Forum once more for this panel, and also for starting this event with this panel, because this is probably the most important topic we have to face in building our next society. Thank you.