The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> ANNOUNCER: Please welcome to the stage the moderator, Natalie Becker-Aakervik.
(Applause)
>> NATALIE BECKER-AAKERVIK: Hello, everybody. And welcome, and to some of you, welcome back. It's lovely to see you here again at the high-level session, Losing the Information Space: Ensuring Human Rights and Resilient Societies in the Age of Big Tech, presented by the IGF 2025's proud host country, Norway. I am Natalie Becker-Aakervik, your moderator. Also a huge welcome and welcome back to our online global audience who are watching from all corners of the world.
Now, in this session we see that societies enjoy immense benefits from their participation on platforms, but there are also threats, as ethics, safety, and negative social impacts may be neglected in the contest for leverage in the global AI race.
Now, disinformation represents an imminent threat to fundamental freedoms. It can induce polarization, distrust, and instability in society, as we have seen. So, fighting disinformation really requires measures to ensure media and information freedom and literacy, as well as transparency and accountability on the part of the online platforms, to mitigate these risks of potential misuse of platform power.
At the same time, the fight against disinformation must protect freedom of expression. And this requires a balancing act, a really delicate balancing act, between security and fundamental freedoms.
So, as big tech assumes an even greater role in our communication infrastructures, are we as citizens and as nations losing the information space? That is the question being asked here today. And hopefully our esteemed speakers and panelists will be able to answer it, in part.
You are also welcome to continue these conversations, to connect with people, and to find answers by working collaboratively to create and maintain that delicate balance we spoke about.
How can we ensure a transparent and responsible information ecosystem with an informed public conversation, a protection of human rights, free editorial media, as well as resilient citizenries? Those are the questions we hope you will be keeping in your mind as we dive into these panel discussions and hear from our speakers.
So, this high-level leaders track session will really discuss the ethical and governance challenges posed by platform power. Panelists will unpack how user attention capture leaves open risks for mis- and disinformation campaigns, and explore the role of algorithmic priorities in the global AI race, as we mentioned.
And also we will see if we can debate some strategies to bolster transparency and accountability in this space.
So, I am going to be introducing our speakers, and they are going to come on stage, and we are going to get into the conversation. Are you ready?
(Applause)
Are you ready? Fantastic. That's great to hear.
So, first we have Ms. Lubna Jaffery, Minister of Culture and Equality in Norway.
We have Liisa-Ly Pakosta, Minister of Justice and Digital Affairs in Estonia.
We have Mr. Thibaut Bruttin, Secretary General for Reporters Without Borders.
We have Ms. Lisa Hayes, Head of Safety Public Policy and Senior Counsel for the Americas at TikTok.
Mr. Bjorn Berge, Deputy Secretary-General of the Council of Europe.
And we have Monsignor Lucio Adrian Ruiz, Secretary of the Dicastery for Communication of the Holy See.
Please join me in giving them a warm round of applause as they join us on stage. Thank you.
(Applause)
Yes. Let's welcome them on stage with some more warm applause, ladies and gentlemen.
(Applause)
We are very privileged and blessed to have our representatives, proudly from Norway and from all parts of the world, who have all flown in to be here with us today and to have these very important conversations that will carry the work forward. So a warm welcome to you, our panelists, and thank you so much for joining us.
Now I would like to invite opening remarks by Minister Jaffery, Norwegian Minister of Culture and Equality. Minister, the stage is yours.
>> LUBNA JAFFERY: Excellencies, colleagues, experts, distinguished guests, I am delighted to be here on the first of five days of what I am sure will be constructive and intriguing talks about how we can work together to ensure an open, safe, and free Internet.
Some of these talks arise from great challenges. Seven months ago, the results of the first round of the Romanian presidential election were annulled, amid allegations of widespread influence operations and social media disinformation.
This incident is not isolated. Manipulation on online platforms has become a well-known challenge to countries around the globe. According to reports from the Norwegian Defence Research Establishment, similar tactics have been used in attempts to mislead citizens in countries such as the United States, France, Georgia, and South Korea, just to name a few.
The advancement of generative AI has only intensified this challenge. Today, disinformation can be produced and spread at an unprecedented scale and sophistication. AI-generated content that mimics real people is now widely accessible. These manipulative efforts are often subtle and can be deeply harmful. Propaganda and false narratives can spread like wildfire across social media, undermining trust in our fellow citizens, in institutions, and in the very fabric of our societies.
The goals behind such campaigns are clear: to sway elections, erode solidarity, disrupt public discourse, and create instability. The consequences are not abstract. Disinformation can inflict real democratic, physical, and economic harm. And while it is not new that actors who aim to destabilize societies and manipulate individuals use information as a weapon, information is also our strongest defense.
Independent news media offer reliable sources of information, and disinformation thrives where an independent and diverse media landscape is lacking. However, the news media's ability to perform their function as watchdogs and provide us reliable information is challenged by big tech platforms.
Journalistic content is an important part of what the platforms offer. But they are unwilling to share data and the terms for media companies are still not satisfactory.
Through transparency, knowledge, and free expression, we need to ensure that the truth is easily available to those who look. When this is properly ensured, it is a recipe for success. In other words, the solution is not to prohibit expressions or untruths. The fight against disinformation needs to safeguard, not suppress, freedom of expression.
We must also ensure that inequality and discrimination are not transferred to, or even amplified by, the technologies we rely on. To ensure human rights and proper resilience across societies in the age of big tech, we must safeguard equality and inclusion.
As the Norwegian government has stressed in our newly published national strategy to strengthen resilience against disinformation, a robust media system, high levels of trust, and high levels of media literacy and source awareness are important tools.
Studies have found these factors to be of major significance in terms of how vulnerable a country is to disinformation. The strategy also emphasizes that we will hold big tech companies accountable and demand that they accept that their central role in our information space brings great obligations.
Addressing this crisis requires more than national action. It demands global cooperation and strong regulations to hold tech companies accountable. But regulation alone is not enough. We need a united front, with civil society, industry, academia, politicians, and decision-makers working together to combat disinformation while safeguarding freedom of expression.
Only through collective effort can we build resilient societies that protect human rights, foster innovation, and preserve the integrity of our democracies. Thank you very much.
(Applause)
>> NATALIE BECKER-AAKERVIK: Thank you. Thank you so much, Minister Jaffery, for delivering those opening remarks. We are looking forward to a meaningful conversation. A multistakeholder approach to Internet Governance is vitally important, as we said before, and so is the context within which these challenging, difficult conversations take place, where we really are looking for ways forward and for solutions that take the work forward in a good way. So we thank all of you for being here today, and all the representatives representing the various voices of this engagement.
So, our first question is: digital platforms have transformed how information is accessed and shared. What are some of the broader societal impacts you have observed from data-driven and engagement-oriented systems, and how can public Internet Governance address these dynamics? I'm going to ask that question only one more time, because I know you only have three minutes to answer. So, Ms. Liisa-Ly Pakosta, over to you.
>> LIISA-LY PAKOSTA: Thank you so much. And it is really an honour to be here, and thank you for those opening words. I fully agree with what you said. And I also think that if we could do everything you said, that would be great. So, the issue we are actually talking about here is how to fulfil these very good perspectives we have as normal human beings, in an era where we want to protect democracy and we want to protect freedom of speech, while new technological opportunities are also available to the bad guys. But it is not only that the technological variety is out there; we can also see that nothing from physical life has really changed. Let us remember that false information before elections was widely spread in Pompeii, as was found in the archaeological excavations, and that was in the year 79, so what we are dealing with is nothing new. What we are dealing with it in is the era of a huge information revolution.
So, what has changed? I come from Estonia, which is a fully digitalized country where people have high trust in digital services and digital government, but we are also a neighboring country to Russia. And what has changed is that we see constant hybrid attacks. The information wars are spread across many more platforms than before. And you rightly brought up the example of Romania, where the elections were actually attacked by Russia. So, this is something that we clearly see, just on new technological levels.
The second thing that has changed is education. Normally we have been used to teaching children: they learn to read books and newspapers, and we have some time with the children to discuss these issues.
Now we are in a totally different situation, and what we believe in Estonia is that we have to teach the children to be good users, clever users of the information that is out there and around there.
>> NATALIE BECKER-AAKERVIK: Thank you so much. Thank you so much, Liisa-Ly. We wanted to also, perhaps, continue a little bit on that point in our next question as well. So thank you so much for your contribution.
Then, Mr. Thibaut Bruttin, Secretary General of Reporters Without Borders, what is your response to that question?
>> THIBAUT BRUTTIN: Coming from a movement that historically and internationally defends journalism, I think we need to acknowledge the progress made in collecting and spreading information via these new tech platforms that have been built. I mean, that's the obvious thing that we may forget about at the current time in the history of tech companies, because there is so much dissatisfaction and so much talk about disinformation that maybe we tend to forget that, in order to collect and to disseminate news, there have been huge opportunities, and the news media have seized them to some extent.
But that being said, we also see the risks, because today the economy of the news media has been deeply weakened by the tech platforms. We have also seen a decline in the trust of the public. We have seen political movements weaponizing social media to their benefit, and we have also seen tech companies not embracing their democratic role.
So, I think the main statement I would like to make is that there are three things we need to rethink. First, that democracy shall win: I think nobody believes that anymore; we know that we need to protect democracies, and it's a long-standing, continuous, daily effort. Second, the truth will not prevail if we do not protect our information space.
And third, it's a fantasy to believe that the digital space is a private space run by tech companies. We have delegated it to them. It's something that is owned by the public to some extent. It's a public utility. And we need to reclaim it. We need to restore democratic guarantees in the digital space. That is what RSF believes in, and what the Forum on Information and Democracy, which is a sister entity of ours, believes in; I am also president of the Forum. Thank you.
>> NATALIE BECKER-AAKERVIK: Thank you so much, Thibaut. Mr. Bjorn Berge, what is your response, in a three-minute answer?
>> BJORN BERGE: Very good afternoon to all of you. Some of the more concerning impacts are, of course, related to what has already been mentioned by other speakers: the spread of misinformation, disinformation, hate speech, and so forth, which aims very deliberately to manipulate public opinion and even to destabilize societies, and which can certainly also have an impact where it actually undermines democratic processes.
And the Minister also referred to some of the recent examples we have from elections. There you really go to the core of democracy. These types of disinformation campaigns, this type of manipulation, we have seen in several elections over the years. So, it's a huge problem.
And so the next question is: what are we going to do to combat disinformation? I come from the Council of Europe, and of course we focus very much on standard setting and a legal approach to many of these issues. Last year, as a matter of fact, we issued a guidance note to all our 46 member states, all of Europe, and there we give some very concrete advice on how to go forward, related to fact checking, platform design, and user empowerment.
We also have to deal with the more criminal aspects of this through the Cybercrime Convention, which over 70 countries, not only in Europe but globally, have joined, and which now supports activities in over 130 countries. So, I think this is also an important part of the work we do in this regard.
But more concretely, on how to address this, there are a number of issues I can mention. We will maybe come back to it; I don't know about the time. It is, of course, related to media literacy. And this is certainly not an overnight task: here we need a long-term perspective, five, 10, 15 years. It's related to fact checking, as I already said. But there we are always too late; the content has already had an impact. Still, it's crucial. And then it's also associated with how we can suspend or withdraw content. But, again, by whom? Yes, social media platforms themselves have a role to play. But we have also seen a trend where this is getting more difficult and less of a priority for some of them.
And it's also about legislation, as I said. Of course, the EU Digital Services Act is very useful, but there is also the issue of enforcement and implementation. So, there is all of that as well. But to start with, you really need to go deeper. We need to know how systems are used or misused, and how the algorithms really behave and how they are used in this. I don't know if I'm close to three minutes or should I go on.
>> NATALIE BECKER-AAKERVIK: Thank you so much. We will come back to that in the next question. Perhaps we will have time to round off. Thank you for your contribution.
Now I'd like to go over to Monsignor Lucio Adrian Ruiz. Monsignor Ruiz, how would you answer that question?
>> LUCIO RUIZ: I think that we need to understand that digital culture touches many areas of human life, but the deepest ones come from the fields of anthropology and ethics. Digital culture touches the relationships between persons, the perception of reality and of time, and the way we find answers to our questions. That is an approach to reality that is really different in our culture.
It means that the ethical questions are also different, because if we see reality in a different way, we act in a different way. It means that everything that is digital is not just an instrument: it forms and realizes a new culture. That is really important, because we are used to thinking that technology is neutral, and it is not. All technology is born with an intention, and afterwards another intention can be applied when the user uses the technology. We are used to saying that technology is neutral, that we can use it for good or for bad. But it is not. It is born with an intention.
It means that it is necessary to act through formation, legislation, and research, in the anthropological field and also in the ethical field.
>> NATALIE BECKER-AAKERVIK: Thank you so much for that input. Thank you, Monsignor Ruiz.
And Ms. Jaffery, would you like to respond to that question?
>> LUBNA JAFFERY: Yeah, sure. I hope you are also going to ask TikTok about this, because we are all talking about the big tech platforms and TikTok is present here. First of all, I think it is important to acknowledge that the digital platforms are, in a sense, part of a democratization, because they allow a lot of us to speak and express what we want to say. But the problem is that there are also a lot of people who are silenced and not even heard. That is also the problem. And we know also that these platforms, the algorithms, make us live in certain universes. So, I am part of one universe, my neighbor is part of another universe, and we don't meet, because we are part of echo chambers that reinforce the things we already believe and want to hear.
So, that is also a problem, because the common ground is lost, in a sense, when you spend a lot of time on the digital platforms.
And we also know that digital platforms have had incidents where they have spread hate and radicalization; that is a problem.
So, I think what we need, and the Norwegian government's response to this, is that we need to support the media, for instance. This is important. And I know a lot of countries think it is very strange that the Norwegian government supports editorial-led media, but this is one of our main responses against disinformation. Because we need media literacy. We need young people to understand that social media can be positive, but it can also be negative, and that you can't rely on social media to know all the facts every time. You need editorial-led media, and that is one of our strongest responses in the strategy we launched last week.
>> NATALIE BECKER-AAKERVIK: Thank you so much, Lubna. And, of course, thank you for including TikTok in this conversation, which is very important. It's not an easy space to be in, and we thank you for showing up in good faith and for being part of this conversation, where all of the voices need to be present in order to take this forward. Thank you, Lubna.
And Lisa, the next question is directed at you first. Emerging technologies are impacting both the spread and the efficacy of mis- and disinformation. So, what is the potential for identifying and mitigating mis- and disinformation, and how can adoption be scaled responsibly? Each speaker has four minutes to answer this question. Lisa, please go ahead.
>> LISA HAYES: Thank you so much for having me here today and for TikTok being part of this really, really important conversation.
I am not sure that's a four-minute question, to be fair. I think it is more like a four-day question, or at a minimum a four-hour question, so I hope this is just the beginning of the conversation for those of us who are here this week, because I'm only going to be able to scratch the surface. But I will give it a shot.
There must be something in a name, because I do want to echo something that Liisa nodded at, the other Liisa, in her opening comments. This is not new. Some of the issues we are facing in the digital space now are on new platforms, but every time we have had a major technological leap forward, we have had problems of misinformation and disinformation. We started with the printing press and we wound up with tabloids at the supermarket telling you that a celebrity is having alien babies. We started with radio and wound up with shock jocks. We started with commercial broadcast television, and we wound up with cable stations on the edge.
We have engaged in digital literacy, we have figured out how to spot truth from lies, and we have learned how to digest all of this information. I am not generally seen as an optimist, but I am a realist. We will come to grips with everything that is happening with the new emerging technologies, and I think that we will, as societies, benefit from them more than we are harmed by them.
You know, I also want to nod at the role that mainstream journalism has brought to technology. I know on TikTok, for example, if you want to learn from the "Wall Street Journal" or "The Washington Post" or the BBC or The Guardian, all of those news entities have verified accounts, and they are putting out news that is reaching a whole audience and community that they would not otherwise be able to reach. Dare I say it, even the United Nations has a verified TikTok account and is using that account to push out information to a billion people around the world who otherwise might not be looking for that information. So, that's the positive sense of what we are doing.
But beyond providing authoritative information, emerging technologies need to prioritize safety by design. We need to prioritize security by design. At TikTok, our policies prohibit harmful misinformation, regardless of the poster's intent and we remove accounts that repeatedly post this type of information.
To go to your question: we detect this misinformation using automated technology, user reports, proactive intelligence gathering from experts, as well as our fact checking partners. Technology is used to help us do this work at scale, and to do it rapidly, so that we can catch problems before they begin.
Currently 80% of the content that we remove from our platform is identified through automation and technology. And as videos gain in popularity and start to gain more views, they go through additional rounds of content review to make sure they comply with our platform guidelines.
When we pair technology with human moderation, we have found that we are able to remove violative content proactively 98% of the time, before it's reported to us. And we would not be able to do that without some of these new forms of content moderation. AI is enabling us to identify harmful misinformation, disaggregate it within the video, and send it to human review and to fact checkers for independent assessment. Those are some of the benefits of what we are able to do with automation right now.
And as more creators explore AIGC, it is imperative that companies continue exploring new ways, new methods of advancing safety, and I know I'm at my four minutes because there is a large clock in front of me but we will continue this conversation.
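For readers who want a concrete picture of the tiered pipeline Hayes outlines, which is automated detection first, extra review rounds as a video gains views, and human moderators plus fact checkers for ambiguous cases, here is a minimal sketch in Python. All class names, thresholds, and scores below are invented for illustration; this is a sketch of the general technique, not TikTok's actual system.

```python
# Illustrative sketch only: a tiered moderation pipeline of the general shape
# described above (automated detection first, additional review as views grow,
# human review for ambiguous cases). All names, thresholds, and scores are
# invented assumptions.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    views: int
    misinfo_score: float  # hypothetical classifier output in [0, 1]

AUTO_REMOVE = 0.95          # assumed: high-confidence automated removal
HUMAN_REVIEW = 0.60         # assumed: ambiguous cases go to moderators
POPULARITY_STEP = 100_000   # assumed: re-review threshold as views grow

def route(video: Video) -> str:
    """Decide what happens to a video at this point in its lifecycle."""
    if video.misinfo_score >= AUTO_REMOVE:
        return "remove_automatically"          # the automation path
    if video.misinfo_score >= HUMAN_REVIEW:
        return "queue_for_human_review"        # humans plus fact checkers
    if video.views >= POPULARITY_STEP:
        return "additional_popularity_review"  # extra rounds as views grow
    return "allow"

if __name__ == "__main__":
    for v in [Video("a", 10, 0.97), Video("b", 500, 0.70), Video("c", 250_000, 0.10)]:
        print(v.video_id, "->", route(v))
```

The design point the sketch tries to capture is that confidence and reach are separate triggers: content can be escalated either because a classifier is suspicious or simply because it is becoming popular.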
>> NATALIE BECKER-AAKERVIK: Thank you so much, Lisa, for that input and we are looking forward to responses from our panel.
Mr. Bjorn Berge, over to you, how would you answer that question in your four minutes?
>> BJORN BERGE: In terms of meeting the challenges of disinformation and misinformation, I really started focusing a little on this already, but it is such a fundamental issue, for Europe and for the world. What is important is that we also have a clear strategy. Some of the elements in such a strategy are, of course, related to media literacy, as I said, to the issue of fact checking, and to how we can ensure the suspension or withdrawal of certain content. There is also the legislative part of this, with a certain need for regulations in this area. The EU Digital Services Act is, of course, very useful, but here, as I said, there is also the question of how we can help enforce it even better and secure its implementation.
And maybe there is also time, you know, developments go so quickly here, maybe it is time to reflect: what are the lessons learned so far in this area of legislation and regulation?
And we really need to go into how the systems are actually used and misused, and maybe we also need to understand human behavior better, particularly young people's. Here we actually need more research. It seems today that young people go elsewhere, and influencers have a huge impact. There was a survey of young people in the United States, and 40% of them said that they go to influencers. I don't know the situation here in Norway or in other European countries. That's why I call for more research and evidence on this, and also a certain focus on youth and young people in this regard.
And we could also be even more creative. I mean, we talk about the negative sides of this, but how can we promote reliable, positive news information to combat lies and disinformation, and how can that be lifted up?
I know we have the Secretary General of Reporters Without Borders here. They have taken a very important initiative, I think, called the Journalism Trust Initiative. This is a method for having a system that gives you and me trustworthy and more reliable information. I think this is a good way to go.
And could we also oblige the social media platforms to do more of this, to focus also on reliable, positive news, to actually help people?
We also had a big hackathon where we had young people from all over Europe, and we challenged them on this issue of disinformation: what were their views, how would they tackle this? We also had some experts there. And one of the concrete recommendations they made was: do we perhaps also need a new convention on disinformation and foreign influence, one that brings all governments together? That was one of the issues.
>> NATALIE BECKER-AAKERVIK: Thank you.
>> BJORN BERGE: And a convention on the right to information is also essential today, I think. This is related to what I already said.
So, there are many other ideas around this. But I think this is such a fundamental issue, and we need a multistakeholder approach to it. It's not enough that governments sit and discuss this among themselves. We need to have all the users, the academics, the young people, the researchers, and the main providers and actors themselves.
>> NATALIE BECKER-AAKERVIK: Thank you so much for lifting that up again: the multistakeholder approach is, of course, a core tenet of this conversation. Thank you so much, Mr. Berge, also for lifting up the work that Reporters Without Borders is doing. We are going to come back to it in our next question.
But to round off this question: how would you respond, Ms. Jaffery, in three minutes?
>> LUBNA JAFFERY: From a policy perspective, I think scaling the adoption of emerging technologies responsibly is crucial. First of all, it's important that emerging technologies do not reproduce or even strengthen inequality and discrimination. We have, unfortunately, seen examples of technologies that maintain biases and discrimination, such as facial recognition systems that cannot detect people of color, or algorithms that reproduce gender biases. So, the tech companies need to be very careful about the many ways they can ingrain society's deep-seated prejudices into their technologies. This is also part of combating disinformation: part of strengthening resilience to disinformation campaigns is an inclusive and just society. This facilitates trust, stability, and the ability of citizens to take part in open and informed public discourse. It's also crucial that technologies do not undermine human rights, such as freedom of speech, when they aim to mitigate disinformation.
>> NATALIE BECKER-AAKERVIK: Thank you so much for that input. And thank you for answering that question all of our panelists.
As we move on to the third question, for which we also have four minutes each to answer, we are keeping time. These are such big questions, however, that we truly appreciate the input you are managing to summarize in these pockets of meaningful conversation. And Minister Jaffery, thank you so much for joining us. We really appreciate your time, and we understand that you have more obligations to take care of today. So, thank you for joining us, and a big round of applause for Minister Jaffery. Thank you.
>> LUBNA JAFFERY: Thank you.
>> NATALIE BECKER-AAKERVIK: Liisa-Ly, the third question we will start with you. There are increasing calls for transparency in algorithmic systems. In your view, what constitutes meaningful transparency? What kinds of transparency practices, technical, operational, or communicative, can help to build public trust?
>> LIISA-LY PAKOSTA: Thank you. Estonia stands for full transparency. And this is because, as I said before, Estonia is a fully digital state. All of our government services are digital, and people give their data to the government for interoperability. This means we need a huge amount of trust from our citizens to make it fully operational. And we have guaranteed this through full transparency. People can check from their mobile phones who has looked at their data. You can go to the Internet and you have a full picture of who has taken a look at your data. And you can ask why this person looked at my data. And if he or she didn't have any legal grounds, they get punished. So everything is fully transparent.
We even have e-voting, and even the voting system is fully open and transparent. So, transparency is absolutely needed for trust, but also for whatever controls might be needed, and not only control from the government side, but also from society itself. So, we definitely stand for full transparency by the media companies as well, because, as you rightly said, this is a public space. And it has to be fully transparent. This is the only tool with which we can guarantee that there is no room for discrimination or other bad things, and also misinformation. Of course, it's much more tricky if we are thinking of machine learning and things like this.
But, again, we have to find ways to make this, too, fully transparent to everybody. And all the arguments against it, like business purposes, et cetera: we would not agree. So, we have real experience of how full transparency helps trust.
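To make the mechanism Minister Pakosta describes concrete, here is a minimal sketch of a citizen-visible data-access audit log: every read of a record is stored with the accessor and a legal basis, and the data subject can list who looked at their data and flag accesses with no recorded grounds. The field names and API below are invented for illustration and do not describe Estonia's actual systems.

```python
# Illustrative sketch only: a citizen-visible data-access audit log.
# All names and fields are invented assumptions, not a real government API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessEvent:
    accessor_id: str          # who looked at the data
    citizen_id: str           # whose data was looked at
    legal_basis: str | None   # None means no recorded legal ground
    timestamp: datetime

@dataclass
class AuditLog:
    events: list[AccessEvent] = field(default_factory=list)

    def record(self, accessor_id: str, citizen_id: str, legal_basis: str | None) -> None:
        # Every read is appended; the log itself is visible to the data subject.
        self.events.append(AccessEvent(accessor_id, citizen_id, legal_basis,
                                       datetime.now(timezone.utc)))

    def accesses_for(self, citizen_id: str) -> list[AccessEvent]:
        """What the citizen sees: the full picture of who viewed their data."""
        return [e for e in self.events if e.citizen_id == citizen_id]

    def unjustified(self, citizen_id: str) -> list[AccessEvent]:
        """Accesses with no legal basis: candidates for sanction."""
        return [e for e in self.accesses_for(citizen_id) if e.legal_basis is None]

if __name__ == "__main__":
    log = AuditLog()
    log.record("official_42", "citizen_7", "tax audit")
    log.record("official_99", "citizen_7", None)  # no legal ground recorded
    print(len(log.accesses_for("citizen_7")), "accesses,",
          len(log.unjustified("citizen_7")), "unjustified")
```

The point of the pattern is that accountability runs both ways: the same log that lets officials work with citizen data is the evidence trail citizens use to challenge misuse.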
>> NATALIE BECKER-AAKERVIK: Thank you so much for your response to that.
I'm going to go over to Monsignor Ruiz to answer the same question: there are increasing calls for transparency in algorithmic systems. What kinds of transparency practices, technical, operational, or communicative, do you think can help to build public trust? Please, go ahead.
>> LUCIO RUIZ: Well, access to code, sources, and system architectures is the first answer, because it means knowing what a system does and how it does it. But we also need to know the planning, the implications, the consequences, the interrelationships, and also the vision of the future.
Because these aspects can clarify the why and the how of the technologies. We cannot have a partial view of the present. We need to know the genesis and the development in order to evaluate the trustworthiness of the systems.
Precisely because a system acts in a systemic, interrelated way, it is necessary to know the other actors that make it happen.
The presentation of the system must be accompanied by the ability to understand it and interpret it, which requires knowledge of the same dimension as that of the person who produced it, so that there is equality in understanding the language.
But there is another aspect: the possibility of moderation. Moderation must involve the participation of the government on the one hand and, on the other hand, of all the institutions that represent the collective, the society, and that have the appropriate authority to do it. The development of the systems that shape society and also culture, as I said before, cannot be an autonomous business model and activity. Because the whole society and culture, the present, the future, and the destiny of humanity are the res publica, which means they belong to all of society, and that requires the active contribution of those who receive the services.
>> NATALIE BECKER-AAKERVIK: Thank you, Monsignor for your contribution. We really appreciate that. And with that same question, I would like to hand the word to you, Mr. Thibaut Bruttin. What would be your response?
>> THIBAUT BRUTTIN: I think we have to acknowledge where we are, and it's obvious that today disinformation is not just a downside of tech companies. In our perspective, there is a massive disruption of the way the public conversation is happening. So, there is disinformation, and anybody can actually put whatever they want under "disinformation". These are words that we need to define better.
What we see are lies and propaganda being spread intentionally by actors, with the complicity, either willing or unwilling, of some of the tech companies. I'll give you an example: between Christmas and New Year's Eve, I was on holiday and I received a video in which I was presented, and it looked like a very legitimate legacy media video spreading on TikTok, which explained that I had committed suicide. Which I had not. I'm alive. I'm still alive. But this was spread on X. And how is it possible that after I stated that it was false, after giving my identity and providing all the information, this video is still on X? That's why I think we need to revise the pact that unites these tech companies with civil society and government.
We owe transparency to the public, but I'm afraid I disagree with media literacy being one of the main solutions. I mean, the choices need to be made up front. We need a systemic change in the way we relate to this digital space.
And when talking about transparency, we need to be clear about the fact that it's not about chasing the bad. It's not about taking down propaganda, which can equate to censorship to some extent. It's also about promoting the good, about rewarding journalism worthy of that name, because what is a conversation worth if it's not based on facts? And media are not perfect. News media are not perfect. They are never going to claim that they are. But still, they are the closest attainable version of the truth. And we need the algorithms to reward that. That's why at RSF we champion a provision under which tech companies would necessarily onboard news content and identify it as such, but also go further: they would give due prominence and increased visibility to news content that shows responsible and ethical journalism.
And they need also to reward the news media financially, through neighboring rights and appropriate moderation. If you go to Ukraine, for example, and you see the amount of content that is taken down and labeled as propaganda or as infringement of users' rights and terms of use, it's insane. It's not that the journalistic content is not compliant with the terms of use; it's the war. Yes, when you report about wars, you need to show bodies and war scenes and so on.
So, we need a news media exemption that truly reflects what we need in a public conversation. And governments that are strong enough and willing enough to engage in that direction can, I think, only be preparing for the future of a restored public conversation.
>> NATALIE BECKER-AAKERVIK: Thank you so much. Thank you so much for your response to that.
Lisa, over to you for the three minutes remaining. Thank you.
>> LISA HAYES: Do I think transparency is important? Yes. Should I yield my time? No. First, I am incredibly sorry there was a video created of you, and I hope that we took it down in a more timely manner than X.
>> BJORN BERGE: It's not on TikTok. X.
>> LISA HAYES: Yes as to transparency. I agree with Monsignor that it can't just be transparency. It needs to be meaningful transparency that people understand. Algorithmic systems are not magic. They are math. But if you don't have a sufficient understanding of that math, you are not going to understand what our disclosures are. My 15-year-old daughter is somewhere in the audience here and every time she downloads an app I get a notification asking if she can download it and I have to go read a 40-page privacy policy. I'm not sure that's meaningful transparency.
So, what we have been trying to do at TikTok is figure out ways to communicate to everybody who uses our platform how the algorithm works. We tell people what the signals are that the algorithm relies on, and we tell them in plain English, so you don't need to be a mathematician. You can understand that if you're engaging with videos, liking videos, sharing videos, commenting on videos, those are the signals that our system is going to use.
Of course, in practice it's more complicated, and we give a lot of examples. But that is step one of transparency: making sure that there is meaningful transparency about how the system works.
For us, step two is reporting on that transparency. Were there fake videos posted that should not have been posted? How long did it take us to get them down? How many of them did we take down? Did we identify them, or did somebody else? All of that information we produce voluntarily on a quarterly basis, and we do it in a machine-readable format, so that researchers studying changes quarter over quarter can download that information and compare trends to help better inform public policy.
And the third and final thing that I think is so important is that we want to make sure we are giving people tools that allow them to have transparency about their own experience. At TikTok we give them content controls: they can manage topics, they can use keyword filters, they can say they are not interested, and they can refresh their algorithmic feed. These are all ways that people can control their own feed and their own algorithmic experience on TikTok.
And, finally, we have touched on media literacy. To the point about videos that put up false information, at TikTok we have prompts that discourage people from sharing content that may be unverified. We label content created with AIGC. And we try to put forward authoritative sources in resource centres, partnering with credible fact-checkers to make sure we are getting people searching for harmful health misinformation to accurate and reliable health information. And with that, once again, I have a time-is-up sign.
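As a rough illustration of what Hayes describes, engagement signals such as likes, shares, and comments feeding a ranking score, combined with user-facing controls like keyword filters, "not interested", and a feed refresh, here is a minimal sketch. The weights, signal names, and scoring formula are invented assumptions; a real recommender system is far more complex than this.

```python
# Illustrative sketch only: engagement signals feeding a ranking score, plus
# user-facing controls (keyword filters, "not interested", feed refresh).
# Weights and formula are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class UserControls:
    keyword_filters: set[str] = field(default_factory=set)
    not_interested_topics: set[str] = field(default_factory=set)
    engagement_history: dict[str, float] = field(default_factory=dict)  # topic -> affinity

    def refresh_feed(self) -> None:
        # "Refresh your algorithmic feed": discard learned affinities.
        self.engagement_history.clear()

@dataclass
class Candidate:
    caption: str
    topic: str
    likes: int
    shares: int
    comments: int

def score(c: Candidate, u: UserControls) -> float:
    """Plain-English version: liking, sharing, and commenting are signals."""
    if c.topic in u.not_interested_topics:
        return 0.0
    if any(w in c.caption.lower() for w in u.keyword_filters):
        return 0.0  # a keyword filter removes the item outright
    base = 1.0 * c.likes + 2.0 * c.shares + 1.5 * c.comments  # invented weights
    return base * (1.0 + u.engagement_history.get(c.topic, 0.0))

if __name__ == "__main__":
    u = UserControls(keyword_filters={"spoiler"})
    c = Candidate("Big match spoiler inside!", "sports", likes=100, shares=10, comments=5)
    print(score(c, u))  # 0.0: blocked by the keyword filter
```

The sketch separates the two transparency claims in the talk: the platform discloses which signals raise a score, and the user holds hard overrides that zero it out regardless of engagement.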
>> NATALIE BECKER-AAKERVIK: We know we cut slightly into your time. Is there anything you want to add in a minute or so before we go to the next question? Lisa.
>> LISA HAYES: For me, these are complex issues. They are tough issues. And the platforms are not a monolith. I don't know that, as we are designing systems, we are able to do much without a lot more research and a lot more tools. So we are here, and we are committed to partnering with everybody in this room: to having those conversations, getting constructive feedback, hearing suggestions on how to improve, and working to implement them. I am really excited to continue that conversation.
>> NATALIE BECKER-AAKERVIK: Thank you so much for that.
Bjorn, we will come back to you, and we are hopefully going to have time for everybody to give a short closing comment, but we are going to go to the next question. Then you will be able to respond as well.
I wanted to give Lisa the same amount of time that everybody else on the panel had, and to be a good and fair timekeeper.
The next question we are going to ask you to respond to is: how can governments, tech companies, media, and civil society work together to foster public resilience against information manipulation in digital ecosystems? There may be some overlap, but we are going to ask each of you to answer; again, it's three to four minutes. Ms. Liisa-Ly Pakosta, over to you.
>> LIISA-LY PAKOSTA: Thank you. You mentioned that governments could have good control, and then you gave an example of how real pictures from Ukraine are not allowed on social media because they include dead bodies. If there is anybody left in the world who still believes that Russia is a democratic country, then we have an issue with misinformation anyway, and we actually had it before social media, too. So, there are governments that are very much interested in total control of the information that is out there. And this is also the value of why we have gathered here: to find ways to protect our democracies, to protect our values, to protect free speech, in an era where I do not always agree even with my own husband about what is allowed for our kids and what is not. We sometimes have totally different views on that.
So, what can governments and nongovernmental organizations do together in a situation where some governments want total, dictatorship-style control over their citizens? And we have to admit that there is a lot of misuse of nongovernmental organizations as well. At least in Estonia, we see that some of them are fully financed by an aggressive country, Russia, and they do not operate as a nongovernmental organization should.
So, we have a lot of issues around these questions as well. But if we have democratic governments, real nongovernmental organizations, and good, innovative companies, then together they can do wonderful things. So I fully believe, in a positive way, in humankind: we will find a way out. And we also have a lot of experience in Estonia showing that private companies can produce very good innovation that is extremely useful for governments.
So, let's work together in this way. But, as said, there are a lot of questions around this and a lot of things that are not working well.
>> NATALIE BECKER-AAKERVIK: Thank you so much for your input, Ms. Pakosta.
Lisa, we are going to go back to you for this question. If you would like to respond to that, please.
>> LISA HAYES: Line the Lisas up, going down. At TikTok, we believe in partnerships, and they are incredibly important to us. We also believe in the importance of and the need for experts and fact checkers. To our partners and collaborators, several of whom I see in the room: I can't thank you enough. These partnerships with civil society and global institutions are critical to the work we are doing, and discussions like these at the IGF are critical to informing our policies and our products.
Let me just touch briefly on fact checkers. Through TikTok's global fact checking programme, we work closely with more than 20 IFCN-accredited fact checkers. They assess the credibility of content and help us make responsible moderation decisions. Together, our more than 20 global fact checking partners cover more than 60 languages and 100 markets globally. I am incredibly proud that at a time when other companies are pulling back, we are continuing to lean into this space.
We have also started safety and youth councils. These are advisory councils made up of independent experts in different parts of the world who know the regions they are giving us advice about. Sitting in North America, I don't know the right solutions for Estonia; I can't see what's coming around the corner. So, it's critically important that we have these regional safety councils with people who do have that expertise and will bring it to us and share it with us, so that we can try to solve problems before they find their way onto the platform.
We currently have 10 of these regional councils, including in Sub-Saharan Africa, Southeast Asia, Europe, and Brazil, and we are deeply grateful for the advice of those experts.
And finally, we have talked a little bit about election misinformation and disinformation today. That is an area we are really working hard on. We have consulted on and launched several different media literacy campaigns ahead of elections, specific to each market, and we have provided users with trusted information and media literacy resources. For example, in the recent German election, the electoral commission actually made a video with key information for voters about how to vote, which we then promoted inside the TikTok election resource centre, directing people in that market who were searching for information about the election to authoritative information.
One new feature I will call out before wrapping, because I see my countdown going: in April we launched Footnotes. This is a new product feature that will give our community more context about content on TikTok. It will draw on the collective knowledge of the community by allowing people to add relevant information to content on our platform. Right now we are just testing Footnotes in the United States as an intervention, but it's important to note that it is intended to complement, not replace, our existing content moderation and global community guidelines.
Our content moderation and our fact checking work will continue with Footnotes adding additional context. So, look forward to continuing with that conversation as well.
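For a concrete, if simplified, picture of a community-context feature of the kind Hayes describes, here is a minimal sketch in which community notes attach to content and are displayed only once enough raters find them helpful. The visibility rule, thresholds, and class names are invented assumptions for illustration, not TikTok's actual Footnotes logic.

```python
# Illustrative sketch only: community-contributed context notes that become
# visible after enough helpfulness ratings. The thresholds are assumptions.
from dataclasses import dataclass, field

@dataclass
class Footnote:
    author_id: str
    text: str
    helpful_votes: int = 0
    unhelpful_votes: int = 0

    def is_shown(self, min_votes: int = 5, min_ratio: float = 0.7) -> bool:
        """Show a note only once enough raters found it helpful (assumed rule)."""
        total = self.helpful_votes + self.unhelpful_votes
        return total >= min_votes and self.helpful_votes / total >= min_ratio

@dataclass
class Content:
    content_id: str
    footnotes: list[Footnote] = field(default_factory=list)

    def visible_context(self) -> list[str]:
        # Complements, does not replace, moderation: the content itself
        # remains subject to community guidelines regardless of footnotes.
        return [f.text for f in self.footnotes if f.is_shown()]

if __name__ == "__main__":
    note = Footnote("user_1", "Original clip is from 2019, not this week.", 8, 1)
    video = Content("abc", [note])
    print(video.visible_context())
```

The key property mirrored here is the one Hayes states outright: community context is an additional layer on top of moderation, so a note's visibility never changes whether the underlying content complies with the rules.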
>> NATALIE BECKER-AAKERVIK: Lisa, thank you so much. Thibaut, over to you. How would you answer this question: how can governments, tech companies, media, and civil society work together to foster public resilience against information manipulation in digital ecosystems? You have four minutes to answer.
>> THIBAUT BRUTTIN: We need to think ahead and also to build a coalition that goes beyond the sole interest of each actor. I don't believe so much in multistakeholderism, where every time you put people around the table as representatives: government should not sit on the same side of the table as tech companies. I mean, governments are here to govern, and legislators to make laws, and we should not mix all that too much. The Forum on Information and Democracy that RSF created, for example, is a civil society-led organization feeding legislators and regulators and governments. And I think we should really organize the conversation.
That being said, my message to tech companies is clear: pre-empt legislation, do good, provide due prominence to news media worthy of that name. Do take into account, as a signal of authority, the Journalism Trust Initiative that was mentioned earlier, which is a standard endorsed by about 2,000 media outlets from 130 countries.
Governments should not be afraid of regulating and legislating. We need policymakers to understand that their responsibility is to build the framework, not to go into the nitty-gritty details of everything happening in the media field, but really to build frameworks that enable media to flourish. The media themselves need to reflect; they need to reinvent themselves. They are facing the generative AI wave, and frankly they did not do a great job of facing the digital disruption of the early 2000s and 2010s.
I think it's important to try and gather around what the added value of journalism is. And finally, I would like to put the emphasis on the ecosystem in which the news media operate. It's more than the media, the tech companies, government, and citizens. There is also a multiplicity of actors, and I am especially thinking of advertisers, who have decided, and it's their right, to relocate most of their advertising, about 80%, toward tech companies. They have almost totally fled the legacy media field. And maybe tech companies can provide them with a granularity, a share of people's brain time, that the news media have never been able to provide. But is it quality? And is it also a democratic responsibility to shift totally toward digital? I see the need to reflect on that.
>> NATALIE BECKER-AAKERVIK: Thank you so much, Thibaut, for that answer.
Coming over to you, Mr. Bjorn Berge, what is your response to that question in the four minutes that we have?
>> BJORN BERGE: Thank you very much for raising that question, because the issue of coming together and working together is essential in really addressing these types of fundamental challenges, and a lot of what we are discussing goes really to the core of our fundamental rights as citizens and human beings. And on coming together, there is good news I want to mention here.
Together with the European Union, the United States, Australia, New Zealand, academic researchers, civil society, and the industry and actors themselves, the Council of Europe has now agreed and concluded a new international treaty on artificial intelligence and human rights, democracy, and the rule of law. It is the first international treaty of this kind.
And why do I say this? Because it brings certain obligations, and one of them is directly related to what we are discussing here: there is a specific reference to digital literacy and skills, and an obligation for countries, with the assistance of others, to promote them.
I also referred earlier to the guidance note that we have issued to all 46 European member states on this. It also sets out concrete action to support fact checking organizations and ensure their independence, so that they can operate transparently and have the ability to continue their work in a sustainable way over time. It addresses issues such as platform governance, and here it's important that the human rights aspect is part of the design and that there is compliance with that.
It also covers the obligation of how we can go in and limit or remove activities or accounts. But again, here we have to be careful, because we have to respect principles of proportionality, and it should be a last resort. We are all strong believers in freedom of expression. So, this is really important.
It is also about a commitment to give users more empowerment and to build resilience. Altogether, it's about how states, platforms, and media come together to ensure reliable, quality information.
>> NATALIE BECKER-AAKERVIK: Thank you.
>> BJORN BERGE: Yeah. Thank you.
>> NATALIE BECKER-AAKERVIK: You wanted to wrap up with a sentence? You have 15 seconds?
>> BJORN BERGE: This is not an app you can download. This is a social contract. This is a contract where we come together because we want to address this issue seriously.
>> NATALIE BECKER-AAKERVIK: Thank you so much. We will have the opportunity, because we have been really good with time. I want to let the panel know that after we hear from Monsignor Ruiz, each of you will have a minute to give a final remark, so I will ask you to reflect on what the parting words are that you would like to leave our audience with, our local and global audience as well.
And to pose the question to you, Monsignor Ruiz, how can governments, tech companies, media and civil society work together to foster this public resilience against information manipulation in digital ecosystems?
>> LUCIO RUIZ: First of all, I think that governments, technology companies, and media are one part. But all the educational institutions, academics, and other institutions that make up society need to take part, institutions that in various ways represent and act on behalf of the community. In other words, everyone must participate and take an interest: not to be just consumers, but to be architects of their own lives.
How to do it? On the one hand, through legislation, because that is the way to promote this for everybody, and also to accompany and to control, because that is necessary too.
But we must also mobilize education to accompany the whole of society as it walks with this culture. We have an experience in the Holy See that is called the Rome Call for AI Ethics, a paper made in February 2020, which is a call to reflect along three lines, ethics, education, and rights, and we invite the different states and institutions to sign it, to be together in thinking about this culture. The first to sign were Microsoft and IBM, and Italy also participated. In this paper we have six points for reflection: transparency, inclusion, accountability, impartiality, reliability, and security and privacy. That means that all together, at the same table, we are thinking about how to really apply in practice what is proposed for the algorithms; that means putting ethics inside the algorithms. That is a concrete example of how we, as government and governance, can change reality, or accompany the change in reality.
>> NATALIE BECKER-AAKERVIK: Thank you so much, Monsignor Ruiz. We truly appreciate your input and your response to that question.
Now, we do have some good time left. So, I would like to invite our panel to give your final messages and to share what you would like the audience here, and the global audience, to take away from this conversation. Whether it is an insight, an inspiration, or a call to action, what would you most like the audience to take away, and what is your final message? We will start with you again, Liisa-Ly Pakosta.
>> LIISA-LY PAKOSTA: Thank you. I will pick up from Monsignor Ruiz the expression that everybody should be the architect of their own life. And what we are discussing here, actually, is this: some architects are used to building stone houses, and Norway has wonderful old wooden houses. But then information is spread on social media that it is a good idea to build hay houses. They won't stand, as everybody who knows something about architecture knows, but it becomes a social phenomenon, which is actually bad for many people. This is the issue we are discussing here.
So, who would be the good guy to say that this is not good information, and what would be the administrative or ethical arrangements by which the companies take it down as quickly as possible, so that it would not be harmful to many people, or maybe only to some?
So, the question is how we find good solutions that are not dictatorship governments taking down all freedom of speech, and that are not regulations that take down the information we spread about the war in Ukraine. Or maybe there are some brilliant new ideas; maybe somebody has invented a good way to build hay houses that are actually better than wooden houses or stone houses.
So, I really believe that we as humankind can find good solutions globally, although we do not know very many good ones at the present moment. But we have to act together; especially the democratic countries have to act together to find these solutions.
>> NATALIE BECKER-AAKERVIK: Thank you so much, Liisa-Ly. And just for our panelists to know that you have four minutes to answer this question. Is there anything you want to add before we go over to Thibaut?
>> LIISA-LY PAKOSTA: Yeah. Thank you for this possibility.
>> NATALIE BECKER-AAKERVIK: You have one more minute.
>> LIISA-LY PAKOSTA: Trust is the main thing. And I started with education. Estonia is the first country in the world to start AI-driven education, from the 1st of September this year. So, we will experiment a lot. But what we really believe is that people have to have the knowledge. It's not only media literacy, but also the literacy of how we live in this technological world, so that we can be the best users of technology, and of AI. And I am absolutely certain that in this way, by being open to education and open to innovation, we can beat all the bad things from the technology side. We as human beings.
>> NATALIE BECKER-AAKERVIK: Thank you so much for those parting words.
Thibaut, what would be your takeaway here, the message you want to leave, as well as a call to action?
>> THIBAUT BRUTTIN: I think it's important to understand that we need to put ourselves in solutions mode. It's obvious that if you are an expert in problems, the more complicated the problem is, the happier you are. When you are an expert in solutions, you need to find, you know, achievable tasks that you can perform in order to succeed. And we are obviously facing a moment in the history of societies where we have to ask ourselves: what is the model of society we want? Do we want to leave the digital space to private interests? Do we want to leave it open to propaganda? Do we want the public conversation to be disrupted, destroyed maybe, along with the ability to keep civil concord a reality?
Obviously, this is a very political choice, in the most interesting sense of the word politics, and that is the question we have to ask ourselves collectively, because nobody alone is entitled to make that choice. Governments and citizens, not users, must make the decision. And the question at the very end is: do we want a public debate based on facts, or is any fact just another opinion?
We face today, globally, an offensive largely triggered from the United States of America that tries to present freedom of expression as being endangered by journalism. This is totally preposterous. You can have both freedom of expression and freedom of the press; that is actually the meaning of the First Amendment of the U.S. Constitution.
So, let us not be confused by some of the preposterous statements being made currently, which are just an expression of the interests of those who want deregulation. Freedom of expression is not survival of the fittest extended to the public conversation. We need to be very clear about that. If we are not, well, I'm totally okay with people voting about it. But I don't think it is in the interest of the majority to have a total deregulation of the public space that just favors those who can pay and those who speak the loudest. That's not how we have structured societies in the past.
So, let us be careful not to take away any rights from anyone, and to preserve both freedom of expression and freedom of the press. There is no contradiction between the two. And people who want to make you choose between one or the other are simply people who have an interest in selecting one and not the other. Thank you.
>> NATALIE BECKER-AAKERVIK: Thank you for your contribution there, Thibaut, and for a powerful message as well.
So, Lisa, over to you. What would you like our local and global audience to take away? Is there a call to action or something you would like to share?
>> LISA HAYES: Yeah. I guess I will close with a reflection on the topic of this panel. As a reminder, we are here to talk about ensuring human rights and resilient societies in the age of technology. From an industry perspective, I think the only way the digital space can be fun, entertaining and useful for all of us is if we carefully consider the balance. And by the balance, I mean the balance between the rights of the people who are using these technologies, with that human-centered focus and design, and the safety of the community online, while also building resiliency in the technology itself against new threats. And when I say building resiliency to new threats, I mean threats that have always existed in society and with new technologies, but that emerge differently in the digital space and can manifest very quickly. We need to continue to do those three things, rights, safety, and resiliency, in the design of all of these online systems.
And frankly, that's a job for all of us on this stage. It is a job for civil society, for industry, for government, for other experts. And we are only going to succeed if we work together, in whatever format, whether multistakeholder or sitting at your roundtables, to get the best ideas in one place, to agree on those ideas, and to drive them forward.
At TikTok, we are committed to doing our part to help protect our community. We remain committed to fighting harmful misinformation through strong policies and enforcement, and through our continued work with more than 20 global fact-checking organizations, which together cover more than 100 markets in 60 languages globally. We are also aiming to empower people by connecting them to authoritative information. We label unverified content, and we partner on media literacy campaigns that help people think critically about the content they are engaging with online.
We have several people from our team here this week, and I hope we can connect with all of you while we are on the ground. If you have not had a chance to drop by our booth, I hope you will do so. We have more information on all of these efforts, and we have a team that will be delighted to connect, to answer your tough questions, and to help build the Internet that we all want for tomorrow. That's why I have my 15-year-old daughter here with me today, to be quite candid. She is the north star that I bring to all of my work. I want to make sure that TikTok is a place where both my 15-year-old daughter and my 80-year-old father, who face different digital literacy issues, can examine and engage with the world around them, and do that well. And I thank everybody here for helping us do that.
>> NATALIE BECKER-AAKERVIK: Lisa, thank you so much for that contribution. We truly appreciate it. And Mr. Berge, before I go on to you, I just want to mention that Monsignor Ruiz has also expressed a wish to impart some messages in Spanish. So, if there is still a need to do that, if you would like to, there is a headset on every chair.
>> LUCIO RUIZ: That's okay.
>> NATALIE BECKER-AAKERVIK: I will leave it up to you. But before we get to you: there is a headset on every chair. If you want to pick it up, there are channels on it; you just find the English channel, 126. So, I am going to encourage the audience to do that. We have a couple of minutes before we get to Monsignor Ruiz. He can, of course, speak in English for our panel as well, but if there are messages he wants to add in Spanish, we are giving him the opportunity to do so. Everybody has a headset, these are the channels, this is the on button, and there you find the English channel.
So, just so you know, the possibility is there.
Now, for the final remarks from you, Mr. Berge: what is the takeaway you want to leave the audience with? Is there a call to action? Over to you.
>> BJORN BERGE: Thank you. Let me maybe start with what the Estonian Minister has also mentioned here, because this is really essential and very important. There is a cyber war ongoing right now, and we are the subjects, the targets, of massive Russian propaganda, misinformation and fake news. And, of course, this needs to be combated in every way we can. This is essential.
Secondly, disinformation has become a growing threat to our democracies, as we referred to earlier. It even goes to the core of democracy, which is elections and how elections are being manipulated. And generative AI is making this even faster, cheaper and harder to control.
So, what we really need is stronger intergovernmental cooperation and clear, enforceable legal standards to confront the threat: not patchwork fixes or after-the-fact reactions, but robust, future-proof rules, grounded in human rights and designed for global relevance.
At the same time, I believe we must work across borders, sectors and disciplines to counter disinformation and foster public resilience, because no single actor can fix this alone; only together can we create a more resilient information space. And ultimately, hopefully, a democratic digital space where facts matter, rights are respected, and voices are heard and not manipulated. So maybe those are my final words.
>> NATALIE BECKER-AAKERVIK: Thank you so much for that.
Monsignor Ruiz.
>> LUCIO RUIZ: Finally. Well, we are in a really amazing time, a challenging time for everybody. For this reason, I think that, first, we need always to seek the place of the human person, who must always be at the centre.
Second, to promote the person: every person, and the whole person.
Third, to help this deeply cultural process be a process for everyone, made by everyone.
And finally, to involve people not as users, but as part of the life and the culture. Thank you.
>> NATALIE BECKER-AAKERVIK: Thank you so much for that. Thank you, Monsignor, for your contributions. Thank you also to all our panelists for your contributions to this very important conversation. We have really come to the end of the discussion here today, with powerful calls to action and messages from our panelists. We thank you as well, our audience here and those of you watching globally, for your attention.
And I would like us now to say a very big thank you to our panel. Please give them a warm round of applause. Thank you so much.
(Applause)
And just before we take the group photo, I would also like to invite all of our audience, those of you here locally and those of you globally, back here tomorrow in this main plenary hall. We will be holding a number of sessions in this conference room throughout the week, presented by Norway, the proud host country of IGF 2025. We look forward to seeing you back here tomorrow for conversations like the one we had today, where we take a deep dive into the challenges and opportunities facing society in these times. So, we really thank you for your attention, and we look forward to seeing you again.
Thank you very much to our panelists. I'm going to ask you to stay on stage for a group photo; we would really appreciate that.
And then also to the rest of you, thank you for your time. We will see you tomorrow.
(Applause)