IGF 2024 - Day 4 - Workshop Room 3 - DCPR & IRPC: Information Integrity - Human Rights & Platform Responsibilities - RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> YASMIN CURZI:  Hi, everyone. Sorry for the delay.

We need to make (?) a co-host as well.

There are others who may or may not need permission to show their images and their videos.

>> DENNIS REDEKER:  Thank you.

>> YASMIN CURZI:  Thank you, all, for your patience. I'm a law professor in Rio de Janeiro.

This session is organised alongside my colleague Dennis Redeker (?) on Internet Rights and Principles.

This is regarding platform responsibilities, the DCPR & IRPC session on Information Integrity: Human Rights & Platform Responsibilities.

This is about how we engage with information and ideas, and with a charter.

With these advancements come critical challenges: the spread of disinformation, misinformation, and hate speech, and threats to democracy, are concerns that demand our attention.

The idea of information integrity has emerged as a framework to address these issues, but it remains an evolving concept that lacks a solid, unified, theoretical framework.

This session today aims to bridge this gap by exploring information integrity from the intersection of human rights and platform responsibilities. We just published the DCPR outcome; it is available on the Internet Governance Forum website, and I can put the link in the chat later for anyone who has an interest in it. Basically, in trying to explore this gap in the literature, we think the debate could draw more on the platform responsibility scholarship.

So we propose to (?) the sociability of platforms, online content, and governmental and international strategies to ensure and uphold human rights principles.

The goal is to support the ongoing activities of all stakeholders involved in this debate and involved in advancing human rights and inclusion in the digital landscape, and that's the idea of our session. I hope we can all engage here. You can raise questions.

We have a time for Q&A after the speakers' presentations.

I would like to thank you all again for being here. Thank you for the patience during the delay in starting the presentation.

I would like to give it to Dennis Redeker. Thank you so much once again.

>> DENNIS REDEKER:  Thank you very much, Yasmin. Thank you, everyone, the moderators and technical team.

We bring together two dynamic coalitions. Our coalition was founded in 2008, and it deals with today's question of platforms.

The rights and principles are also platform rights and principles, and the discourse on platform responsibilities, I would say, has taken a turn towards the question of human rights as a standard for platform actions and platform policies and as a way of negotiating between different jurisdictions.

The coalition works in this field and applies human rights standards to all kinds of digital technologies. The charter was written about 15 years ago. It specifically translates the Universal Declaration of Human Rights onto the Internet and applies it to other technologies.

I think a fruitful engagement with these kinds of documents, alongside the Universal Declaration of Human Rights (?) We look forward to this discussion today.

Thank you to everyone for joining today.

>> YASMIN CURZI:  Thanks, Dennis.

On the floor, we have    I don't know if they have access to the mics. We're not seeing the floor.

(Onsite mic is distorted)

>> YASMIN CURZI:  It's not working like it's    it's robotic.

>> Okay. Maybe I'm a robot.

Okay. I will try to stop it.

Okay. Is this better?

>> YASMIN CURZI:  Yeah. It's a lot better.

>> Okay. I have to hold it in a specific way.

So our next speaker here will be Ana Cristina Ruelas from UNESCO, working on (?) expression, formerly with Article 19 in Central America, and well qualified to speak on these topics.

So I will just hand it over to you, Ana.

>> ANA CRISTINA RUELAS:  Thank you very much.

And thank you, Yasmin, for the invitation. It's been great following up on this discussion from last year. It's been a little bit more than a year, but last year we had this conversation, and I think it's good to see how this has evolved and how many more people are engaging in a more, let's say, human rights based and multistakeholder conversation over information integrity.

Many things have happened over the last year. The most important thing inside the UN is the approval of the Global Digital Compact, which talks about the importance of information integrity and of allowing people to have access to reliable information in the digital sphere.

I think I will start with that because UNESCO started this conversation a long time ago, when we released the (?) principles and said any kind of discussion should be done through a human rights based and multistakeholder approach. After that, we started a huge discussion in order to try to identify how we should balance freedom of expression and access to information at the same time that we are dealing with the governance of digital platforms.

And this discussion led to the Guidelines for the Governance of Digital Platforms, which we believe help different stakeholder groups meaningfully engage in platform governance, and which try to make sure that, at the same time, governments fulfill their responsibilities to enable the environment for freedom of expression by refraining from shutdowns, by being transparent about the type of requirements they place on digital platforms, and by taking steps towards closing the digital divide, ensuring journalist protection and (?) viability.

We also have the responsibility of digital platforms to comply with five key principles, and those five key principles are very much related to what we should expect to happen in order to ensure that we promote the integrity of information.

The first principle relates to the responsibility for being transparent.

It is not only about transparency related to how they are putting in place their operations, their community guidelines. It's also about how they are dealing with content.

Platforms have the responsibility, and the responsiveness, to explain to their users how they are taking decisions over their content and how they are moderating or curating content.

Also, platforms should be able to respond to users on what measures they are taking to ensure that, while they design and use different products and services, they are actually being careful about the type of risks that these products and services could entail, primarily in contexts like elections, crises, and conflicts, but also when there is a change of operations or where there is a case to protect specific people who are critical to the freedom of expression sphere, such as journalists, human rights defenders, artists, et cetera.

And the other thing the guidelines say is that it is very important to create tools, from the governance side, for users and non-users to be in control of their content and make their own decisions.

This is also important because, although there are now a lot of actions to strengthen the integrity of information, I do think it is very important that we recognise the need to ensure that different users, primarily the most vulnerable and marginalised, can create their own content, can create counter-narratives to those disseminating disinformation and hate speech, and can take control over their engagement with different services and products.

The Global Digital Compact says something important that relates to how to ensure information integrity, which UNESCO is trying to move forward along with different stakeholders: multistakeholder networks. We acknowledge that this is not only something that has to happen between regulatory authorities and regulated entities; having the participation of civil society actors, academics, and journalists in the discussion is very, very important.

For that, UNESCO is strengthening the possibility of having regulatory networks of 30 entities, with think tanks and centres that would serve for the implementation of the guidelines.

This is very much related to the importance of integrating into this discussion those that are actually implementing the rules that are being imposed by the legislative authorities.

For us, right now, the way forward to ensure information integrity relates to (?) human rights based governance of digital platforms, as I already said, and, on the other side, to the strengthening of networks that understand the importance of bringing together different stakeholders, not just one role for each, but creating policies together and making sure that regulatory authorities have the specific capacities to deal with the new problems related to information integrity.

So I will leave it like that, and we can continue the conversation.

Thank you very much.

>> Okay. I think you can see me, Anna, that I'm here in person. Can you hear me?

>> ANA CRISTINA RUELAS:  I can hear you, but I can't see you.

>> I tried turning on the video, but for some reason, it stops.

Can you hear me now?

>> ANA CRISTINA RUELAS:  I can hear you.

>> I want to say thank you to Yasmin for inviting me. It's a pleasure to share this session with you. It's really incredible. Ana Cristina Ruelas is a very important piece of this. I think there's many important topics to bring to us. Let me just see here.

Thank you, Ana Cristina. I think we have very specific work from UNESCO in producing knowledge about how human rights are (indiscernible) by new technology and economic and social challenges.

I'm not    hearing myself well. Can you hear me well?

>> ANA CRISTINA RUELAS:  Yeah.

>> Okay. Let's go. Okay. It's so much better.

I would like to ask Merrin, one of the speakers, to share her contributions about the Global South, please.

>> MERRIN: Hi, everyone. Thank you. Am I audible?

>> ANA CRISTINA RUELAS:  Yeah.

>> MERRIN:  Thank you for having me as part of this panel and this important conversation.

I think in my intervention, I would like to talk about some of the limitations that we see in the dominant framing of information integrity and in the current approaches we have seen so far to strengthen information integrity.

So information integrity is a much discussed as well as a much contested term. Initially defined in terms of accuracy, consistency, and reliability, the concept has now evolved. When we look at the UN Global Principles for Information Integrity, which were released in June of this year, we can see the focus is no longer primarily on the information itself but on the integrity of the information ecosystem.

And this is a welcome and necessary change, because many have been pointing out how the earlier framing of information integrity, in terms of just accuracy, consistency, and reliability, tended to neglect the system of actors and the political and non political factors that shape the (?) dissemination of information.

While we have a broader framing of integrity now, we still need to deliberate on what it means in actual terms so it doesn't become another buzzword. The UN Global Principles describe integrity as, and when you look at the GDC you can see similar language, a (?) space, one that enables trust, knowledge, and individual choice.

And although no one can dispute that these are important values to achieve, the question is what they mean in practical terms. What kind of (?) are we talking about? How can we remove the barriers to exercising individual choice? What is required to build trust in diverse contexts?

So these questions demand careful consideration, especially grounded in the unique challenges in different regions.

In other words, while it is useful to have an overarching set of principles, these have to be infused with meaning about (?) what information integrity should entail in each context.

(Audio is cutting in and out)

>> MERRIN:  This simplifies information integrity to having some trustworthy information.

The assumption that just providing trustworthy and accurate information will automatically inform the public ignores the complexities of communication dynamics and public trust.

It also overlooks the ways in which diverse populations interpret and engage with information.

So I think a meaningful articulation of information integrity must address the informational needs of people in their regional and social contexts.

It must identify the barriers that prevent them from engaging meaningfully and ensure their ability to participate in public deliberations.

So the dominant framing of information integrity, which focuses primarily on the supply side of information, that is, ensuring accuracy, reliability, and trustworthiness, can be inadequate. The information integrity debate should also grapple with an individual's ability to (?) opinions, to speak, and, most importantly, to have their voices heard.

In many regions, there are people who have faced criminalisation and censorship. Also, they have had limited opportunity to produce information.

The third point I wanted to raise is some of the limitations of the current dominant approaches to strengthening information integrity.

The UN Global Principles for Information Integrity may have adopted a broader framing of the concept, yet the recommended responses do not seem (?) to bring forward the responsible changes in the ecosystem.

State based (?) and technology companies.

It's true that these approaches and measures, platform regulation measures like improving content (?), are very important and crucial, must be implemented with great vigor, and can go a long way in improving aspects of the information ecosystem.

However, they tend to be symptomatic remedies. They operate within the confines of surveillance capitalism. We need structural reforms as well.

Even when we talk about measures to enhance competition among digital products and services (?) these are significant and should be carried forward. But even with these measures, I'm not sure if they guarantee a shift away from the service oriented mode of curating conversations that we have now.

Even measures to give users more control over the content that they view.

I'm not sure how successful that will be, given the polarised environment, where entrenched platform designs make such choices challenging even for tech savvy users. A (?) population may struggle, and they still rely on messaging apps like WhatsApp to get basic information or welfare services.

I'm not sure how successful it will be to safeguard the information and strengthen integrity.

The playbook of consumer focused remedies (?) does not address the real cause of the problem, which is the business model of the platforms and the design architecture that (?) and beyond.

So challenging this paradigm needs structural changes to social media platform architecture and not just procedural rules.

We must reimagine how public discourse is managed online, putting integrity at the centre.

These values of truth seem incompatible with the model that we follow today. Hence, to safeguard information integrity, it is wise to shift away from the current model, rather than leaving it to platforms to evaluate their own models.

We need structural reforms. We need legislation to enforce these structural reforms through legal and policy measures.

There are examples of how such structural reforms can be brought about.

It could be a (?) for addressing the societal harm.

Instead of prioritising (?), platforms should (?) tailored to different contexts.

Regulation could even require platforms to change their structures from profit seeking to non-profit models. We need bold solutions to really strike at the real cause of the problem.

And beyond regulation, there are other measures and things to think about, like (?) media and communication platforms. There is a strong case for initiatives with a civic mission to provide citizens with a (?) global view of the world.

And this would require public funding and policy support.

Then, reviving local journalism is very important. It is essential for combatting misinformation and restoring trust in communities.

Governments can play a role in this by providing sustained financial support, implementing revenue sharing with platforms (?) and thinking about alternative business models to ensure journalism's survival in this changed landscape.

So, to conclude, I would just like to say that addressing the chaos in today's information landscape demands structural reforms that (?) platform business models (?) design architecture.

Along with that, we should have a positive (?) of what an ecosystem should look like and take measures that would create centres for truth and public deliberation, which would satisfy the information needs of the people and remove the barriers that people face in meaningfully engaging with, and responding to, the information they receive.

Yeah. That's it. Thank you.

>> Thank you, Merrin. That was a very eloquent presentation of the real cause of the problem, and even some good ideas on how to address it.

Although, the one thing you're still missing is how to find the political will to do that. Talking here is one way to push exactly that. So maybe there is hope.

I will not delve more into that. Next, we were supposed to have (?) but she is not present; someone from her office is.

>> MAYRA SAITO: Yes. It's Mayra.

>> Please go on.

>> MAYRA SAITO: Hi. Hello. I don't know if you can hear me properly. Mayra Saito. I cannot open the camera. It says the host has closed it.

>> I cannot facilitate it either.

>> MAYRA SAITO:  It's most important that you can hear me.

>> I'm sorry, Mayra.

>> MAYRA SAITO: No problem. Don't worry.

At least the technical issues are minor this year.

>> Did it work?

Mayra, I think you can start, and they can solve it later. Thank you so much for your patience.

>> MAYRA SAITO:  Oh, yes. I was having a problem opening my mic too. (Laughter).

Again, good morning. Thank you very much for inviting Brazil to take part in this debate. As I said, I'm actually replacing our director for freedom of speech, Samara. She had a last minute incident and could not join us here today.

I did not have much time to prepare my intervention here, but I hope that I can contribute to the debate.

Well, first of all, I would like to say that my institution, the Secretariat for Digital Policies, is new to the government. It was created when it became very clear for us that we needed a digital policy to deal with the issues of disinformation, misinformation, and all the new challenges that the digital environment imposes on information ecosystems.

We had, as you may know, several important cases of disinformation that had a severe impact on the elections, threatening our democracy and also our policies, including health policies, for example, during the COVID pandemic, when the spread of disinformation affected our policies and (?) levels.

(Audio interference)

>> MAYRA SAITO:  We were focused on fighting disinformation. Our government established, by decree, the (?) for the (?) of democracy, and we designed institutional channels to ensure that cases of disinformation threatening the democratic system and the implementation of federal policies would be duly followed up by the government and the institutions responsible for that.

(Audio interference)

>> MAYRA SAITO:  In cases of disinformation, our Secretariat started as a pilot to deal with approaches to disinformation in concrete sectors.

First of all, we decided we would focus our attention on the health sector, considering the serious impact that disinformation had on the decrease of vaccination levels in Brazil.

Just as an example, in 2021, we reached the lowest levels of vaccination in Brazil, similar to what we had in the '80s. Health with Science was launched one year ago and is supporting the Ministry of Health regarding vaccination in Brazil.

This is related to partnerships and (?)   

(Audio interference)

>> MAYRA SAITO:  We understood that the concept of information integrity allows us to have a positive approach to the ecosystem and the challenges it poses to public debate. We understand this is important to democracies and to the right of access to reliable, accurate, and evidence based information.

So with this understanding that information integrity was the concept that allowed us to have an integrated approach to the new digital information environment, Brazil has been very active in including this concept nationally and internationally in the debate.

So as Brazil held the presidency of the G20 until November this year, we worked to include this issue, the issue of information integrity, in the G20 agenda, and we managed to include (?) information integrity in the G20's working group on the digital economy.

That was the first time that the G20 countries committed to act to promote information integrity in the digital space. So we think it's an important step.

We also contributed to the discussions on information integrity in New York during the negotiations of the Global Digital Compact, and we were happy to see information integrity reflected in the GDC.

We also (?) to the OECD on information integrity.

And now, at the G20 Summit last November, we officially launched, with the United Nations and UNESCO, the Global Initiative for Information Integrity on Climate Change.

The aim is to focus our attention on how to implement and operationalise this concept in a specific sector, which is not so specific or specialised because, of course, it involves the whole of society, but, in any case, it is a concrete implementation of the concept.

And the idea of this initiative is to join forces between governments and international organisations and civil society organisations to promote the information integrity on climate change through a global fund to be managed by UNESCO.

The fund will support research projects, communication projects, and bring forward existing campaigns.

The initiative also aims at promoting the debate on information integrity in the international agenda, including the Conference of the Parties of the UNFCCC.

And we are now preparing to launch the Brazilian chapter of the global initiative.

The Brazilian chapter will be our national implementation of the global initiative for information integrity on climate change, and our intention is to create, through the experience that we have gained with our Health with Science programme, an integrated approach to climate change, talking with different actors, including civil society, mobilising the private sector, and the government, of course, so that it is a very inclusive process.

And to be faithful to the concept of information integrity as defined in the UN's Global Principles for Information Integrity.

We need to understand that matters need to be treated separately as they don't depend exclusively on the power of the federal government.

In this context, we are focusing our attention on research, accountability, support to journalism, education, and positive incentives for integrity.

In this last pillar, we wish to work to foster a national coalition of advertisers for information integrity, which is something that I think is very important and which also deals with some of the structural issues of how advertising and publicity work in the digital space.

But it is also important to mention that, even though in the Brazilian chapter of the global initiative we are not focusing our attention on regulation   

(Captioner lost online audio)

>> I think Mayra is offline right now. Something happened with her connection, maybe.

>> MAYRA SAITO:  I think it's the mic. Sorry.

So I was also just saying that, even though our actions here in the executive branch are not focused primarily on regulation, because we know it depends on other branches of government which are not ours to decide, we are trying to have an active participation in the discussions on regulation that are taking place in Brazil's National Congress.

Recently, our Senate approved an important bill on artificial intelligence. This law is not finally approved because it still needs to be approved by the Chamber of Deputies, but it is an important document in which we managed to establish governance for AI systems, with a regulatory agency, according to a scale of AI systems categorised by levels of risk, and with a special focus on human rights and different due diligence responsibilities.

And where content subject to authors' rights is used to train AI systems, payment will have to be made to the rights owners.

Moreover, we managed to include, also, the term information integrity in the text. We're trying to foster this debate on integrity. We want to test different approaches on how to completely operationalise it as a public policy.

So I think that's what I had to say here.

Thank you.

>> Hello? Can you hear me well? Now I think you can see me too. Okay. That's good.

Thank you, Mayra. You have a really hard job.

I want to read this. I realise we're competing with the lunchtime. This is unfair.

(Audio is cutting in and out)

>> I'm sorry. Your mic is cutting off a bit. I don't know what's happening. Someone is sabotaging our session.

>> Is it better now?

>> Yes. It's a bit better. Thank you so much.

>> I'm so sorry for this, guys. I'm trying to read my notes regarding the (?) But   

(Captioner has lost online audio)

>> Now you're totally muted. Maybe the battery on the mic is off.

Nothing. Not a word.

>> I think the room's mic is off.

>> Oh, yeah, the room's mic is off.

Nothing.

Okay. Now it's better.

>> Okay. Can you hear me? Okay.

>> Yes, yes, yes.

>> Okay. So I think it's really terrible. I need to turn off the phone because it's better to talk. I think you can hear me well now, right? Okay. I can hear you, but I'm going to read some points that I noted when Mayra was talking.

The G20 working group under Brazil's presidency achieved a landmark (?) of promoting information integrity. We have a unilateral (?) in this crucial issue, highlighting its impact on economic stability.

Four key areas were identified: (?) governance, artificial intelligence, and information integrity.

Brazil champions a comprehensive approach, balancing rights and promoting transparency through initiatives like Brazil Against Fake News.

This underscores the significance of this achievement, paving the way to (?) information ecosystem. These are some points I wrote down while Mayra was speaking, because I think we have a really good experience in Brazil. Mayra, it's very important to have you here to share this contribution with us. So thank you so much.

But now, I would like to ask Professor Yasmin Curzi to share her contributions.

I read her article this week, on gender (?) and integrity. It's really good.

Yasmin, the floor is yours.

>> YASMIN CURZI:  Thank you, and thank you so much, Merrin, for the invitation to contribute. I'm putting the link here for any of you who have an interest in following this discussion as well. What we are proposing in the article that Merrin invited me to contribute to is discussing information integrity through a gender approach. Basically, what I'm trying to link here is how feminist scholarship can contribute to the debate on platform responsibilities and information integrity.

There are some lessons from feminist scholarship that could actually be utilised to inform policymaking on information integrity.

I'm being brief because we need time for Dennis to speak and to have the Q&A. But, basically, this feminist approach to information integrity rests on two proposals that I think are central here. From feminist scholarship: we have to address inequality at the root of the cause.

So, as Merrin's presentation highlighted, we have structural issues regarding information integrity that relate specifically to media pluralism, the lack of media pluralism in the ecosystem, and power dynamics that are unequal not only in Brazil but in all countries, actually. We need policies to decentralise the media monopoly. I'm not only talking about big techs but media in general, TV channels, et cetera. We need actions and policymaking to enable and foster media diversity in this sense.

Another lesson from feminist scholarship that could also help inform the information integrity debate is to enable more participation in these spaces, more participation from local and regional initiatives, to actually bridge this gap and bring more diversity not only to policymaking and participation processes but also to content production and moderation.

Another thing that feminist theorists, activists, and scholars have been highlighting is that platforms need to open themselves up to actually learn from the experiences and the research that feminist scholars, activists, and civil society have been producing: reports showcasing how online harassment and coordinated campaigns have been specifically targeting women and LGBTQ+ communities.

This affects the participation of minorities (?) literature for a long time. But as Merrin, Ana Cristina, and others have been highlighting here, we need to address platforms' monopoly. We need to try to talk to these actors and actually make them engage more with human rights in an active stance, not only promoting truth on the Internet but also tackling how their algorithms and systems actually promote hate speech, which relates to the attention economy and to how they profit from this.

So we need to actually look at the root of the problem to be able to create efficient solutions to it.

These are the ideas we tried to highlight in this article.

Thank you for your time and patience here. I will pass the floor now to Dennis Redeker so he can also speak about his research.

Thank you so much.

>> DENNIS REDEKER:  Thank you, Yasmin. Thank you, everybody. This was already a fantastic discussion, and I really appreciate the different perspectives that we bring together here in this session. I think this is the spirit of the IGF, right? This is the multistakeholder perspectives, and we just heard some    well, Yasmin's research perspectives too.

Let me share some research conducted at the University of Bremen that fits this topic very well.

I have my academic hat on here too. This is in addition to my hat as co-chair of the coalition.

Can you see the slides of my presentation?

>> YASMIN CURZI:  Yes.

>> DENNIS REDEKER:  I wanted to show a study that we conducted asking people about their attitudes toward social media platforms. A number of questions related directly, I think, to the information integrity, even though the discourse of information integrity has now become a global discourse.

Thank you to Brazil, UNESCO, and others for championing these efforts, also at the state and interstate level.

The research was done in 41 countries, as I said, mostly in the Global South and Eastern Europe, conducted at the University of Bremen, and the methods were based on an online survey with a questionnaire in six languages, fielded from 2022 to early 2023.

It includes 17,500 respondents.

I can tell you one thing. On a scale from one to five, the average is quite high, at almost four. So quite a number of people are concerned about misinformation around the world, but it differs according to where you are and in which country you live.

I think there's a number of other characteristics too.

Why is this important? We often see this debate on information integrity. I think it's a global one, a very important one, but it applies   

(Audio interference)

>> There's something very disturbing to the sound. I'm not sure where it's coming from. Maybe it's the mic or something.

>> DENNIS REDEKER:  Let me check. There's no background noise here, but I can speak closer.

>> Now it's good, actually. Keep closer to the mic. I guess that's the answer.

>> DENNIS REDEKER:  Okay. I will do that.

So comparing the survey results by country, we see that, for example, people in Poland have the most concern about disinformation, whereas people in Haiti have the least concern.

There's a Global North/South dimension here: Switzerland, and sub-Saharan Africa, including Ghana and Nigeria.

Misinformation is one thing, but the other thing is a question that we wanted to bring in as the coalition: to what extent are human rights better protected in the times of social media? One question we asked is whether people thought that, since the advent of social media, the following five human rights have been better or worse protected.

Has the protection of human rights, in other words, increased, decreased, or remained the same?

The legend on the right, you will see, this was 17 and a half thousand people.

It's quite interesting. We have to remember that, in spite of hate speech, the Internet is a force that helps protect human rights such as access to information. A large majority of people say that it has increased the protection of that right, and that the protection of the right to freedom of expression has increased; two thirds say that.

Some people say a decrease.

Some people say it stays the same.

Equality is interesting. It relates, perhaps, to Merrin's talk, which came from a social justice perspective. What does it mean for equality that we have platforms? Some say an increase, some a decrease, but most people say it remains the same.

For life and liberty, there are also quite a number of people who say that, since the advent of social media, the protection of that right has decreased.

When we talk about hate speech specifically, that may be one of the drivers of that. There's also a strong correlation between that answer and a concern for hate speech.

But I don't have that data with me right now. I think, also strikingly and not surprisingly, the protection of the right to privacy is seen by respondents in those 41 countries to have overwhelmingly decreased.

Some people think that social media has increased that protection of that right or it remains the same.

These are some insights from a survey that hopefully helps us to contextualise the debate. On the one hand, the protection of some rights, particularly the right to information and freedom of expression, seems to have increased from a human rights perspective. That's positive.

But people have concerns, not just about hate speech but, as I've demonstrated, also a high level of concern about misinformation. I think this is something that the survey shows. Governments and other actors all need to work together on this to realise information integrity across all levels and across all countries.

Thank you. 

>> Thank you, Dennis. I don't think we have any more prepared speeches. Should we switch to questions from the audience?

Anybody in the room?

>> YASMIN CURZI:  I think we have questions online as well?

>> Can you read them?

>> YASMIN CURZI:  Yep. Just a minute. I'm muting   

(Audio interference)

(No discernible speaker)

>> Hi. Can you hear me?

>> YASMIN CURZI:  Yes.

>> GIULLIA THOMAZ:  Okay. Thanks, everyone. I speak from Brazil. I'm Giullia. I'm at the University of Brazil. Since we see the rise of   

(Audio interference)

>> GIULLIA THOMAZ:     law enforcement, which are integrating social media into their work, I was wondering    of course, from my perspective (?) but there's accountability and (?) in reaching harmful content, here specifically including criminally targeting opponents' reputations, for example, and this shows how critical elections can be and remains a challenge in the digital (?). Given that context, how do you measure the performance of social media platforms during the 2024 municipal elections globally, what are the outcomes that you wouldn't like to see in the future, and what is civil society's role   

>> One day before the municipal election in São Paulo, we had an influencer spreading misinformation.

This is a problem we're dealing with in Brazil. As mentioned, we need accountability, and we need transparency. We have to regulate how social media platforms deal with this.

In Brazil, we think that the main issue is to approve the laws that we are discussing in Congress, which are now paralysed, unfortunately, but the government has been trying to address the issue however it can, even without the regulation being approved yet. Our electoral court is trying to launch some other regulations and normative instruments to deal with this kind of thing.

But this is something that we need to address.

I think the civil society is important. Since 2022, when we actually had widespread disinformation during the elections and also in 2018, I think the pressure that the civil society makes is very important to ensure that governments try to deal with this in a structural and effective manner.

So this is an ongoing discussion still. We're trying to, on the one hand, negotiate and talk to the platforms and try to ensure that they apply their own moderation rules. But, also, I think we need to advance on the discussion on regulation, actually.

>> YASMIN CURZI:  Thank you so much, Mayra. I think we have one more question.

I don't know if any of you wanted to address this question as well.

>> ANA CRISTINA RUELAS:  Go ahead, Dennis. Just start.

>> DENNIS REDEKER:  This is a way of connecting different sessions here at the IGF.

Yesterday, we had a workshop on the question of AI and disinformation during elections. So your question is so very much valued. It spoke to that conversation that we had.

A researcher from Oxford University brought up the case of the Romanian elections that took place recently, where, in fact, a court cancelled the results of the presidential election due to the apparent benefitting of one candidate by a major social media platform.

So it's a question of transparency. The question of information integrity around elections is very important. This debate has illustrated all the challenges we have with regard to information integrity, and it would be much better, obviously, to have solutions in place beforehand so that elections can take place uninhibited, rather than having to cancel them afterwards.

That's a very dramatic and potentially also problematic democratic practice.

So I think that's something we brought up. I wanted to bring that back into this debate.

>> ANA CRISTINA RUELAS:  Yeah. Well, I wanted to mention also that in the Guidelines for the Governance of Digital Platforms, for instance, one of the things that we highlight in a very important manner is that we need to be careful in labelling content, because although it's true that there's a lot of systematic disinformation that can be recognised as subject to restriction, there is specific content that should go through a specific process of law.

This is important because elections are actually a moment where there are special protections for freedom of expression, because it is also important for society to have plurality and access to a variety of information. We cannot just go and try to regulate, let's say, a trend or a specific type of content.

That's why, and I'm glad that I came after Dennis because he mentioned transparency, I think that what we need, as civil society and as international organisations, is to ensure that platforms are accountable for the management of the systemic risks that they see and foresee when it comes to elections. We saw during the 2024 elections that some of the platforms published the risk assessments that they did before the process started. But we don't know, for instance, right now, how they have evaluated those measures.

How often did they continue doing this assessment? Because risk is not static; risk moves, and it's important that, during an election process, they can move with it.

We also don't know if they reinforced measures that they have in, let's say, peaceful times and not election times.

We do not know how they manage political advertising during non election times and during election times.

I think there's more elements of transparency and due diligence and accountability that we need to start putting forward and start reinforcing during election times that we have the key for the next election periods.

On the government side, it's very important that governments and electoral bodies try to be transparent about the types of requirements that they place on platforms during election time and outside of election time. What are the platforms they are using to moderate? What, for instance, are the types of requirements they are placing on staff members?

These different types of things matter when it comes to multistakeholder approaches, as does how government is being transparent about the apps that it is putting forward during electoral times.

There are also new discussions on the specific measures that should be reinforced during this period, acknowledging, you know, that elections do not start with the (?); it's a whole cycle that needs to be revised and updated. And, definitely, civil society actors should be participating.

>> I just wanted to come in here and add on the role of electoral management bodies. I think it's very crucial, because there has been a (?) on the role of social media platforms. There have been some guidelines of sorts, so they're now also actively looking at social media activity, and there is a set of guidelines that political candidates have to follow, but the issue is implementation.

So there have been a couple of instances where a major political party had (?) a video against a particular community, a hate video, and it took a lot of time for the Commission to ask the platform to act. It took time for the video to be taken down; it must have gone viral by then. I think they should look out more actively for such instances.

They trust the social media platforms to monitor the content.

I think the regulators' role here is very important. They should come up with some concrete SOPs that are relevant and contextualised to the digital world. That's it.

Yeah, I just wanted to highlight the importance of coordination between the electoral bodies and platforms during the election time.

Thank you.

>> YASMIN CURZI:  Thank you so much. I have some questions I wanted to hear your views on.

Merrin touched a bit upon this.

Should the responsibility of balancing freedom of expression lie solely with platforms? What role should other stakeholders play, and how do you see them? Are there any experiences of platform regulation that are going well in any country?

Is there a permanent role like, for example, here in Brazil? Or should we think about regulatory bodies within the government, for example, as in Türkiye or other countries, the United Kingdom as well? So what do you think about these arrangements? Is there a good model we should look at on the horizon, or is this not possible yet?

>> ANA CRISTINA RUELAS:  So I will not say that there is a perfect model just yet. I think in many of the cases, this is something that we are still watching as it proves itself. For example, in the case of the European Union, the Digital Services Act just had its first human rights risk assessments published two weeks ago, and researchers are looking at what they found there. The UK Online Safety Act is still in the process of publishing the mechanisms for managing risk.

So I think there is a lot still to see, for researchers and for civil society, about how this is actually changing behaviours from the platforms and engaging new actors in the discussion of digital platform governance. In the Asia Pacific region, too, there are mechanisms that have been put in place. So there are different models we have seen all over the place.

I think regulatory authorities are also changing their view, and they're (?) focusing more on systems and processes.

We saw a lot of solutions to criminalise content, and now we're moving more towards trying to identify how the systems and processes should be, you know, how platforms should be accountable for the systems and processes that they put in place in order to identify potentially harmful content.

And to deal with these issues, I think, as I said in my last intervention, the published human rights assessments that the platforms did before elections are a good step forward. You know, it is definitely not the only thing that they should do, but I think it is important.

I definitely don't think that it only relies on the platforms.

I think that in order to enable the environment for freedom of expression online, there should be participation of states, and there should be participation of civil society, academics, and, et cetera.

But specifically from state actors, you know, one of the things that was mentioned by Merrin is how do you ensure that, once you're included in the Internet ecosystem, you have the digital tools (?) and regulators or platforms. So I think there's definitely a duty. It was also said that we need to protect journalists, who are critical voices that we need to protect in the digital space as well.

We definitely need investment from governments in media development, considering that platforms have already taken most of the advertising revenue from traditional media.

I think there's different actions that should be taken by different stakeholders, and it's not just only one thing.

>> On whether I'm in favour of giving the platform or the state control over how to (?), both of them have their issues. Speaking from the Indian laws, I can say that earlier, platforms were allowed the safe harbour protection, right? As long as they're not aware of it, they cannot be held liable.

Then there was a court order that said that, actually, all such decisions should come from the court; any dispute that you have has to go to the court. (?) And the problem with that is that, in the context of media, going to the court and getting an order is really not helpful for a lot of content, especially content related to online gender based violence, where (?) is necessary. So this requires us to rely on the platforms again.

And then we realised that platforms are not doing well at all. So now in India, we have a system where, if someone has (?) with the platform, they can go to a government body that will have government bureaucrats looking at it. There are concerns about how fair that's going to be.

So, yeah, I think it's a tricky thing, as to who should have the ultimate say.

I want to talk about where they collaborate with the platforms to actively take things down.

I think the civil society role is also important in really taking down harmful content from the Internet. Yeah.

Thank you.

>> Dennis, do you want to add something as well? Or should we wrap up?

>> I will throw one curveball into the discussion. Have you thought about (?) There are some chat tools that are based only on routing or (?) communication or whatever, and there's no point where you can sue, or no company behind them that you can give orders to. How can you regulate those? What can be done about those?

Too difficult a question, I'm afraid, at this point.

>> Does anyone want to address this?

>> I may not have an answer on how to regulate it. When we talk about dismantling the structures, as in my intervention, for example, there have been instances like social code, which is (?)    (audio is distorted)    so while this poses an alternative to the model, there are (?) with respect to moderation.

I'm not really sure how to regulate certain platforms. I feel they should be explored, and they do have their advantages.

Instead of some centralised team of moderators, it allows the users themselves to engage in moderation activities, but, again, I don't know how much it can be scaled at a larger level. The (?) to a lot of people, and how to deal with the moderation problem, is really something that I was thinking about.

I would like to hear if anybody else has an opinion.

>> Just noting we have four minutes to go.

It's still a server that's controlled by someone. They have different moderation policies.

But some different approaches are much harder to moderate.

I have spoken enough. Yasmin, carry on whatever you want to say. We have 3.5 minutes.

>> Thank you. Thanks, Ana Cristina Ruelas and Merrin for joining.

Thanks, everyone, who is watching us online or in person.

Apologies again for the technical glitches, et cetera. See you next year. Please keep up with the IRPC activities. We have a mailing list if you want to subscribe and engage with us.

Please write to me or Dennis, depending on the coalition you want to join.

We much appreciate people coming in.

Thank you so much, once again. I hope you enjoyed the session.

See you next time. Bye bye.

>> Dennis, you have two minutes if you want to say something else.

>> DENNIS REDEKER:  Thank you so much, everyone, speakers, moderators, technical team in Riyadh.

Thank you so much for the session, and we'll see you next year at the IGF. Please check out the website and join the email list.

>> We're finishing in time. More than a minute to go. Perfect. See you in Oslo.

>> Bye bye.