IGF 2024 - Day 4 - Workshop Room 5 - OF 35 Advancing Online Safety Role Standards -- RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MENNO ETTEMA: Good morning, everyone. Good afternoon for those in other parts of the world, or good evening, good night. We are here at an open forum for one hour, a short timeline to discuss quite a challenging topic, which is to advance online safety and human rights standards in that space. I will shortly introduce myself first. I'm Menno Ettema. I work for the Council of Europe in the Anti-Discrimination Department, working on hate speech, hate crime, and artificial intelligence. And I'm joined by quite a distinguished list of speakers and guests.

I'm joined here by Clare McGlynn, a Professor at Durham Law School and an expert on violence against women and girls online. We are also joined in the room by Ivana Bartoletti, Member of the Committee of Experts on AI, Equality, and Non-discrimination of the Council of Europe, and also Vice President and Global Chief Privacy and AI Governance Officer at Wipro.

Also with us is Naomi Trewinnard, from the Council of Europe as well, working on sexual violence against children with the Lanzarote Convention. And we have the moderator Charlotte Gilmartin, who works in the Anti-Discrimination Department and is an expert with the Council of Europe. And Octavian Sofransky is a Digital Governance Advisor, also at the Council of Europe.

The session is about human rights standards and whether they also apply online, question mark. And I think it's important to acknowledge that the UN and regional institutions, like the Council of Europe, but also the African Union and others, have developed robust human rights standards for all their Member States, and these also cover other key stakeholders, including business and civil society.

The UN and the Council of Europe have clearly stated that human rights apply equally online as they do offline, but how can well-established human rights standards be understood for the online space and in new digital technologies? So, that's the question of today.

I would like to give the floor first to Octavian, who will provide us a little bit of information about the Council of Europe's Digital Agenda, just to set the frame for our institution, and then we will broaden the discussion from there. Well, actually, narrow it into really working on the anti-discrimination field. Okay, Octavian, the floor is yours.

>> OCTAVIAN SOFRANSKY: Ladies and gentlemen, dear colleagues, I'm greeting you from Strasbourg. The Council of Europe, the organizer of this session, remains unwavering in its commitment to maintain human rights, democracy, and the rule of law in the digital environment. This dedication was reaffirmed by the Council of Europe's Secretary General during the European Dialogue on Internet Governance last June. The Secretary General emphasized that the digital dimension of freedom is a priority for the Council of Europe.

Our organization has always recognised the importance of balancing innovation and regulation in the realm of new technologies. In reality, these elements should not be viewed as opposing forces but as complementary partners, ensuring that technological advancements genuinely benefit our societies.

A Council of Europe Committee of Ministers Declaration on the WSIS+20 review was issued this September, advocating for a people-centered approach to Internet development and the multistakeholder model of Internet governance, and supporting the extension of the IGF mandate for the next decade.

Moreover, we are proud to announce the adoption of the pioneering Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law last May. This landmark convention, which was opened for signature at the Conference of Ministers of Justice on 5 September, very recently, is the first legally binding international instrument in this field and has already been signed by 11 states around the world. Sectoral instruments will complement this convention, possibly including on online safety, our session topic today.

As a longtime supporter of the IGF process, the Council of Europe has prepared several sessions for this Riyadh edition of the IGF, including on privacy, artificial intelligence, and indeed, the current session on online safety, a topic that remains a top priority for all European states and their citizens. Thank you. Over to you, Menno.

>> MENNO ETTEMA: Thank you, Octavian, for elaborating on the Council of Europe's work and the reason for this session and a few others. Can I ask all the speakers that are joining us online to switch on their cameras? It makes it a little bit more lively for us here in the room, but also for those joining online. Thank you very much.

And I would like to thank you for this, Octavian, and I would like to go over to Naomi, because the Lanzarote Convention on sexual violence against children has long-term experience with the topic. It's a very strong standard. But recently, a new document was published on the digital dimension of sexual violence against children. And Naomi, I give the floor to you to introduce the Convention and the work that it does.

>> NAOMI TREWINNARD: Thank you, Menno. Good morning, good afternoon, everybody. I'm very pleased to be joining you today. I'm a Legal Advisor at the Lanzarote Committee Secretariat; that's the Committee of the Parties to the Convention on the Protection of Children against Sexual Exploitation and Sexual Abuse. So, this is a comprehensive treaty open to States worldwide, and it aims to prevent and protect children against sexual abuse and to prosecute those who offend.

So, I wanted to just briefly present some of the standards that are set out in this Convention, firstly to do with prevention. It requires States to screen and train professionals, to ensure that children receive education about the risks of sexual abuse and how they can access support if they're a victim, as well as general awareness-raising for all members of the community, and also preventive intervention programmes.

When it comes to protection, really, we're trying to encourage professionals and the general public to report cases of suspected sexual abuse, and also to provide assistance and support to victims, including setting up helplines for children.

When it comes to prosecution, it's really essential to ensure that perpetrators are brought to justice, and this comes through criminalizing all forms of sexual exploitation and sexual abuse, including those that are committed online, for example, solicitation or grooming of a child, offenses related to child sexual abuse materials (so-called child pornography), and also witnessing or participating in sexual acts over a webcam.

The Convention also sets out standards to ensure that investigations and criminal proceedings are child-friendly, so the aim there is really to avoid revictimizing or retraumatizing the child victim, and also to obtain best evidence and uphold the rights of the defense.

So, in this respect, the Lanzarote Committee has recognised the Children's House model as a promising practice to ensure that we obtain good evidence, perpetrators are brought to justice, and we avoid revictimizing children. So, these standards and safeguards apply equally to abuse that is committed online and contact abuse committed offline. The treaty really emphasizes the importance of multistakeholder coordination in the context of combatting online violence, and this Convention specifically makes reference to the information and communication technology sector, and also the tourism and travel and banking and finance sectors, really trying to encourage States to coordinate with all of these private actors in order to better protect children.

And the Lanzarote Committee has adopted a number of different opinions, declarations, and recommendations to clarify the ways in which this Convention can contribute to better protecting children in the online environment. For example, by confirming that States should criminalize the solicitation of children for sexual offenses even without an in-person meeting, that is, when the solicitation takes place in order to commit sexual offenses online. And also, given the dematerialized nature of these offenses, multiple jurisdictions will often be involved in a specific case. We might have the victim situated in one country, electronic evidence stored on a server in a different country, and the perpetrator sitting in another country committing this abuse over the Internet. Therefore, the Committee really recognises and emphasizes the importance of international cooperation, including through international bodies and international meetings such as this one.

The Convention is also really clear that children shouldn't be prosecuted for generating images or videos themselves. We know that many children are tricked or coerced or blackmailed into this, or, you know, generate an image thinking it's going to be used for a specific purpose within a consensual relationship, and then it gets out of hand. So, the Committee has really emphasized that we should be protecting our children, not criminalizing or prosecuting them.

In terms of education and awareness-raising, the Committee really emphasizes that we need to ensure that children of all ages receive information about children's rights, and also that States establish helplines and hotlines, like reporting portals, so that children have a safe place to go to get help if they become a victim. And in that context, it's also really essential to train persons working with children about these issues so that they can recognise signs of abuse and know how to help children if they're a victim.

So, I've put some links to our materials on this slide, so I'll hand back to Menno now. Thank you for your attention.

>> MENNO ETTEMA: Thank you very much. Thank you very much, Naomi. It is quite elaborate work to be done, but what I think the Convention really outlines is that it takes legal and non-legal measures, and it's the comprehensive approach, the multistakeholder approach, that's really important in addressing sexual exploitation of children or violence against children.

In that line of thought, I wanted to also give the floor to Clare, who is involved in the work around the Istanbul Convention and can speak on it, particularly because a relatively new General Recommendation No. 1, on the digital dimension of violence against women, was recently published, which I think is a very important document to share here today.

>> CLARE McGLYNN: Yes! Good morning, everybody, and thank you very much. So, I'm Clare McGlynn. I'm a Professor of Law at Durham University in the UK, and I'm also a member of the Council of Europe's Expert Committee on Technology-Facilitated Violence Against Women and Girls. So, I'm going to briefly talk today about the Istanbul Convention that's just been referred to, which was adopted in 2011.

And there are four key pillars that make this a comprehensive piece of law: prevention, protection, prosecution, and integrated policies. Now, the key theme of the Istanbul Convention is that violence against women and girls must be understood as gendered. Violence against women and girls is perpetrated mainly by men. It's also experienced because women and girls are women and girls.

Now, the monitoring of that Convention is done by a body called GREVIO. That's the independent expert body which undertakes evaluations of state compliance, as well as preparing various thematic reports.

And as already mentioned, in 2021, GREVIO adopted a general recommendation on the digital dimension of violence against women and girls. So, this general recommendation offers an interpretation of the Istanbul Convention in light of the prevalence of, and growing concern about, the harms around online and technology-facilitated violence against women and girls.

It provides many detailed explanations as to how the Convention can be interpreted and adopted in light of the prevalence of online abuse, including things like reviewing relevant legislation in areas where the digital dimension of violence against women and girls is particularly acute. We see this particularly in the area of domestic violence, where some legislation does not account for the fact that, in reality today, most forms of domestic abuse involve some element of technology and online elements.

It also talks about incentivizing Internet intermediaries to ensure content moderation. The point here is about how women's human rights are being inhibited and affected by online abuse, and regulation, such as content moderation, is necessary to protect those rights. In other words, regulation frees women's speech online by ensuring we are more free and able to talk and speak online, rather than self-censoring in the light of online abuse.

It also talks, for example, about the importance of undertaking initiatives to eradicate gender stereotypes and discrimination, especially amongst men and boys. If we're ever going to prevent and reduce violence against women and girls, including online and technology-facilitated violence against women and girls, we need to change attitudes across all of society, including amongst men and boys. Thank you very much.

>> MENNO ETTEMA: Thank you very much, Clare. I really like the general recommendation because of how it portrays the offline forms of violence against women and harassment in all the different ways, shapes, and forms, and how that is actually mirrored in the online space. So, it's actually a very clear explanation of how one and the other are the same, the online and the offline, even though we may call it something different, or it might be presented slightly differently because of the online context, but the dynamics are very similar. Thank you.

Content moderation is an important part here as well, and working, again, on stereotypes and attitudes is a challenge. So, again, legal but also non-legal approaches are very important. Thank you very much.

Ivana, can I give the floor to you? Because one new area is, of course, AI. Octavian already mentioned it, the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, just adopted. Can you give a few short words on how these human rights standards apply in the AI field? And then we'll give the floor to the rest of the audience and come back a little bit more on the discrimination risks when it comes to AI, including gender equality.

>> IVANA BARTOLETTI: Yeah, thank you so much. So, AI, of course, is one of the most talked-about things at the moment. At IGF here, we've been talking about AI a lot, and about what the impact of AI is on existing civil rights and civil liberties.

So, obviously, artificial intelligence has been capable of doing so many excellent and good things over recent years. Can you hear me? There's been a big push over recent years, especially with (audio breaking up) generative AI. And talking about (?), which is the AI we're used to (audio breaking up)

>> MENNO ETTEMA: Hello?

>> IVANA BARTOLETTI: Breaking up. Can you hear me? Is that okay? Okay. So, talking about generative AI, which is the AI that we've seen that can generate images, that is another area of discussion.

Now, AI does threaten human rights, especially for the most vulnerable in our society, and it does so for a variety of reasons. It can perpetuate and amplify the existing stereotypes in society, thus crystallizing them into representations. You were saying, how do we change the stereotypes beyond the legal setting? But there is an issue here, because the use of big data and machine learning can amplify the existing stereotypes and crystallize them. And on the other hand, it provides very easy access, lowering the bar of access to tools such as, for example, generative AI tools that can generate deepfake images. And whether this is in the space of fake information or in the space of non-consensual deepfake pornography targeting women, what we are seeing, again, is that lowering the barrier of access to these tools can have a detrimental impact on women especially.

But think about privacy, for example. I mean, privacy, and what Clare was saying, you know, that a lot of domestic abuse is enabled by technology. AI plays a big part in it, because it enables tools that can turn into monitoring tools, and these monitoring tools can turn into real tools of suppression.

So, we are very firm, and the Convention is wonderful, in the sense that it's the first real international convention. Yes, you have the European AI Act, which is limited to Europe. The Convention is international, alongside many other things that have happened, for example the Global Digital Compact at the UN, which frames human rights in the digital space. There have been declarations happening. So, there is definitely a discussion happening globally on how we protect, safeguard, and enhance human rights in the age of AI, but it's not an easy task, and it is one that needs all actors involved in tackling it.

>> MENNO ETTEMA: Yeah, thank you very much. This is only just a small start on the discussion of AI, so we'll come back to that in a second round. But I think what we're trying to do here is explain a few of the various conventions that exist related to discrimination and protection of groups that are particularly targeted: the Istanbul Convention, the Lanzarote Convention. But I also wanted to engage with the audience here in the room and also online.

We launched a little Mentimeter -- and I'll ask Octavian to put the Mentimeter online -- because for us, it's very evident that human rights apply equally online as offline, but maybe we're wrong. I was wondering what others think about this. So, I have a little Mentimeter quiz to just put the finger on the pulse. Octavian, are you there? Can you put the Mentimeter on, please?

>> OCTAVIAN SOFRANSKY: The Mentimeter is on.

>> MENNO ETTEMA: We can't see it. You have to change screens.

>> OCTAVIAN SOFRANSKY: Okay.

>> MENNO ETTEMA: I can reassure you, we tested this yesterday and it worked perfectly. But when the pressure's on, there's always a challenge.

Okay, while Octavian is dealing with the technical challenge, maybe I can give the floor first to Charlotte. Maybe there are already some questions from the audience online, and then I'll go to -- ah, here's the Mentimeter. Sorry, Charlotte. You can scan the QR code or go to menti.com and then use the code that is mentioned there: 2990 0183.

So, if you scan it or type it in. Yes, I see people going online. Great. Then you can go to the next slide, Octavian, for the first question. So, are there specific human rights that are more difficult to apply online? There are four options. Please answer. Meanwhile, Charlotte, maybe I can give you the floor while people cast their votes. From online, any questions or comments that we should take in here in the room?

>> CHARLOTTE GILMARTIN: For now, there's one comment from Peter from Liberia IGF. Their question is: What are some of the key interventions that can be suggested to address this topical problem in West Africa, especially in the MRU (Mano River Union) region of Liberia, Sierra Leone, Guinea, and Ivory Coast, vis-à-vis these conventions and norms on violence against women and girls, especially the Istanbul Convention?

>> MENNO ETTEMA: Thank you very much. Can I give the floor to Clare on this question?

>> CLARE McGLYNN: Yes. What I would add is that, as the colleague is possibly aware, the African Commission a couple of years ago adopted a specific resolution on the protection of women against digital violence in Africa, and the Special Rapporteur on the Rights of Women in Africa has done a lot of work around the online dimension. So, both that specific resolution and the work of the Special Rapporteur may provide some further help and guidance on the particular issues, problems, challenges, and opportunities arising in Africa.

>> MENNO ETTEMA: Thank you very much, Clare. And I would also say that General Recommendation No. 1 under the Istanbul Convention gives very practical suggestions of what can be addressed, and I think these can be adjusted and adapted to the local context, of course, always. That's everywhere, including the European continent. But I think there are many guidelines or suggestions there that would be equally applicable in other parts of the globe.

I see that there is a tendency to say, yes, all human rights or some human rights are more difficult to apply online. It's an interesting result. So, yes, human rights apply online, but it's sometimes more difficult. Maybe some people want to respond to that.

I would like to go to the second question of the Mentimeter. What should be done more to ensure human rights online? So, if some human rights are more difficult to apply online, what could be done? What do you think could be done? And meanwhile, I wanted to check with the audience here if there are any questions or statements that they would like to share. Yes, in the back of the room? Could you please state who you are, just for the audience? It should be on. It should work.

>> AUDIENCE: Can you hear me?

>> MENNO ETTEMA: Yes.

>> AUDIENCE: Very well. My name is Jay Kahouse. I work for the Commission of Jurists, Nairobi. First of all, thank you for these wonderful presentations from various stakeholders. We appreciate you a lot. This is actually something that we are very much interested in as digital rights experts and human rights defenders, especially regarding digital rights in the face of AI.

So, my question is: there has been, especially in the African context, a lot of fighting by the authorities against human rights defenders in the name of defamation. I hope we all understand defamation. So, defamation has been used against human rights defenders online whenever they try to pinpoint issues regarding human rights online. They have mostly been charged under such laws, and abductions have happened recently in the African context.

For example, in Kenya, there was the Gen Z movement, which was well known all over. So, how can we approach that, or how can we tackle that, especially in the context of AI, to prevent such things from happening? Like, how can we, I mean, protect human rights defenders online from being charged with defamation, or from defamation being used as a tool to prevent them from doing their human rights work? Thank you.

>> MENNO ETTEMA: Thank you very much. Just looking at my speakers, who would like to pick up this question? Maybe I'll give it a go first myself and then the other colleagues can contribute.

I mean, it's a very pressing question. I think within the European scope -- maybe, if I may translate it to that area where I'm more knowledgeable -- so, within the European scope, the Council of Europe and national authorities are moving away from defamation laws and similar legislation. I think this is also echoed in the UN: defamation laws are not particularly helpful, and because of the way they're formulated and applied, they are a problem.

There are questions now about hate speech legislation, for example, and the Council of Europe adopted a recommendation in 2022, Recommendation CM/Rec(2022)16 on combating hate speech, if you want to check it out. It specifically explains and argues why defamation laws are not up to the task of actually dealing with hate speech. And hate speech is a real problem for societies. It undermines the rights of the persons or groups that are targeted, and undermines cohesion in communities.

And I think well-crafted hate speech laws may function quite well, but well-crafted also means that we need to acknowledge the varying severity of hate speech. So, you have hate speech that's clearly criminal and falls under criminal responsibility, and this should be a very restrictive understanding: it should be very clearly defined and explained what we understand by it, and which grounds are protected under criminal-law hate speech provisions.

Then you have other forms of hate speech that could be addressed through administrative law and civil law -- for example, self-regulatory mechanisms with the media or political parties that have such rules in place -- and that is a less severe intervention when it comes to freedom of expression, Article 10 of the European Convention, for example. And it's this balancing act.

And then, we have other forms of hate speech that cannot be restricted through legislation but are still harmful, so we need to address them. So, I would really argue for taking inspiration from the Recommendation, for example, to really engage in a national dialogue on reforming legislation so that it is bound by a narrow understanding of the hate speech that falls under criminal law. In the Recommendation, we also refer to international UN standards and conventions that specify what falls under that, and then other forms of reaction you could use, including non-legal measures: education, awareness-raising, counterspeech, et cetera. And this would be a much better response. Defamation laws should not be used in such a way; they can be very easily misused, whereas well-construed hate speech laws should help.

There is also the work on SLAPPs, strategic lawsuits against public participation, that could give guidance on what could be done to address, yeah, the misuse of legislation for silencing a group. So, there's a recommendation on SLAPPs, and it's quite an interesting document that could guide you in your work in that sense. Thank you.

Naomi, please.

>> NAOMI TREWINNARD: Thank you. Yeah, I just wanted to perhaps share some insights on something parallel that we've dealt with at the level of protecting children from sexual abuse. So, the Convention is quite clear that professionals, and all those who have a reasonable suspicion of sexual abuse, should report it in good faith to the appropriate authorities, the child protection authorities or police or whatever, but also that people who report in good faith should be protected from criminal or civil liability, so also protected against claims of defamation. And actually, the Lanzarote Committee is looking at this question at the moment, looking at how to reinforce protections for professionals so that they can respect their duties of confidentiality and their obligations to keep information safe, but also their duties to protect children. And I think it's a really fine balancing act, but certainly, clear guidance from States and policymakers, setting out the ways in which people, when they're denouncing or reporting something, should be protected from consequences, can be very helpful as well.

>> MENNO ETTEMA: Thank you, Naomi, for that addition. Just going to the Mentimeter, I see a few suggestions -- several suggestions. Thank you for that. Education. More education. Content moderation. More research. Data privacy laws. Working on safety at all levels, in physical and online spaces. Strengthening frameworks and their interpretation. So, it's quite an array, but I see education mentioned by quite a few people. Thank you.

I would like to open a second section of the discussion, going back to AI, because it's the new, big elephant in the room. So, AI, I mean, it's the elephant in the room. And the question is whether human rights standards, in the areas, for example, of gender equality, non-discrimination, and the rights of the child, are delicate porcelain that will soon encounter an elephant stampede, or whether there are actually opportunities in the use of AI, and we should not be so worried about the human rights of these groups when it comes to the deployment of AI.

Ivana, you already mentioned some aspects, yeah, of AI and human rights. They slowly come together. We need to be cautious. There are risks, but maybe there's some more to add, specifically in the area of non-discrimination and equality, also from your work in the Expert Committee. Octavian, next slide. Yes, thank you.

>> IVANA BARTOLETTI: So, I think AI enables a lot of this. Take the question we just had, for example, about human rights activists, and the same with journalists, no? There is also a gender dimension to it, because what happens often is that it is women who are targeted the most.

And the elephant in the room is AI, because AI has made a lot of this very much available, okay?

So, if you think about artificial intelligence and algorithmic decision-making, first we have to distinguish; it's very important. One is what is so-called discriminative AI, which is machine learning, what we use more traditionally. Well, it's not really traditional, but in that sense. So, what is happening in that space, especially around algorithmic decision-making, is that we are seeing women -- and especially at the intersection between gender, race, and other dimensions -- we have often seen women being locked out of services, being discriminated against. It happened a lot, for example, with facial recognition. It happened a lot with banking services, education.

Now, this is because AI needs data. (Audio breaking up) Often, data comes from the Western world. Therefore, when these data are then (audio breaking up) It doesn't work now?

>> MENNO ETTEMA: It goes on and off. We can hear you, but it also --

>> IVANA BARTOLETTI: And therefore, this bias exists because it exists in society. So, to an extent, there is little technological solution to a problem which is a societal problem. So, we are seeing these barriers. With generative AI, we have seen another set of issues, which is (audio breaking up) It doesn't work. In the sense that these products are also the product of the scraping of the web, which means taking language as it is, bringing a whole set of new issues, like: the language that we are all talking about and learning from these tools -- is it inclusive or not?

So, I think there is an understanding that has become more mainstream around all of this, and around the fact that discriminative AI and generative AI, and the combination of the two, can perpetuate existing inequalities into systems that make decisions and predictions about tomorrow.

>> MENNO ETTEMA: Mm hmm.

>> IVANA BARTOLETTI: However, there is also a positive use of these tools, where we can leverage AI to try and address some of these issues. For example, leveraging big data to understand the root causes of these inequalities, for example, understanding that there are links between sectors and areas of discrimination by looking at big data that we wouldn't be able to look at through human eyes. Using artificial intelligence and algorithmic systems to set a higher bar, for example, for how many women we want to work in a business, by manipulating the data, using synthetic data, by creating datasheets that enable us to improve the outputs.

What I'm trying to say is that we can leverage AI and algorithmic decision-making for good if we have the political and social will to do so, because if we leave it to the data alone, it's not going to happen; the data are simply representative of the world.

And I think, in the study on challenges and opportunities that we've done -- and I encourage everyone to read it -- we provide an understanding of where bias comes from, and of the fact that this bias is detrimental to women's human rights and leads to discrimination; that is dangerous. We provide a set of recommendations for States to say: how can we challenge this? How can we look at existing non-discrimination laws and see if they're fit for the age of AI? For example, if a woman is discriminated against and is not getting access to a service because she is a woman, and also a Black woman, okay, how are we going to ensure that this intersectional source of discrimination is addressed by existing non-discrimination law? And furthermore, who is going to have the burden of proof? Because the big problem that we have, which is the unspoken thing here, is the asymmetry between us as individuals and the data, and the extractivism, and the complexity of what some call surveillance capitalism, right? In this big asymmetry, it can't be left to the most vulnerable to say, "I am going to challenge this." So, this also means that there has to be strong regulation in place to make sure that the onus is on the company to provide the level of transparency, challengeability, and auditability of the systems that they're using, so that the onus is not just left on the individual to challenge, but these systems can be open to question by civil society, government, and institutions. Business can play a big part in it.

So, what I'm trying to say here is that AI can be used -- and especially if I think about automated bots, responsible automated bots -- it can be great in supporting the public sector and the private sector to develop and create AI which is inclusive. We can use AI and big data strategies to really understand where the bias may come from. We can look at big data analytics and really identify patterns of discrimination, yeah? There is a lot that can be done in this space, but there has to be that willingness to do so. So, I'm really hoping that in a space like this, a document like that one, which brings it all together, can be leveraged beyond the Council of Europe, because it's really important that we understand that existing legislation around discrimination law and privacy laws may need to be looked at, in order to be able to counter the harms that come from algorithmic decision-making or generative AI.

>> MENNO ETTEMA: Thank you very much, Ivana. That's quite an elaborate and detailed analysis of the challenges that lie ahead, but also the opportunities. There are opportunities and possibilities.

Can I give the floor to Naomi, maybe from the perspective of the risk for children's safety and the use of AI?

>> NAOMI TREWINNARD: Sure, and thank you. Yeah, thank you for the floor. So, in terms of AI, the Lanzarote Committee has been paying particular attention to emerging technologies, especially over the last year or so, and the Committee has actually recognised that artificial intelligence is being used to facilitate sexual abuse of children. So, Ivana mentioned generative AI models.

So, we know that generative AI is being used to make images of sexual abuse of children, and also that large language models are being used to facilitate grooming of children online and identification of potential victims by perpetrators.

Generative AI is also being used to alter existing materials of victims. I know of cases where a child has been identified and rescued, but the images of the abuse are still circulating online, and now AI is being used to alter the images of the abuse of this child who has been rescued, to create new images of that child being abused in different ways.

And then, we also know that this is being used to generate completely fake images of a child, and that in some cases, those fake images of a child, naked or being sexually abused, are being used to coerce or blackmail the child into making images and videos of themselves. Sometimes it's being used to blackmail children in order to get contact details of their friends, so the perpetrator can have a wider group of victims. And in other cases, we know of fake images being used to blackmail children for financial gain. And so, all of these different forms of blackmail and abuse of children have been recognised as a form of sexual extortion against children by the Lanzarote Committee.

And as Menno mentioned at the beginning of the session, the Lanzarote Committee held a thematic session on this issue a few weeks ago in Vienna and adopted a declaration which sets out some steps that States, particularly, can take to better protect children against these risks of emerging technologies, such as criminalizing all forms of sexual exploitation and sexual abuse facilitated by emerging technologies -- so looking at legislation, making sure regulation is in place, including for AI-generated sexual abuse material -- and also ensuring that sanctions are effective and proportionate to the harm caused to victims.

Historically, we've seen sanctions in criminal codes being much lighter, for example, for child sexual abuse material offenses where there's no contact with the victim, so it's worth really looking at those codes to see if that's still effective and proportionate, given the harm that we know is being caused to children today by these technologies.

On the screen there you have a link to a background paper that was prepared for the Committee, which really explores in detail the risks and the opportunities of these emerging technologies. And just to close, I wanted to mention that criminalizing these behaviors is not enough. So, the Committee has also called on States to make use of these technologies. As Ivana mentioned, there's also a great opportunity here to leverage these technologies to help us better identify and safeguard victims, and also to detect and investigate perpetrators. So, this really requires cooperation with the private sector, especially as regards preserving and producing electronic evidence that can then be used in court across jurisdictions. The Cybercrime Convention and its Second Additional Protocol also provide really useful tools that States can use to better obtain evidence.

So, I just wanted to close by saying, we're really grateful to have this opportunity to share this with you, and we're really interested in exchanging further with those in the room about how to cooperate to better protect children. And perhaps, lastly, just to mention that the 18th of November is the annual awareness-raising day about sexual abuse of children, and it's really an invitation to all of you to add that date to your calendars and to do something on the 18th of November each year to raise awareness about sexual abuse, so that we can better promote and protect children's rights. Thank you.

>> MENNO ETTEMA: Thank you, Naomi, and also for mentioning the international day, because awareness-raising and education are a key part of resilience, and also a way for parents and others to support children that are a possible target.

I'll soon give the floor again to the audience, but I also wanted to just give the floor to Clare on violence against women and AI. Ivana already addressed some of these points, but I'm sure Clare has some contributions also from that perspective.

>> CLARE McGLYNN: So, yes. I don't know if the slide I've prepared is going to come up, but it is actually just to be very brief, because what I want to say follows on from Ivana, and in fact refers to, and provides the link to, the report that she and Rafael wrote about the opportunities of AI, as well as the challenges, particularly drawing out what States could be doing, things like reinforcing the rights and obligations around taking positive action in terms of using AI to eliminate inequalities and discrimination.

But the one point I'll just add there, as well, is that Ivana's report refers to the possibility that in the future there will be other vulnerable groups that are not necessarily covered by existing anti-discrimination laws, and so we have to be very open to how experiences of inequality and discrimination might shift with the advent of AI, and be alive to that and ready to take steps to help protect those individuals. Thank you.

>> MENNO ETTEMA: Thank you. Octavian, the next slide didn't come up. Maybe you could work on that, because I think -- yes, exactly. Because it's very important to encourage people to take a quick picture, because I think the report that Clare refers to, and that Ivana also worked on, is particularly useful to understand the risks of AI when it comes to discrimination, but also particularly to gender equality and violence against women, and the steps to be taken.

I think the point here is that there are new groups or new -- we sometimes talk about grounds or characteristics -- that are coming up because of AI. Intersections of data or data points create new, how do you call it, references --

>> IVANA BARTOLETTI: Algorithmic vulnerability. Yeah, the point here is that when you think about non-discrimination laws, you think about specific grounds, right? You say, you can't be discriminated against because of this ground, religion or whatever.

The problem with AI is algorithmic discrimination, which is created by the AI itself, because it can discriminate against somebody, for example, because they go on a particular website, or because of the intersection between going on that website and doing something else. This is big data, right? This algorithmic discrimination may not overlap with the traditional sources of discrimination, the grounds for discrimination. So, there is a lack of overlap. Somebody may be discriminated against on an algorithmic basis which may not overlap with the traditional grounds on which we've protected people. This lack of overlap is what Clare is referring to, and this is something that we need to think about, because we may need to look beyond the way that we've looked at discrimination law until now.

>> MENNO ETTEMA: Yeah, thank you very much. I want to go back for a last round to the audience, and also launch another little quiz with the Mentimeter, so I'll ask Octavian to change the screen to the Mentimeter.

Octavian, can you manage? While Octavian is trying that out, maybe, Charlotte, can I give you the floor first, if there are any further comments or questions that came from the online audience?

>> CHARLOTTE GILMARTIN: Not just at the moment, no. No further questions or comments. But I have put the links to the documents that all the speakers have discussed in the chat, so if any participants want to find the links, they should all be there.

>> MENNO ETTEMA: That's great. I take this opportunity to also mention to everybody that's in the room, the recordings will be online, later on, on the YouTube channel of the IGF. And there, you can then also find all the links because the chat will also be visible in the recordings, in that sense.

Octavian, are you with us? Do you manage with the Mentimeter? Octavian? Yes, there you go.

>> OCTAVIAN SOFRANSKY: Okay.

>> MENNO ETTEMA: So, it's the same quiz, but in case you lost connection, you can scan the QR code again or use the numbers. I see people registering again. Can I get the first question?

So, as I stated at the beginning: AI is the elephant stampede trampling over gender equality, non-discrimination, and the rights of the child. Yes -- no holding them back (the AI, of course); No -- elephants are skillful animals and human rights are not fragile; and Maybe -- but don't blame the elephants.

Meanwhile, are there any questions in the room? Just checking quickly. Ah, there you go, yes. Yeah.

>> AUDIENCE: Hi, can you hear me?

>> MENNO ETTEMA: Yes.

>> AUDIENCE: Ivana and Naomi both mentioned collaboration. So, how can governments, civil society, and tech companies more effectively collaborate to ensure that online platforms are protecting and upholding rights?

>> MENNO ETTEMA: Can I ask who you are?

>> AUDIENCE: Yes, sorry, Mia McAllister. I'm from the U.S.

>> MENNO ETTEMA: Thank you. The question was to Clare and Ivana. Clare, would you like to start?

>> CLARE McGLYNN: No, I'm happy for Ivana to take it; she's probably got more expertise in this particular aspect.

>> IVANA BARTOLETTI: So, thank you for the question. There are several aspects here. First of all, there is responsibility coming from platforms and the private sector, okay, which is very important. So, for example, if I think about the European Union, there is the DSA, which goes in that direction: content moderation, and there is something about transparency -- requiring transparency, requiring openness, requiring auditability.

So, for example, one of the provisions of the DSA is that data can be accessed -- and I'll be brushing over things -- by researchers, to then be able to understand what could be some of the sources of online hate. So, there is an onus that must be placed on companies, and that is important.

There is AI literacy that needs to happen in education settings. I always say we need people to develop a distrust by design, as a way to grow with these technologies but challenge them, you know. We need to tell people that they have to challenge all of this.

It's really important also to look at regulation, but it's also very important, in my view, that we create safe environments for companies and governments to experiment together. So, for example, the sandboxes are very good. There are different kinds of sandboxes -- regulatory, technical -- but they are really important, because there are some things that are very hard to tackle in this field, especially with generative AI; they are difficult, okay? Because some of these things can be at odds with the very nature of generative AI. So, having these sandboxes where you can have government and civil society work together to look into a product, to influence a product, I think is very, very important. So, I would push towards this kind of collaboration.

>> MENNO ETTEMA: Thank you very much. Octavian, could you just launch the last question, to gather some further thoughts on what more can be done to ensure human rights in the use of AI? I just wanted to ask if there are any other questions from the audience or online. No? Then maybe, while people answer this question, a last word, a recommendation for us to carry forward. We have a minute left. So, maybe, Naomi, a last word of wisdom.

>> NAOMI TREWINNARD: Thank you. Yeah, I think just to reiterate, again, I think the key is really collaboration and dialogue. So, I think this is an excellent opportunity at the IGF to have this dialogue.

For those that are interested in collaborating with the Lanzarote Committee, please do get in touch. Our details are on there, and we also regularly at the Council of Europe have stakeholder consultations in the context of developing our standards and recommendations, so please, tech companies, do engage with us, and let's have constructive dialogue together to better protect human rights online.

>> MENNO ETTEMA: Thank you, Naomi. Clare, last word of wisdom.

>> CLARE McGLYNN: Yes! I think what we need to see is greater political prioritization and the need to move, basically, from the rhetoric to action. And that, for me, means demanding that the largest tech platforms actually act to ensure that we proactively reduce the harms online.

There is a lot of very positive rhetoric, but we have yet to see an awful lot of action and actual change.

>> MENNO ETTEMA: Thank you. Ivana.

>> IVANA BARTOLETTI: Yeah, to me, it's very much breaking that "innovation versus privacy rights versus safety" argument we sometimes hear. It's like, on the one hand, there is the argument that we've got to innovate, we have to do it fast and quickly, and to do so, we may have to sacrifice. Well, that is an argument that doesn't stand. Clare is right, you know; this is where we need more, more, more action.

>> MENNO ETTEMA: Yeah. We need to do it all, and it's possible to do it all through cooperation, clear standards, and clear commitment, legal and non-legal measures. I think those are the key takeaways that I want to carry forward.

I thank my panelists, and also my colleagues Charlotte and Octavian for the support. Thank you, everyone, for attending this session. And if there are any other questions, please be in touch with us through the forums on the Council of Europe website or directly; you have our details on the IGF website. Okay, thank you very much! And thank you, technical team, for all the support.