IGF 2025 - DAY 1 - Studio N - [Parliamentary session 3] Click with care: Protecting vulnerable groups online

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> ANNOUNCER: Ladies and gentlemen, the Programme will start shortly. Please find your seats.

>> MODERATOR: Good morning, everyone. Welcome to today's session. Click with care: Protecting vulnerable groups online. I'm delighted you are able to join us. I know there were travel difficulties getting in this morning. Thank you for being here and to our esteemed panelists for joining us today.

I'm Alishah. I work at Nominet, the dot‑UK domain name registry. I will moderate today's panel.

A bit of housekeeping before we begin. You will have interpretation in your headphones in English, Spanish and French. And when we open the floor to interventions and questions, you can ask your questions by going to the microphone to my left and your right.

So it is a pleasure to Chair today's session, which brings together a diverse panel of Parliamentarians, regulators, and advocacy experts to discuss critical issues of how to protect vulnerable Groups online.

We live in an increasingly digital world that offers opportunities for connection, learning and growth. But the digital world also brings risks and downsides, which are often felt more acutely by vulnerable Groups, including children, individuals with disabilities, and members of marginalized communities, amongst others.

The consequences of harm faced online can have a ripple effect into real lives causing distress, harm, and isolation. The challenge of online harm has prompted a range of legislative and regulatory responses as well as proactive and reactive approaches and today's session will enable us to better understand some of these across the range of geographies and contexts.

I hope that by the end of the session we'll get a sense of how to work towards a more targeted, inclusive and informed response on online harms.

I will hand over to each member of our panel to introduce themselves. We'll start with Neema.

>> NEEMA IYER: Super. Hi, I'm Neema. I am the founder of Pollicy, a feminist organization out of Uganda, and we work across the continent. We work on feminist digital rights, whether that is online violence, gendered disinformation, the impact of AI on women, or any such topics. We do research on these topics, work closely with local communities and do advocacy work, which is part of why we are here as well. Thank you. Over to you.

>> MODERATOR: Next we'll hear from Raoul.

>> RAOUL MANUEL: Good morning. I'm Raoul Manuel; you can call me Raoul. I'm an elected member of Parliament in the Philippines representing the youth party. Prior to being part of the youth party and the Philippine Parliament, I was active in student Government and the student Union. That is why we have been paying close attention to the issue of online freedom and protections. Thank you.

>> MODERATOR: Thank you. Next we have Your Excellency, Teo Nie.

>> TEO NIE: Good morning. Thank you for the introduction. My name is Nie Ching. I'm from Malaysia. I'm the Deputy Minister of Communications. I was appointed to this office in December 2022; in 2018 I also had the opportunity to serve as Deputy Minister in the Ministry of Education. Currently, protecting children and minors on the Internet is a topic that is very close to my heart. Under the Ministry of Communications we have an important Agency called MCMC, the Malaysian Communications and Multimedia Commission, which acts as the regulator for content, working with platform providers, et cetera. Looking forward to the session.

>> MODERATOR: Next is Nighat.

>> NIGHAT DAD: Good morning. My name is Nighat. I'm the founder of the Digital Rights Foundation, a Group based in Pakistan. We focus on women's rights and online freedom of expression, and we work on both direct support and systemic change. We run a digital security helpline that provides legal advice, digital security assistance, and psychosocial support to victims and survivors of online abuse, with a survivor‑centred approach. We also conduct in‑depth research, build digital literacy and safety tools, and engage in policy advocacy conversations at the National, Regional and international levels.

>> MODERATOR: Thank you. And Arda. Glad you could make it, thank you for joining.

>> ARDA GERKENS: Sorry for being late. I'm the President of the Dutch regulatory body for online terrorist content and child sexual abuse material. I was a member of Parliament for eight years and a senator for 10 years, so I bring political experience too. My organisation is there to identify harmful terrorist content and child sexual abuse material and is able to have that content removed. If not, we will fine the ones not complying with our regulation.

We are unique in the field. I think we're the first regulator, as far as I know, with the right to look into the content itself.

>> MODERATOR: We have another panelist on the way here: Sandra, who is with ANACOM, Portugal. Hopefully she will be here shortly.

We will have questions for the panelists that they will speak to, followed by a quick‑fire round, and then we will open the floor for your interventions and questions.

Without further ado, the first question is for Neema. What are some of the unique online safety challenges faced by marginalized communities, particularly in the Global South that might not be adequately addressed in existing legislative frameworks? 

>> NEEMA IYER: I want to start with research we did on this topic. We surveyed 3,000 women across Africa to find out about their experiences. One in three had experienced online violence, and this led them to delete their online identity. Many were not aware of reporting mechanisms. They also felt that if they went to any Authorities they would not be listened to.

A second study was a social media analysis of women politicians during the Uganda 2021 election. We wanted to see what the experience was like for women politicians. They're often targets of sexual and sexualized abuse. The fear of online spaces meant some women politicians did not have online profiles or chose not to participate in the online sphere.

The third piece of research we did was on the impact of AI on women. We often think of the harms in terms of social media, but it is also important to think about how AI impacts women who may be marginalized in some way. We found that there are grave issues of underrepresentation and data bias, algorithmic discrimination, and that AI makes digital surveillance and censorship possible, as well as labour exploitation in low‑wage jobs that are often occupied by women. I want to frame that first and then talk about the question. What is unique about these Groups? The first thing is intersecting inequalities. There are large gaps in digital literacy and digital access, for example.

So when you are trying to intervene, whether as a platform, as Civil Society or as a Government, you have to take into account the fact that there are some women who have absolutely no access and no digital skills, and others all across the spectrum. How do you tailor interventions that can meet all of the different people that exist across all of the different inequalities?

Then in our context, for example, there are about 50 languages spoken in Uganda alone, not even considering the whole continent. And because these are smaller countries, they don't have a huge market share on the online platforms, so they're often not prioritized. How do you develop interventions? How do you build safety mechanisms when you don't have the languages on your platform? Another issue is the normalization of abuse, which you can see in real life and in online spaces, and which is both cultural and a result of platform inaction. In regular life you go out on the street, get harassed, go to the police, and they don't do anything. That is replicated on online platforms, where you face harassment, reach out for recourse on the platforms, and there is platform inaction. In that way, online abuse is normalized.

And then there is invisibility in platform Governance processes. This is an amazing venue where we can talk about these issues, but a lot of the women and marginalized Groups affected are not in the room with us now to talk about their experiences.

And lastly, I want to talk about the fact that the laws that do exist, especially in our context, have actually been weaponized against women and marginalized Groups. Many of these cybercrime laws or data protection laws have been used against women, against dissenting voices and activists, to punish them rather than protect them. That is the reality we live in. Legislative frameworks are also too narrow. They focus on takedowns or criminalization, or they borrow from western contexts, but they don't really meet the lived realities of women. For example, a law might address intimate image sharing, but it will ignore coordinated disinformation campaigns, or ignore the ideological radicalization that is happening to minors online.

It won't target specifically the design choices that platforms make, for example.

Like, where you know they amplify violence or those sort of things. We need to think broader about how we legislate about online violence. I'm happy this is happening. Back to you.

>> MODERATOR: Thank you. There was so much in there, from the fear of abuse in online spaces, to how different people feel and experience being marginalized, to how some legislative measures and policies can sometimes have an adverse effect, and the importance of thinking about context. Thinking about how we do good regulation, we'll turn to Raoul. As a member of Parliament, can you share recent legislative measures in the Philippines to address online exploitation of children and pending efforts to protect women, LGBTQI and other marginalized communities from online violence and threats?

>> RAOUL MANUEL: Thank you, Alishah. Before I proceed, I want to thank the IGF Secretariat for the opportunity to share our perspectives from the Parliament of the Philippines. In our case, we have been pushing for a vibrant debate and discourse to ensure that protections for marginalized and vulnerable Groups do not come at the expense of sacrificing our basic human rights. For context, the Philippines ranks number 3 as of February 2025 in terms of daily time spent using the Internet. An average Filipino spends around eight hours and 52 minutes, roughly nine hours, per day on the Internet, which is much higher than the Global average of six hours and 38 minutes.

So while this time can be spent connecting with friends and family, conducting research or doing homework, it also exposes vulnerable Groups, including young people, to different forms of violence and threats. For example, the Philippines, unfortunately, has been a hot spot of online sexual abuse and exploitation of children, and of the production and distribution of child sexual abuse and exploitation materials. This is a problem that we have to acknowledge so that we can take proactive measures to address it. Second would be electronic violence against women and their children, which we call EVOC for short. Third, among the major forms of violence and threats online would be harassment based on identity and belief. I will briefly touch upon what we have been doing in Parliament to address these. First, when it comes to online sexual abuse and exploitation of children, we recently had Republic Act 11930. It is a law that lapsed into law on July 30, 2022, so it is fresh. Aside from content takedown, one major component of this is the assertion of extraterritorial jurisdiction, which means the state shall exercise jurisdiction if the offense either commenced in our country, the Philippines, or was committed in another country by a Philippine citizen or permanent resident against a citizen of the Philippines. This recognizes that online sexual abuse of children can happen not just on a single occasion, but through a coordinated network involving several hubs or locations. That is why we really had to put this into law.

When it comes to electronic violence against women and children, the House of Representatives, on its part, approved the expanded anti‑violence bill. It defines psychological violence as including different forms, including violence committed through electronic means or ICT devices. The use of such devices can be considered and defined as part of violence against women.

So we did this in the House of Representatives, but since the Philippines is bicameral, we're still waiting for the Senate to also speed up in its deliberations.

Now, when it comes to online harassment based on identity and belief, we approved, at the Committee level so far, amendments to the safe spaces act, which set a higher standard for Government officials who may be promoting acts of sexual harassment through digital or social media platforms, such as when they use speech that tends to discriminate against those in the LGBTQI community.

Finally, we have a pending bill in the House of Representatives which seeks to criminalize the tagging of different Groups, individuals or entities as state enemies, subversives or terrorists without much basis for such labeling.

Recently, the Supreme Court adopted the term red‑tagging, which has been a source of harm and violence extending into the physical world.

That is all for now. I hope that this can be a source of discussions also in how we can really work together to address these online problems. Thank you.

>> MODERATOR: Thank you, Raoul. That was eye‑opening. There is a lot happening in your legislative space. It is good to hear about relatively new regulation and legislation, and we will hear later from somebody who has experience of enforcing such regulation.

So moving from the Philippines to Malaysia, next we'll turn to Her Excellency Teo Nie. What is the core philosophy and overall strategy for protecting vulnerable Groups in today's complex digital environment? How does Malaysia balance creating and enforcing laws and regulations with maintaining freedom of expression?

>> TEO NIE: Thank you for the questions. First of all, in Malaysia we view online protection not just as a single action, but as a holistic ecosystem built on three core strategic thrusts. The first is empowerment through digital inclusion and, of course, literacy.

The second would be a robust and balanced legal framework. Third, whole‑of‑society support and multistakeholder collaboration.

Currently in Malaysia, our Internet coverage has reached 98.7% of populated areas, so Internet coverage is pretty impressive. At the same time, we have set up more than 900 National information dissemination centres, which act as community hubs providing on‑the‑ground digital literacy training, especially to seniors, women and youth, who may be more susceptible to online risk.

Not only that, we recently launched a National Internet safety campaign. Our target is to reach 10,000 schools in Malaysia, that is, primary schools and secondary schools, and of course we aim to enter University campuses as well, so we can engage with the users. This Programme is not the usual public awareness campaign; it is more specific. We developed a modular approach that depends on the audience: for example, for ages 7 to 9, the content is more suitable for them and the interaction is designed for them.

For example, in primary and secondary schools we focus on cyberbullying and protecting personal information, and for the elderly we share more about online scams, financial scams, et cetera. We believe this is an approach whereby we need to go to the community, engage them and empower them so that we can raise their digital literacy. Of course, we also need a legal framework to protect our people. It is very important for us to strike a balance between freedom of expression and making sure vulnerable Groups are protected by law. Last year, we amended our Communications and Multimedia Act for the first time in 26 years, whereby we increased the penalties for the dissemination of child sexual abuse material and for grooming communication through digital platforms, with heavier penalties where minors are involved.

At the same time, the amended law grants the Commission, the Malaysian Communications and Multimedia Commission, the authority to instruct service providers to block or remove harmful content, and it enhances platform accountability. We also developed a Code of Conduct targeting major social media platforms with more than 8 million users in Malaysia.

Malaysia has a population of about 35 million, so we use the benchmark of eight million, which is roughly 25 per cent of the population. We hope to use the licensing regime to impose this Code of Conduct on the service providers. Like I said yesterday, I cannot say it has been a successful attempt. It was supposed to be implemented from January 1 this year, but two major platforms, Meta and Google, have yet to come and apply for the license.

So I think the challenge faced by Malaysia is probably similar to that of many other countries: we don't have sufficient negotiating power when we engage with tech giants like Meta and Google. How can we impose our standards on the platforms to make sure that content that is harmful according to Malaysia can be removed instantly, or within a reasonable period of time? That has been quite challenging in Malaysia. And we see that even though platforms do sometimes cooperate to remove certain harmful content, it is always the case that the user or the scammer puts it up and, upon MCMC's request, the content is taken down; however, there is no permanent solution for things such as online gambling and scam posts, et cetera. I think that's it for now. Looking forward to more questions. Thank you.

>> MODERATOR: Thank you. That was a good overview of how you can have legislation and a voluntary Code of Conduct, and of the challenges that go with that in terms of enforcement, and, towards the end, of how to prevent some of this in the first place, because takedowns are a reactive measure. There is a bigger challenge here on how to prevent this in the first place. We will now move to more of a focus on digital rights and turn to Nighat. You lead efforts around online harassment and advocacy for privacy and freedom of expression, and you serve on the Meta Oversight Board. What gaps in terms of digital rights do you observe between the Global South and Global North, and what are your perspectives on platform accountability?

>> NIGHAT DAD: At the Digital Rights Foundation, over the years, we have been witnessing the rise of digital surveillance, privacy violations and targeted gender‑based disinformation, and now the disturbing rise of AI deepfake content. Since 2016, our digital security helpline has dealt with more than 20,000 complaints from women, female journalists, content creators, professionals and students. This is a helpline run by an NGO, and the numbers are higher when it comes to our Federal Investigation Agency's cybercrime wing. The people who complain are mostly being blackmailed, silenced or driven offline by intimate images they never consented to, some of which are not even real. Over the last year and a half we have seen a rise in deepfakes that blur the line between fact and fiction, but at the same time we have seen that the harms are real in the offline space: reputational damage, emotional trauma and, in some cases, complete social isolation. In the worst cases we have seen some women commit suicide.

What is even more alarming is how platforms are responding to it. As the Honorable Minister mentioned, many platforms in our part of the world are not accountable to Governments, and too often survivors are forced to become investigators of their own harm: hunting down copies of content, flagging it repeatedly, navigating broken reporting systems that offer little support and no urgency.

And unfortunately, if they're not public figures, if they're not politicians, the response is even more delayed, if it comes at all.

And in my work at the Meta Oversight Board, the same patterns show up, just on a Global scale. Last year, we reviewed two cases of deepfake intimate imagery, one involving a U.S. public celebrity and another involving a woman from India. Meta responded quickly in the U.S. case; media outlets had reported on it. But in the Indian case, the image wasn't even flagged or escalated, and it wasn't added to Meta's Media Matching Service until the Oversight Board raised it.

And what we noticed as a Board is that if the system inside the platforms only works when the media pays attention, what happens to the millions of women in the Global South who never make headlines?

So we pushed Meta, in our recommendations in those cases, to change its policies. We recommended that any AI‑generated intimate image should be treated as nonconsensual by default, that harm should not have to be proven through news coverage, and we advised that such cases be governed under the adult sexual exploitation policy, not buried under bullying and harassment. What is at stake is not just tone; it is bodily autonomy.

One thing that is concerning is that Meta, like several other platforms, has recently scaled back its proactive enforcement systems while shifting responsibility to users. That kind of empowerment looks different on the ground. In South Asia, many users don't know how to report. When they do, the systems are in English, not in our Regional languages, the processes are opaque and the fear of backlash is real.

In India, we have documented cases where women reporting abusers end up being harassed further, and it is the same in Pakistan: not just by other users, but by the very mechanisms that are meant to protect them. I will stop here and will add more in the policy‑level debate.

>> MODERATOR: Thank you. There was so much in there. What comes through is that if we have the right to privacy and freedom of expression, that should be for all of us, everywhere around the world, and how we are treated when something does go wrong should be equitable, because you can't put it all on the individual to try to get all of the images taken down. We are seeing more on nonconsensual imagery, and regulators are catching up because of the real harm.

Next we'll turn to Arda. You lead the authority for the prevention of online terrorist content and child sexual abuse material in the Netherlands, which orders the removal of terrorist and CSAM content. How do you strike the balance between online rights, a safer online environment and enforcement? What are your areas of concern?

>> ARDA GERKENS: Thank you for inviting me on the panel. To address one of the last points in your question, how do we deal with law enforcement? We basically only target the content. We are not looking for the perpetrators who upload or download it; that is not our role. Of course, with terrorist and child sexual abuse content, anything that is worrying we will report to law enforcement to act upon. We have something called deconfliction, to make sure we don't take down material in areas where police or other services are investigating, so that we don't harm their investigation.

So far, that hasn't happened yet. We're doing a good job.

The other question is about how you balance human rights. Of course, with the powers we have, which are very important powers, with taking down material comes great responsibility. Particularly when you look at the field of terrorism, it can easily be abused and harm freedom of speech. We need to see how to balance that. First of all, we have legislation, so we have to meet the standard of the legislation when we send out removal orders. The legislation is quite broad, sometimes vague. For instance, one of the grounds for designating something as terrorist content is the glorifying of an attack. What is glorifying?

What we're doing at the moment, together with other European countries, is clarifying that legislation: what do all of us think counts as glorifying, what is a call to action, so we can refine that and make it quite clear to the outside world how we assess the reports we get and what threshold has to be met before we send out a removal order. We can give that to the platform, saying: listen, if it meets these criteria, maybe you should take it down before we send you a removal order. That is better than getting the order.

This is for terrorist content. For child sexual abuse material there should be no debate; I don't think freedom of speech or any other human right comes into play there, except for the rights of the child.

However, if you look at the removal of these types of content, you will see that for terrorist content the majority of the material we find is on the platforms. For child sexual abuse material, unfortunately, just as the Philippines has its downside, we have the downside that the Netherlands is a notorious host for this kind of material, so we're focusing on hosting companies.

Some of them are really bad actors, so this would probably not be the only bad thing on their platforms, but there are many legitimate websites hosted there as well.

We need to make sure we are proportionate in our actions. We have really strong powers: we are able to pull the plug on the Internet, so to say, and we can even make sure that access providers block access. If you do such a thing, you need to make sure you are not harming innocent parties or companies involved.

So again, we need to be precise and know well what we are doing. For all of this work, we engage a lot with industry, to know the techniques. I think it was Paul who said here yesterday that for politicians it is important to know the technical aspects of the online world. So it is for us.

You know, we don't know everything. There are lots of people that are much smarter than we are, so we engage with them, and we have an Advisory Board that helps us make the difficult decisions. We also engage with Civil Society to make sure that we uphold all of the rights that have to be balanced. In the end, of course, it is our decision, and we have to be able to explain to the public, to you, why we took a position, whether we looked at the downsides, and what the effects are.

So yeah, that is how we're doing it. I think it is a very interesting job. Now, the matter of concern for vulnerable Groups, that is something I would like to address. It is something we are currently seeing happening in the space of what used to be, I think, terrorism. I say used to be because terrorist action used to be quite clear‑cut: it is either Right Wing terrorism, look at the Christchurch shooting, or jihadism, which many attacks are known for. We now see a hybridization of these types of content, mixed together with other content. In the online terrorist environment we were finding lots of child sexual abuse material, and we find that vulnerable kids in particular are at risk online at the moment, if I can say it that way. We find that these terrorist Groups, these extremist Groups, are actually targeting vulnerable kids. They would, for instance, create a Telegram channel where kids can talk about their mental health, their eating disorders. They groom information out of them, and with that information they then extort them and make them do horrible things, carving their bodies or producing sexual images. And these kids are radicalized swiftly.

In Europe, recently, we had some very young kids who were on the verge of committing attacks. So what we see now is that this is accelerating at a fast pace. And as our focus is on terrorism and child sexual abuse, we cannot act on eating disorders or mental health problems; we know there are organisations at the table that address those problems but are not aware of these things happening. It is all in the dark.

If we talk about protection of vulnerable Groups online, we need to bring these things to light. One was brought to light by the media and the other was not. It is up to us to bring to light that these things are happening online, so that the awareness is out there for parents and other caregivers to take care of the kids, but also for adults, so that if somebody finally is able to speak about what is happening, you are there to help and support them. But yeah, more needs to be done here as a coordinated approach to tackle this problem.

>> MODERATOR: Thank you. There was a lot in there in terms of proportionality and the hybrid threats, and it is important to get it right; there is a lot at stake. We will turn to Sandra. If you want to do a short introduction, that would be great.

>> SANDRA: I'm the Chairwoman of ANACOM, the Portuguese National communications authority. At the moment, ANACOM deals with electronic communications and is the digital services coordinator, so digital matters, online terrorist content and all of the new issues, and it also has some competencies under AI. It is quite a broad authority. I'm an economist, specialized in behavioral and experimental economics.

>> MODERATOR: Bringing together your two roles, can you explain what behavioral economics is about and how it is used to protect vulnerable Groups online? 

>> SANDRA: If we were all rational human beings, we would not need to care so much about safety and have this concern, because we would be super rational and able to understand what is good and bad and immediately react upon that. Behavioral economics is a field that blends insights from psychology and economics to understand how humans actually make decisions, and they do not make decisions like machines. They don't really maximize their welfare all the time; they are affected by social issues, by social pressure, by their own emotions. And we are all affected by cognitive biases.

We use shortcuts, heuristics, to make decisions. We have a ton of cognitive biases, and these biases significantly influence how users interact and behave in an online context. We have to take that into account.

For instance, I can give you some quick examples. Confirmation bias: users seek out information or sources that align with their existing beliefs, leading to echo chambers on social media platforms.

These can perpetuate misinformation, stereotypes and false beliefs and limit exposure to diverse perspectives.

Another one is overconfidence bias. Users may overestimate their online security knowledge, leading to risky behaviors such as weak passwords or ignoring security updates. Optimism bias: we underestimate the risk of online scams or data breaches, believing we are less likely to be targeted than others, which leads to inadequate precautions. All of us suffer from these biases, but some Groups suffer even more: think about children, some disabled Groups, people with mental health problems. These biases influence their decisions even more, and we as regulators have to take that into account. So we should, of course, be aware of how these biases are used to exploit the decision‑making process online. We have to fight with the same weapons: basically, we have to make use of these biases and try to help people take good decisions. So we have to understand these cognitive biases, and also be aware that we can use them to make individuals take more informed decisions.

AI can also increase the economic value of these cognitive biases. Why? Because AI makes it possible for organisations to exploit cognitive biases even more and expose people to even higher risks. We have to be aware of that. And AI systems do not need to deliberately exploit vulnerabilities to cause significant harm to vulnerable Groups: systems that simply overlook the vulnerabilities of these Groups could potentially cause significant harm. I will give an example. Individuals with autism spectrum disorder may struggle with speech like irony or metaphor, due to difficulties in recognizing the speaker's intention. In recent years, chatbots have been used to interact with people with autism. If they are trained on typical adult conversations, they may incorporate jokes and metaphors; people with autism could interpret them literally, which could potentially lead to harm.

We have to be aware. We have to be aware of intentional and unintentional harms that can be caused to individuals.

We can use these biases to help individuals make good decisions and to protect vulnerable Groups online. Behavioral economics can be used to enhance online protections for vulnerable Groups such as children, disabled users and marginalized communities in many ways.

So we can better design user interfaces: websites and applications can be designed with user‑friendly interfaces that consider the cognitive load of users. We can nudge safe behavior: platforms can implement nudges towards safer behaviors, presenting information about online risks in a clear and relatable way that can improve understanding and compliance.

This is particularly important. For instance, regarding cyberbullying, behavioral economics can play a significant role in protecting children. We can apply these principles to education and awareness campaigns, again framing information in a way that makes it very clear and very relatable for users.

Using social norms. Social norms can be a problem because people feel pressure to follow what others do; for instance, this is a concern with online challenges that many children engage in and that put them at risk. But at the same time, we can use social norms messaging: highlighting positive behaviors and peer support through campaigns can shift perceptions. By emphasizing that most children do not engage in cyberbullying, we can create a social norm against it.

I want to point out that we have to understand the behavioral biases that are putting our children, as an example, and indeed all of us, at risk online, but we can use the same weapons to encourage safer behavior. As regulators, we have to understand and play with the same weapons.

Using nudges to encourage reporting: nudges that remind children of the importance of reporting bullying and of their reporting rights. There are studies that confirm this.

Programmes can be designed to teach children how to respond to cyberbullying effectively and behavioral economics can inform the design of the Programmes.

To incentivize positive online behavior, we can test different incentives. With gamification, schools and online platforms can use reward systems that recognize and incentivize positive online behavior, and these can be tested using experimental tools. This is just an example; there is much more. Online platforms can adopt clear policies against cyberbullying and communicate them effectively to users, and again behavioral techniques can help in framing those policies to highlight the collective responsibility of users to maintain a safe online space. So this is the point again that I want to make; this is an example. The same can be applied to understanding algorithmic discrimination: how does it work, how do biases increase discrimination, and at the same time, how can we use nudges and behavioral insights to fight the biases that are perpetuated in some algorithms?

So the message I want to leave, especially if you are a regulator or policymaker, is: be aware of behavioral insights. People are using them to make others behave in the way they want. Firms do it a lot to sell more; marketing strategies all make use of behavioral insights. So we as regulators have to use the same weapons, but for another purpose, with another goal in mind.

That's it.

>> MODERATOR: Thank you, Sandra. I think it is great to have a different perspective on the issue; I have never heard anyone come at it from a behavioral bias perspective before, so thank you for that. It is really about how we turn this on its head and use gamification and similar tools to incentivize slightly different behavior. That is something we will come back to. I have a quick‑fire question for each panelist before we open the floor.

What forms of accountability, beyond content takedowns, should platforms adopt to protect marginalized users? I will start with Arda.

>> ARDA GERKENS: We should start from the positive side; there is a lot we can say about the platforms. The effort is there when it doesn't cost money; when it comes to revenue, it gets more difficult. There is one thing, which is indeed taking down content, but there are also things that can be done with the algorithms, by bringing extra attention to some of the material they have, or by lowering its visibility. And here, I think, there is still a big opportunity. A piece of content in itself may not be that harmful; it could be harmful, but if it is only seen by three or four persons, it is not a big problem. Once it spreads and is in front of the eyes of millions, that is where it becomes harmful, especially when it is spreading fast. It is also the way the system works: content is there to be spread, to get more attention and therefore more viewers, and more viewers means more advertising for the platforms. If we speak with them, we should look at the policy around moderation in the sense of ranking material lower in the feeds or bringing it up higher.

>> MODERATOR: Thanks, Arda. Next is Nighat. Do you want me to repeat the question? 

>> NIGHAT DAD: Some platforms are doing a lot; some, not all. We should look at the positive side of some platforms, which have oversight mechanisms that are still working and give good decisions and recommendations that improve their policies. At the same time, we need to see what to do with the platforms that are still thriving in our jurisdictions but have absolutely no accountability, and don't have trust and safety teams or human rights teams. I'm talking about X here. I don't think anyone in the room has a point of contact with X for escalating content or for the disinformation that thrives on this platform. It has been very interesting for me to see, over a number of years and in different jurisdictions, that when we talk about platforms in the North, it is easier to say we should move on to other, alternative platforms like Mastodon or Bluesky. The problem in our jurisdictions is that the user base is not that digitally literate; they're very comfortable with the platforms they already have, and neither Civil Society nor the Government has real access to those platforms. So I'm very concerned: what are we thinking about these platforms? At the same time, there are platforms that actually listen to all Government requests, and the takedown numbers are very high. That is where many have mentioned necessity and proportionality; I don't think many jurisdictions are actually respecting that.

And so I think we really need to see what oversight or accountability mechanisms are out there, and what different actors are doing. Yes, Government is making policy and regulation, but what does that regulation look like? Does it really respect the UN Guiding Principles or international human rights law when it comes to content moderation or algorithmic transparency? At the same time, looking at what other actors are doing, platforms at the moment have much more power in our part of the world. We don't have a Digital Services Act, but our Governments are coming up with their own kinds of regulation, which might not be as ideal as the DSA and might not have the same enforcement that the DSA has.

>> MODERATOR: Thank you. There is definitely something about transparency in what the platforms share and whether their content moderation processes work, and a point around accountability: in designing new regulation, we have to take into account privacy and freedom of expression, get the balance right and be able to enforce effectively.

Next, I will turn to you Teo Nie.

>> TEO NIE: I would like to see the platforms improve the reporting mechanism. My experience in Malaysia is that sometimes even a prominent public figure, such as a very famous player from Malaysia whose video and photo a scammer used to create scam‑related posts, cannot get help through the reporting mechanism, even though he has a Facebook account with a blue badge. He needs to compile the links, send them to me and to MCMC, and we forward them to Meta for the content to be taken down. The reporting mechanism is not functioning, and that actually puts a heavier burden on the regulator to do content moderation on behalf of the platform. I don't think that is fair. Second, we talk about transparency.

The scam‑related posts are taken down, but what actions are taken by the platform against the scammer? I think that is the question we need to pose to the platform providers. And I'm hoping to get an answer: how much revenue are they collecting from Malaysia? Do we know? I don't have the figure. For me, taking down the scam‑related post is not sufficient. I need to know what type of action is being taken by the platform against the person who sponsored the post. Shouldn't that person be held responsible as well? Because we don't have that type of transparency, it is difficult for us to hold the platform accountable. And I would like to add more on the algorithm part. Algorithms are very powerful. When platforms design the algorithm, it is to make the platform more sticky so the user will spend more time on it. However, I think it is time for the general public and Civil Society to also have a say in the design of the algorithms, so that we can practice a so‑called information diet, as proposed by a favourite author, and make sure that the information consumed by social media users every day is actually healthy content, and not just whatever content they like. That can be very dangerous.

>> MODERATOR: Thank you. Yeah, absolutely. I think the incentives of the platforms, and understanding that stickiness point with algorithmic promotion, are key. And the advertising revenue is another piece of the puzzle to have a separate discussion about. Next, I will turn to Raoul.

>> RAOUL MANUEL: Thank you. Actually, before this month of June, in the House of Representatives we had a series of hearings by three House Committees, namely the Committee on information and communications technology, the Committee on public order and the Committee on public information, and the topic of takedowns was discussed. In the fifth and final hearing we have had so far, the Government and representatives from Meta reported at the public hearing that they had an agreement that would enable the Government to send requests for content takedowns to Meta. Our reaction at the time was that this is without any basis or law that would set the standard as to what content can be taken down and what should just stay online.

It would be a slippery slope to use content takedowns as the primary approach to ensuring that our online spaces are safe.

It can end up, you know, with decisions just being made in the shadows and people not being aware or made knowledgeable about the takedowns. That is why, beyond takedowns, we assert that platforms have a major responsibility. For example, they can monitor notable sources of content that is harmful to children, women, LGBTQI and other marginalized Groups, be it bullying, hate speech, red‑tagging or posts promoting scams, and Filipinos should report those sources to Government.

Also, platforms should work with independent media and digital Coalitions so that, aside from going after each piece of content, which would be very tedious and laborious, we focus on sources, be they accounts or networks of accounts with inauthentic activity promoting a certain narrative or discourse, so that we are not just reactive in our approach. Being proactive is the better way to go. That is my piece. Thank you.

>> MODERATOR: Thank you, Raoul. It is interesting, that point about knowing the sources, and you touched on the importance of independent media, which is sadly in decline in a lot of the places we live. We will go to Neema next.

>> NEEMA IYER: I am with a Coalition that works on design, and it is difficult work. To echo what was said by another panelist, it is difficult to change user behavior. Content takedown is a reactive measure that happens after the fact: the content is already shared, you go through this mechanism, and it can take days, months, years, or it will never happen and the content will never be taken down. It has also happened to me that I report many times and it is not taken down. There is no sense of justice for the people wronged; once the content is up there, the damage is done, the wound is there.

I think it will be interesting to think about what kinds of design friction you can introduce that stop the content from being shared in the first place; I think my behavioral economist colleague probably has more to say on that. How do you stop it from happening so you're not in the position of having to take content down? As Arda mentioned, they're coming up with guidelines and criteria used to decide on takedowns. What if those were applied before the content even goes up? Or, when someone goes online to insult a woman for whatever reason, there is a nudge that asks: are you sure you want to do that? What do you benefit from saying this? Of course, on the other end of that, it is also problematic.

So I want to acknowledge that this method is problematic, because this sort of shadow banning has been used against feminist movements and marginalized people to silence them. If you talk about racial issues or colonization, your posts don't get shown. The problem is we don't have transparency on the algorithms that show or hide information, and really all of us are at the mercy of the moral and political ideologies of whoever owns the platform.

If they're a Right Wing, antifeminist person, those are the rules of the platform, and we're all tied to those rules.

So what would be lovely, in a perfect world, is if the algorithmic decisions were co‑created by all of us, and we understood that if we're doing child protection or counterterrorism, these are the things we have decided we don't want to be posted or shown, decided together by Government, Civil Society and the platforms. The platforms need to take accountability, be more transparent, and do more audits and research with Governments and Civil Society. We shouldn't look at the platforms as enemies, acknowledging they do a lot; there is a need to collaborate and set the guidelines. Thank you.

>> MODERATOR: Thanks. Absolutely. Yeah, just having the multistakeholder voice in shaping the things that govern the platforms we interact with. I like the point of design friction. That is interesting. Finally Sandra and then questions from the floor.

>> SANDRA: I couldn't agree more with what has been said so far. Think about this: if you wanted to take up skydiving, you would go to a firm and sign up for the service, and you would always get a briefing about the security and safety measures you need to take. You are buying the service, and the firm that offers that service is required to provide that briefing.

What I really would like is for online platforms that are providing us a service to also be required to give at least a briefing to all of us about safety, about the measures we need to take as human beings, about being aware of our cognitive biases, as I said, and about how content and online interaction may impact our decisions and our behavior. So I think they should be obliged to provide us that sort of information.

Then, what is illegal offline should be illegal online; we have that principle, and such content should be taken down. Beyond that, I'm more in favour of, let's say, measures like nudges and interventions applying behavioral insights: increasing awareness, giving more education, improving digital literacy and, of course, making us better users of online content and more aware of how the platforms work.

And as Digital Services Coordinators know, the first step users have to take is to complain to the platform, and then it is very hard; it is very hard to know whom to contact. This is something platforms need to be responsible for: take the complaints seriously and respond to users appropriately. Of course, on algorithms, more audits are really needed, regularly auditing for bias to help identify and correct discriminatory patterns. Diverse development teams are also something a platform should look for; building diverse teams of developers and stakeholders can help mitigate biases in algorithms. Transparency and accountability: making algorithms more transparent allows users to understand how decisions are made and helps identify potential discrimination. And again, giving users more education. Also, playing again with behavioral insights, default settings are an important point for behavioral economists: setting stronger privacy defaults to protect vulnerable Groups, for instance, social media platforms making a private account the default setting for children, ensuring their information is more secure unless they choose to change it. Changing the defaults, playing with those, is also very important.

So basically, we have to be, as I said, aware of cognitive biases; platforms should give us more information about the cognitive biases that all of us face, give us briefings, information and education, and be more accountable and transparent.

>> MODERATOR: Thank you, Sandra. That is great. That was a thought‑provoking set of interventions on that question. We will now open the floor. We have about 15 minutes. If you would like to ask a question, I encourage you to come to the microphone at the front so we can make sure everyone can hear. We'll put our headsets in.

>> ATTENDEE: Thank you very much. I'm a former Minister for information technology and telecommunication. I was the Minister for five years. I'm somebody who enacted the cybercrime law in 2016, which introduced 28 new penalties and criminalized, as an offense, the violation of the dignity of a natural person, punishable by jail or a fine.

So we all know that it is important to legislate. We also know that when we are legislating and when we are creating new offenses of such nature, the interest Groups come out strong against such activities. We all know the funding is provided by the commercial interest holders.

So in 2016, when I was trying to make the enactment, I had huge resistance from interest Groups. At that time, it was difficult for people to appreciate how they were being played in the hands of commercial interests. Then I noticed that similar people, similar interest Groups, created interests for themselves; they were finding opportunities to generate revenue for their own interests later on. It is a game played globally. We have seen these games played for revenue generation at the cost of the dignity of a natural person.

It is not just the women, not just the children, not just the girls; it is people across this globe who are getting abused. Now, the question. The question that I ask is for the Minister from Malaysia. I heard you; you are eloquent and your clarity is appreciated.

Now, what do you think, in your experience: after legislating in Malaysia, have you been able to overcome the difficulties that enforcement entails? I feel, being part of the Standing Committee, part of the information Committee, that the time has come to stop begging the social media platforms, because we cannot continue to remain hostage over requests made for the welfare of our citizens. What can we do together to introduce mechanisms so that we do not expose our children, our girls and our women to people who probably have a different philosophy about content online? People sitting perhaps in the West have a different ideology and a different legal system governing them, and people in the East have a different value system. We're a country where a single aspersion on a girl can cause her to jump from a window without waiting for the content to be removed.

This has been the major issue for me. That we in the East and the far East live in a different value system.

What is it that we can come up with together today to bring out a solution? I don't think commercial interests and revenue generation are going to allow you to provide the protection that is needed. So maybe you can guide me and tell me what it is, in your mind, that we need to do, and come forward with solid recommendations. Thank you.

>> MODERATOR: Thank you. Are you happy to answer that?  Do a really quick response to that one.

>> TEO NIE: Thank you for your questions. Frankly speaking, part of what we are trying to do in Malaysia is easy: being in the Government means we have the majority in Parliament, so passing the law is relatively easy. We need to do a lot of engagement, consultation, et cetera, but passing the law itself is not too difficult. However, enforcing it is super difficult, super challenging.

As I mentioned to every one of you here, we in Malaysia need to admit that even though we have introduced the licensing regime, which was supposed to be implemented from January 1 this year, until today only X, TikTok and Telegram, among the platforms with more than 8 million users in Malaysia, have come to us to get the license. Until today, Meta and Google have yet to apply for the license from the Malaysian Government.

But the next question would be: what can we do? First of all, I think it is too difficult for Malaysia to deal with the tech giants on its own. So I'm really hoping that we can have a common standard imposed on social media platforms. Our neighbouring country, Singapore, is doing something that I myself think is a good idea, i.e. they imposed a duty on Meta: Meta must verify the identity of every advertiser if the advertising targets Singapore citizens. Meta is doing that probably because they have an office in Singapore and are deemed to be licensed there as well.

Meta is doing this in Singapore. My question is: why can't you do it for Malaysia? If you verify the identity of the advertiser, it would be much easier for us to identify who the scammers are, who is behind the accounts promoting online gambling, et cetera. Why only do it in Singapore and not the rest of the world?

To me, it is very important that we have one international organisation identifying the responsibilities that should be carried out by the platforms, instead of each individual country doing so.

Because as Malaysia, our negotiating power is just too limited.

At the same time, to overcome the issue of the standards or the rules being set by the West, I think it is very important for us to engage these platforms as a bloc. For example, instead of Malaysia alone trying to engage with a platform, we are hoping for Asia as a whole to engage with the platform. Yes, if you engage with Malaysia alone, maybe they worry that the Malaysian Government will abuse its power to restrict freedom of expression, but how about Asia as a bloc? We understand each other better as Asian countries because we share a similar culture, history, background, et cetera.

I think it is important for us not to apply one standard to everyone, but to understand the world as different Regions, whereby we can sit down and discuss the standards that can be imposed on the platforms in each Region. That is something I would really like to propose. Thank you.

>> MODERATOR: Okay, a call for future cooperation there. That is great. I will turn briefly to Nighat, who wants to provide comments.

>> NIGHAT DAD: We should understand that we're here in a multistakeholder setting as Government, industry and Civil Society, and I think Civil Society occupies a critical space. When Governments present policies and regulations, it is the role of Civil Society to think through the critical points and nuances and to hold the Government accountable too. I think when we talk about accountability, it is about all powerful actors, Governments and platforms.

>> MODERATOR: Thank you. That is an important perspective to bring. We'll go to the next question in the room.

>> ATTENDEE: Thank you.

Good morning. I run a consultancy and I am a trustee of the (?) Foundation, which finds and takes down CSAM around the world.

Over three hundred million children annually are victims of technology-facilitated sexual abuse and exploitation; about 14% of the world's children are victims every year. So with that in mind, does the panel agree that we should mandate the use of privacy-preserving age estimation or verification technology to stop children from accessing adult platforms and adult content, and also to stop adults from accessing child-centric platforms and opening child accounts so they can target children?

And does the panel also agree that we should make better use of technologies like client-side scanning to prevent messaging platforms like WhatsApp from being used to share CSAM at scale around the world, which you can do in a privacy-preserving way? Thank you.

>> MODERATOR: Thank you. We'll take one more question and open it up.

>> ATTENDEE: Thank you. I will start by congratulating the panel. It looks like there is a bit less testosterone on the panel today; maybe it was a girls' day this morning. My name is John from Kenya. Mine is more of a comment. Seated here at the IGF, we can have a conversation about what regulators can do, and regulators and platforms can learn what to do with the technology that is available to us.

But if we're talking about human-centred design, we have to remind ourselves that the offenders are human. The victims are human.

And we have to look beyond what is happening online and see whether there are opportunities in the human structures that already exist in the community, because some of the technical stuff we talk about at the IGF is not practically applicable in some jurisdictions. For example, we come from places where big tech platforms are what people interact with, but the companies have no physical presence in those jurisdictions, so you have nowhere to go to have a conversation with big tech and ask them to do some of the things we are saying at the IGF. If we look at the structures that already exist within the community, we might find an opportunity to empower the victim, in the sense that if it is a child who is under threat, there are already existing social structures in schools and social clubs. For example, a lesson from Kenya: we already have clubs like the scouting clubs and the guides. For young people, we know that if you make something cool, they treat it as the truth.

So what if this discussion starts offline for the victim, so that by the time they get online, they already have the tools and are empowered to deal with it? Because the bully is a human.

The victim is a human. So if we concentrate only on the technology, we are losing a very big part, because this young person could just as easily be trained to become a bully. If they are trained offline, before they reach the Internet, maybe it becomes a movement that saves a generation.

My point here, and my comment, is that as we focus on the technology, let us not forget that this technology is for humans and their already existing social set-ups. The social set-ups could be family, school, clubs, all of the set-ups that exist before we get online. We will leave it to the regulator to deal with big tech, because that animal is too big for the victim to face up to. I thank you.

>> MODERATOR: Thank you. Thank you.

I think we'll answer Andrew's question first. There were two parts to it: one on age verification and the creation of child accounts, and whether that can be a preventative measure, and one on client-side scanning on the device and whether that is good practice as a proactive measure. I don't know if there is anyone in particular who wants to take that one.
>> NEEMA: Absolutely not. We passed a social media ban on children in the past year, and I have no idea what the plan for implementation is. It really means giving all your data to the platforms, and I think it is a slippery slope to a bad place. My general opinion is no; we as humans need some level of privacy in our lives. The fact is people will get around all these things anyway. I think there are better interventions than taking away the last shred of our privacy.

>> MODERATOR: Raoul.

>> Raoul Manuel: In the Philippines, our observation is that sometimes the best way to solve a problem is to find its underlying basis, because directly confronting the problem may not be enough on its own. In the case of CSAM and how young people are being exploited for these bad objectives, we have come to realise that the economic situation is really a primary factor that drives children, and unfortunately their relatives, into this kind of livelihood so they can live from day to day.

So we also have measures to address issues like poverty and child hunger, all of that, alongside, of course, preventing the spread and proliferation of the kinds of material that exploit children. I would also like to pick up the point about how difficult it really is, especially for those in the Global South, to get the social media platforms to show up. I can sympathize with our colleagues here, and I agree that we need to coordinate the response. In our case, when we invited the representatives of the social media platforms, they did not attend our first two hearings, and the reason was simply that they did not have an office in the area, so why bother to attend? We were insulted by that kind of response. We want concerted action on the issues we are talking about, so we threatened them with a subpoena and the prospect of arrest if they did not attend the hearing. Fortunately, by the third hearing they attended, and that was the start of them sending representatives. But of course we cannot act alone; we have to work collaboratively. Thank you.

>> MODERATOR: Thank you. We only have a couple of minutes left. Would you like to offer a final comment? 

>> Sandra: To add to and reinforce the point: what is illegal offline should continue to be illegal online. And if we restrict children from accessing certain services offline, in certain contexts, I think we should take the same approach online. But that doesn't mean, of course, banning every sort of behaviour. There are better approaches than just going for extreme options.

I would also like to add that the last intervention was very important. Thanks a lot for it. Because we are humans, we need of course to be aware of our shortcomings and our biases as humans. That needs to be taught, as was said, in schools.

We need to be more prepared now to deal with the exploitation of cognitive biases online, which is basically using technology to take advantage of them. We need to be more aware of that, and we certainly need more digital literacy. But let me also say something as an economist. We are in a world where there are lots of incentives for platforms to start developing features that take safety and security into account and to make a profit out of it. We will see that happening, and then it will be left to us. There should be some minimum standards that apply to everyone, and regulators should impose those. But I am also pretty sure that all sorts of features will be sold that we as users can buy and add on to our systems to increase the level of protection. So there is a huge market out there that is going to explore safety and security, and we should be prepared as consumers and users to make that choice. It all depends on our risk aversion and our safety preferences. But that will come.

>> MODERATOR: Thank you, Sandra, that is all we have time for today. I want to say a massive thank you.

>> ARDA GERKENS: Can I make a remark that is important? A positive message: look at how we have come together as regulators. I have been at the IGF for 15 years, and a lot is changing, with a lot of politicians involved in that change. What we need to do now is come together globally. Indeed, Malaysia has problems with some platforms and other countries have other problems with other platforms; once some obey, others will pop up. We need to work globally. We are part of the Global Online Safety Regulators Network, GOSRN, as a new initiative, and I invite everybody who wants to be part of it. Let's see how to tackle the problem. It is a global problem. We need to work together here.

>> MODERATOR: Thank you. That is really the takeaway from this session: having this multistakeholder, multidisciplinary discussion is the only way to tackle these challenges.

And taking into account intersectionality, the differences, and the way platforms behave differently. Quickly: the official opening of the IGF is at 11:00 a.m. in the Plenary room on the ground floor. We hope to see you there. Thanks to the panelists, to all of you, and to our online audience. Thank you.

(Applause)