IGF 2023 DCCOS Risk, opportunity and child safety in the age of AI

Wednesday, 11th October, 2023 (04:00 UTC) - Wednesday, 11th October, 2023 (06:30 UTC)
Room J

Dynamic Coalition on Children's Rights in the Digital Environment

Round Table - 90 Min


Child Online Safety
New Technologies and Risks to Online Security
Online Hate Speech and Rights of Vulnerable People


The overarching theme of IGF 2023 in Kyoto is The Internet We Want - Empowering All People. Our starting point for this discussion is clear: there can be no empowerment on the Internet without a foundation of safety, and the Internet we want and need is one where children's rights are protected.

With AI and the Metaverse on the agenda of governments and increasingly embedded in the lives of digital platform users worldwide, tech legislation and safety policy are at a critical moment of transition. Different types and applications of these new and frontier technologies have the power to be transformative in positive ways, as well as potentially harmful in ways we can and cannot yet predict, including for children and young people. As a result, and as seen in other areas of technology, governments often find themselves playing catch-up, struggling to define the proper guardrails for their use across diverse environments, from social media and online gaming to EdTech. Society will only be able to harness the benefits of the ongoing AI-driven technological transition when proper safeguards are in place. We need to build a shared understanding of the risks and of how we can develop the right safeguards for children and young people.

Nowhere has the misalignment between what is technically possible, socially acceptable and legally permissible been exemplified more clearly than in the debate around generative AI models. Indeed, between the date of submitting this proposal and the delivery of the session in October 2023, the societal, legal and other debates around this topic are likely to undergo further rapid change. At the same time, there is a risk that conversations around AI as a ‘new’ or ‘complex’ landscape distract from the foundational safety issues that already put children in harm’s way in digital spaces that were not designed for them.
For example, virtual worlds powered by AI and characterized by anonymity and open access will expand the opportunities for people to exploit children and other vulnerable groups, as law enforcement has already observed. For children, the psychological impact of abuse experienced in virtual worlds will present new and likely more intense forms of trauma. If a core goal of the Metaverse is to blur or remove the boundaries between physical and virtual realities, the differences between physical hands-on abuse and virtual abuse will vanish, with a hugely negative impact on victims and society at large. Either way, the principles and models underpinning AI and the Metaverse are mediums in which child protection must be addressed in a holistic, rights-based and evidence-informed way. This will inform safety policy, raise awareness among children and parents, and help ensure global alignment between regulation for safety and regulation for AI, avoiding fragmentation and inefficiencies in our collective response. This approach is also grounded in General comment No. 25 (2021) on children’s rights in relation to the digital environment[1], which obliges States parties to protect children in digital environments from all forms of exploitation and abuse.

This session will:
1. Discuss whether and how different approaches to regulation are needed for different digital spaces such as social media, online gaming, communications platforms and EdTech.
2. Discuss how existing safety nets and messaging meet the needs, aspirations and challenges voiced by children, young people and parents around the world.
3. Address the following policy questions: How do you design robust and sustainable child safety policy in a rapidly changing tech landscape? How do you create meaningful dialogue around the design, implementation and accountability of AI towards children and society?

Goals / outputs of the session:
1. Identify the main impact of AI and new technologies on children globally.
2. Understand young people’s own perception of risks in virtual worlds.
3. Create the basis for DC principles of AI regulation for child safety.
4. Initiate DC messaging for parents to support their children in the digital space.
5. Co-construct DC guidelines for modern, child rights-oriented child and youth protection in the digital environment.

[1] https://www.ohchr.org/en/documents/general-comments-and-recommendations…

The session will be run as a roundtable, with speakers to guide the key topics of conversation but an inclusive approach to discussion and idea-sharing. The online moderator will ensure a voice for those attending online, and the use of online polls and other techniques will ensure an effective hybrid experience. To answer the questions directly: 1) We will facilitate interaction between onsite and online speakers and attendees by inviting comments from both groups and bringing questions or comments from online attendees into the room. 2) We have designed the session as a roundtable to ensure that both online and onsite participants have the chance to have their voices heard. 3) We aim to use online surveys/polls to ensure an interactive session.


Amy Crocker, ECPAT International, Civil Society, Asia Pacific (onsite moderator)
Jim Prendergast, The Galway Strategy Group, Civil Society, WEOG (online moderator)
Jutta Croll, Digital Opportunities Foundation, Civil Society, WEOG (rapporteur)
Jennifer Chung, DotKids Foundation, Private/Civil Society, Asia Pacific


1. Liz Thomas, Microsoft, Private sector, WEOG (onsite)
2. Sophie Pohle, Deutsches Kinderhilfswerk e.V., Civil Society, WEOG
3. Katsuhiko Takeda, Childfund Japan, Civil Society, Asia Pacific
4. Jenna Fung, Asia Pacific Youth IGF, Civil Society, Asia Pacific
5. Patrick Burton, Centre for Justice and Crime Prevention, Civil Society, Africa

Onsite Moderator

Amy Crocker, ECPAT International, Civil Society, Asia Pacific

Online Moderator

Jim Prendergast, The Galway Strategy Group, Civil Society, WEOG


Jutta Croll, Digital Opportunities Foundation, Civil Society, WEOG



Targets:

Target 5.2 (eliminate all forms of violence against all women and girls in public and private spheres): Violence against children, including its online manifestations, is a gendered phenomenon. While it affects girls in high numbers, addressing the dynamics of violence against and exploitation of boys and children of other genders and sexual identities is also a priority if we are to have a sustainable impact on violence and sexual exploitation. Child safety in digital environments will become increasingly complex in the face of AI and the Metaverse, for multiple reasons. This session seeks to identify and unpack some of these challenges.

Target 16.2 (end abuse, exploitation, trafficking and all forms of violence and torture against children): We are living through an unprecedented period of technological change, and one of its most nefarious consequences is the increase in the reach and impact of child sexual exploitation and abuse. New and frontier technologies both threaten and promise to change our world, yet there can be no sustainable, positive online world without safety, in particular for children. This session seeks to identify and understand the risks that children may face from AI and the Metaverse, and to propose concrete solutions that can help mitigate risk, prevent harm, and harness the positive benefits of the online world for children and society around the world.