IGF 2023 WS #559 Harnessing AI for Child Protection

    Time
    Wednesday, 11th October, 2023 (04:00 UTC) - Wednesday, 11th October, 2023 (05:30 UTC)
    Room
    WS 2 – Room A

    Organizer 1: Ruchi Neupane, Safer Nepal Alliance (Safenep) 
    Organizer 2: Kamala Adhikari
    Organizer 3: Babu Ram Aryal, Digital Freedom Coalition [NEPAL]

    Speaker 1: Jutta Croll, Civil Society, Western European and Others Group (WEOG)
    Speaker 2: Ghimire Gopal Krishna, Civil Society, Asia-Pacific Group
    Speaker 3: Sarim Aziz, Private Sector, Asia-Pacific Group
    Speaker 4: Michael Ilishebo, Government, African Group

    Moderator

    Ruchi Neupane, Civil Society, Asia-Pacific Group

    Online Moderator

    Babu Ram Aryal, Civil Society, Asia-Pacific Group

    Rapporteur

    Kamala Adhikari, Civil Society, Asia-Pacific Group

    Format

    Round Table - 90 Min

    Policy Question(s)

    1. How can AI technologies be effectively leveraged to detect and combat emerging forms of child exploitation in the digital age, considering the evolving nature of online risks?
    2. What are the key challenges and ethical considerations associated with using AI for content moderation and identifying harmful online material involving children? How can these challenges be addressed while maintaining freedom of expression and privacy rights?
    3. What strategies and mechanisms should be put in place to enhance transparency and accountability in the deployment of AI systems for child protection?

    What will participants gain from attending this session? By attending this session, participants will gain a deeper understanding of the potential of AI in child protection, including AI-powered content moderation, early detection of online risks, and intervention mechanisms. They will learn about innovative AI technologies and approaches that can enhance child safety in the digital world. In particular, participants will gain the following benefits:
    1. Knowledge of Policy Frameworks
    2. Best Practices and Case Studies
    3. Collaborative Opportunities
    4. Policy Recommendations
    5. Increased Awareness and Engagement

    Description:

    The rise of artificial intelligence (AI) raises significant concerns about the exploitation of children, particularly in the context of child abuse. One pressing issue is the use of AI technologies to facilitate the production, distribution, and promotion of child sexual abuse material (CSAM) on online platforms. AI algorithms can automate the creation and sharing of such material and the evasion of detection techniques, making it increasingly challenging to identify and remove harmful content. Children's privacy and data protection are also at risk, as AI systems that process personal data, such as facial recognition or behavioral analysis, may infringe upon their privacy rights. The deployment of AI chatbots or virtual assistants raises further concerns about grooming and predatory behavior towards children, as these interactions can obscure the intentions of potential offenders. Bias and misidentification are additional worries, with AI systems prone to errors in detecting harmful content, potentially leading to false positives or false negatives.

    Addressing these complex issues requires collaboration among technology developers, law enforcement agencies, policymakers, child protection organizations, and society at large. Effective AI detection tools, robust legal frameworks, heightened awareness among children and parents about online risks, and timely responses to incidents of AI-facilitated child abuse are essential components of a comprehensive solution.

    The session "Harnessing AI for Child Protection: Ensuring Safety in the Digital World" at the Internet Governance Forum (IGF) aims to explore the potential of AI in safeguarding children online. With the growing risks of child exploitation and abuse in the digital realm, this session will bring together experts, policymakers, technologists, and child rights advocates to discuss innovative AI solutions, policy frameworks, and collaborative approaches to enhance child protection measures.

    Expected Outcomes

    1. The session will produce actionable policy recommendations, highlighting best practices, regulatory frameworks, and guidelines for responsible AI use in child protection.
    2. The session will foster collaboration and partnerships among stakeholders, including governments, technology companies, civil society organizations, and child protection advocates.
    3. The session will raise awareness about the potential of AI in combating child exploitation online and promote the importance of digital literacy and education initiatives.
    4. The session will contribute to the development of ethical guidelines for the deployment and use of AI in child protection.

    Hybrid Format: The onsite moderator will introduce the subject matter experts at the table and explain the discussion topic before engaging all discussants in the room. The moderator shall ensure that everyone ‘at the table’ is given equal weight and an equal opportunity to intervene, and the online moderator shall likewise ensure equal participation for online participants. Each discussant will share their views for three minutes per policy question, and the rest of the time will be allocated to floor discussion, including the online participants.