IGF 2025 WS #441 Tracking online risks and barriers to digital rights

    Organizer 1: Technical Community, Western European and Others Group (WEOG)
    Organizer 2: Civil Society, Western European and Others Group (WEOG)
    Organizer 3: Civil Society, Asia-Pacific Group
    Speaker 1: Try Thy, Civil Society, Asia-Pacific Group
    Speaker 2: Marie-Eve Nadeau, Civil Society, Western European and Others Group (WEOG)
    Speaker 3: Velislava Hillman, Technical Community, Western European and Others Group (WEOG)
    Format
    Classroom
    Duration (minutes): 90
    Format description: Phase 1: Expert lightning talks (20 mins). Three experts from civil society, industry, and academia will deliver 5-minute presentations on (1) AI-driven risks in online safety (e.g., algorithmic biases, recommender systems); (2) successful interventions in Cambodia, Thailand, Vietnam, and the Philippines; and (3) policy and governance gaps in AI accountability for child protection. These concise talks will establish a strong knowledge foundation.
    Phase 2: Real-world scenario simulation (20 mins). Participants, divided into stakeholder groups (policymakers, regulators, youth advocates, educators), will analyze AI-driven safety challenges, assessing risks, ethical dilemmas, and systemic barriers.
    Phase 3: Multi-stakeholder response & policy sprint (30 mins). Groups will develop rapid-response strategies and policy recommendations, with live expert feedback ensuring feasibility.
    Phase 4: Collective debrief & call to action (20 mins). Teams will present their solutions, followed by a moderated discussion synthesizing key takeaways into concrete recommendations for AI governance, platform accountability, and digital literacy.
    Policy Question(s)
    What are the key emerging risks to online safety?
    How can AI-driven interventions enhance online safety while mitigating security risks in Southeast Asia?
    What AI transparency and accountability measures are needed to protect children and youth?
    How should policy address GenAI, algorithmic manipulation, deepfakes, and AI-driven misinformation while safeguarding digital rights?
    What are the most urgent AI-related online threats in Southeast Asia?
    How can governments, platforms, and civil society create ethical AI guidelines to prevent exploitation?
    How can AI systems designed for children’s safety be developed and sustainably funded?
    What will participants gain from attending this session? This session will provide a critical understanding of AI-driven risks on online platforms and their implications for children and young people. It will examine the role of AI personalization, recommender systems, and Generative AI (GenAI) in spreading misinformation, deepfakes, and exploitative content, while highlighting youth perspectives on digital security and real-world interventions in Cambodia, Vietnam, and Thailand. Participants will gain insights into algorithmic biases in content moderation and explore effective digital literacy programs that build trust and resilience, especially for marginalized communities. The session will also showcase diverse global and regional perspectives on AI governance, examining regulatory gaps, AI ethics, and policy solutions for platform accountability. Through multi-stakeholder collaboration, experts from civil society, academia, industry, and government will discuss actionable strategies, including AI transparency, scalable digital literacy models, and cross-sector cooperation, to ensure a safer digital environment for children and young people.
    Description:

    This session will examine the emerging threats and risks posed by advanced digital technologies, particularly Artificial Intelligence (AI), Generative AI (GenAI), and evolving online platforms (e.g., live-streaming and algorithm-driven social networks). It will analyze how AI-powered personalization and content moderation tools can inadvertently expose users to exploitation, harmful content, and digital rights violations, while also exploring opportunities to leverage AI for safeguarding online spaces, with a specific focus on Southeast Asia. The session will highlight proposed and implemented models for protecting children and young people online, showcasing the most recent research evidence. These models aim to address knowledge gaps and barriers to the prevention and detection of online risks (including educational and infrastructural barriers), and to identify the policy and bottom-up multi-stakeholder interventions needed to overcome such barriers and promote digital literacy and digital rights. Experts from civil society, government, academia, and industry will discuss the impact of algorithmic biases, recommender systems, and behavioral analytics, which increasingly shape online experiences and influence user behavior, often to the detriment of children and young people. While these advancing technologies can be used to tackle online risks, they also pose risks and limitations of their own when it comes to online safety. Drawing on real-world case studies from Cambodia, Thailand, the Philippines, and Vietnam, participants will gain insights into existing regulatory gaps, ethical AI challenges, and the urgent policy interventions needed to ensure digital trust, resilience, and the online safety of children and young people. The session will also showcase multi-stakeholder-driven strategies that enhance digital literacy, youth-led advocacy, and collaborative AI governance, and will offer concrete policy and industry recommendations to protect vulnerable users in the AI-driven online landscape.
    Expected Outcomes
    ● Policy recommendations for strengthening AI moderation, algorithmic intervention, and child protection policies, specifically with regard to novel technological barriers and risks (algorithmic manipulation, AI-driven cross-platform risks of abuse, and algorithmic issues in livestreaming protection).
    ● Multi-stakeholder strategies for enhancing regional cooperation on online safety.
    ● Increased awareness of algorithmic risks and emerging online threats.
    ● Practical solutions and frameworks that can be adopted globally by stakeholders (governments, policymakers, schools, non-profits, and companies).
    ● Techno-social recommendations on implementing algorithmic systems for monitoring online risks.
    Hybrid Format: The organizer will coordinate with youth and community groups to gather participants at public libraries and local hubs, including YIGF alumni in Cambodia and across ASEAN. These hubs will enable real-time engagement, with live translation into local languages. We will collaborate with provincial CSO networks to ensure connectivity. A morning session (Norway time) would allow easier participation for ASEAN participants, whereas an afternoon session would pose coordination challenges.