Session
Organizer 1: Civil Society, Asia-Pacific Group
Organizer 2: Civil Society, Asia-Pacific Group
Organizer 3: Civil Society, Western European and Others Group (WEOG)
Speaker 1: Sangeeta Mahapatra, Civil Society, Western European and Others Group (WEOG)
Speaker 2: Albert Jehoshua Rapha, Civil Society, Asia-Pacific Group
Speaker 3: Beltsazar Krisetya, Civil Society, Asia-Pacific Group
Format
Roundtable
Duration (minutes): 90
Format description: The roundtable layout fosters interactive, multi-stakeholder discussion, allowing policymakers, researchers, and civil society representatives to collaborate in real time. The policy sprint format keeps discussions focused and dynamic, with short, iterative challenge rounds in which participants refine AI governance solutions. Digital collaboration tools such as Miro and Slido will let remote attendees take part in brainstorming and live polling, ensuring hybrid inclusivity.
Policy Question(s)
- What key AI safety challenges does the Asia-Pacific face, and where do current government and industry risk mitigation efforts fall short as AI adoption accelerates?
- What proven AI safety practices in the Asia-Pacific can be expanded or adapted to improve security and public trust in AI systems?
- How can civil society groups effectively engage in AI safety governance to ensure their concerns are addressed and they can collaborate with governments and tech companies on practical solutions?
What will participants gain from attending this session? Participants will explore AI governance in the Asia-Pacific amid the shift from AI safety to AI security. Drawing on APrIGF 2024, which emphasised the urgency of global dialogue and the need for regulatory frameworks that balance innovation with ethical safeguards, this session will address regional AI governance challenges.
Through policy sprints, scenario-based deliberations, and stakeholder role-play, attendees will develop governance solutions for AI risks like discrimination and disinformation. They will co-create actionable policy blueprints, integrating global best practices with regional needs, while engaging with digital collaboration tools like Miro and Slido.
Designed to foster cross-border networking and capacity-building, this workshop equips civic actors with the skills and connections needed to advance AI safety governance in the Asia-Pacific region.
SDGs
Description:
The session, co-organised by SAIL and the GIGA Institute, responds to the post-Paris AI Summit's apparent shift from AI safety to AI security, which risks narrowing the space for civic participation in AI governance. Through a 90-minute Policy Sprint, it will generate actionable policies on AI safety practices in the Asia-Pacific, a region facing a dynamic AI threat landscape, through effective civic interventions.
The hybrid workshop will convene experts from civil society, academia, and think tanks who represent and bridge insights and experiences from across the Asia-Pacific, using innovative strategies for collective problem-solving. The session will begin by identifying the top five AI safety challenges of the current regulatory landscape. It will then identify the top five best practices in AI safety and concrete mechanisms for civic organisations to shape AI safety policy through an “iterative refinement strategy”. The panel brings together experts from the GIGA Institute and the Safer Internet Lab to discuss AI safety and governance. Representing academia, civil society, and research institutions, they will cover human-centric AI frameworks, bias-proofing, risk auditing, policy convergence, and civic governance. Reinforced by virtual participants, this multi-stakeholder dialogue reflects the IGF's commitment to inclusive and collaborative digital policy solutions.
We will use Miro to conduct a “Stakeholder Lens Mapping” exercise in which participants self-select into roles such as civil society advocates, researchers, affected communities, government regulators, and tech industry representatives to analyse the challenges. The session will conclude with a live prioritisation poll on Slido to identify the most feasible and impactful proposals. Afterwards, the recommendations of onsite and online participants will be consolidated into a single document and provided to the IGF.
To ensure continued participation and stake-building, participants will have the option of joining a mailing list through which they can update their recommendations to keep pace with the shifting AI landscape.
Expected Outcomes
Through a policy sprint and interactive discussions, participants will:
- jointly identify the top AI safety risks against a dynamic AI threat and regulatory landscape;
- co-develop policy blueprints, stress-tested through multi-stakeholder deliberation;
- lay the foundation for stake-building and latent resilience, and identify opportunities to collaborate with one another through this joint session;
- provide a policy document to the IGF;
- sustain dialogue after the session, as participants can join a mailing list to update their recommendations in step with the evolving AI safety situation in the Asia-Pacific;
- amplify the session's outcomes through their local networks, as we bring on board multi-field, multi-stakeholder participants with ground-level expertise and widespread reach.
Hybrid Format: This Policy Sprint roundtable is designed to foster an interactive and engaging session between onsite and online participants through structured, outcome-driven discussions. Rotating challenge rounds and stakeholder role-play ensure dynamic engagement, allowing virtual attendees to act as “remote challengers”, refining policy proposals via interactive whiteboards.
To enhance participation, digital collaboration tools such as Miro and Slido will enable real-time brainstorming, policy drafting, and live polling. These tools ensure that online attendees can contribute meaningfully and that their input is valued equally alongside that of in-person participants.
In short, focused discussion cycles will keep all participants actively engaged, while real-time synthesis of insights will translate discussions into actionable policy blueprints. Even after the 90-minute session, the virtual whiteboard will remain accessible for continued evaluation of proposals, making hybrid participation both impactful and inclusive.