Session
Subtheme
Organizer 1: Technical Community, Asia-Pacific Group
Organizer 2: Civil Society, Asia-Pacific Group
Organizer 3: Technical Community, Asia-Pacific Group
Organizer 4: Technical Community, Asia-Pacific Group
Speaker 1: Henri Verdier, Government, Western European and Others Group (WEOG)
Speaker 2: James Ong, Private Sector, Asia-Pacific Group
Speaker 3: Phyo Thiri Lwin, Technical Community, Asia-Pacific Group
Speaker 4: Ruth Schmidt, Intergovernmental Organization, Western European and Others Group (WEOG)
Format
Roundtable
Duration (minutes): 60
Format description: We propose a 60-minute roundtable to foster an inclusive and participatory discussion on AI governance and marginalized communities. A U-shaped seating arrangement will promote direct engagement between speakers and attendees, allowing for fluid exchanges rather than passive listening. Unlike traditional panels, this format encourages open dialogue, collaboration, and real-time input from both in-person and online participants. The session will open with five minutes of scene-setting remarks from the expert speakers, each drawing on their area of expertise, such as AI ethics, policy frameworks, or the lived experiences of marginalized groups, to highlight key challenges and opportunities in AI governance. The next 40 minutes will feature guided discussions in small groups, where online and onsite participants can interact meaningfully. The final 15 minutes will be dedicated to summarizing key takeaways, emphasizing shared insights and recommendations.
Policy Question(s)
A. How can global and regional digital governance frameworks (e.g., IGF, WSIS+20, Global Digital Compact) be leveraged to address AI inequalities?
B. What barriers prevent marginalized communities from participating in AI policy discussions, and how can these be overcome?
C. What role do multistakeholder collaborations play in ensuring AI policies are inclusive and equitable?
D. How can governments, the private sector, and civil society work together to mitigate AI-driven inequalities?
E. What are some successful models of AI governance that center the needs of underrepresented communities?
What will participants gain from attending this session?
- A deeper understanding of how the AI divide affects marginalized communities and why global cooperation is critical.
- Insights into existing digital cooperation frameworks, such as WSIS+20, the Global Digital Compact, and the Africa-Asia Policy Maker Network, and how these initiatives foster AI inclusion through cross-regional knowledge exchange and policy development.
- Best practices from governments, civil society, and grassroots initiatives working to create equitable AI policies.
- Actionable recommendations for strengthening AI governance in their own regions or organizations.
Description:
Artificial Intelligence (AI) is rapidly transforming societies, but its benefits and risks are unevenly distributed, especially among marginalized communities such as those in the Global South, Indigenous populations, persons with disabilities, LGBTQIA+ communities, and other underserved groups. These communities often face significant barriers to accessing AI technologies and are disproportionately affected by their potential harms.
In Indonesia, efforts to bridge the digital divide and promote inclusive AI governance have been strengthened by initiatives like FAIR Forward – Artificial Intelligence for All, which equips policymakers with the AI knowledge needed to advance responsible AI use and development. Community-based initiatives such as the School of Community Networks and the annual Rural ICT Camps have also played a crucial role in enhancing digital connectivity in rural areas; for instance, the 2023 Rural ICT Camp in Pulo Aceh, hosted by Common Room, focused on rural resilience and digital inclusion. These initiatives highlight the importance of multi-stakeholder dialogue in fostering inclusive AI governance.
By leveraging existing frameworks and regional internet governance efforts, such as National and Regional Initiatives (NRIs), stakeholders can work together to ensure that AI policies and systems are ethical, inclusive, and centered on social equity. This collaborative approach aims to produce strategic, action-oriented recommendations that build resilience within at-risk communities and ensure that the benefits of AI are equitably distributed. Through this workshop, participants will engage in a multi-stakeholder dialogue, sharing best practices, addressing challenges, and identifying actionable steps toward a more inclusive AI governance model. Prioritizing marginalized voices in these discussions is critical for developing balanced, inclusive, and risk-based approaches to AI governance that benefit humanity as a whole.
Expected Outcomes
- Increased Awareness: Greater understanding among stakeholders of the AI divide and its implications for marginalized communities.
- Policy Recommendations: Concrete suggestions on how global digital cooperation mechanisms can address AI inequality.
- Multistakeholder Engagement: Stronger collaboration between governments, civil society, academia, and the private sector to drive inclusive AI governance.
- Next Steps for IGF and NRIs: Proposals on how the IGF and national/regional initiatives (NRIs) can better integrate AI governance into their work.
Hybrid Format: To ensure an engaging hybrid session, we will leverage Zoom for live participation, as provided by the IGF Secretariat. This will integrate online and onsite attendees through video and chat features, fostering seamless interaction. A dedicated online moderator will actively facilitate virtual engagement, ensuring remote voices are equally represented in discussions.
We will enhance interactivity using Slido for live polling and Q&A, enabling real-time feedback from all participants. Additionally, Google Docs will serve as a collaborative space where attendees can share insights asynchronously, ensuring broader participation beyond verbal contributions.
To create an inclusive and engaging experience, a large screen will display online participants, making them a visible part of the conversation. Organizers will also ensure that all remote speakers have stable internet and proper video/audio setups. These measures will help bridge the gap between online and onsite participation, fostering meaningful dialogue and collaboration.