Session
Organizer 1: Civil Society, Asia-Pacific Group
Organizer 2: Technical Community, Asia-Pacific Group
Speaker 1: Pranav Bhaskar Tiwari, Civil Society, Asia-Pacific Group
Speaker 2: Ivar Hartmann, Civil Society, Western European and Others Group (WEOG)
Speaker 3: Chelsea Horne, Civil Society, Western European and Others Group (WEOG)
Format
Roundtable
Duration (minutes): 90
Format description: A roundtable fosters open dialogue on AI bias, allowing attendees from diverse backgrounds to share real-world examples and challenges specific to their regions. It feels less like a lecture and more like an open, engaging dialogue. A 90-minute session ensures that discussions can be comprehensive and action-oriented, and motivates participants to share key insights concisely, leading to a more dynamic discussion. The session will begin with a presentation of the issue, followed by guided discussions and brainstorming activities. This structure ensures participants not only gain knowledge but also contribute their own perspectives and ideas.
Policy Question(s)
1) What mechanisms can help improve transparency and accountability in AI-generated content?
2) What measures have governments and companies adopted to reduce biased outcomes from automated systems? Can these measures be made more inclusive while balancing innovation and regulation?
3) In what ways can public participation contribute to reducing bias in AI systems?
What will participants gain from attending this session?
1) Knowledge of emerging techniques for bias detection, algorithmic transparency, and fairness metrics to improve AI-driven decision-making.
2) Practical discussions on shaping policies that ensure AI systems are equitable, explainable, and aligned with human rights principles.
3) Understanding of region-specific challenges in mitigating AI bias, especially in linguistically and culturally diverse populations, and strategies for inclusive AI development.
SDGs
Description:
As AI systems increasingly mediate access to essential services—healthcare, finance, education, and governance—bias in algorithms is no longer a theoretical concern but a structural challenge with real-world consequences. This workshop convenes policymakers, industry leaders, and civil society representatives to discuss AI bias in high-stakes decision-making systems, particularly in the culturally and linguistically diverse Asia-Pacific region. Participants will engage in a critical analysis of algorithmic bias, focusing on its origins in data selection, model training, and deployment. Discussions will center on the role of AI in perpetuating systemic discrimination and the need for accountability in AI-driven decision-making. The workshop will also provide a collaborative space to refine ethical, legal, and regulatory frameworks that address these challenges while balancing innovation and fairness. A key focus will be the limitations of current bias-mitigation techniques and the emerging best practices for ensuring algorithmic transparency, explainability, and auditability. Stakeholders will work towards actionable strategies to operationalize fairness metrics, improve bias detection mechanisms, and embed equity considerations into AI governance policies. With a forward-looking approach, this session will contribute to building a foundation for more robust, context-sensitive AI systems that uphold fairness and human rights. Attendees will leave with a deeper understanding of how to translate principles of responsible AI into scalable and enforceable frameworks for both private and public sector deployments.
Expected Outcomes
1) Participants will collaboratively refine key principles for mitigating AI bias, contributing to ongoing discussions on ethical AI governance.
2) Strengthened engagement between policymakers, industry leaders, and civil society to drive responsible AI development.
3) Identification of region-specific challenges and solutions for mitigating AI bias, ensuring inclusive and culturally sensitive AI systems.
4) Discussions will inform national and regional regulatory efforts, contributing to AI governance initiatives within the IGF and beyond.
Hybrid Format: We will ensure seamless interaction between onsite and online participants by using a dedicated hybrid moderator to balance engagement. The onsite moderator will facilitate discussions, while an online co-moderator will monitor chat activity, relay questions, and ensure remote speakers are included.
To create an inclusive experience, we will use Zoom’s interactive features (polls, Q&A, breakout rooms) and enable live captioning for accessibility. Speakers will alternate between onsite and online participation to ensure diverse engagement.
To enhance interactivity, we will use Mentimeter or Slido for real-time polls and audience input, allowing both onsite and online attendees to contribute equally. A shared Google Doc will also be available for collaborative note-taking and resource sharing.
Finally, we will coordinate with online speakers beforehand to ensure they have stable connectivity and video participation capabilities for a smooth hybrid experience.