Session
Organizer 1: Civil Society, Western European and Others Group (WEOG)
Organizer 2: Civil Society, Western European and Others Group (WEOG)
Speaker 1: Barbotte Daphne, Government, Western European and Others Group (WEOG)
Speaker 2: Bhatia Aliya, Civil Society, Western European and Others Group (WEOG)
Speaker 3: Charles Bradley, Private Sector, Western European and Others Group (WEOG)
Format
Roundtable
Duration (minutes): 60
Format description: We have chosen a roundtable-style layout, successfully used in previous sessions, to accommodate a large audience while fostering interactive dialogue. The open seating arrangement enhances accessibility and encourages diverse participation from civil society, policymakers, and technical experts. A 60-minute session strikes the right balance between depth and engagement, allowing for structured speaker inputs, interactive discussion, and audience participation without losing focus. Given the complexity of automated content moderation, this format ensures participants gain practical insights while maintaining an inclusive and dynamic space for debate and knowledge exchange.
Policy Question(s)
A. How can automated content moderation be designed and developed to respect different cultural norms and values across the world, especially in the Global Majority?
B. What are some human rights risks specific to LLM-driven content moderation?
C. Should there be any specific regulation of automated content moderation, including LLMs? If so, what international legal instruments or norms would we want to protect human rights in this context?
What will participants gain from attending this session? This multistakeholder, interactive session invites participants to collaboratively examine the human rights impacts of GenAI systems and LLMs, while exploring strategies for prevention, mitigation, and remedy in line with evolving regulations and transparency obligations. A key focus is the importance of demographic and regional diversity within AI development teams, and we look forward to learning from participants across sectors and disciplines.
The session also underscores the vital role of civil society and marginalized communities in shaping AI-driven content moderation, particularly in the Global Majority. We encourage attendees to share insights from their local contexts, especially in high-risk situations such as conflict or crisis. Importantly, this space is designed to support activists and community members in strategizing effective advocacy and organizing efforts. By fostering dialogue and collaboration, we aim to strengthen civil society’s influence in AI governance and ensure technology upholds human rights and civic space.
Description:
The integration of emerging technologies into existing content governance systems is still at an early stage. Nevertheless, there is a race to finance, build, and deploy them with little to no understanding of their implications for human rights. Focusing on Generative AI (GenAI) and its underlying technologies, such as foundation models and large language models (LLMs), we will explore the human rights impacts of these systems in content governance and what AI developers and policymakers can do to mitigate harm. We will look at how these systems are deployed in the Global Majority, where considerations of local language, context, and cultural nuance are critical, e.g., during elections in fragile democracies or in conflict zones. As platforms increasingly rely on these technologies for content governance, they risk unintentionally suppressing legitimate content while fueling violence online, disproportionately affecting marginalized groups. While AI systems are primarily designed and developed in the U.S., Western Europe, and China, they are used around the world without meaningful involvement of local communities, especially marginalized groups. Our focus will be on civic freedoms, including the right to privacy; freedom of expression, opinion, and information; assembly and association; non-discrimination; and procedural safeguards such as stakeholder engagement, transparency, and remedy. We will center the needs of, and risks to, racialized persons, women and non-binary persons, LGBTQIA+ people, migrants and refugees, disabled persons, children and the elderly, and people of lower socioeconomic status. We will unpack this topic using Discord as a case study: the platform is currently piloting ECNL’s framework for meaningful engagement in AI while developing machine learning interventions to enforce its bullying and harassment policies, with a focus on children and teens.
The pilot’s findings apply to broader questions around the use of GenAI and LLMs for content governance, which we will explore collectively during the session.
Expected Outcomes
Participants’ insights will serve as a catalyst for future advocacy and responsible AI development, particularly in content moderation. By bringing together diverse perspectives, we aim to strengthen civil society’s role in shaping AI policies and practices that prioritize human rights. As LLMs and GenAI systems are rapidly deployed across social media and digital platforms, this moment presents both an urgent challenge and a critical opportunity to ensure civil society is meaningfully included in decision-making.
Centering human rights and civic space in AI development requires ongoing engagement, and this session is a step toward fostering long-term collaboration between civil society, technologists, and policymakers. To sustain momentum, we will compile key takeaways into an outcome document, guiding further research, advocacy, and knowledge-sharing throughout the year. We will also continue to actively engage with digital platforms, building on our pilot with Discord.
Hybrid Format: To ensure seamless interaction, both the onsite and online moderators will be physically present in the room, coordinating engagement between in-person and remote participants. The onsite moderator will lead discussions and manage speaker interventions, while the online moderator will monitor the virtual chat, relay questions, and facilitate real-time interventions. A dedicated online participation queue will ensure remote attendees can contribute equally.
To enhance engagement, nearly half of the session will be allocated to audience participation, including Q&A, live polling, and direct interventions from both onsite and online participants. We will use Slido for real-time input and the IGF chat function to integrate remote contributions, ensuring an inclusive and interactive hybrid session.