Session
Organizer 1: Civil Society, Eastern European Group
Organizer 2: Civil Society, Western European and Others Group (WEOG)
Organizer 3: Civil Society, Latin American and Caribbean Group (GRULAC)
Organizer 4: Civil Society, Latin American and Caribbean Group (GRULAC)
Speaker 1: Isabelle Lois, Government, Western European and Others Group (WEOG)
Speaker 2: Syed Nazakat, Private Sector, Asia-Pacific Group
Speaker 3: Ryan Ofman, Technical Community, Western European and Others Group (WEOG)
Format
Classroom
Duration (minutes): 60
Format description: The classroom layout and 60-minute duration are ideal for structured knowledge sharing. Presentations, case study analysis, and scenario-based exercises will foster interactive discussion of real-world AI detection challenges and encourage attendees to work in small groups on practical, high-quality AI detection solutions.
Policy Question(s)
A: What role can policy actors play in tackling threats emerging from the deceptive use of synthetic media and in facilitating the development of effective, accessible, and sustainable AI detection solutions?
B: What governance mechanisms are needed to promote adaptability, transparency, accountability, and fairness in the development and deployment of AI detection technologies?
C: How can policymakers support the creation of diverse and representative datasets to improve the effectiveness of AI detection tools across different languages, cultures, and contexts?
What will participants gain from attending this session? Participants will gain a deeper understanding of real-world uses of deceptive AI and of the limitations of AI detection tools, in particular how these tools perform across different global contexts and content types. The session will be an opportunity to examine the global landscape of deceptive AI use and existing technical solutions, and to identify critical gaps in the access to and usability of detection tools. Attendees will explore pathways toward more effective, accessible, and sustainable AI detection interventions. The workshop will highlight the need for AI interventions that are community-led, contextually relevant, and aligned with real-world trust and safety challenges and policy responses, and it will create a space for cross-sector dialogue that fosters multistakeholder collaboration. Participants will leave with practical insights, an equity-focused evaluation framework, and a stronger understanding of how public interest AI efforts can shape a more inclusive and resilient information ecosystem.
SDGs
Description:
A secure and inclusive digital information ecosystem demands robust safeguards against emerging threats to democracy and human rights. Post-hoc detection plays a critical role in real-time crisis mitigation, protecting media integrity, advancing media literacy, and supporting public trust. However, it often fails to serve key information actors working in high-stakes global contexts. Challenges such as gaps in training data for local languages and representations, technical constraints from compressed or low-quality media, and a lack of contextual understanding of local manipulation trends limit the usability of detection tools, particularly for information actors in the Global Majority.

We propose a workshop examining AI detection effectiveness through the lens of detection equity, drawing on the work of WITNESS's Deepfakes Rapid Response Force (DRRF), a groundbreaking initiative connecting frontline information actors with leading media forensics and deepfakes experts to deliver timely analysis of suspected deceptive AI content, and on our upcoming framework for the sociotechnical evaluation of AI detection tools.

This workshop brings together AI detection developers, information actors, and policymakers to explore global AI mitigation challenges through the lens of those most affected. The session will present key real-world cases analyzed by the DRRF to illustrate detection equity challenges and to examine why current detection methods succeed for some content while failing for other content. Through scenario-based discussions and exercises, participants will explore accessibility barriers, practical implementation strategies, and framework-driven solutions for improving AI detection. Our goal is to create a space to discuss how best to align detection practices with trustworthy AI principles in order to design more accessible, accountable, and effective solutions that support and empower the communities most affected by synthetic media threats.
Expected Outcomes
The session aims to bring the issue of detection equity to a broader audience and to highlight the significance of reliable AI detection tools in safeguarding the digital information ecosystem. We will use this space to engage a diverse group of technology, policy, and civil society stakeholders in discussing, workshopping, and testing practical solutions that foster an open, safe, and secure information ecosystem. We also hope to identify new approaches and best practices supporting the creation of accessible, community-led technical solutions, as well as potential partnerships to support WITNESS's advocacy for resilient and equitable detection systems through targeted policy, legislative, and other initiatives.
Hybrid Format: We will include an online moderator in our workshop. The online moderator will facilitate interaction between in-person and online participants by monitoring the chat and online questions and flagging questions and comments for the onsite moderator to share with the other participants. Similarly, during the exercises, the online moderator will oversee the discussions in the online breakout rooms and encourage participants to report their ideas back to the onsite speakers and attendees. During the exercises, online participants will use a Miro board to note down and organize ideas, while onsite participants will use sticky notes; the online moderator will summarize the main points and share them with the onsite participants. Prior to the workshop, we will email registered participants instructions on how to use the Miro board so that they can familiarize themselves with the tool ahead of time.