Session
Organizer 1: David Wright
Organizer 2: Sophie Mortimer, SWGfL
Organizer 3: Boris Radanovic, South West Grid for Learning
Speaker 1: Nighat Dad, Civil Society, Asia-Pacific Group
Speaker 2: Karuna Nain, Civil Society, Asia-Pacific Group
Speaker 3: Boris Radanovic, Civil Society, Western European and Others Group (WEOG)
Speaker 4: Cindy Southworth, Private Sector, Western European and Others Group (WEOG)
David Wright, Civil Society, Western European and Others Group (WEOG)
Sophie Mortimer, Civil Society, Western European and Others Group (WEOG)
Classroom
Duration (minutes): 90
Format description: The session will break into small discussion groups, each given a challenge to discuss, debate, and report back on.
How can international collaborations enhance the effectiveness of legal frameworks in addressing AI-facilitated NCII abuse?
In what ways can technology platforms be held accountable for preventing and responding to deepfake-enabled gender-based violence?
What innovative policy measures can be implemented to support victims of NCII abuse and prevent its occurrence in the digital sphere?
What will participants gain from attending this session? Attendees will gain insight into the complexities of AI and deepfake technologies in NCII abuse, including current challenges, prevention strategies, and legal frameworks. Participants will learn about tools and platforms designed to combat NCII abuse, equipping them to protect and advocate for victims of digital gender-based violence. The session will also provide networking and collaboration opportunities among stakeholders working to create a safer digital environment.
Description:
In the rapidly evolving digital age, the intersection of artificial intelligence (AI) and Non-Consensual Intimate Image (NCII) abuse poses unprecedented challenges and risks, particularly concerning gender-based violence. This workshop, titled "Bridging Gaps: AI & Ethics in Combating NCII Abuse," aims to dissect and address the intricacies of AI-facilitated NCII abuse, including deepfake technology's role in exacerbating gender-based online harassment. Drawing on pivotal research, including the Revenge Porn Helpline’s 2022 report and insights into the state of deepfakes, the session will explore innovative solutions and strategies to mitigate risks and safeguard individuals against digital gender-based violence. Highlighting initiatives like StopNCII.org and TakeItDown.ncmec.org, the workshop will convene world-leading experts from diverse fields—policy, industry, and NGOs—to offer a multidimensional perspective on combating NCII abuse. Through panel discussions followed by interactive group feedback, the session is designed to showcase concrete impacts and foster collaborative action, aligning with the 2023 roadmap of the Global Partnership for Action on Gender-Based Online Harassment and Abuse.
The session aims to catalyze a unified approach to tackling AI-enabled NCII abuse, promoting the adoption of comprehensive legal measures, technology solutions, and global policies. Expected outcomes include a consensus on best practices for prevention, support for victims, and a call for enhanced global cooperation. Specific outputs will encompass a summary report detailing actionable insights, policy recommendations, and a roadmap for future collaborative initiatives and research. This workshop is poised to make a significant contribution to ongoing global efforts against gender-based online harassment and abuse, informing both policy development and technological innovation.
Hybrid Format: Having successfully organised and presented a Hybrid workshop at the 2022 IGF, the organisers benefitted greatly from the preparatory support provided by the IGF. The role of the online moderator was key in monitoring and representing the online participants within the discussion, intervening and tabling their comments. It is expected that some of the key participants will also participate remotely and their contribution will be projected into the room, a function managed by the onsite moderator.
Report
Scaling Ethical AI for Global NCII Protection: Leveraging AI for Non-Consensual Intimate Image (NCII) detection and prevention must prioritize transparency, victim-centered design, and global inclusivity. Industry stakeholders, NGOs, and policymakers must collaborate to establish governance frameworks and ensure AI systems are sensitive to diverse cultural, legal, and linguistic contexts within the next two years.
Transparency and Accountability in AI Development: AI tools combating NCII abuse should undergo regular third-party audits, with clear mechanisms for users to challenge or appeal decisions. Industry-wide adoption of ethical standards for training datasets and transparency in AI governance is critical to maintaining trust and safety.
Addressing the Rising Tide of NCII Issues through Global Collaboration: The surge in NCII cases, exacerbated by the misuse of AI technologies, highlights the urgent need for international cooperation. Initiatives such as the draft UN Cybercrime Convention and the UNODC Global Strategy are pivotal for creating comprehensive legal frameworks and strategies to combat NCII abuse on a global scale.
To Governments and Policymakers: Establish and enforce global ethical standards for AI in NCII detection by 2026, ensuring frameworks account for cultural and legal diversity. Policymakers should integrate measures from the draft UN Cybercrime Convention and the UNODC Global Strategy to create cohesive international regulations addressing NCII abuse.
To Tech Industry and NGOs: Partner with StopNCII.org to adopt its hash technology for NCII prevention. Invest in AI research prioritising victim-centered, privacy-preserving solutions. Collaborate globally to ensure diverse datasets, robust reporting mechanisms, and alignment with international strategies like the UNODC Global Strategy by 2025.
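The hash-matching approach referenced above can be illustrated with a simplified sketch. In practice, StopNCII.org uses perceptual hashing so visually similar images still match; the cryptographic hash below is a stand-in chosen only to show the privacy-preserving property that matters for victims: the hash, never the image itself, is what leaves the device and is shared with platforms. All function names here are illustrative, not the real StopNCII API.

```python
import hashlib

def local_image_hash(image_bytes: bytes) -> str:
    """Compute a digest on the user's own device; the image is never uploaded.
    Real deployments use perceptual hashes so near-duplicates also match;
    SHA-256 is used here purely to illustrate the flow."""
    return hashlib.sha256(image_bytes).hexdigest()

def platform_should_block(candidate_bytes: bytes, hash_blocklist: set[str]) -> bool:
    """A participating platform checks uploads against the shared hash list,
    not against raw images, so it never needs to hold the original content."""
    return local_image_hash(candidate_bytes) in hash_blocklist

# A victim submits only the hash of their image (placeholder bytes here).
victim_image = b"...victim-image-bytes..."
blocklist = {local_image_hash(victim_image)}

# Re-uploads of the identical file are caught; unrelated content passes.
print(platform_should_block(victim_image, blocklist))        # True
print(platform_should_block(b"unrelated image", blocklist))  # False
```

The design choice worth noting is that matching happens against hashes contributed by victims themselves, which is why the approach is described as victim-centered and privacy-preserving.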
The IGF 2024 workshop, “Bridging Gaps: AI & Ethics in Combating NCII Abuse,” brought together leading experts to explore the ethical use of AI against NCII abuse. Panelists included advocates, tech industry leaders, and NGO representatives who shared insights into leveraging AI for NCII detection, prevention, and victim support.
Key Discussion Points
- Ethical Challenges in AI Implementation: Discussions underscored the necessity of victim-centered AI tools, emphasizing the balance between technological intervention and safeguarding victim privacy. Concerns over cultural and linguistic bias in existing AI models were highlighted, with calls for broader and more inclusive datasets.
- Transparency and Governance: Participants stressed the importance of governance frameworks for AI, including transparency in its use and third-party audits to ensure fairness and accountability. Panellists identified the need for clear user appeal mechanisms and proactive educational initiatives to build trust in AI systems.
- Victim and Survivor-Centric Design: Effective AI solutions must empower users without compromising their autonomy. Tools like StopNCII.org were highlighted as exemplary in integrating privacy-preserving technologies, demonstrating that AI can deter NCII while respecting victim agency.
- Rising Tide of NCII and Global Collaboration: The session highlighted the escalating prevalence of NCII abuse, with new AI-driven threats such as deepfakes and sextortion schemes targeting diverse demographics. The importance of global initiatives like the draft UN Cybercrime Convention and the UNODC Global Strategy was emphasized as critical to building cohesive international frameworks to combat these challenges.
- Collaboration and Research: A recurring theme was the importance of multi-stakeholder collaboration among governments, NGOs, and the tech sector. Investments in AI research and development were deemed essential, particularly to address evolving threats and ensure alignment with international strategies like the UNODC Global Strategy.
Proposed Actions
- Foster global cooperation in establishing ethical guidelines and governance for AI in NCII prevention, integrating provisions from the draft UN Cybercrime Convention and the UNODC Global Strategy.
- Expand platforms like StopNCII.org to integrate AI capabilities that address NCII across diverse cultural and legal landscapes.
- Promote victim-informed solutions by involving survivors in the design and implementation of AI tools.
Conclusion
The session underscored the urgency of addressing the rising tide of NCII abuse and the pivotal role of global frameworks like the draft UN Cybercrime Convention and the UNODC Global Strategy. Collaborative, forward-looking strategies will be essential in harnessing AI responsibly and effectively. Participants called for immediate action to develop transparent, inclusive, and victim-centered AI frameworks that can adapt to the rapidly changing landscape of online harms.