IGF 2023 Lightning Talk #83 Multistakeholder Model for Terrorist Content & Generative AI

Sunday, 8th October, 2023 (01:20 UTC) - Sunday, 8th October, 2023 (01:50 UTC)
SC – Room H

Global Digital Governance & Cooperation
Role of IGF

Department of Prime Minister and Cabinet, New Zealand Government and Ian Axford Fellowship in Public Policy
Rachel Wolbers, Ian Axford Fellow with the Department of Prime Minister and Cabinet in New Zealand, on sabbatical as Head of Global Engagement for Meta's Oversight Board. Private sector American, based in New Zealand.

Onsite Moderator

Rachel Wolbers

Online Moderator

David Reid


Rachel Wolbers



Targets: The paper and talk will examine SDG Target 17.16, which creates a framework for global multistakeholder partnerships that share knowledge, expertise, and technology to promote human rights and create a safer online environment for all internet users. The talk will explore how we can use existing multistakeholder tools to address new challenges. The discussion will also address Target 17.17 by setting out a roadmap for further public-private and civil society partnerships on content moderation and generative AI.


Lightning talk on the paper, followed by a robust Q&A discussion.

This lightning talk will discuss the paper I am writing as part of my Ian Axford Fellowship in New Zealand, working with the Department of Prime Minister and Cabinet. The paper examines the multistakeholder approach taken by the Christchurch Call to Action after the horrific events of March 15, 2019, when a terrorist live-streamed his brutal attack on two mosques in Christchurch, New Zealand, killing 51 people. In the aftermath, the New Zealand and French governments worked with tech companies and civil society to outline a set of 25 commitments to eliminate terrorist and violent extremist content online. In the past four years, the Call has made significant progress using a multistakeholder model to bring new ideas to the community, and it has adapted its strategy as technology and trends change. The next big challenge in ensuring that terrorists and extremists are not radicalized to violence will be understanding the impact, both positive and negative, of generative AI. My paper and lightning talk will cover how the Call can lead a multistakeholder initiative to understand the problem and find solutions to this challenge.

The paper, due on August 1, is broken into four parts. First, I lay out the history of multistakeholderism in internet governance to identify best practices, drawing heavily on the work of the IGF and other internet governance multistakeholder bodies over the past 20 years. Second, I look at the unique challenges of content moderation and how generative AI will affect the trust and safety industry more broadly. Third, I develop a set of best practices for multistakeholder initiatives addressing content moderation, using case examples such as the IGF, NETmundial, the IETF, the GNI, and others. Finally, I provide a set of next steps for the Christchurch Call to Action team. This paper is the result of spending six months working with this team and will hopefully serve as a guide to implementing the recommendations.
It also draws on my experience working on multistakeholder solutions to content moderation at the Oversight Board, the US Department of Commerce's National Telecommunications and Information Administration, the US House of Representatives, and Engine Advocacy, a non-profit organization that works with internet startups. The lightning talk will draw on the IGF 2023 issue areas, including connecting all people and safeguarding human rights, because moderating terrorist and extremist content is intrinsically linked to freedom of expression and civil and political rights. Additionally, it will address how to ensure safety, security, and accountability by looking at illegal and harmful content as well as trust and accountability measures. Finally, it will look at new technologies, specifically generative AI.

I plan to speak for 15 minutes and then take questions from the audience, whether in person or online. Ideally, I will present in person with two or three slides. I would like to make the discussion more interactive by inviting audience participation and feedback.