Session
Organizer 1: Technical Community, Latin American and Caribbean Group (GRULAC)
Organizer 2: Civil Society, Latin American and Caribbean Group (GRULAC)
Organizer 3: Civil Society, Latin American and Caribbean Group (GRULAC)
Organizer 4: Government, Western European and Others Group (WEOG)
Organizer 5: Civil Society, Latin American and Caribbean Group (GRULAC)
Speaker 1: Kari Laumann, Government, Western European and Others Group (WEOG)
Speaker 2: Laura Galindo Romero, Private Sector, Latin American and Caribbean Group (GRULAC)
Speaker 3: Nitin Sawhney, Technical Community, Western European and Others Group (WEOG)
Speaker 4: Armando Guío, Technical Community, Western European and Others Group (WEOG)
Format
Roundtable
Duration (minutes): 60
Format description: The roundtable format and 60-minute duration are ideally suited to this session, as they foster an interactive and inclusive environment for meaningful dialogue among diverse stakeholders. A roundtable layout encourages open discussion and engagement, enabling participants to share their perspectives on the complexities of AI Regulatory Sandboxes (AIRS) and other AI design contexts based on their roles and experience. The 60-minute timeframe allows for a well-structured session: a brief introduction to set the stage, facilitated discussion of the three main policy questions, and concluding collaborative reflections to generate actionable takeaways. This format promotes in-depth exploration of the challenges and opportunities in multistakeholder collaboration within AIRS. By prioritizing engagement and knowledge exchange, the roundtable ensures that participants can actively contribute, debate, and co-create strategies, resulting in a richer understanding of inclusive and responsible innovation practices.
Policy Question(s)
What are key use cases in which participatory methods have successfully contributed to fostering responsible innovation in AI Regulatory Sandboxes (AIRS) or other AI development contexts, and what lessons can be drawn from them?
What are the main challenges to ensuring an inclusive and participatory approach in AIRS, such as issues of confidentiality, trust, regulatory capture, and efficiency, and what strategies can help address these obstacles effectively?
How can the multistakeholder model of Internet governance inform the design and implementation of AI Regulatory Sandboxes to enhance collaboration, inclusion, and accountability?
What will participants gain from attending this session? Participants will gain a deeper understanding of how multistakeholder collaboration can enhance the design and implementation of AI Regulatory Sandboxes (AIRS) to foster responsible innovation. Drawing from global recommendations, such as the OECD’s guidance on AIRS, and established frameworks like Stilgoe et al.’s Responsible Innovation framework, participants will explore the importance of inclusion in sandbox experimentation.
They will engage in discussions on overcoming challenges such as confidentiality, trust deficits, and process complexity, while analyzing successful participatory approaches from various policy contexts. Additionally, participants will uncover valuable lessons from the multistakeholder model of Internet governance and its potential to inform inclusive and effective AIRS practices. This session aims to equip stakeholders with actionable insights and strategies to collaboratively address the ethical, legal, and societal implications of AI systems.
Description:
Regulatory sandboxes are collaborative regulatory instruments in which regulators and innovators interact in a safe environment to test and understand emerging technologies, such as AI systems, before formal regulations are established (Ranchordas & Vinci, 2024). These sandboxes hold immense potential to foster responsible innovation by facilitating co-creation among diverse stakeholders, including regulators, innovators, civil society, and academia. However, ensuring that such collaboration is inclusive and participatory requires careful consideration of its complexities and risks.
AI systems, given their multipurpose and cross-sectoral impact, demand a multidisciplinary and multistakeholder approach. The OECD has emphasized this in its guidance on AI Regulatory Sandboxes (AIRS), highlighting the importance of engaging firms, regulators, competition authorities, intellectual property offices, and data protection authorities, among others. Additionally, Stilgoe et al.'s (2013) framework for responsible innovation stresses inclusion as a key element, urging that citizens and their representatives, who are often the ultimate users of or subjects affected by AI, be integral to these processes.
Despite these recommendations, practical challenges persist. Issues such as confidentiality and IP rights, trust deficits between regulators and regulatees, regulatory capture concerns, and the potential slowing of experimentation due to increased complexity must be addressed. Drawing on participatory approaches from other policy domains and leveraging insights from the multistakeholder model of Internet governance may provide valuable solutions.
This workshop will convene a roundtable that brings a multistakeholder group together with participants from the audience to discuss the role of multistakeholderism in AIRS, and more specifically to:
1. Identify use cases where participatory methods have contributed to the development of responsible innovation in AI regulatory sandboxes.
2. Examine the challenges of inclusive, multistakeholder processes in sandboxes, proposing actionable workarounds for confidentiality, trust, capture, and efficiency issues.
3. Explore lessons from the multistakeholder model of Internet governance, assessing their applicability to the design and implementation of AI Regulatory Sandboxes.
Expected Outcomes
Enhanced Understanding: Participants will gain a comprehensive understanding of how multistakeholder collaboration can strengthen the design and implementation of AI Regulatory Sandboxes, fostering responsible innovation.
Actionable Strategies: Attendees will leave with practical strategies and tools to address challenges in creating inclusive and participatory regulatory sandboxes, such as managing confidentiality and trust issues.
Cross-Process Learning: The workshop will promote knowledge exchange by identifying transferable lessons from successful participatory methods in other policy domains and the multistakeholder Internet governance model.
Collaborative Frameworks: Discussions will generate recommendations for collaborative frameworks that enhance the inclusivity, accountability, and effectiveness of regulatory sandboxes, providing stakeholders with actionable pathways to improve AI governance.
Hybrid Format: To ensure effective interaction in this hybrid session, we will have two moderators: one facilitating the overall discussion and another dedicated to engaging the online audience. This dual approach will ensure that onsite and online participants have equal opportunities to contribute and interact.
The session will use a shared interactive platform (e.g., Mentimeter or Slido) to enable real-time Q&A, live polling, and comments. Both onsite and online participants will use these tools to submit questions or share insights, ensuring an inclusive dialogue.
The online moderator will actively monitor the chat and voice contributions from online attendees, ensuring their input is integrated seamlessly into the discussion. Onsite participants will also be encouraged to consider and respond to online contributions, creating a dynamic and inclusive exchange. A clear structure will guide the session, maximizing engagement across both formats while ensuring equal participation opportunities.