Session
Organizer 1: Private Sector, Western European and Others Group (WEOG)
Organizer 2: Private Sector, Latin American and Caribbean Group (GRULAC)
Organizer 3: Private Sector, Western European and Others Group (WEOG)
Speaker 1: Charles Bradley, Private Sector, Western European and Others Group (WEOG)
Speaker 2: Luciana Benotti, Civil Society, Latin American and Caribbean Group (GRULAC)
Speaker 3: Dominique Lazanski, Private Sector, Western European and Others Group (WEOG)
Speaker 4: Jayantha Fernando, Private Sector, Asia-Pacific Group
Format
Roundtable
Duration (minutes): 90
Format description: A 90-minute roundtable format is ideal for this session as it allows for interactive and high-impact discussions on the governance of agentic AI. Unlike a typical panel discussion, a roundtable encourages multi-stakeholder dialogue and allows participants to exchange diverse perspectives. This format also allows for the exploration of key concerns such as accountability frameworks and ethical safeguards, while ensuring a collaborative approach to solutions. By keeping the session interactive, participants remain actively engaged, fostering the development of actionable outcomes rather than passive listening. The roundtable format maximizes participation and ensures that discussions remain focused and productive. Ninety minutes provides enough time for robust discussion, but not so long that it loses momentum.
Policy Question(s)
1. What safeguards are needed to adequately protect user privacy, safety and autonomy in a world with AI agents?
2. In balancing AI autonomy with human oversight, what do we need from legal frameworks in order to achieve this balance and provide recourse and accountability? Are current legal frameworks adequate, or are new ones needed?
3. What policy, regulatory and cultural innovations are necessary to address concerns around user safety, personification of AI and other potential social ramifications of agentic AI?
What will participants gain from attending this session? Get ready to dive into the world of AI-powered agents. From self-learning research assistants to financial trading bots and autonomous customer service systems, participants will explore real-world examples of these powerful technologies in action.
By the end of the session, you’ll have a clearer grasp of how these systems operate, where they’re headed, and—most importantly—why we must act now to address critical governance challenges before agentic AI becomes deeply embedded in society.
To keep things engaging and dynamic, participants will tackle real-world regulatory dilemmas through interactive, scenario-based discussions. Expect to grapple with tough questions, debate solutions, and gain a better perspective on AI governance through lively, moderated conversations.
SDGs
Description:
Imagine a world where AI doesn't just assist but acts—where autonomous systems research complex topics, make decisions, and carry out tasks on behalf of users without constant human intervention. This is not science fiction. It is Agentic AI, and it is already here. Capable of planning and performing a wide range of actions in line with a person's aims, AI agents could add immense value to people's lives and to society, serving as research analysts, personal assistants, customer concierges and more. While this technology is already showing great promise, we need to have a discussion about developing controls, standards and safeguards. As these systems reshape industries, economies, and societies, a critical question is: are we prepared to govern them responsibly?
During the 2024 IGF, we successfully conducted a workshop titled "Better products and policies through stakeholder engagement." It was a theoretical discussion about the role stakeholder engagement can play in the development of policies and products that are socially responsible, user-centric, and aligned with broader societal needs and values. For 2025, we want to take the theoretical a step further and apply this approach to Agentic AI. Our facilitators will work with audience members to discuss and address key topics such as:
*How do we ensure AI agents act in the human interest?
*What safeguards are needed to ensure ethical decision-making in Agentic AI?
*How do we balance AI autonomy with human oversight?
*How do we address the potential for AI misalignment and unpredictable behavior?
For the facilitators, we hope to gather real-world feedback that can inform current and future product development and policies. For participants, we hope they will gain a better understanding of the current state of agentic AI and how they can support sustainable and responsible approaches to governance.
Expected Outcomes
This session is designed to drive meaningful progress in addressing the risks of Agentic AI. Key outcomes will include:
*A comprehensive understanding of the challenges, particularly regarding accountability and unintended consequences.
*Practical insights into policy and governance frameworks that promote responsible AI.
*Strengthened collaboration across stakeholder groups to ensure sustained dialogue and action.
*A clear roadmap for future research and innovation to enhance AI oversight.
*Empowerment of participants to champion effective AI safeguards within their spheres of influence.
As a follow-up, we plan to take the findings back to product development teams to ensure feedback is incorporated into the product life cycle. We also hope to publish a blog post about the session and next steps.
Hybrid Format: Using Zoom will allow both onsite and online participants to see and hear each other. We will ask all participants, both in person and remote, to be logged in so that we can manage the question queue in a neutral manner. Our onsite and online moderators will be in constant communication to ensure that we can facilitate questions and comments from both onsite and online participants.
We will urge our speakers to use clear and concise language, avoid technical jargon, and provide context for all information discussed during the session to ensure that both onsite and online participants can follow along and understand the content.
Finally, we plan to reserve 40 minutes for audience interaction and will explore using Zoom's quick show-of-hands feature to ask questions and gather feedback from both onsite and online participants in real time.