Session
Organizer 1: Technical Community, Western European and Others Group (WEOG)
Organizer 2: Civil Society, Western European and Others Group (WEOG)
Organizer 3: Civil Society, Western European and Others Group (WEOG)
Organizer 4: Intergovernmental Organization, Intergovernmental Organization
Speaker 1: Isabel Ebert, Intergovernmental Organization, Western European and Others Group (WEOG)
Speaker 2: Caitlin Kraft-Buchman, Civil Society, Western European and Others Group (WEOG)
Speaker 3: Jhalak Mrignayani Kakkar, Civil Society, Asia-Pacific Group
Speaker 4: Patrik Hiselius, Private Sector, Western European and Others Group (WEOG)
Speaker 5: Min Thu Aung, Technical Community, Western European and Others Group (WEOG)
Format
Roundtable
Duration (minutes): 60
Format description: Following the speakers' presentations, the session will be structured to foster discussion and debate, ensuring active engagement among participants. We believe that combining expert insights with facilitated solution co-creation requires a minimum of one hour to allow for meaningful exchange. As the best approaches to integrating human rights into AI risk management remain uncertain, a roundtable is an ideal format for actively engaging with, discussing, and debating alternative approaches, with the aim of sharing expertise and collaboratively proposing potential solutions with both online and onsite participants.
Policy Question(s)
● How can existing and future regulations impacting tech companies ensure that AI governance frameworks fully integrate risk-based human rights due diligence?
● What additional mechanisms or incentives could be introduced to bridge the gap between current corporate AI governance practices and the expectations of civil society, academics, and end users regarding human rights?
● How can companies designing and deploying AI be held accountable for effectively identifying, assessing, and mitigating human rights risks?
● What new mechanisms might help rights-respecting AI become more widely accepted and practised given the current geopolitical context?
What will participants gain from attending this session? Participants will gain valuable insights into current trends, practices, and recommendations for incorporating human rights into AI risk management (including the use of HRDD for AI services); the current opportunities and challenges of integrating human rights into existing AI risk management paradigms, laws, and regulations; and how they might better integrate human rights considerations into their own AI-related work.
SDGs
Description:
Governments are increasingly requiring private sector actors deploying technology to undertake and implement risk management approaches that address, among other things, human rights-related risks. In the EU alone, the Digital Services Act obligates Very Large Online Platforms and Search Engines to undertake fundamental rights-based risk assessments of their services, the AI Act requires risk management systems that address fundamental rights for high-risk uses, and the Corporate Sustainability Due Diligence Directive (CSDDD) mandates that large companies, including those in the tech sector, conduct Human Rights Due Diligence (HRDD) on both upstream and downstream risks. Governments in Latin America, Africa, and Asia are also contemplating legislative approaches to the human rights responsibilities of AI companies.

AI developers, deployers, and benchmarking organizations have developed a range of AI-specific principles, model evaluation tools, risk/impact assessments, and technical risk mitigations. However, many of these fail to fully integrate human rights principles or to reference well-established frameworks for responsible business conduct, such as the OECD Guidelines for Multinational Enterprises and the UN Guiding Principles on Business and Human Rights. Given the significant risks and opportunities AI presents for human rights, this creates a gap between corporate practices and the expectations of governments, civil society organizations, academics, and end users, who advocate for or would benefit from a rights-based approach to AI governance, including the use of human rights-based methodologies to identify, assess, and mitigate AI-related risks.

This panel will explore current trends, practices, and recommendations for incorporating human rights into AI risk management by bringing together diverse perspectives: speakers include an AI model developer and deployer (Telenor), a multi-stakeholder initiative (GNI), civil society representatives (WATT, CCG), and a multilateral expert (OHCHR). The discussion aims to offer participants insights into integrating human rights considerations into their own AI-related risk management work.
Expected Outcomes
The session aims to offer recommendations on embedding risk-based human rights due diligence into both existing and emerging regulations affecting tech companies. It will explore potential mechanisms and incentives for better integrating human rights considerations into current corporate AI risk management frameworks. Discussions will also focus on accountability measures for companies developing and deploying AI, ensuring they address human rights risks in their AI operations. Finally, the session will propose strategies and recommendations for normalizing rights-based AI within the evolving geopolitical landscape. As policy questions around incorporating human rights into AI risk management are being debated globally in forums beyond the IGF, we envisage that the discussions and recommendations documented in this workshop will contribute to the global corpus of knowledge on the topic and, we hope, inform future regulations, risk management mechanisms, and corporate practices.
Hybrid Format: To ensure an engaging hybrid session, we will first request a screen/projection and in-room cameras with technical support, allowing a setup in which online and onsite participants are seamlessly visible and audible to each other. We will moderate the discussion so that online and onsite participants have equal opportunity to participate and contribute, by explicitly inviting responses from online participants at key points in the discussion; monitoring for and reading out typed questions at regular intervals, to accommodate online participants who prefer not to speak; and encouraging direct exchanges between onsite and online participants.