Session
Organizer 1: Private Sector, Western European and Others Group (WEOG)
Speaker 1: Anne McCormick, Private Sector, Western European and Others Group (WEOG)
Speaker 2: Narayanan Vaidyanathan, Private Sector, Asia-Pacific Group
Speaker 3: Charlie Martial Ngounou, Civil Society, African Group
Speaker 4: Wan Sie LEE, Government, Asia-Pacific Group
Format
Theater
Duration (minutes): 60
Format description: The session will consist of two halves of thirty minutes each. During the first 30 minutes, the speakers will share their insights with the participants and take questions. In the second half the roles will be reversed: the audience will be asked to share their experiences and use cases, on the basis of which the speakers will seek to draft some initial recommendations.
Policy Question(s)
1. What are the minimum policy requirements for AI assessments to provide reliable information on the trustworthiness of AI systems used in high risk/impact applications?
2. How can AI assessment frameworks support the adoption of AI as tool towards meeting the 2030 Sustainable Development Goals?
3. How can AI governance and assessment standards help to reduce the global digital divide?
This workshop supports the UN Global Digital Compact ‘Objective 5’ to enhance international governance of AI, with a specific focus on clause 55 on advancing AI governance and capacity building toward the use of AI for advancing the SDGs.
What will participants gain from attending this session? Participants will gain practical insights to help them better understand what AI assessments can and cannot do for them; how to evaluate whether an AI assessment is providing the information that meets their needs; and what to include in their national or corporate/organizational policies in order to set the foundations for reliable AI assessment outcomes.
Participants will also have the opportunity to share potential use cases and connect for participation in a future study on AI assessment methods tailored to the needs of Global South stakeholders.
SDGs
Description:
AI tools and products are increasingly being adopted in mission-critical elements of service delivery in both the public and private sectors. To manage the potential risks of this use of AI, policymakers, civil society and business leaders are emphasising the need for reliable implementation of AI governance processes, such as AI management systems (e.g. ISO/IEC 42001) and AI risk management frameworks (e.g. the NIST AI RMF). This need is especially acute for uses of AI systems where failure would result in significant negative human-rights impacts. Good practice for the governance of critical systems in other domains (e.g. cybersecurity, IT safety) has established the need to pair management standards and frameworks with reliable assessment procedures that verify correct implementation of the governance processes and identify potential gaps or improvements.
In this workshop we will discuss the growing field of AI system governance and assessment, identifying the policy elements necessary to ensure that governance and assessment of AI can reliably deliver the safeguards needed for confident use of AI in high-impact use cases. We will review:
• current policy approaches (both mandatory and voluntary) to the use of AI governance and its assessment;
• progress in establishing standards and minimum requirements for reliable verification of AI systems;
• considerations regarding the skills, training and expertise needed from AI assessment providers;
• special challenges that need to be considered when establishing governance and assessment frameworks for the use of AI in the Global South.
The diverse panel of speakers will seek to connect with the audience on identifying additional challenges and use cases for future studies to improve the state of the art in AI assessments.
Expected Outcomes
The exchange between the panel and the audience during this workshop will provide participants with practical insights to help them better understand:
• how to select an appropriate assessment framework when evaluating AI systems;
• how to evaluate if an assessment is providing the information that meets their needs;
• what to look for in an assessment provider.
Further concrete outcomes will include the creation of a stakeholder group for the drafting of key considerations to address Global South specific needs related to AI assessments and governance.
Hybrid Format: Prior to the session organizers will make use of the session’s page on the IGF website to share preparatory material and kick-start a dialogue.
A preparation call will be organised for all speakers, moderators and co-organisers so that everyone has the chance to meet and prepare for the session.
During the session the onsite and online moderators will work to integrate onsite and online attendees into a single discussion. Onsite participants will be encouraged to connect to the online platform to stay informed and engage with discussions in the chat.