IGF 2023 Launch / Award Event #77 Role of Stakeholder Engagement in Implementing Human-centred AI Principles

The Dialogue
Kazim Rizvi, Founding Director - The Dialogue, Civil Society
Kamesh Shekar, Programme Manager - The Dialogue, Civil Society
Shruti Shreya, Programme Manager - The Dialogue, Civil Society
Laura Galindo-Romero, AI and Privacy Policy Manager - Open Loop at Meta, Private Sector
Raghav Arora, Meta, Private Sector

Speakers

Audrey Plonk, Head of Digital Economy Policy Division - Directorate for Science, Technology and Innovation - OECD, Intergovernmental Organisation
Miles Brundage, Head of Policy Research - OpenAI, Private Sector
Amber Sinha, Senior Fellow (Trustworthy AI) - Mozilla Foundation, Civil Society
Dr. Rumman Chowdhury, Harvard Berkman Klein Center Responsible AI Fellow, Academician
Pranav Bhaskar Tiwari, Empowerment Advisor, Internet Society, Civil Society

Onsite Moderator

Laura Galindo-Romero, AI and Privacy Policy Manager - Open Loop at Meta, Private Sector

Online Moderator

Shruti Shreya, Programme Manager - The Dialogue, Civil Society

Rapporteur

Kamesh Shekar, Programme Manager - The Dialogue, Civil Society

SDGs

9.5
9.b


Targets: Our proposed session enhances technological capabilities towards a more human-centric approach to AI. It also encourages innovation and research on topics related to the operationalisation of AI principles.

Moreover, the policy recommendations in our report, based on participants' experience in receiving, handling and implementing human-centred AI principles, contribute to developing a conducive policy environment that is agnostic to jurisdiction and geography.

Format

Presentation + Panel Discussion

Duration (minutes)
60
Language

English

Description

Efforts to make AI solutions ethical have picked up pace, with a proliferation of AI principles defined by various governments, intergovernmental organisations, academia, civil society and others. Though there are common themes across these principles, their interpretation, and the values they comprise, vary across contexts, organisations and AI systems. Against this backdrop, Open Loop and The Dialogue are delighted to host a session on “Role of Stakeholder Engagement in Implementing Human-centred AI Principles”, launching the key findings and recommendations from our report titled “Observations from Prototyping the Principle of Human-Centric AI: Recommendations and Way Forward”.

Currently, we are testing a policy prototyping program that guides and enables start-ups in India to implement the AI principle of human centricity, as enshrined in the national AI principles of India, the OECD AI Principles and elsewhere, in a way that accounts for local and regional cultural factors, with an emphasis on stakeholder engagement. The program follows the policy prototyping methodology developed by Open Loop. In particular, the Open Loop consortium is testing a framework and operational guidance (which together form the “policy prototype”) for identifying and incorporating community voices and accounting for local realities across the AI system lifecycle, with companies (AI start-ups) in India.

We are testing our policy prototype with 10+ participant companies providing B2C services across different sectors in India, including agriculture, health, finance and education. Through this methodological approach, we produced a report with policy recommendations based on participants' experience in receiving, handling and implementing the policy prototype while testing its clarity, effectiveness and actionability, which we intend to discuss in detail during this session.

We chose to focus on developing a program for implementing AI principles because, although numerous sets of AI principles exist (such as the OECD AI Principles, which cover inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability), specific guidance on implementing them is still emerging.

Stakeholder engagement with AI actors across the AI lifecycle is a common theme across emerging AI risk management frameworks. However, there is little guidance or harmonised consensus on how companies should conduct it in practice:

For example, the recently published NIST AI risk management framework 1.0 includes the following categories that are relevant for stakeholder engagement:

GOVERN 5: Processes are in place for robust stakeholder engagement.
GOVERN 5.1: Organizational policies and practices are in place to collect, consider, prioritise, and integrate external stakeholder feedback regarding the potential individual and societal impacts of AI risks.
GOVERN 5.2: Mechanisms are established to enable AI actors to incorporate adjudicated stakeholder feedback into system design and implementation regularly.
MAP 5: Impacts to individuals, groups, communities, organisations, or society are assessed.
MAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past uses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed the AI system, or other data are identified and documented.
MAP 5.2: Practices and personnel for supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts are in place and documented.

Another example is the Human Rights, Democracy and the Rule of Law Risk and Impact Assessment (HUDERIA) framework proposed by the Council of Europe in the context of negotiating the Framework Convention on AI. A key element (step) of the HUDERIA framework is the stakeholder engagement process.

Therefore, our program incorporates a strategy for stakeholder engagement and for accounting for context, an essential step towards ensuring systems reflect local realities and the lived experiences of end users. Furthermore, through our program, we are also trying to tackle inherent challenges in stakeholder engagement, such as power and information asymmetry, which can be exacerbated in AI, where the technology is often a black box. Moreover, our report and the proposed launch session align well with consistent IGF themes, such as AI & Emerging Technology (IGF 2023) and Addressing Advanced Technologies, including AI (IGF 2022), as we lay down an operational strategy for deploying human-centric and human-rights-based AI solutions.

Relevant Links
About the program: https://openloop.org/programs/open-loop-india-program/
Open Loop Roundtable with Experts in New Delhi: https://openloop.org/past-events/open-loop-india-roundtable/

The session format is structured with both onsite and online attendees in mind, so that everyone, irrespective of medium, is treated equally and can gain maximum insight from the session. We will make the session interactive by allotting sufficient time for attendees to discuss and contribute to the topics. Following the panel discussion, the floor will open for a moderated discussion in which any attendee can share comments, interventions, research ideas and related input on the topic with the forum. Attendees will also be encouraged to pose questions to the speakers and authors of the report. The onsite moderator will encourage both online and onsite attendees to contribute freely, giving each group an equal chance, and the online moderator will keep the Zoom chat live and active by stimulating conversation.