IGF 2024 Networking Session #50 Strategies & Tools to Support, Adopt, & Scale AI Governance

    Holistic AI
    Ella Shoup, Holistic AI, Private Sector, Western European and Others (WEOG)
    Siddhant Chatterjee, Holistic AI, Private Sector, Asia-Pacific Group

    Speakers

    The speakers for this event will be confirmed closer to the date of the IGF but will likely include representatives from civil society, government (such as the US National Institute of Standards and Technology (NIST)) and multilateral organizations (such as the Organisation for Economic Co-operation and Development (OECD)). As the speakers must plan their travel and secure approval for onsite attendance, confirmations are still pending.

    Onsite Moderator

    Siddhant Chatterjee

    Online Moderator

    Siddhant Chatterjee

    Rapporteur

    Siddhant Chatterjee

    SDGs

    16.6
    16.7
    16.8
    17.6
    17.8


    Targets: SDG 16: Multistakeholder participation in the ever-evolving discourse on AI governance is critical to the development and sustainability of effective, accountable, and transparent institutions. Such a governance order should include the institutions that develop laws and regulations, along with the mechanisms that complement those regulations and offer diverse pathways for governing and managing AI.

    SDG 17: Promoting an AI governance regime acknowledges the importance of SDG 17 because it advocates for a multistakeholder approach. Western, Global North countries tend to dominate the development of approaches, formats and tools in technology policy, leaving a vacuum in representative policymaking for Global Majority countries, which may have different predicaments, philosophies and capacities on internet governance. The development of a broader AI governance regime through avenues such as this networking session should help forge new and diverse pathways through which countries and stakeholders can effectively collaborate on the Responsible AI project.

    Format

    Classroom

    At the start of the session, the organizers will send out a poll using Mentimeter to both online and onsite participants. The poll will survey the audience on the non-regulatory tools and strategies they are familiar with or have previously used. The speakers will present the results of the poll to the audience and discuss common themes and differences.

    We will then transition into a more in-depth presentation of the mechanisms and tools mentioned previously in the Session Description:

    -Risk management frameworks

    -Assurance techniques

    -Technical standards

    -Certifications and licenses to guide model access, use and release

    The speakers will then engage the audience again, polling them on their experience and engagement with each of these mechanisms and tools. We will conclude with a Q&A session.

    The structure of the session is proposed below:

    -Audience polling and discussion: 10 minutes

    -Presentation of mechanisms and tools: 20 minutes

    -Polling and further discussion: 20 minutes

    -Q&A Session: ~10 minutes

    Duration (minutes)
    60
    Description

    In anticipation of the disruption AI might bring to all levels of society, governments are under increasing pressure to act through regulation. Caught between the promise of technological advancement and the peril of economic uncertainty and safety risks, policymakers are confronted with a range of uncomfortable trade-offs: regulation that is too strict could stifle innovation, but too little or no regulation may unleash a range of risks. Supposedly robust regulatory proposals are frequently plagued by concerns that they cannot match the pace of technology and that traditional law-making cannot foresee the advances AI will bring in two, five, or ten years. Meanwhile, AI providers continue to release faster, more powerful and sophisticated foundation models, all while calling on governments to take a more active role in the technology’s future. Underlying this discussion is an assumption that regulation, particularly compulsory regulation, is the sole pathway policymakers can use to respond effectively to advancements in AI.

    Some policymakers have embraced the challenge, such as the European Union (EU) and China, which now both have binding regulation on AI. Others, like the United States, United Kingdom, and India, have taken a relatively light-touch approach, at least temporarily. The emphasis on binding regulation, although critical, carries a real risk of overlooking crucial and nuanced socio-technical solutions that are non-regulatory, or even complementary to legislation. These include standards, risk management frameworks, and assurance techniques. Accompanied by regulation, these can forge a wider, more adaptable and future-proof AI governance order that reflects the multitude of users, applications, and industries that will inevitably integrate AI into their daily activities. This range of mechanisms not only offers an agile and flexible approach to AI but also allows for a high degree of public participation and deliberation in their development, thus helping operationalise a truly multistakeholder-driven approach towards Responsible AI (RAI).

    In this session, we will survey the range of such mechanisms and tools that advance the Responsible AI (RAI) agenda. We will consider:

    Risk management frameworks (e.g. the AI Risk Management Framework from the US National Institute of Standards and Technology (NIST))

    Assurance techniques (e.g. algorithm audits, model safety evaluations and adversarial testing approaches)

    Technical standards (e.g. foundational, process, measurement and performance standards advanced by bodies like the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC))

    Certifications and licenses to guide model access, use and release (e.g. RAIL)

    We will discuss the pathways of participation through which different stakeholder groups (civil society, industry, governments, academia) can contribute to each of these mechanisms. As each stakeholder group offers a unique set of skills, capabilities and knowledge, their experiences of participating in and contributing to such mechanisms will vary. We hope this session will provide a high-level overview of the different tools at our disposal, as well as a forum for different groups to meet and exchange knowledge towards this goal.

    Prior to the IGF, the organizers will use the session’s page on the IGF website and social media channels to share informative and preparatory materials to help the audience better contextualize the topic. Holistic AI will take questions during the Q&A from both the in-person and online audience. The online moderator will be active throughout the livestream to manage incoming questions and point online participants to further resources.