IGF 2023 WS #465 International multistakeholder cooperation for AI standards

Time
Wednesday, 11th October, 2023 (23:30 UTC) - Thursday, 12th October, 2023 (01:00 UTC)
Room
WS 4 – Room B-1

Organizer 1: Florian Ostmann, The Alan Turing Institute
Organizer 2: Arcangelo Leone de Castris, The Alan Turing Institute
Organizer 3: Ana Alania, The Alan Turing Institute
Organizer 4: Nalanda Sharadjaya, The Alan Turing Institute

Speaker 1: Nikita Bhangu, Government, Western European and Others Group (WEOG)
Speaker 2: Ashley Casovan, Civil Society, Western European and Others Group (WEOG)
Speaker 3: Aurelie Jacquet, Technical Community, Asia-Pacific Group
Speaker 4: Sundeep Bhandari, Technical Community, Western European and Others Group (WEOG)
Speaker 5: Matilda Rhode, Private Sector, Western European and Others Group (WEOG)
Speaker 6: Wan Sie Lee, Asia-Pacific Group

Additional Speakers

Wan Sie Lee - Singapore Government (Asia-Pacific). 

Moderator

Florian Ostmann, Civil Society, Western European and Others Group (WEOG)

Online Moderator

Ana Alania, Technical Community, Western European and Others Group (WEOG)

Rapporteur

Arcangelo Leone de Castris, Technical Community, Western European and Others Group (WEOG)

Format

Other - 90 Min
Format description: The workshop will be structured as follows:
- Presentation on the work of the AI Standards Hub (20 minutes)
- Panel discussion (45 minutes)
- Audience participation via Q&A (25 minutes)
Seating should be arranged theatre-style, with space for the speakers and panellists on stage or at the front of the room. The presentation on the AI Standards Hub will feature the following speakers: Florian Ostmann (The Alan Turing Institute), Sahar Danesh (BSI), and Sundeep Bhandari (NPL). For the panel discussion, Florian Ostmann will be the onsite moderator, and the following speakers have confirmed their participation at the time of submitting this proposal: Ashley Casovan (Responsible AI Institute, Canada), Aurelie Jacquet (CSIRO, Australia), and Nikita Bhangu (UK Government). In addition, we are in contact with potential speakers representing perspectives from Japan and Singapore and are confident that we will be able to confirm their participation closer to the date.

Policy Question(s)

1) What is the role of international standards in the governance of AI technologies, and what is the importance of multistakeholder participation and international cooperation for achieving sound and effective standards and regulatory interoperability for AI?
2) What challenges do different stakeholder groups face when it comes to engaging with AI standardisation, and how can multistakeholder participation in standards development be advanced?
3) What strategies can be adopted to strengthen international cooperation between civil society, technical communities, the private sector, and governments to develop sound and effective standards for AI?

What will participants gain from attending this session?
- Understanding of the role that standards will play in achieving an effective international system for the governance of AI technologies.
- Understanding how standards can foster the sustainable development and use of AI technologies.
- Understanding why multistakeholder participation and international cooperation are essential to develop successful AI standards and what strategies can help pursue this objective.
- Detailed knowledge of the UK’s AI Standards Hub as a concrete example of how such strategies can be implemented and of the opportunities that they unlock.

SDGs

Description:

This workshop introduces the AI Standards Hub as a case study to consider strategies for multistakeholder participation and international cooperation around the development of standards for AI technologies. AI is set to be the technology driving the next cycle of social and industrial evolution. AI’s transformative potential and its risks call for the development of effective global frameworks and norms - a complex international policy challenge over the next decade. There is growing recognition that international standardisation will be a critical component of mature AI governance approaches. At the same time, realising the potential of international AI standards depends on broad international cooperation and multistakeholder participation in the development of standards. Only in this way can AI standards find wide adoption and achieve technical and regulatory interoperability.

Taking stock of these considerations, this workshop will examine what strategies can be adopted to strengthen both international cooperation and multistakeholder collaboration across civil society, the technical community, industry, and government. The workshop will start with a presentation of the AI Standards Hub: a joint initiative led by The Alan Turing Institute, the British Standards Institution, and the National Physical Laboratory, and supported by the UK Government. The Hub is a first-of-its-kind initiative to advance the participation of multiple stakeholders from civil society, academia, and industry in AI standardisation processes. After the presentation, a panel of speakers from different regions will discuss the role of international cooperation and multistakeholder participation in AI standardisation, drawing on lessons from the AI Standards Hub.

Finally, since a key aim of this workshop is to collectively learn from diverse perspectives, we will conclude with an interactive session in which moderators will invite online and in-person participants from different regional backgrounds and stakeholder groups to contribute to an inclusive and mutually enriching discussion.

Expected Outcomes

Insights from this workshop will feed into a report by The Alan Turing Institute identifying priorities for international cooperation and recommendations for multistakeholder engagement in AI standardisation. Additionally, a summary of the workshop and key takeaways will be published on the AI Standards Hub website and distributed through our newsletter and social media channels, reaching thousands of engaged stakeholders. We hope our discussions will catalyse novel forms of multistakeholder engagement and international cooperation and encourage similar initiatives around the world. We expect that they will result in new forms of collaboration between countries embarking on similar initiatives around AI standardisation and establish pathways for international knowledge exchange within this field. Learnings from the session will also directly inform our ongoing work on addressing the challenges of different stakeholders in actively engaging with AI standards development and will directly shape the Hub’s future events, training resources, and research agenda.

Hybrid Format:
- Livestream of the event via an interactive platform like Zoom to allow for Q&A.
- Designated staffer to monitor chat/Q&A during the presentation and panel discussion and to select questions from online participants for the Q&A session.
- Allow for an equal number of questions from onsite and online participants (depending on volume and time allotted).
- Explicit emphasis on regional and stakeholder diversity in question/comment selection (to the extent possible based on on-the-day attendance).
- The moderator will encourage panellists to address both onsite and online participants.

Key Takeaways (* deadline 2 hours after session)

Initiatives like the AI Standards Hub highlight the importance of bringing together expertise from across academic institutions, national standards bodies, and national measurement institutes for unlocking the potential of standards as effective AI governance tools underpinned by multi-stakeholder processes. It is key for such initiatives to link up, identify synergies, and pursue opportunities to coordinate efforts across countries.

Increased international networking across civil society, academia, the technical community, industry, and regulators/government is critical for building capacity, promoting participation from all stakeholder groups, and advancing global alignment in the field of AI standardisation. Efforts aimed at individual stakeholder groups have an important role to play in addressing the needs of groups currently underrepresented in AI standardisation.

Call to Action (* deadline 2 hours after session)

The MAG should actively consider what the IGF can do to advance the promotion of, and collaboration on, globally recognised AI standards (including technical measurement standards).

Civil society, academia, the technical community, industry, regulators, and government should actively engage with AI standards initiatives, such as the AI Standards Hub, designed to advance multi-stakeholder input in AI standardisation.

Session Report (* deadline 26 October)

The session was dedicated to exploring the role that multistakeholder participation and international cooperation must play to unlock the potential of standards as effective AI governance tools and innovation enablers around the world. The workshop followed a three-part structure. The first part presented the work of the AI Standards Hub, a UK initiative dedicated to building a diverse community around AI standards through knowledge sharing, capacity building, and world-leading research. The second segment featured a panel of four speakers, bringing in diverse perspectives from across different stakeholder groups and geographic regions. The final part of the workshop centred on gathering questions and comments from the audience participating in-person and remotely via online platforms.

Segment 1: Introduction to the AI Standards Hub. The workshop started with an introduction to the AI Standards Hub, a joint UK initiative led by The Alan Turing Institute, the British Standards Institution (BSI), and the National Physical Laboratory (NPL). Dr Matilda Rhode, the AI and Cyber Security Sector Lead at BSI, began by introducing the mission of the Hub, explaining the significance of standards for the evolution of the AI ecosystem, and providing a brief overview of standards development processes. Dr Florian Ostmann, the Head of AI Governance and Regulatory Innovation at The Alan Turing Institute, addressed the importance of stakeholder diversity in AI standardisation and provided a snapshot of the Hub’s work across its four pillars: (1) AI standards observatory, (2) community and collaboration, (3) knowledge and training, and (4) research and analysis. Finally, Sundeep Bhandari, the Head of Digital Innovation at NPL, discussed international collaborations pursued by the Hub with organisations such as the OECD, NIST, and the SCC, and outlined future collaboration opportunities for building international stakeholder networks, conducting collaborative research, and developing shared resources on AI standards.

Segment 2: Panel discussion. Nikita Bhangu, the Head of Digital Standards Policy in the UK Government's Department for Science, Innovation and Technology (DSIT), started off the panel discussion by providing an overview of the UK Government’s policy approach to standards in the context of AI. Referring to the recent AI white paper, Ms Bhangu highlighted the important role that standards, alongside other non-regulatory governance mechanisms and assurance techniques, can play in creating a robust set of tools for advancing responsible AI. Elaborating on the complexity of the standardisation ecosystem, she noted that stakeholders face many barriers to meaningful engagement with AI standards and that it is vital for governments to support diverse stakeholder participation in standards development processes. Reflecting on DSIT’s policy thinking that led to the creation of the AI Standards Hub, Ms Bhangu noted that the key aims guiding the initiative were to increase adoption and awareness of standards, create synergies between AI governance and standards, and provide practical tools for stakeholders to engage with the AI standards ecosystem.

Following this, the international panel took turns to discuss the most important initiatives in AI standardisation aimed at advancing multistakeholder participation, addressed questions on emerging stakeholder needs and challenges in different parts of the world, and discussed the importance of international collaboration on AI standards.

Ashley Casovan, the Executive Director of the Responsible AI Institute, provided insights on Canada’s AI and Data Governance Standardization Collaborative from the perspective of civil society. She explained that the initiative aims to bring together multiple stakeholders to reflect on AI standardisation needs across different contexts and use cases. Wan Sie Lee, the Director for Data-Driven Tech at Singapore’s Infocomm Media Development Authority (IMDA), stressed that there is a widespread recognition of the importance of international cooperation around AI standards in Singapore. This is exemplified by Singapore’s active engagement in ISO processes and close collaborations with other countries. Elaborating on the city-state’s efforts to achieve international alignment on AI standards, Ms Lee pointed to Singapore’s AI Verify initiative, which closely aligns with NIST’s recently published Risk Management Framework. Aurelie Jacquet, Principal Research Consultant on Responsible AI for CSIRO-Data61, highlighted several Australian initiatives centred on advancing responsible AI, including Australia’s AI Standards Roadmap, the work of the National AI Centre and Responsible AI Network, and the development of the NSW AI assurance framework. These initiatives are dedicated to developing education programmes around AI standards, strengthening the role of standards in AI governance, and leveraging existing standards to provide assurance of AI systems in the public sector and beyond.

Moving on to the topic of stakeholder needs and challenges, Nikita Bhangu pointed to the lack of available resources and dedicated standards expertise within SMEs, civil society, and governments, which often leads to these groups being underrepresented in AI standards development processes. Ashley Casovan highlighted similar challenges in Canada, where a lack of resources in government teams is hindering the process of analysing the information collected by the Collaborative. Ms Casovan also pointed to the efforts of the Canadian Collaborative to include perspectives from all domains of civil society, as well as indigenous groups, to ensure that their input is taken into consideration when finding solutions to harms posed by AI. Wan Sie Lee noted that the Singaporean government is trying to address the challenge of limited resources by focusing on areas where it can provide the most value to the global conversation, such as tooling and testing. Furthermore, to improve stakeholder diversity, Singapore is making an active effort to include voices from industry in its policy approaches. Finally, Aurelie Jacquet addressed the complexity of the standardisation ecosystem and the challenges stakeholders face in understanding standards development processes. To address this challenge, she added, experts in Australia have focused on drafting white papers and guidance documents to help organisations understand how these processes work.

Talking about priorities for international cooperation, the panellists stressed that understanding the approaches taken by other countries is essential to avoiding duplication of work, building synergies, and understanding what kinds of coordination efforts are required. For this reason, multilateral fora like the OECD and the IGF are very important platforms. Additionally, initiatives like the AI Standards Hub were highlighted as important avenues for building networks internationally, identifying shared goals and challenges across different stakeholder groups, and jointly devising strategies to build an inclusive environment around AI standards.

Segment 3: Audience Q&A. The final segment of the workshop provided an opportunity for attendees to ask questions, share their perspectives, and get additional input from the speakers. The session heard from the Coordinator of the Internet Standards, Security and Safety Coalition at the IGF, who stressed the importance of using standards that are developed by the technical community outside of government-recognised standards development organisations to inform national policies on AI. They suggested reaching out to the technical community in fora like the IETF or IEEE and aligning on key areas of AI standardisation. One of the online participants highlighted the value of further exploring strategies for increasing SME engagement in AI standards development. They proposed that this subject could be considered as a potential topic for inclusion in EuroDig, Europe’s regional IGF, taking place in Vilnius on 17-19 June 2024. The session also heard from an audience member representing Consumers International, who emphasised the value of consumer organisations in ensuring responsible AI development, since they represent the end users of these products and services. They stressed that consumer organisations possess a wealth of evidence to support standards development and can help to ensure that standards are firmly rooted in the real-life experiences and needs of their end users. The participant also highlighted the AI Standards Hub as an important resource for Consumers International to increase its engagement in AI standardisation.