IGF 2023 WS #299 Community-driven Responsible AI: A New Social Contract

Time
Wednesday, 11th October, 2023 (08:00 UTC) - Wednesday, 11th October, 2023 (09:30 UTC)
Room
WS 4 – Room B-1
Subtheme

Artificial Intelligence (AI) & Emerging Technologies
ChatGPT, Generative AI, and Machine Learning

Organizer 1: Yasmin Afina, Chatham House
Organizer 2: Hillary Bakrie, United Nations
Organizer 3: Alexander Krasodomski-Jones, Chatham House
Organizer 4: Rowan Wilkinson, Chatham House
Organizer 5: Marjorie Buchser

Speaker 1: Hillary Bakrie, Intergovernmental Organization
Speaker 2: Kathleen Siminyu, Civil Society, African Group
Speaker 3: Zoe Darme, Private Sector, Western European and Others Group (WEOG)
Speaker 4: Mahlet Zimeta, Civil Society, Western European and Others Group (WEOG)

Moderator

Yasmin Afina, Civil Society, Western European and Others Group (WEOG)

Online Moderator

Rowan Wilkinson, Civil Society, Western European and Others Group (WEOG)

Rapporteur

Yasmin Afina, Civil Society, Western European and Others Group (WEOG)

Format

Panel - 90 Min

Policy Question(s)

A. How should countries, communities, cities and companies collaborate to guide the beneficial development of AI and ensure that no one is left behind?

B. What are the strengths of community-led AI governance, and what might its limits be?

C. What multistakeholder, deliberative or open-source technologies and practices could be applied to AI governance by countries, communities, cities and companies?

What will participants gain from attending this session?

Chatham House, OSGEY and Google will tap into the IGF’s diverse audience to overcome the Western-centric biases and limitations that tend to dominate the space at the expense of under-represented communities. The session’s development will also be informed by Chatham House’s AI Taskforce, which brings together voices from six continents and reflects their local constituencies in its approach. The moderator will also seek to challenge participants’ assumptions and biases in order to capture the widest range of perspectives and unearth under-explored and innovative approaches to fostering responsible AI. Supported by deliberative technology, the session will provide an opportunity for knowledge sharing, with participants able to better understand how different communities approach and frame the responsible development of AI. The discussions will ultimately enable participants to identify common ground, build bridges across sectors, and catalyse subsequent reflections and initiatives to foster technologies that empower all.

Description:

As AI progress proceeds at breakneck speed, companies, governments and international bodies are recognising that new norms and more inclusive and equitable approaches are needed to measure the impact of these technologies, mitigate risks of harm, and ensure their responsible development and use. Critical to good AI governance will be principles that are at the heart of the IGF, but rare outside it: multistakeholder processes, transparency, technical expertise and global cooperation. These principles will underpin any realistic effort to move beyond models of centralised corporate power or governmental torpor. Building on multidisciplinary research on AI governance, Chatham House, with the Office of the UN Secretary-General's Envoy on Youth (OSGEY) and Google, will host a panel discussion to foster an inclusive and informed public debate, and policy engagement, on how collectives - countries, communities and companies - can frame and guide the responsible development of AI technologies. To this end, this session will (1) provide a stocktaking exercise, examining some of the initiatives and best practices in recent years to push for the responsible development of AI and ensure its fair, equitable use across communities; (2) discuss how to operationalise responsible AI and what it means in practice for young people and for vulnerable and marginalised groups; and (3) explore possible mechanisms for addressing social and policy concerns. Establishing a common understanding around key themes, questions and risks, and ensuring diverse and systematic input regarding responsible AI development through this session, will ultimately contribute to global efforts to ensure that these technologies are built for all, by all, and empower all. The session will pilot a new piece of deliberative technology - pol.is - being tested by Chatham House, pioneering an interactive new approach to the discussion. We expect participants to find this approach engaging, and the resulting data will feed into the session summary published after the event.

Expected Outcomes

The session will feed into Chatham House’s work promoting the responsible development of AI, the outcomes of which will inform key stakeholders within Chatham House’s wider network. One of Chatham House’s ongoing projects will culminate in a paper, which the moderator will introduce at the session to build on post-summit momentum and maximise its outreach and impact. Furthermore, Chatham House will use the key takeaways to take the conversation further, including at the 2024 IGF and in subsequent Chatham House research. Finally, the results of the pol.is deliberation will be made public through the session summary. In addition, following OSGEY’s call for meaningful youth engagement in digital development, the outcome of the session will contribute to the Office's high-level advocacy and programmes on innovation and technologies. Insights from the dialogue will further help guide the Office's support to the Our Common Agenda recommendation on the development and implementation of the Global Digital Compact.

Hybrid Format:

During the first 30 minutes, each panellist will share their perspective on the responsible development of AI, building on their respective backgrounds and areas of work (natural language processing for African languages, data rights, AI ethics, youth empowerment). The moderator will then ask panellists to reflect on ways that the responsible development of AI could be operationalised, and introduce the multidisciplinary research undertaken by Chatham House with its High-Level Taskforce on AI & Society. The third and longest part of the session will be a discussion among participants, with input from the panellists. To encourage strong participation both online and offline, the two moderators will coordinate, alternating between in-person and online participants. The moderators will also use pol.is, a web-based deliberation platform that allows for direct engagement between participants in the room and online - a pioneering approach we expect to directly remedy the gap between virtual and in-person participation.

Key Takeaways

The definition of communities and collectives can vary widely and is fluid in space and time - more so than in the past; community-centred governance approaches must therefore be agile and adapt accordingly.

Individual rights are not enough to harness the benefits of AI innovation and address its harms and risks: it is essential to look at the implications of AI for collectives and how they interact with and affect one another.

Call to Action

AI innovation requires a reflection on the social contract: it is not enough to engage with communities; those in power must also facilitate and enable the effective and meaningful implementation of governance solutions.

There is a dire need to facilitate and enable communities’ capacity to engage meaningfully with, and contribute to, AI innovation. Incentivisation is key: communities need to be informed and empowered by those in positions of privilege and power.

Session Report

Introduction

As AI progress proceeds at breakneck speed, companies, governments and international bodies are recognising that new norms and more inclusive and equitable approaches are needed to measure the impact of these technologies, mitigate risks of harm, and ensure their responsible development and use. Building on multidisciplinary research on AI governance, Chatham House, with the Office of the UN Secretary-General's Envoy on Youth (OSGEY) and Google, hosted a panel discussion to foster an inclusive and informed public debate, and policy engagement, on how collectives - countries, communities and companies - can frame and guide the responsible development of AI technologies.

The discussion focused on several questions, including: 

  • What is the role of ‘powerful’ actors, such as governments and the private sector, in the governance of AI development and use? 

  • What community-led efforts are in place to govern the responsible development of AI? 

  • How should communities be engaged and what incentives are there for their participation in AI governance? 

  • What are some of the main considerations communities ought to take into account when devising and implementing governance approaches and solutions in the AI space?

Establishing a common understanding around key themes, questions and risks, and ensuring diverse and systematic input regarding responsible AI development through this session, will ultimately contribute to global efforts to ensure that these technologies are built for all, by all, and empower all.

AI Governance: A multi-faceted approach

The discussions included perspectives from the private sector, international organisations, and civil society on what ongoing efforts are in place to engage communities in the responsible development and deployment of AI technologies and their subsequent governance. 

Key initiatives have emerged across sectors and geographies, including OSGEY’s work engaging young people on AI governance issues: today’s youth are arguably the largest and most connected generation. They are bound to inherit current policies, decision-making and systemic issues; hence there is a critical need - and a desire among young people themselves - to ensure their representation in decision-making processes and, ultimately, a sense of control over their digital future. Greater inclusivity is a necessary response to the ongoing lack of trust and assurances with regard to the development and deployment of these technologies. The successful inclusion of young people in AI governance will require intergenerational support and multistakeholder allyship.

A couple of notable industry-led initiatives also emerged from the discussions. Google Search’s Quality Raters are a select group of individuals across the globe, trained under Google’s guidelines, who help stress-test and refine the quality of search results. The programme is a key example of how technology companies can engage communities by proxy, and of the value of established processes in the roll-out and testing of products and subsequent changes.

In addition, with research and development in voice recognition conducted predominantly in English, ‘non-priority’ languages tend neither to be reflected in products nor to have the data needed to develop and train such systems in the first place. In response, Mozilla’s Common Voice initiative seeks to overcome this limitation through extensive community building and engagement, one notable example being its work with Kiswahili-speaking groups. These engagement opportunities take many forms, including competitions for students; partnerships with grassroots groups across Kenya; ‘community champions’; and collaboration with linguists to capture the many dialects and variants of Kiswahili.

Key considerations for the way ahead

It is clear that, in order to leverage and maximise the benefits of AI technologies, governance solutions ought to consider their implications both for individuals and, above all, for collectives. As initiatives to engage communities in the responsible development and deployment of AI technologies multiply, several key considerations should inform future efforts:

Effective community engagement requires enabling environments conducive to meaningful solutions. There is a particular onus on those at the ‘powerful’ end of the new social contract (i.e., governments and big technology companies) to facilitate such environments through, for example, capacity building, established processes, incentivisation, and the creation of opportunities to address other pressing issues such as climate change, healthcare access and disability exclusion.

The definition of communities is fluid in time and space. Individuals can belong to more than one community at the same time, and communities span geographies, interests, languages and shared histories, among other things. As such, there is a need to reconsider and re-evaluate the social contract in light of AI development and the role and place of communities.

AI must not be mistaken for a monolithic technology. The nature of its impact - and the outcome of subsequent policy responses - is highly contextual and will change depending on what is at stake. For example, the human rights implications of AI technologies will differ depending on the application and the wider societal context in which it is deployed. As such, governance solutions must be sustainable and agile, address both existing and long-term risks, and strive to foster both horizontal and vertical opportunities.

There is no easy solution; safe experimentation is key. Concrete implementation of governance measures requires extensive research and experimentation to define what works and to ensure that solutions are agile, trustworthy and responsive to communities’ needs. Such experimentation must, however, be done in a safe environment: many lessons can be drawn from user research practices.

Trust is multi-faceted. There are two aspects to establishing trust: trust by communities, and trust in the product and its outputs. Trust by communities can be established through greater ownership over products’ development and the benefits they bring, as well as through the meaningful engagement and implementation of policy measures and solutions. Establishing trust in the tool requires, first, reflection on a number of elements and on how to adapt technical solutions accordingly; one notable example concerns users’ perceptions and how labels affect a product’s trustworthiness in users’ eyes. The question of trust is particularly relevant as AI technologies are at risk of being deployed and used for disinformation campaigns - a concern of growing importance given upcoming election cycles, increasing polarisation, conflict, and risks of harm to vulnerable communities.