
IGF 2023 Day 0 Event #31 Marginalized voices in AI human rights impact assessments

    Subtheme

    Artificial Intelligence (AI) & Emerging Technologies
    Chat GPT, Generative AI, and Machine Learning

    Organizers

    European Center for Not-for-Profit Law (ECNL)
    Marlena Wisniak, European Center for Not-for-Profit Law (ECNL), Civil Society, Eastern European Group
    Aaryn Yunwei Zhou, Government of Canada, Government, WEOG
    Vino Lucero, EngageMedia, Civil Society, Asia-Pacific Group
    Savannah Badalich, Discord, Private Sector, WEOG
    Lindsey Andersen, BSR, Civil Society, WEOG

    Speakers

    Marlena Wisniak, European Center for Not-for-Profit Law (ECNL), Civil Society, Eastern European Group
    Aaryn Yunwei Zhou, Government of Canada, Government, WEOG
    Vino Lucero, EngageMedia, Civil Society, Asia-Pacific Group
    Savannah Badalich, Discord, Private Sector, WEOG
    Lindsey Andersen, BSR, Civil Society, WEOG

    SDGs

    5.1
    5.2
    5.5
    5.b
    5.c
    10.2
    10.3
    10.6
    16.7


    Targets: SDG 5: gender equality – women and gender non-binary persons are disproportionately impacted by AI systems, from bias and discrimination in algorithms to silencing and harassment online. Women with intersecting identity characteristics – such as racialized women, those from religious minorities, trans women, queer and non-binary persons, disabled women, girls, and those of lower socio-economic status – are at even greater risk of harm. These risks are especially acute for women from the Global South. Yet they are also generally excluded from conversations about the design, development, and use of AI systems. Building these systems in a way that considers the unique social, political, and cultural contexts in which AI is created and used – often within patriarchal environments – is urgently needed.

    SDG 10: reduced inequalities – AI systems can accelerate and exacerbate existing social and economic inequality, from the use of AI in law enforcement and criminal justice to automated social welfare systems and algorithmic content moderation. The issue is further heightened by the disproportionate impacts on, and exclusion of, Global South-based stakeholders. AI developers and deployers thus have a responsibility to identify, assess, mitigate, and remedy any adverse impacts their systems may have on human rights. Human rights impact assessments for AI, with meaningful stakeholder engagement, help promote the enjoyment of human rights by marginalized and vulnerable users (and broader stakeholders), rather than harming them, and reduce inequality between demographic groups and regions.

    SDG 16: peace, justice and strong institutions – promoting peaceful and inclusive societies through meaningful participation in HRIAs, and ensuring responsive, inclusive, participatory and representative decision-making at all levels (16.7).

    Format

    Roundtable/workshop, with possible breakout groups.

    Language

    English

    Description

    Human rights impact assessments of AI systems are an essential part of identifying, assessing and remedying risks to human rights and civic space resulting from the development and use of AI systems. This interactive session provides an introduction to risk and impact assessments for AI-driven platforms. Centering meaningful stakeholder engagement as a key component, participants will discuss how best to include civil society and affected communities from around the world, especially from the Global South, in the process. The session draws on the European Center for Not-for-Profit Law's framework on human rights impact assessments and stakeholder engagement. The collaborative session includes a short case study assessing the impacts of AI systems on human rights, with emphasis on rights to freedom of expression, assembly, and association. Participants will explore how impact assessments could be conducted in practice in such a context, with meaningful participation from local and regional civil society and communities.

    The session will be structured in three parts. First, the invited speakers will give a brief background on human rights impact assessments and stakeholder engagement for AI systems, sharing key challenges and opportunities, especially in the Global South. Second, participants will be invited to share their thoughts and reflections through an open (but guided) conversation and a case study. Open discussion will be available both to attendees participating remotely and to those attending in person. The organizer will facilitate both in-person and online breakout groups, working with a co-moderator who will facilitate the online conversation. Third, the organizer will provide a high-level overview of what was discussed, as well as open questions and ideas for future work, based on the group discussion.

    The in-person moderator will primarily be responsible for facilitating conversations among participants in the room. Remote participants will be able to contribute via a chat and a ‘raise your hand’ function. The in-person moderator will work closely with the online moderator, who will monitor the chat and facilitate virtual breakout groups.