IGF 2024 Day 0 Event #174 Human Rights Impacts of AI on Marginalized Populations

    Freedom Online Coalition (intergovernmental organisation)

    Niki Masghati, United States Department of State, Government, WEOG

    Freedom Online Coalition Support Unit (secretariat housed in Global Partners Digital), WEOG

    Speakers
    • Allison Peters, Acting Deputy Assistant Secretary of State for Democracy, Human Rights, and Labor, United States (overall moderator)

    • Amy Colando, General Manager, Responsible Business Practices, Microsoft

    • Nighat Dad, Founder and Executive Director, Digital Rights Foundation 

    • Nicol Turner Lee, Senior Fellow and Director of the Center for Technology Innovation, Brookings Institution 

    • Rasha Younes, Interim Director, Lesbian, Gay, Bisexual, and Transgender Rights Program, Human Rights Watch 

    Onsite Moderator

    Allison Peters

    Online Moderator

    Nicholas Powell

    Rapporteur

    Nicholas Powell

    SDGs

    5.b
    16.3
    16.6
    16.7
    16.8
    16.b
    17.6
    17.7
    17.9
    17.16
    17.17

    Targets: This session aims to discuss salient human rights risks posed by AI systems to marginalized populations, and to foster dialogue exploring both international and multistakeholder coordination and collaboration efforts to address such risks. The discussion will also identify how these human rights risks to marginalized populations – and the steps that should be taken to address them – differ across cultural, geographic, and economic contexts, such as in developing countries, ensuring an inclusive discussion that highlights a diverse range of perspectives. The session will spotlight the action that AI developers can take, and the ways in which governments can support context-specific AI design, development, deployment, and use, further linking to the potential of AI technologies to strengthen and enable progress towards achieving the SDGs.

    Format

     

    Roundtable - This session will come in two parts: 1) discussing salient human rights risks posed by AI systems to marginalized populations and taking stock of the steps that governments, industry, and civil society have taken to address them; 2) discussing how these human rights risks to marginalized populations – and the steps that should be taken to address them – differ across cultural, geographic, and economic contexts, such as in developing countries. The 90-minute duration will enable a fruitful and in-depth discussion, and the roundtable layout will enable strong audience engagement.

     

    Description

    While AI technologies promise significant benefits, the human rights risks they pose too often fall disproportionately on marginalized populations, such as women and girls in all their diversity, persons with disabilities, members of marginalized racial, ethnic, religious, or linguistic groups, Indigenous peoples, LGBTQI+ persons, children, and human rights defenders. For example, AI systems are often used to generate harassing and harmful “deepfakes” or spread disinformation that specifically targets women and human rights defenders; AI systems can perpetuate patterns of bias found in their training data, reinforcing historical patterns of discrimination faced by groups defined by traits such as gender, geography, race, or caste; and AI tools enable advances in surveillance technologies that are too often used to interfere with rights to peaceful assembly or freedom of association, especially by marginalized populations, and have been used for targeting by security forces with harmful effects for civilians and privacy rights.

    This interactive workshop session aims to collaboratively develop feasible steps that can advance the identification, assessment, and mitigation of risks to marginalized populations that are created or exacerbated by AI. Framed by remarks from government, civil society, and industry stakeholders describing the challenges and constraints they face in this area, the workshop will explore 1) pressing issues related to AI’s impacts on marginalized populations; 2) success stories that should inform future actions; and 3) feasible steps that different groups of stakeholders can take to advance progress. The discussion will pay particular attention to how these issues and potential actions differ across diverse cultural, geographic, and economic contexts. After the event, the key issues and steps identified will be collated into an outcomes document, which could be published by the FOC.

    Key Takeaways

    The session highlighted that among the most egregious risks of AI technologies are those posed to marginalised communities, with a variety of ongoing harmful practices, such as discrimination and censorship, stemming from their deployment and use. While more work needs to be done, it is important that focus is placed heavily on creating safeguards for marginalised communities within governmental systems.

    Speakers noted that the supposed socio-economic opportunities of AI technologies in reality often present as threats to marginalised communities. AI can positively impact all areas of life - medicine, climate, education, business - yet AI technologies cannot solve deep-rooted demographic and structural biases. Systemic inequalities embedded in training data carry through into the design of these systems, creating serious risks for users.

    The session highlighted emerging issues rooted in deeper societal inequalities, such as technology-facilitated gender-based violence. It showcased the challenges of regulating the emerging-technologies space, especially given the governance gap between the global North and South, and underscored that technology companies have a responsibility to respect human rights and ensure their products do not cause harm.

    Call to Action

    Speakers called on governments to hold emerging-technology companies more accountable and to create regulatory safeguards for the design, development, deployment, and use of emerging technologies.

    Speakers called on technology companies to ensure adherence to the international human rights framework and to adopt good practices before product deployment, including conducting human rights impact assessments and due diligence, providing remedy in line with the UN Guiding Principles on Business and Human Rights, and understanding the needs of diverse groups of users.