IGF 2025 Lightning Talk #245 Advancing Equality and Inclusion in AI

    Council of Europe
    Sara Haapalainen, Council of Europe, Hate Speech, Hate Crime and Artificial Intelligence
    Nienke van der Have, EU Fundamental Rights Agency
    Yanna Parnin, Council of Europe, Gender Equality Division

    Speakers

    Representative of CSOs from the anti-discrimination field
    Representative of a women’s organisation
    Nienke van der Have, EU Fundamental Rights Agency

    Onsite Moderator
    Sara Haapalainen, Council of Europe, Hate Speech, Hate Crime and Artificial Intelligence
    Rapporteur
    Yanna Parnin, Council of Europe, Gender Equality Division
    SDGs

    5. Gender Equality
    10.3
    16.3
    16.6
    16.b


    Targets: The digital world is expanding progressively, connecting society and individuals, engaging more of their time and responding to more of their needs. Ensuring respect for human rights in combating discrimination, and assessing the potential role of AI in relation to these phenomena, is crucial. Combined, these efforts contribute to the emergence of a culture of peace and cooperation, conducive to social and economic development. Protecting human rights in the use of AI systems has a definite impact on ensuring gender equality, eradicating poverty, providing quality education, reducing inequalities, building sustainable cities and communities, ensuring durable peace, effective justice and strong institutions, and harnessing partnerships for the SDGs.

    Format

    The set-up enables all participants to hear the views of speakers representing the groups most often affected by discrimination and bias, as well as women affected by AI technologies such as biased algorithms that reinforce gender inequalities or deepfakes, and to raise questions. A few interactive questions (e.g. using Mentimeter) will encourage participants to reflect on the risks of AI discrimination and how to redress it from the perspective of the groups affected.

    Duration (minutes)
    30
    Description

    The session will present measures that can be taken to operationalise safeguards and remedies against discrimination in the use of AI systems, engage with the groups most at risk and equip human rights supervisory bodies.

    The Study on the impact of artificial intelligence systems, their potential for promoting equality – including gender equality – and the risks they may cause in relation to non-discrimination, adopted by the Gender Equality Commission (GEC) and the Steering Committee on Anti-discrimination, Diversity and Inclusion (CDADI) of the Council of Europe in 2023, and numerous other studies have highlighted the risks that AI systems pose to equality, including gender equality, and non-discrimination, online and offline, in a variety of sectors. These range from employment, through the online targeted distribution of job adverts, to the provision of goods or services in both the public and private sectors, such as online loan applications, and to public security policies and the fight against fraud. For example, the report Bias in Algorithms from the EU Agency for Fundamental Rights shows how easily speech detection algorithms can be biased against certain groups.

    The groups most affected by bias in AI systems are very often the same groups and individuals at risk of discrimination in society. These groups, as well as women, also experience structural inequality and struggle to participate meaningfully in the forums that develop, deploy and regulate new digital technologies and promote inclusion in AI.

    The CoE and the EU are jointly building the capacity of equality bodies and of representatives of the groups most affected by discrimination, including by biases in AI systems. They want to provide these groups with a platform at the IGF to share their experiences and engage in the governance of AI, including by sharing ideas on how to ensure sufficient safeguards against discrimination and access to effective remedies.
