IGF 2023 Lightning Talk #33 Framework for Meaningful Engagement in AI

Tuesday, 10th October, 2023 (08:10 UTC) - Tuesday, 10th October, 2023 (08:30 UTC)
SC – Room H

Artificial Intelligence (AI) & Emerging Technologies
ChatGPT, Generative AI, and Machine Learning
Virtual/Augmented Reality

European Center for Not-for-Profit Law (ECNL)
Marlena Wisniak, European Center for Not-for-Profit Law (ECNL), Civil Society, Eastern Europe



Onsite Moderator
Online Moderator


Targets:

SDG 5: Gender equality – Women and gender non-binary persons are disproportionately impacted by AI systems, from bias and discrimination in algorithms to silencing and harassment online. Women with intersecting identity characteristics, such as racialized women, those from religious minorities, trans women, queer and non-binary persons, disabled women, girls, and those of lower socio-economic status, are at even greater risk of harm. These risks are especially acute for women from the Global South. Yet they are also generally excluded from conversations about the design, development, and use of AI systems. Building these systems in a way that considers the unique social, political, and cultural contexts in which AI is created and used – often within patriarchal environments – is urgently needed. (Targets 5.1, 5.2, 5.5, 5.B, 5.C)

SDG 10: Reduced inequalities – AI systems can accelerate and exacerbate existing social and economic inequality, from the use of AI in law enforcement and criminal justice to automated social welfare systems and algorithmic content moderation. The issue is further heightened by the disproportionate impacts on, and exclusion of, Global South-based stakeholders. AI developers and deployers thus have a responsibility to identify, assess, mitigate, and remedy any adverse impacts their systems may have on human rights. Human rights impact assessments for AI, with meaningful stakeholder engagement, help promote marginalized and vulnerable users' (and broader stakeholders') enjoyment of human rights, rather than harming them, and reduce inequality between demographic groups and regions. (Targets 10.2, 10.3, 10.6)

SDG 16: Peace, justice and strong institutions – Promoting peaceful and inclusive societies through meaningful participation in HRIAs and ensuring responsive, inclusive, participatory and representative decision-making at all levels. (Target 16.7)



Duration (minutes): 20

As the use of artificial intelligence (AI) increases, so has the push for human rights impact assessments (HRIAs) of the developing technology. ECNL and SocietyInside, with cross-sector input from the Danish Action Coalition on Civic Engagement in AI Design, amongst others, have developed a Framework for meaningfully engaging people during these impact assessments. The lightning talk will give a brief overview of the Framework, which was born out of the need for a clear process, helpful tools, and standard setting, so that engagement is meaningful in practice. The aim is that, by using this Framework, both developers and those engaged feel that they have collaboratively created concrete results. Human rights impact assessments need to include diverse voices, disciplines, and lived experiences from a variety of external stakeholders, and the Framework emphasizes engaging those most at risk of harm from AI.

In 2023 we continue to pilot the practical implementation and usefulness of the Framework with the City of Amsterdam, as a public body developing AI for its citizens, and with an AI-driven social media platform. In addition, we will continue consulting civil society and other stakeholders on the content and future iterations of the Framework as a living document that evolves in parallel with practical needs. The lightning talk will invite others to contribute by providing additional input, ideas, or suggestions for a pilot implementation. The Framework can be accessed at https://ecnl.org/publications/framework-meaningful-engagement-human-rig….

The onsite moderator will present the Framework using a slide deck that is easily accessible to online participants. The onsite moderator will take questions from the floor, and the online moderator will monitor questions from online participants.