
IGF 2021 Open Forum #40 The challenges of AI Human Rights Impact Assessments

    Time
    Friday, 10th December, 2021 (09:45 UTC) - Friday, 10th December, 2021 (10:45 UTC)
    Room
    Ballroom B
    Issue(s)

    Digital policy and human rights frameworks: What is the relationship between digital policy and development and the established international frameworks for civil and political rights as set out in the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights and further interpretation of these in the online context provided by various resolutions of the Human Rights Council? How do policy makers and other stakeholders effectively connect these global instruments and interpretations to national contexts? What is the role of different local, national, regional and international stakeholders in achieving digital inclusion that meets the requirements of users in all communities?
    Inclusion, rights and stakeholder roles and responsibilities: What are/should be the responsibilities of governments, businesses, the technical community, civil society, the academic and research sector and community-based actors with regard to digital inclusion and respect for human rights, and what is needed for them to fulfil these in an efficient and effective manner?

    Panel - Auditorium - 60 Min

    Description

    In recent years, researchers, public bodies, and businesses have published reports and proposed processes aimed at addressing the human rights risks of artificial intelligence (AI) systems. Despite the significant amount of knowledge available in the public domain, there are still significant gaps in the existing regulations addressing AI systems and their impacts. This inadequate regulation leads to inadequate human rights protections for individuals, whether they engage with AI systems through public services or with the private sector.

    In response to this lack of human rights protection, international organisations, academics, and civil society have called for mandatory human rights impact assessments (HRIAs) to be conducted when businesses or state agencies use or develop AI systems.

    In its recent report, the EU Agency for Fundamental Rights (FRA) found that, when it comes to assessing the impacts of AI, fundamental rights beyond data protection are generally not considered by users and developers of AI systems. Given that fundamental rights violations are always contextual, there is a need to identify how to structure impact assessment requirements so that such assessments can be conducted with both flexibility and effectiveness.

    The increased use of AI systems, often based on data collected over the internet, has stirred many human rights discussions, particularly in Europe, where political processes culminated in April 2021 with the publication of a draft Regulation on AI.

    To ensure a fundamental rights-compliant use of AI, one of the most crucial and intricate questions lies in the articulation of audit and accountability systems. The technical complexity of AI and its virtually infinite fields of application multiply the difficulties in establishing human rights impact assessments that can effectively identify and prevent potential human rights violations.

    Fundamental rights compliance cannot be automated and hard-coded into computer software. Rather, each use case needs separate examination to determine whether any fundamental rights issue arises. Nevertheless, assessments can follow a systematic approach and provide similar information.

    While the fundamental rights implicated vary depending on the area of application, it is clear that the full spectrum of rights needs to be considered for each use of AI.

    This open forum addresses the following questions:
    • What minimal criteria should human rights impact assessments include?
    • What elements should any field-specific guidance include?
    • How can the effectiveness of HRIAs in detecting potential violations be ensured?
    • What requirements are necessary to ensure that impacted rightsholders have access to information and can demand their rights?
    • What should the specific roles of each stakeholder be, to ensure expert and independent assessments of AI systems (including public institutions, AI developers and users (businesses and public entities), the technical community, civil society, and the academic and research sector)?

    Depending on the health and political context in December 2021, this Open Forum is at this stage expected to be hybrid. In order both to ensure a lively event and to build on participants’ experiences and expectations, the public will be invited to take part in the debate in several ways:
    - A short “break-the-ice” session will launch the discussion. It will consist of contextual questions drawn from real cases (e.g. how many individuals were negatively impacted by an AI system used to administer welfare benefits?) to show the importance of the topic;
    - Panellists will be invited to give short introductory remarks instead of presentations;
    - Mentimeter-like tools will be used to invite the public to participate, notably by:
      o Inviting participants to vote for the questions they would like the panellists to reflect on, from a (short) list of questions shown at the beginning of the discussion;
      o Inviting participants to identify crucial topics they would like the panellists to reflect on, e.g.: “From your experience, what is the most challenging aspect of making HRIAs effective?”

    Organizers

    EU Agency for Fundamental Rights

    • Elise Lassus, European Union Agency for Fundamental Rights, Europe, EU Agency
    • Emil Lindblad Kernell, Danish Institute for Human Rights, Europe/International, Human Rights National Institution
    Speakers

    • Emil Lindblad Kernell, Danish Institute for Human Rights, Europe, Human Rights
    • Lorna McGregor, Professor of International Human Rights Law, University of Essex, Europe
    • K, digital rights researcher
    • Alessandro Mantelero, Associate Professor of Private Law and Law & Technology at the Polytechnic University of Turin, Europe
    • Etienne Maury, Legal and policy officer, CNIL, Europe (tbc)

    Onsite Moderator

    Elise Lassus, European Union Agency for Fundamental Rights

    Online Moderator

    Cathrine Bloch Veiberg, Danish Institute for Human Rights

    Rapporteur

    David Reichel, European Union Agency for Fundamental Rights

    SDGs

    5.b
    9.1
    10.2
    10.3
    16.10
    16.3
    16.6
    16.7
    16.b

    Targets: Depending on the objective of AI systems (to develop medical diagnostic tools, to support environmental impact assessments, etc.), virtually all SDGs could benefit from, or be challenged by, AI systems. As a result, ensuring safe and human rights-compliant AI systems indirectly supports the achievement of many SDGs.

    DIHR analysis shows that more than 90% of the SDG targets are directly linked to human rights norms and standards. https://www.humanrights.dk/sites/humanrights.dk/files/media/dokumenter/sdg/folders/sdg-folder_2030agenda.pdf

    There is, however, a direct link with SDGs 10 and 16, as appropriate human rights impact assessments of AI systems will support the detection and prevention of potential bias and discrimination, and will ensure effective accountability for all individuals whose rights might be impacted by the use of AI.

    Key Takeaways

    AI impacts all fundamental rights: AI impact assessments are too often seen as little more than an abstract, one-time box-ticking exercise mainly focused on the most salient harms. It is crucial to move beyond data protection impact assessments and consider the scale and seriousness of the impact of AI on individuals’ rights and freedoms.

    Context is key: we must rethink and adapt existing impact assessment tools not only in relation to the type of technology involved, but also in relation to its particular purpose and to the social and political context in which it operates.

    Call to Action

    More research and expertise are needed. Data protection authorities have an important role to play, and much can be done to strengthen their mandates, capacities, and expertise. However, a diversity of stakeholders should be called upon to contribute, to ensure that all fundamental rights, across all the different applications of AI, are protected.

    Companies developing and using AI technologies also have a responsibility to ensure that the rights and freedoms of individuals are respected. Independently of existing national or international regulation, companies should carry out human rights impact assessments in a transparent manner and at regular intervals.

    Session Report


    Artificial intelligence technology impacts human rights, but how wide is the impact and how should it be assessed? To complement or replace specific tasks otherwise performed by humans, these technologies require the collection, storage, and processing of large quantities of personal data, which can, without due safeguards, negatively impact the right to privacy. However, since AI systems are increasingly used in different sectors of society, AI can also affect many other rights, such as the rights to health and education and the freedoms of movement, peaceful assembly, association, and expression. Additionally, it is important to note that AI systems change over time and can be used for purposes very different from those initially intended. Moreover, the impact on human rights also depends on the local context in which AI technology is deployed. The risk is higher, for instance, where the political context is unstable or where the legal framework governing AI does not ensure a sufficient degree of fairness, transparency, and accountability. How should human rights impact assessments (HRIAs) be conducted, and who should be responsible?

    Panellists agreed that HRIAs are too often still seen as little more than an abstract, one-time compliance exercise, narrowly focused on the most salient harms. Data protection authorities have an important role to play, and much can be done to strengthen their mandates, capacities, and expertise. At the same time, panellists also agreed that we should not expect them to be the only entities playing this role. Lorna McGregor emphasized that in data protection impact assessments, fundamental rights and freedoms are usually reduced to privacy, making such assessments narrower than they were intended to be. While human rights should thus be understood more broadly, several panellists emphasized the need to focus on the risks associated with the specific context and the area or industry in which each AI system will be developed and deployed. Emil Kernell noted that although companies are increasingly pushed to carry out HRIAs, the specific context and the different ways in which their products will be used are not always sufficiently taken into account. For example, the development of a smart city in Montreal will raise certain issues that will be absent in other cities. K. further argued that companies have a duty to protect their customers’ data and rights, especially where the legal framework and the authorities regulating data protection are non-existent or too weak. According to the same panellist, a company’s decision to enter a new, unregulated market must be informed by an understanding of the country’s history and political situation.

    Context-specific impact assessments that go beyond privacy concerns are not the only challenge that public authorities, NGOs, and companies face. E. Kernell questioned, for instance, the value of HRIAs carried out at a very early stage of an AI system’s development, since they will be too abstract. It is not sufficient to carry out impact assessments at the initial stage of development of a new technology; they should also be conducted on a regular basis once more information about its actual impact is available. Alessandro Mantelero argued that legislators need to define, with some degree of detail, what risk is and how it should be measured: risk cannot be a matter of opinion or feeling, but should be demonstrated according to objective, quantifiable, and transparent criteria. In response to a question from the audience, A. Mantelero explained that we should not expect to find an impact assessment model that can be successfully applied to all circumstances and AI technologies, including the model used in the pharmaceutical industry, since AI is highly contextual: each product can have multiple purposes and pose different types of risk, and it involves more stakeholders.