Session
Organizer 1: Cornelia Kutterer, Microsoft
Organizer 2: Johanna Harings, Microsoft
Speaker 1: Frederik Zuiderveen Borgesius, Radboud University
Speaker 2: Laura Galindo, Intergovernmental Organization, Western European and Others Group (WEOG)
Speaker 3: Cornelia Kutterer, Private Sector, Eastern European Group
Speaker 4: Kristian Bartholin, Council of Europe
Speaker 5: Daniel Leufer, Access Now
Alexandru Circiumaru, Private Sector, Eastern European Group
Johanna Harings, Private Sector, Western European and Others Group (WEOG)
Round Table - Circle - 90 Min
Digital policy and human rights frameworks: What is the relationship between digital policy and development and the established international frameworks for civil and political rights, as set out in the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights, and the further interpretation of these in the online context provided by various resolutions of the Human Rights Council? How do policymakers and other stakeholders effectively connect these global instruments and interpretations to national contexts? What is the role of different local, national, regional and international stakeholders in achieving digital inclusion that meets the requirements of users in all communities?
Inclusion, rights and stakeholder roles and responsibilities: What are/should be the responsibilities of governments, businesses, the technical community, civil society, the academic and research sector and community-based actors with regard to digital inclusion and respect for human rights, and what is needed for them to fulfil these in an efficient and effective manner?
- What are good practices/processes that EU and OECD governments should use when they conduct such human rights impact assessments?
- What questions should be asked and what human rights impacts should be addressed?
- What is the role of providers of AI systems with regard to public sector customers?
- How should deployers of AI systems – in particular in the public sector – conduct a human rights impact assessment for intended uses?
- How should the EU ensure meaningful, timely, and transparent multi-stakeholder participation in the human rights impact assessment?
- What can we learn from other fields or types of impact assessments by governments?
- What effective remedial mechanisms exist through which human rights impacts can be redressed?
- What are the next steps for multi-stakeholder collaboration and work on the foregoing questions?
Targets: With the European Commission’s Proposal for a Regulation laying down harmonised rules on artificial intelligence, the European Union has the opportunity and the responsibility to assess the impact of the proposed regulation on the full spectrum of human and fundamental rights. The session will address why it is important, in particular for the public sector, to conduct a human rights impact assessment for intended uses and to ensure transparency and accountability. Accordingly, one of the main aims of this panel is to contribute to the policy discussion on how to "develop effective, accountable and transparent institutions at all levels".
Description:
On 21 April 2021, the European Commission unveiled its AI regulatory proposal, which aims to preserve the EU’s technological leadership and to ensure that Europeans can benefit from new technologies developed and functioning according to Union values, fundamental rights and principles. The European Commission’s framework for AI suggests that rules should be human-centric, so that people can trust that the technology is used in a way that is safe and compliant with the law, including respect for fundamental rights. Risk management, transparency, documentation, and data quality have emerged as ways for technology companies to identify, mitigate, and remedy the potential risks and harms of AI and algorithmic systems. Considering that all AI actors should respect the rule of law, human rights and democratic values throughout the AI system lifecycle, the main questions of this session will be:
- What are the roles and opportunities for all AI actors in ensuring human-centric and value-based development and deployment of AI systems?
- Why is it important for deployers – in particular in the public sector – to conduct an impact assessment for intended uses and to ensure transparency and accountability?
The session will contribute to and feed into the negotiations on the European Commission's much-anticipated regulatory proposal on AI, released in April 2021. It is our hope that this workshop will generate interest and thus serve as the starting point for a series of follow-up events.
The workshop will be divided into specific segments, each including questions that the speakers will be asked to address. This structure will help the audience follow the discussion and grasp the speakers' remarks. We will make sure that the questions cover all the relevant stakeholder groups (i.e. civil society, academia, policymakers, industry). The moderator will strongly encourage the online audience to ask and/or submit questions. Other tools, such as polls, can also be used where technically possible.
Usage of IGF Official Tool.
Report
Risk assessments will play a central role in the development of AI governance. Although human rights impact assessments lie at the core of the risk categorization of AI technologies, Europe currently lacks a legal basis for the creation of such mechanisms. The upcoming convention of the Council of Europe’s CAHAI Committee could complement existing mechanisms, such as the Data Protection Impact Assessment under the GDPR.
By limiting responsibility to the developer, the current AI Act proposal creates the risk that AI technologies are incorrectly deployed. This lack of accountability not only limits customers’ right to redress but also risks negligence or abuse in the deployment of AI, leading to human rights infringements. Human rights impact assessments for AI should therefore apply to all stakeholders throughout the AI lifecycle.
We need a stronger multi-stakeholder dialogue to address existing gaps and emerging challenges in the regulatory framework for AI, particularly in the context of human rights.
Regulators should consider creating the necessary space for effective human rights impact assessments, carefully considering who should conduct them, what they should contain, and when they should take place.