Session
Organizer 1: Marianne Franklin, Internet Rights and Principles Coalition/Goldsmiths University
Organizer 2: Sebastian Schweda, Amnesty Tech (Germany)
Speaker 1: Markus Beeko, Civil Society, Western European and Others Group (WEOG)
Speaker 2: Renata Avila, Civil Society, Latin American and Caribbean Group (GRULAC)
Speaker 3: Katherine Getao, Government, African Group
Marianne Franklin, Civil Society, Western European and Others Group (WEOG)
Sebastian Schweda, Civil Society, Western European and Others Group (WEOG)
Minda Moreira, Civil Society, Western European and Others Group (WEOG)
Other - 90 Min
Format description: This session is based on a roundtable but as it is incorporating an audience component the room seating needs to include a front table but also seating for the audience that allows some flexibility for audience members to present their questions to the panelists. A roundtable/classroom combination if possible.
(1) What challenges does the deployment of AI as a fundamental actor in data governance bring for human rights advocates?
(2) What sorts of responses under human rights law are available when AI goes wrong? Where and how can citizens find legal redress if the accused is an algorithm?
(3) Which existing human rights instruments can support designers and regulators seeking to deploy AI for regulatory purposes, such as responding to harmful content, debates around the right to privacy/anonymity versus real-name policies for online communications, and terms of access and use by authorities and third parties handling personal data in storage and processing?
(4) What are the overall future options for human rights law and norms in the face of increased dependence on artificial, rather than human, intelligence?
(5) Are existing international human rights standards adequate to respond to the new challenges that AI brings for the future of internet design, access, use, and data management?
GOAL 4: Quality Education
GOAL 5: Gender Equality
GOAL 9: Industry, Innovation and Infrastructure
GOAL 10: Reduced Inequalities
GOAL 11: Sustainable Cities and Communities
GOAL 12: Responsible Production and Consumption
GOAL 16: Peace, Justice and Strong Institutions
GOAL 17: Partnerships for the Goals
Description: This high-level roundtable considers a range of possible questions underlying the resurgent debate about how online services, now increasingly designed on the basis of artificial intelligence capabilities that can forgo the need for human intervention, can be more clearly aligned with international human rights law. The principle that human rights exist online as they do offline (IRPC Charter 2011, UNHRC 2014, Council of Europe 2014) has gained wide consensus across stakeholder groups. R&D and recent legislation around the world have flagged rising interest among regulators, public institutions, and service providers in developing and deploying AI systems across a range of public and business services. These policies are becoming priorities on internet and data governance policy agendas at the local, national, and international level. The session, its invited speakers, and those invited to present questions to the panelists will consider the future relationship between AI and human rights law and norms in light of the question: how can current and future AI designs better comply with international human rights standards? In other words, what are the regulatory, technical, and ethical considerations for "Human Rights AI By Design"? Other questions considered may include:
- Are AI tools the best way to respond to urgent requests to take down violent video content and hate speech on social media platforms, e.g. debates around the best responses after the live streaming of the Christchurch terrorist attack?
- Who should monitor these automated tools and systems, and to whom are they accountable: governments, internet service providers, an independent oversight body, national legislatures?
- How can the use of AI to enforce copyright law be achieved in compliance with human rights standards? E.g. what are the chilling effects of mandatory upload filters for copyrighted works, given their implications for freedom of expression, education, and principles of fair use?
- How can governments and the technical community work together to ensure that the use of AI for elections, e.g. data management and personalized targeting, complies with national, regional, and international human rights standards, e.g. in the case of data-driven campaigns, digitalized health records, and educational and local government data-gathering and storage?
- AI tools and applications can enhance the lives and opportunities of persons with disabilities, support multilingual meetings, and aid in the monitoring of serious health conditions and other areas of personal well-being. How can these opportunities be safeguarded against error or misuse, e.g. in the case of mental health needs, privacy around medical care, and other sorts of care such as during pregnancy?
- How can existing human rights instruments be more fully incorporated into national (cyber)security policies based on bulk online surveillance or targeted monitoring? What compliance mechanisms need to be in place at the local, national, and international level of regulation around intelligence-gathering and law enforcement?
Expected Outcomes: The session will end with an agreed-upon action plan of 3-5 points on how to bring AI R&D for future applications closer to the legal and ethical requirements of international human rights instruments and their equivalents at the national and regional levels of governance.
This session, a 90-minute roundtable/audience debate, incorporates an innovative element by organizing the discussion along the lines of the "Question Time" format of the BBC TV show, in which invited politicians and public figures respond to (pre-organized) questions from members of the audience. These first questions will be requested from invited participants, covering the full range of geographical and stakeholder interests in this topic. The RP moderator will coordinate with the on-site moderator during the session, and the latter will ensure that full participation from the floor is included in the discussion of each question. Invited speakers at the roundtable will keep their initial and closing comments brief.
Relevance to Theme: Data governance is increasingly defined as a domain in which AI must play a formative role. The human rights implications of this commitment at the design, deployment, and regulatory level are currently framed in principle rather than in operationalizable detail. The session will consider these practical issues in order to link data governance as an AI domain with human rights law and norms in more detail.
Relevance to Internet Governance: Human rights have been confirmed as a fundamental principle of internet governance. AI and related algorithms are remapping the future of this interconnection, calling for a move from principles to operationalization, and from commitment to action. The session's co-organizers (IRPC and Amnesty) have played formative roles in bringing this to the IG agenda in order to achieve this milestone.
Please see 16a above. The RP moderator and the on-site moderator are also co-organizers and will therefore be preparing and conferring with invited audience members, who will contribute online beforehand.
Report
Based on the BBC’s “Question Time” format, in which questions prepared in advance by participants are put to the panel alongside questions from the floor and remote participants, this session explored the relationship between AI, data governance, and human rights in light of the question: what are the regulatory, technical, and ethical considerations for "Human Rights AI By Design”?
The panel was asked to:
- provide a definition of AI:
- Several were given, ranging from a narrower definition of machine learning and automated, algorithm-based decision-making to a broader definition of ‘digital intelligence’, which combines technical infrastructure and the use of data.
- list three pressing issues at stake at the intersection of AI R&D, its online deployment, and human rights law and norms:
- augmented inequalities
- democratic deficit in decision‑making and accountability, and AI manipulation
- the importance of incorporating democracy, rule of law, fundamental rights, sustainability into AI systems
- the importance of human assessment of the impact that AI may have on an individual’s fundamental rights
- the need to address development and sustainability issues associated with AI and to understand the programmes by analysing their code and observing their behaviour
Questions from participants ranged from the possibility of banning or limiting AI systems able to impact fundamental rights, data bias, and AI and privacy, to accountability, transparency, and regulation.
There was a general consensus that regulatory frameworks are needed to ensure that fundamental rights are incorporated into AI systems and that assessments are carried out to ensure that those rights are protected throughout.
The panel acknowledged that political, gender, and racial bias in data needs to be firmly addressed and outputs discussed publicly to ensure that discriminatory frameworks are not perpetuated.
The panel also agreed that accountability is crucial: while AI systems cannot be held responsible for their output, the legal persons behind them must be.
Action points and recommendations from the panel
- the need to take this discussion forward into the public debate, so that human beings collectively shape the way forward
- reengagement with democracy and the strengthening of democratic institutions: people who are interested in these issues and those who have the technical know-how need to reengage with democracy and sustain their engagement with the rule-making process
- a legal declaration that data and digital intelligence are the people’s resources, since democratic control over AI is not only possible but also the way to ensure the existence and enforcement of human rights
Around 150 people participated, roughly half of them women.
Gender bias and inequality were raised during the session, first by panellists and later in the question: “Since data is essential to machine learning, how do we measure and mitigate political, gender, and racial bias in data?”
The panel recognised that bias in data needs to be addressed. Panellists agreed that although it is not possible to regulate the input that goes into an AI system, it is possible to set standards for its output, and that public participation and discussion of these outputs will be necessary to tackle the issue.
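As a purely illustrative sketch of what a standard on outputs could look like in practice, the snippet below computes a simple demographic parity gap: the largest difference in positive-outcome rates between groups affected by an automated decision system. The function name, data, and group labels are all invented for this example and are not drawn from the session.

```python
# Hypothetical illustration only: one way to audit the *output* of an
# automated decision system, rather than its input data, is to compare
# positive-outcome rates across demographic groups.

def demographic_parity_gap(outcomes, groups):
    """Return the largest difference in positive-outcome rates between groups.

    outcomes: list of 0/1 decisions produced by the system
    groups:   list of group labels of the same length (e.g. gender, region)
    """
    rates = {}
    for decision, group in zip(outcomes, groups):
        total, positive = rates.get(group, (0, 0))
        rates[group] = (total + 1, positive + decision)
    shares = [positive / total for total, positive in rates.values()]
    return max(shares) - min(shares)

# Invented toy data: the system approves 3/4 of group "A" but only 1/4 of "B".
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(outcomes, groups))  # 0.5
```

A regulator adopting an output standard of this kind could, for instance, require that such a gap stay below an agreed threshold and that the figures be published for public discussion, in line with the panel's call for public scrutiny of outputs.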