Session
Organizer 1: Cath Corinne, Oxford Internet Institute
Organizer 2: Vidushi Marda, ARTICLE 19
Speaker 1: Bernard Shen, Private Sector, Western European and Others Group (WEOG)
Speaker 2: Levesque Maroussia, Government, Western European and Others Group (WEOG)
Speaker 3: Vidushi Marda, Civil Society, Asia-Pacific Group
Cath Corinne, Civil Society, Western European and Others Group (WEOG)
Vidushi Marda, Civil Society, Asia-Pacific Group
Cath Corinne, Civil Society, Western European and Others Group (WEOG)
Round Table - U-shape - 90 Min
How can AI systems best be governed? What are the promises and perils of ethical councils and frameworks for AI governance? What possible frameworks could guide AI governance, like those based on Fairness, Accountability and Transparency (FAT) or human rights approaches? What role should ethics, technical audits, impact assessments or regulatory-based approaches play?
GOAL 5: Gender Equality
GOAL 10: Reduced Inequalities
GOAL 12: Responsible Production and Consumption
GOAL 17: Partnerships for the Goals
Description: “They ignore long-term risks, gloss over difficult problems (“explainability”) with rhetoric, violate elementary principles of rationality and pretend to know things that nobody really knows.” Professor Metzinger, member of the European Commission’s High-Level Expert Group on Artificial Intelligence

This was the scathing critique Professor Metzinger gave in April 2019 of the report of the European Commission’s High-Level Expert Group on Artificial Intelligence (EU-HLEG), which he helped draft.

The debate on AI governance and ethics is disproportionately influenced by industry initiatives and corporate aims [1]. Even though a variety of actors are developing ethical frameworks, concerns from civil society and academia struggle to gain industry support and, even in multi-stakeholder settings, are easily diluted [2]. For instance, during the deliberations of the EU-HLEG [3], some non-negotiable ethical principles originally articulated in the document were omitted from the final version because of industry pressure [4].

Civil society is not always invited to partake in deliberations around ethical AI, and when it is, the division of seats at the table is not equitable. In India, for instance, an AI task force to create a policy and legal framework for the deployment of AI technologies was constituted without any civil society participation [5]. In the EU-HLEG, industry was heavily represented, but civil society did not enjoy the same luxury [6]. In the United Kingdom, the Prime Minister’s Office for AI has three expert advisers: one academic and two industry representatives [7]. A recently disbanded AI ethics council set up by Google included no civil society representatives.

Such ethics frameworks and councils are often presented as an alternative or a preamble to regulation. In practice, however, they regularly serve to stave off regulation under the guise of encouraging innovation. Many ethical frameworks are fuzzy, lack a shared understanding, and are easy to co-opt. By publishing ethical principles and constituting ethics boards, companies and governments can create the illusion of taking the societal impact of AI systems seriously, even when that is not the case. This kind of rubber-stamping is enabled in particular by the lack of precision around ethical standards. When such initiatives lack accountability mechanisms or binding outcomes, they are little more than “ethics washing” [8]. Yet, when done right, such self-regulatory initiatives can play an important role as one facet of robust AI governance.

In this roundtable we will do three things: first, we will discuss the recent surge in ethical frameworks and self-regulatory councils for AI governance. Second, we will discuss their promises and pitfalls. Third, we will discuss other strategies and frameworks, including those based on human rights law, as viable alternatives and additions to ethical frameworks for AI governance.

The agenda is as follows:
00:00 - 00:05: short scene setting by the moderator
00:05 - 00:45: four panellists provide their take on the issue, representing industry, government, civil society and academic perspectives
00:45 - 01:00: panellists engage in discussion with each other, guided by the moderator
01:00 - 01:25: panellists engage with the audience, guided by the moderator
01:25 - 01:30: moderator summarizes best practices from panellists and audience, and rounds off the conversation by suggesting next steps for AI governance
References:
[1] https://tech.newstatesman.com/guest-opinion/regulating-artificial-intel…
[2] https://royalsocietypublishing.org/doi/full/10.1098/rsta.2018.0080
[3] European Commission 2018. High-Level Group on Artificial Intelligence. https://ec.europa.eu/digital-single-market/en/high-level-group-artifici…
[4] http://www.europarl.europa.eu/streaming/?event=20190319-1500-SPECIAL-SE…
[5] https://www.aitf.org.in/members
[6] http://www.europarl.europa.eu/streaming/?event=20190319-1500-SPECIAL-SE…
[7] https://tech.newstatesman.com/business/demis-hassabis-office-ai-adviser
[8] https://www.privacylab.at/wp-content/uploads/2018/07/Ben_Wagner_Ethics-…
Expected Outcomes:
- Cross-industry and cross-stakeholder dialogue on how to govern AI systems
- Rough consensus on the modes and methods for effective AI governance
- Concrete suggestions for alternative frameworks for AI governance
- Identification of best and worst practices surrounding ethical frameworks and councils for AI governance
- Creation of a network of like-minded knowledge experts on AI governance
We intend to make this an inclusive conversation, both among the panellists and between the panellists and the audience, online and offline. This will be done by creating ample time for interaction and by using the hashtag #IGFAIEthics during the panel, so that the audience can engage with the ongoing discussion of the promises and perils of ethics and AI governance. We will also specifically ask the audience to share their experiences with AI governance, to bring a wider diversity of views into the conversation. Online participation will be facilitated as mentioned: we intend to use the IGF’s WebEx system, Twitter and Mastodon to include remote participants in the discussion. Remote participants will be afforded equal and proportional representation in the discussion. The remote moderator will facilitate the Q&A together with the onsite moderator. We would like a screen in the room to display video questions, remote comments, and tweets.
Relevance to Theme: Questions of data governance are tied to the use of Artificial Intelligence (AI), and in particular Machine Learning (ML), systems. These systems are set up to look for patterns in large datasets and to optimize towards certain goals. Recent research has indicated that such pattern-recognition and optimization efforts can have detrimental effects on human rights. For example, when applied in social media content moderation filters, these systems have been found to take down legitimate content; when used by banks, to unjustly deny loans to communities of colour; when used in criminal justice, to unnecessarily prolong jail sentences for historically disadvantaged groups; and when used by HR recruiters, to deny women job opportunities. This dynamic is further complicated by the fact that many large datasets are obtained through state surveillance and through the biggest technology companies, the latter having a tenuous relationship with user consent for third-party use of data. Any discussion of data governance must include consideration of how to regulate the systems by which such data is analysed and applied, which is what this panel aims to do by focusing on AI governance.
Relevance to Internet Governance: AI systems play an increasingly important role in Internet governance: not only in how data governance within web applications takes shape, but also in the use of AI by social media companies to moderate content, by search engines to steer information queries, and by dating apps to make a perfect match. AI is also increasingly used for the management of the Internet’s infrastructure. Internet routing, the forwarding of Internet packets across different networks, is but one example of where AI systems are used; another is network management by network operators. Hence, the use of AI systems has a direct impact on both the topology and the governance of the Internet, making the development of strong normative frameworks for its application important for Internet users and designers across the stack.
We intend to include remote participants via the official online participation tool, as outlined under section 16a.
Proposed Additional Tools: Twitter and Mastodon, using a dedicated hashtag; please see section 16a.
Report
This session was geared towards generating a critical review of the current policy trend of conducting AI governance by means of self-regulatory ethics frameworks. The session reviewed existing case studies of such governance approaches in terms of their promises and perils, and also aimed to articulate alternative frameworks for AI governance, based on data protection and human rights law.
The workshop critically considered the following three questions:
1.) How can AI systems best be governed?
2.) What are the promises and perils of ethical councils and frameworks for AI governance?
3.) What possible frameworks could guide AI governance, like those based on Fairness, Accountability and Transparency (FAT) or human rights approaches?
Two themes recurred in the discussion:
1. When discussing AI governance, it is important to consider both law and ethics, rather than creating a false dichotomy between the two.
2. Context is crucial for assessing the impact of AI, yet current AI ethics efforts struggle to account for it structurally.
All panellists broadly concurred on these themes, yet their opinions diverged on a number of other issues. Interesting to note was the disagreement on the role of AI/ML systems in society-critical processes. The industry representative stated that it was important to contend with the current use of AI systems. The civil society and academic participants stressed that the focus on AI ethics frameworks skips the crucial question of whether ML/AI systems should be used at all for certain society-critical processes, and questioned the inevitability of AI/ML systems' use.
The Q&A raised a number of further points. Participants emphasized the importance of accountability, which is often seen as lacking in ethics frameworks; stressed the way in which AI/ML systems cement current societal power dynamics; and highlighted the importance of bringing an intersectional lens to discussions about the impact of AI/ML systems.
The policy recommendations arising from this session were:
- Think about ways to include context in defining AI impacts, for instance through community feedback mechanisms and iterative software development
- Consider legal and ethical frameworks as complementary, but focus on ensuring accountability
- Think about the broader ramifications of structuring society according to the logics of AI systems, which are often focused on optimization and efficiency rather than compassion and accountability
- Bring an intersectional lens to the discussion about AI/ML systems' impact
n/a
see above
Attendance: 120 people
Gender representation: 50/50
see above comments on intersectionality