Session
Other - 60 Min
Format description: Town Hall, flexible seating.
Artificial intelligence systems have long been regarded as black boxes, in the sense that their outputs are frequently hard for human beings to explain. Much has been discussed about technical tools to make these systems more transparent, and about regulatory solutions to make the provision of information about them legally enforceable. Nevertheless, few discussions seem to have focused on how to include affected communities, especially vulnerable ones, in the debate on how to better understand AI systems. How should one collect feedback from individuals, especially from minority groups, about whether they are able to understand the systems they deal with and, from that, contest the harms they suffer? How can regulators, civil society and companies be involved in promoting this goal?

To think of practical ways to provide information on AI systems to affected groups from both a technical and a policy perspective, we invited panellists with experience in digital literacy and AI transparency applied to vulnerable communities, including ethnic minorities, children, persons with disabilities, the elderly and others. To that end, we will revisit general questions on AI transparency: (1) what information about a given system is needed to achieve such an outcome; (2) who should have access to it; (3) how we can ensure that it is indeed effective in enabling the accountability of the actors responsible for a system's development and deployment; and (4) whether we should go beyond mere technicalities and face the business choices and environmental impacts that surround these systems in order to decide on their social acceptability. These questions are aimed at conceiving ways to put communities in control of the algorithmic systems that affect them through an understanding of their effects.

In this sense, the panel expects to address these questions by inviting policy, law and computer science experts to debate which tools we have to promote AI transparency and how to make them useful in effectively promoting accountability. As an outcome, the session hopes to provide participants with practical means to promote AI transparency for these groups, both by involving those affected by AI in the loop of making these systems understandable and by designing policies that effectively achieve this goal. The panel will mark the launch of a publication by iRights.Lab that traces a comparative perspective between policy approaches to AI transparency in Brazil and Europe. The session is a partnership between iRights.Lab and the Laboratory of Public Policy and Internet - LAPIN.
The goal of this segment is for participants, working in groups, to bring potential policy and technical solutions to the problems they see in their own communities with regard to understanding the impacts of AI systems. Each group will have a rapporteur to share its outcomes afterwards. Online, the online moderator will be responsible for guiding the groups while moving from one room to the other. The same will happen in person, with the offline moderator available to answer any questions that arise. With everyone back in the main room, the moderator will invite each speaker to comment on the participants' ideas for a maximum of five minutes, in order to map potential solutions to the issues raised. Afterwards, a final Q&A session will take place.
iRights.Lab / Laboratório de Políticas Públicas e Internet - LAPIN
José Renato Laranjeira de Pereira (iRights.Lab, Think Tank, Europe)
Cynthia Picolo (Laboratório de Políticas Públicas e Internet - LAPIN, Civil Society, Latin America)
Yen-Chia Hsu (University of Amsterdam, Academia, Asia-Pacific)
Nina da Hora (CyberBrics, Civil Society, Latin America)
Abeba Birhane (Professor, University of Dublin, Ireland)
Prateek Sibal (UNESCO, International Organisation, Asia-Pacific)
José Renato Laranjeira de Pereira (Laboratório de Políticas Públicas e Internet - LAPIN/iRights.Lab; Civil Society, Brazil)
Mariel Sousa (iRights.Lab; Think Tank, Germany)
Cynthia Picolo (Laboratório de Políticas Públicas e Internet - LAPIN; Civil Society; Brazil)
Targets: AI systems have been deployed in critical contexts, including the provision of social benefits, public security, access to information and many others. When these systems present discriminatory biases, they can severely affect vulnerable communities' access to information and social care. Hence, as this proposal aims to promote individuals' participation and representation by fostering the debate on AI transparency and literacy for vulnerable communities, it matches SDGs 16.7, 16.10 and 16.b.