Session
Other - 60 Min
Format description: Town Hall, flexible seating.
Internet Governance Forum 2022 Addis Ababa, Ethiopia
Panel title: “Beyond the opacity excuse: experiences of AI transparency with affected communities”
Date: Dec 2, 2022
Time: 3 pm UTC+3
The moderators
● Mariel Sousa (Policy Advisor, iRights.Lab, Germany)
● José Renato Laranjeira de Pereira (German Chancellor Fellow, iRights.Lab & Co-Founder, Laboratory of Public Policy and Internet/LAPIN, Brazil)
The panellists
● Nina da Hora (Brazil, Thoughtworks): https://www.linkedin.com/in/ninadahora/
○ Nina da Hora is a 27-year-old "scientist in the making", as she describes herself, and an anti-racist hacker. She holds a BA in Computer Science from PUC-Rio and researches justice and ethics in AI. She is also a columnist for MIT Technology Review Brazil and a member of the Security Advisory Board of TikTok Brasil and of the transparency board for the 2022 Brazilian elections created by the Superior Electoral Court. She recently joined Thoughtworks as a Domain Specialist working on responsible technologies for Brazilian industry.
● Yen-Chia Hsu (Taiwan, University of Amsterdam): http://yenchiah.me/
○ Yen-Chia Hsu is an assistant professor in the MultiX group at the Informatics Institute, University of Amsterdam. His research focuses on how technology can support citizen participation, public engagement, citizen science, and community empowerment. Specifically, he co-designs, implements, deploys, and evaluates interactive AI and visual analytics systems that empower communities, especially in addressing environmental and social issues. He received his PhD in Robotics in 2018 from the Robotics Institute at Carnegie Mellon University (CMU), where he conducted research on using technology to empower local communities in tackling air pollution. He received his Master's degree in tangible interaction design in 2012 from the School of Architecture at CMU, where he studied and built prototypes of interactive robots and wearable devices. Before CMU, he earned a dual Bachelor's degree in architecture and computer science in 2010 from National Cheng Kung University, Taiwan. More information can be found on his website (http://yenchiah.me/).
● Abeba Birhane (Ethiopia, University College Dublin): http://www.abebabirhane.com/
○ Abeba Birhane is a cognitive scientist researching human behaviour, social systems, and responsible and ethical Artificial Intelligence (AI). Her interdisciplinary research explores broad themes in embodied cognitive science, machine learning, complexity science, and theories of decoloniality. Her work includes audits of computational models and large-scale datasets. Birhane is a Senior Fellow in Trustworthy AI at the Mozilla Foundation and an Adjunct Assistant Professor at the School of Computer Science, University College Dublin.
The Panel
● Introduction (Mariel & José / iRights.Lab & LAPIN):
Many have highlighted the importance of transparency in AI systems for empowering affected communities and enhancing accountability, yet there is still a long way to go before this principle becomes practical and effective. The idea of this panel is thus to look at how transparency can be put into practice with regard to the communities affected by these systems. How should they be informed? What information is necessary and sufficient? Does this depend on the technology?
To advance this debate, we invited one cognitive scientist and two computer scientists to share their perspectives on the extent to which transparency in AI systems is meaningful for tackling and avoiding negative outcomes for society and the environment.
In her interventions, Abeba Birhane drew on her experience auditing AI systems, especially those built on large datasets. While she stressed the importance of improving transparency mechanisms as a means of helping people better understand the impacts of these systems, she argued that transparency should not be seen as the sole solution to the negative outcomes they produce. It is fundamental, however, to address the power asymmetries these technologies perpetuate. She also turned to the example of face recognition and, agreeing with Nina da Hora, said that she could not perceive any positive outcome of its deployment, as it has in numerous cases helped intensify racism. In this sense, transparency, even where feasible, is meaningless if individuals have no means to act on and exert control over these systems; it must be accompanied by justice.
One important step in thinking about AI transparency, however, is to go beyond the notion that we need to focus strictly on understanding the inner workings of systems, their technicalities.
Instead, it is crucial that we also have sufficient information about who develops, implements, and makes broader decisions about AI systems, in order to understand the political, ethical, and financial interests driving the creation and adoption of these technologies. She also highlighted that one way to better inform the public about key stakeholders and the data used to create AI systems is to make this information accessible as open data or open source. This is especially important for public data.
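To make concrete what this kind of dataset transparency can enable, below is a minimal, purely illustrative sketch (in Python) of a basic check over an openly documented image-caption dataset. The file path, column names ("label", "caption"), and flagged-term list are assumptions introduced for the example; this is not a description of Birhane's actual audit methodology.

```python
# Illustrative sketch of a simple dataset check that open, documented data
# makes possible: count label frequencies and flag captions for review.
from collections import Counter
import csv

# Hypothetical list of terms to review; a real audit would define its own.
FLAGGED_TERMS = {"term_to_review_1", "term_to_review_2"}

def audit_captions(path: str) -> dict:
    """Summarise label distribution and captions containing flagged terms."""
    label_counts = Counter()
    flagged_rows = 0
    total = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # assumes columns "label" and "caption"
            total += 1
            label_counts[row["label"]] += 1
            caption = row["caption"].lower()
            if any(term in caption for term in FLAGGED_TERMS):
                flagged_rows += 1
    return {
        "total_rows": total,
        "label_distribution": dict(label_counts),
        "rows_with_flagged_terms": flagged_rows,
    }

if __name__ == "__main__":
    print(audit_captions("dataset.csv"))  # hypothetical file name
```

Such a summary only becomes possible when the underlying data and its provenance are published; this is why the panel stressed open data, especially for public datasets.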
Yen-Chia Hsu, in turn, described his experience at the intersection of design and computer science. His work focuses on understanding how technology interacts with people, and his lab maintains close connections with local communities. He builds systems in close contact with communities in order to understand their needs and contexts.
When asked what transparency looked like in his projects, he described a recent research project to develop a computer vision system for assessing air pollution, in which he held monthly meetings with communities from the affected region. There they discussed the system and also created videos to bring the issue to the attention of the authorities. In this case, transparency was mainly about keeping people in the loop so that they could understand the system, and about gathering their views on how to increase its effectiveness.
Nina da Hora shared her interest in the ethics of AI and in how algorithms perpetuate colonialist practices. She recounted how, in her first project at a startup in Rio de Janeiro, she encountered systems with many flaws in voice recognition and facial analysis of Black people, which sparked her interest in AI ethics. In her most recent project, she assessed how face recognition systems have been affecting Black communities in Brazil, who are subject to a much higher rate of false positives, leading to unjust detainments.
With regard to transparency, she noted that there are many different ways, depending on the system and its application, to provide information about its functioning and deployment.
With that in mind, she argued that transparency needs to be provided in such a way that the general public, and especially the groups most affected by these systems, including Black and Indigenous communities, can understand their impact and are given tools to interact with them, share experiences, and give feedback.
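As a purely illustrative complement to the false-positive disparities mentioned above, the sketch below shows how a false positive rate could be computed per demographic group. The function, group names, and numbers are invented for the example; they do not come from the panel or from any real audit.

```python
# Illustrative sketch: false positive rate per group for a matching system,
# i.e. the share of true non-matches that the system wrongly flags as matches.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_match, true_match) tuples."""
    fp = defaultdict(int)   # wrongly flagged as a match
    neg = defaultdict(int)  # all true non-matches
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Toy data: invented numbers purely to show the computation.
sample = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]
print(false_positive_rates(sample))  # e.g. {'group_a': 0.33, 'group_b': 0.67}
```

Reporting such group-wise error rates is one concrete form the transparency discussed by the panellists could take, since it lets affected communities see how a system performs for them specifically.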
Key takeaways:
Based on the panellists' considerations above, we conclude that transparency in AI systems can be part of the solution, but it is not a silver bullet. More important than making the models themselves transparent is to operationalise the provision of information about these technologies through (i) accessible information for affected communities on how AI impacts them; (ii) inclusion of underrepresented groups, such as Black and Indigenous people, in the debates; (iii) building trust between citizens and those who build AI models; and (iv) ensuring that public data is made available as open data.
Next steps:
Following the panel, we see the need to further deepen the discussion on the practical implications of AI transparency for specific communities. One opportunity to do so could be a follow-up panel bringing together representatives of affected communities, scholars, and technicians involved in the design of AI systems. Valuable outcomes of such an event could include an assessment of the similarities and differences in affected communities' needs for, and challenges in accessing, information about AI systems. Recommendations to policy advisors and technicians on communities' requirements for transparent information and on opportunities to engage with AI systems would also be positive results.
iRights.Lab/ Laboratório de Políticas Públicas e Internet - LAPIN
● José Renato Laranjeira de Pereira (iRights.Lab, Think Tank, Europe)
● Cynthia Picolo (Laboratório de Políticas Públicas e Internet - LAPIN, Civil Society, Latin America)
● Yen-Chia Hsu (University of Amsterdam, Academia, Asia-Pacific)
● Nina da Hora (CyberBrics, Civil Society, Latin America)
● Abeba Birhane (Professor, University College Dublin, Ireland)
● Prateek Sibal (UNESCO, International Organisation, Asia-Pacific)
José Renato Laranjeira de Pereira (Laboratório de Políticas Públicas e Internet - LAPIN/iRights.Lab; Civil Society, Brazil)
Mariel Sousa (iRights.Lab; Think Tank, Germany)
Cynthia Picolo (Laboratório de Políticas Públicas e Internet - LAPIN; Civil Society; Brazil)
Targets: AI systems have been deployed in critical contexts, including the provision of social benefits, public security, access to information, and many others. When these systems exhibit discriminatory biases, they can significantly affect vulnerable communities' access to information and social care. Hence, as this proposal aims to promote the participation and representation of individuals by fostering the debate on AI transparency and literacy for vulnerable communities, it matches SDG targets 16.7, 16.10 and 16.b.