Session
Panel - Auditorium - 60 Min
The diffusion of artificial intelligence (AI) tools in the daily life of many societies urges regulations that embrace the technology's global character, with a lens for inclusion and diversity, so that technology works as a tool for achieving the Sustainable Development Goals (SDGs). In view of the benefits and harms of AI, it is imperative to focus on propositional debates in order to enhance the positive aspects of the technology and diminish its destructive potential through regulation. This panel follows on a major global endeavor by the proponents and will bring multiple perspectives and regulatory backgrounds together to debate the relationship between inclusion and AI regulation. The aim is to leverage globally diverse viewpoints and practical experience, thereby contributing to regulatory efforts that foster inclusion and diversity in AI technologies. The main questions to be addressed on the panel are: What are the salient concerns and drivers of the AI governance discourse related to inclusion and diversity in your region? Are the main stakeholders participating in the debate on AI regulation? Are their aspirations contemplated? Are there key actors currently outside the process who should be included in regulation and governance? What do you think other regions can learn from the initiatives and responses from your region? How do you see (and hope to see) the discourse developing in your region in the coming years?
The session will follow a two-part methodology: a first, thought-provoking part and a second, interactive one. In the first part, the dynamic will be an exchange among the panelists, focusing on the proposed questions from their regional perspectives. In the second part, the floor will be opened to the audience, and individuals will be able to bring forward their perceptions regarding the future of inclusive AI regulation. Throughout the session, a digital mural will allow people to present their views on inclusive AI regulation. The moderator and the rapporteur will be in charge of cataloging the perceptions and insights noted, starting with the four speakers and moving on to the audience. By the end, we should have a word cloud and a map of perceptions.
BI Norwegian Business School
Christian Perrone, Head Law and GovTech, ITS Rio, CSO, South America
Janaina Costa, Senior Researcher Law and GovTech, ITS Rio, CSO, South America
Christoph Lutz, Researcher, Nordic Center for Internet and Society, BI Norwegian Business School, Academia, Europe
Celina Bottino, ITS Rio, CSO, South America
Samson Esayas, Nordic Center for Internet and Society, BI Norwegian Business School, Academia, Europe
Sandra Cortesi, Berkman Klein Center for Internet and Society, CSO, North America
Shaun Pather, University of the Western Cape, Professor, Academia, South Africa
Christian Perrone, Head Law and GovTech, ITS Rio, CSO, South America
Christoph Lutz, Researcher, Nordic Center for Internet and Society, BI Norwegian Business School, Academia, Europe
Janaina Costa, Senior Researcher Law and GovTech, ITS Rio, CSO, South America
8.10
8.3
8.5
8.6
8.8
8.a
8.b
9. Industry, Innovation and Infrastructure
10. Reduced Inequalities
16.7
17.16
17.17
17.18
17.19
17.6
17.7
17.8
17.9
Targets: AI can enable the accomplishment of various targets across all SDGs, but it may also inhibit many others. In this sense, the inclusion and diversity lenses that we will bring to this session are an important aspect of this analysis that should not be overlooked. In this panel, we will discuss recent results of research and practices assessing the potential of smart algorithms, image recognition, reinforcement learning and data-driven approaches to produce or reproduce discrimination and bias against women and minorities, which is directly linked to SDGs 5, 8 and 16. In addition to the lack of diversity in datasets, we expect to touch on another major issue of AI: the lack of gender, geographical, racial, and ethnic diversity in the AI workforce. Diversity should be one of the main principles supporting innovation and societal resilience, which are essential to achieving SDGs 8, 9, 10 and 16 through AI design, development and implementation. Finally, by bringing experts from the four corners of the world to the table, we hope to shed light on how to strengthen global cooperation regarding SDGs 16 and 17, in the sense that AI is a global technology whose opportunities and impacts are international.
Report
There is still a participation gap in the AI regulation debate. Participation by different stakeholders has increased of late, yet particularly significant groups such as youth and vulnerable groups seem to be neither at the forefront of the discussion nor adequately represented.
We need more involvement of youth perspectives in AI governance.
Within multi-stakeholder approaches to AI governance, civil society perspectives must be more strongly included.
The town hall session #55, Inclusive AI regulation: perspectives from four continents, brought together perspectives from four continents to reflect on the status quo of AI governance and specifically on the question of inclusion. After a short introduction to the session, four presenters briefly summarized the state of AI governance in their respective contexts, speaking for about 8 minutes each. The presenters were encouraged to set the stage so as to leave space for meaningful debate in the second half of the session, where questions from the audience were answered openly and deliberatively. In the following, the input presentations are summarized before the discussions are reported.
Celina Bottino (Project Director at the Institute for Technology & Society of Rio de Janeiro, Brazil)
Developments in the field of AI are enormous worldwide, but significant regulatory challenges remain. In Brazil, advances in policy have been reflected in different instruments, from national AI strategies to more than 20 proposed AI bills.
Yet, there are still complex infrastructure hurdles that have not been overcome. Celina Bottino noted the example of a project that the Institute for Technology and Society of Rio de Janeiro developed in partnership with the Public Defenders of Rio de Janeiro: an AI sandbox for exploring health issues litigated in the judiciary. The project has the funding, the access to technology and the right minds, yet it still lacks structured, machine-readable, quality data.
The advances in AI regulation mostly do not address this infrastructure gap; data governance and open data have yet to rise to the top of the list of priorities. She mentioned the most recent bill presented in the Senate, which resulted from an inclusive process in which specialists from a wide variety of fields could take part in an open consultation, and even there this matter was far from a significant element.
Samson Esayas (Associate Professor at BI Norwegian Business School, Norway)
He noted that although he was representing the European perspective, he is originally from Ethiopia, where IGF 2022 was taking place. He comes from the north of the country (Tigray), where there have been significant communication gaps, and he called attention to the fact that inclusion means communities such as his own being taken seriously and taking part in regulatory processes as well.
As for the European regulatory process, in his view there are four main drivers of the discussion:
- The first driver is the protection of fundamental rights, particularly privacy, freedom of expression, freedom from discrimination and the protection of vulnerable groups;
- The second driver is protection of the integrity of elections and against disinformation and other systemic dangers;
- The third driver is accountability and allocation of liability; and,
- The fourth driver is data control and access. This is a concern similar to the one noted by Celina Bottino regarding Brazil and the Global South, as Europe also seeks to address data quality and asymmetries in access to data.
Samson Esayas noted that one of the main purposes of the AI regulation is to foster an environment of human-centric AI development. Thus, Europe has identified specific use cases that pose heightened risks to humans, and the regulation addresses those cases.
Additionally, he noted that the EU is developing other regulations to complement the main AI regulation on issues it does not directly address. The example he mentioned was platform regulation, which focuses on the vulnerabilities of platform workers.
Shaun Pather (Professor at the University of the Western Cape, South Africa)
He mentioned the digital divide that still exists in the world and pointed to recent ITU studies, whose facts and figures show that 2.7 billion people are still offline. Additionally, he highlighted that the affordability and cost of connectivity services add to the digital gap.
Another significant shortcoming concerns skills. A study of the state of AI in Africa shows that:
- There was no dedicated AI legislation on the continent;
- There were only 4 national AI strategies; and
- Data protection regulation is one of the most developed issues, particularly in terms of personal data processed for automated decision-making.
Moreover, there is an important question of whether populations and groups are involved or represented in the data collection for the AI tools that may be offered to them. Exclusion from datasets may lead to ineffective AI or to unequal and even potentially discriminatory results.
As a solution, there should be an integrated effort to involve more of the population and foster greater participation. There should be more coordinated responses, and both the projects and the frameworks should involve broader international participation.
In terms of ethical principles and responses, even if ethics may have a local component, we should strive to find more universal frameworks that still accommodate regional and local realities and policies. More research, especially on practical tools and mechanisms for checking software at all stages, should be developed to support more inclusive AI.
Sandra Cortesi (Fellow at the Berkman Klein Center for Internet & Society at Harvard University, USA, Senior Research and Teaching Associate at the University of Zurich, Switzerland)
Reflecting on the state of AI governance in North America, and the United States in particular, Sandra Cortesi showed that many conversations are taking place to develop a diverse range of norms for governing AI. Thus, in terms of stakeholder inclusion, a wide variety of participants are involved in the development of AI issues. Significant initiatives are emerging in municipalities, such as facial recognition limitations and bans, and in standards organizations, such as the ethical standards of the Institute of Electrical and Electronics Engineers (IEEE).
There is no national AI law in the US. The governance approach is not comprehensive but sector-specific, delegated to agencies such as the Food and Drug Administration for medical AI.
There are still participation gaps. One example is the participation of young people, who seem not to be part of the wider AI governance agenda. UNICEF, for instance, has documented how rarely youth are mentioned and represented in the debate.
Questions from the audience and responses from the presenters
How can we deal with the different ethical considerations at the regional level?
Shaun Pather: There should still be a more universal framework, even if we allow more regional and continent-wide responses to be developed.
Celina Bottino: Conversations are being led by Global North countries; other countries and regions should be called upon to participate.
What would be the practical approach for AI regulation and AI-based spaces like the metaverse to ensure public safety, security, health, data sovereignty and accountability of global AI actors, given that different societies have different ethical and legal frameworks? And what should be done about AI-related cybercrimes, which are borderless?
Sandra Cortesi: There is no silver bullet. All such conversations should happen at the global level and include civil society and different actors. We may not agree on cross-border solutions, yet this does not mean we should not strive to find common ground.
Samson Esayas: Fundamental rights and concerns about safety are the focus of AI regulation. The metaverse may not be covered in the AI Act as such, yet certain important issues mentioned in or implied by the question are covered; disinformation and systemic concerns are an example.
Shaun Pather: The metaverse implies a different level of globalization than what we have today. A practical approach should be cooperation and convergence. It is possible to find consensus on universally acceptable ethical principles.
Celina Bottino: The Berkman Klein Principled AI report showcases convergence among ethics principles from different global stakeholders. These may be starting points for our endeavors.
We face questions of digital imperialism or digital colonialism stemming from huge market concentration, where users are in the Global South and developers in the Global North. Examples can be found in the processing of health and education data from the South in the North. The question is whether there is any room for a global agreement or coalition building so that AI infrastructure for education and health is developed under open standards, rather than only in closed form and for commercial purposes.
Samson Esayas: Local communities should be engaged in the first place.
Celina Bottino: UNESCO is increasingly becoming a focus for discussing such topics and may be an important forum for such discussions.
Sandra Cortesi: In such cases, the majority world should definitely be included. This may not be easy, but a lot of good work on the matter is being done worldwide. One group we should note is youth: they account for one in three internet users, yet do not have a seat in the discussion.
Shaun Pather: It is important that we band together and coordinate efforts with all continents represented.