Time
    Thursday, 1st December, 2022 (10:50 UTC) - Thursday, 1st December, 2022 (12:20 UTC)
    Room
    Banquet Hall B

    Organizer 1: Cynthia Picolo de Azevedo Carvalho, Laboratory of Public Policy and Internet (LAPIN)
    Organizer 2: Alexandra Krastins Lopes, Brazilian Data Protection Authority (ANPD)
    Organizer 3: Gabriela Buarque, Laboratory of Public Policy and Internet (LAPIN)

    Speaker 1: Thiago Moraes, Government, Latin American and Caribbean Group (GRULAC)
    Speaker 2: Juan Carlos Lara G., Civil Society, Latin American and Caribbean Group (GRULAC)
    Speaker 3: Smriti Parsheera, Civil Society, Asia-Pacific Group
    Speaker 4: Wayne Wei Wang, Technical Community, Asia-Pacific Group
    Speaker 5: Bobina Zulfa, Civil Society, African Group

    Moderator

    Cynthia Picolo de Azevedo Carvalho, Civil Society, Latin American and Caribbean Group (GRULAC)

    Online Moderator

    Alexandra Krastins Lopes, Technical Community, Latin American and Caribbean Group (GRULAC)

    Rapporteur

    Gabriela Buarque, Civil Society, Latin American and Caribbean Group (GRULAC)

    Format

    Other - 90 Min
    Format description: The aim of this proposal is to explore how regulatory frameworks for AI have been shaped in the Global South and to what extent they align with UNESCO’s Recommendation on the Ethics of Artificial Intelligence. We believe a hybrid session with an expository presentation (panel) followed by a round-table discussion will suit these objectives. First, speakers will be invited to briefly present the most important AI regulatory initiatives from their countries (5 min. each). Their initial remarks will be based on questions provided in advance by the moderator. After that, we will move to a round-table format, in which the moderator will prompt the panelists to provide information that helps answer the three policy questions. Speakers may respond to one another’s remarks to further investigate an issue, add an observation or raise a question on a specific topic. We will also have a Q&A round with questions from the onsite/online audience. A debate with an expository part followed by a more active discussion will make this session both informative and participatory. For more details on the session dynamics, please check the topic “Ensuring Implementation of a Hybrid Session”.

    Policy Question(s)

    The policy questions our proposal intends to address are: 1. What steps have the selected States taken in creating a regulatory framework for AI? Are they taking diversity and multistakeholderism into account? 2. What oversight and enforcement mechanisms are being structured? Are they centralised, polycentric, or fully diffused? 3. To what extent are the initiatives aligned with the UNESCO Recommendation on the Ethics of Artificial Intelligence?

    Connection with previous Messages: The proposed session will explore how AI is being regulated in the Global South. The analysis of regulatory frameworks will consider inclusion, especially of marginalized groups, as well as oversight and enforcement mechanisms. All these elements will be put into the perspective of what UNESCO recommends for ethical AI. Thus, the session is directly related to the IGF 2021 message on Economic and Social Inclusion and Human Rights.

    SDGs

    10.2
    16.10
    16.3
    16.6
    16.7
    16.b
    Targets: The objective of the proposed session is more than exploring regulatory frameworks for AI. It intends to reinforce the importance of diversity and multistakeholderism in these processes, so that the development and use of AI is truly human-centered. Thus, several voices must be heard. Connected to this is the question of oversight and enforcement mechanisms. Ensuring that AI actors follow principles and rules, and are held accountable for the damage they cause, is crucial for upholding the rights of affected individuals, who are often members of marginalized groups. We will draw attention to these factors during our session, linking them to international recommendations already endorsed by the selected States. Advocating for a transparent, inclusive regulatory process that provides mechanisms to ensure accountability is necessary (SDG 16), so that AI serves the common good, without discrimination of any kind (SDG 10).

    Description:

    The session intends to critically analyze and map convergences and divergences in initiatives taken to regulate AI in the Global South. To this end, the discussion will focus on countries such as Brazil, Chile, India, Nigeria, and China, as they were identified as having robust regulatory mechanisms either under implementation or under active debate. Considering that a regulatory framework does not necessarily rely only on laws, we will first explore what sort of initiatives the selected States have advanced. Is there any legislation, or are there bills, policies and/or national strategies seeking to establish rules or recommendations for the development and use of artificial intelligence? If so, what are their main features? Perhaps even more importantly, how are these countries defining their AI oversight and enforcement regimes? These guiding questions will set the stage for exploring particularities of such initiatives in terms of inclusion, respect for people’s rights and adherence to international commitments. From there, we will investigate whether and to what extent different actors have been involved in regulatory discussions. The investigation will be based on criteria such as: (i) diversity (race, gender, geographic area, expertise); (ii) multistakeholderism (vulnerable groups, civil society, academia, public and private sectors); and (iii) avenues that allow stakeholders to be heard (public hearings, working groups, local debates, etc.). Finally, the previous assessments will be compared with UNESCO’s Recommendation on the Ethics of Artificial Intelligence. This final evaluation will provide insights, from a broader perspective, on transposing (or not) official endorsements made in the international arena to the national level.

    Expected Outcomes

    With this session, we expect to provide the audience with a big picture of Global South regulatory frameworks and their respective oversight and enforcement mechanisms, moving from their formal structure to how their processes have incorporated people’s rights and international standards. Moreover, we intend to identify trends among the selected States, so that a reflection on their roots can be made. The reflections from this debate will serve as inputs for a comprehensive research report, which shall be released together with an interview with one or more of the speakers. In addition, we intend to prepare a workshop on trustworthy AI for Brazilian stakeholders involved in AI regulation, sharing the findings and lessons learned from other experiences.

    Hybrid Format: Preparatory meetings between the LAPIN team and the speakers will take place before the event. This will help structure the presentation and create synergy among participants. That occasion will also serve to clarify doubts about the session and IGF rules. Moreover, at least one month before the event we will release content on social media about the topic. This 'warm-up' will serve to spark the audience's interest, curiosity and questions. From that moment, we will receive questions and observations that will be passed on to the panelists. On the day of the session, we will organize different forms of engagement to accommodate both onsite and online formats. First, remote speakers’ participation will be projected on a screen. Second, LAPIN will have a dedicated person who will serve as a point of contact between online and onsite participants and provide assistance in case of technical problems. This will help ensure the session runs as smoothly as possible. We plan a 1h30 session, which will consist of: (i) 5 minutes to introduce the topic and present the speakers; (ii) 25 minutes for initial presentations by the speakers (5 minutes each); (iii) 55 minutes of dynamic discussion between speakers and audience, guided by the moderator. Questions will come alternately from the moderator and the online/onsite audience. Online questions can be sent through the IGF platform’s chat, LAPIN’s social media or Jamboard. We will take note of unaddressed questions and forward them to the panelists, whose answers will be shared on our social media. (iv) 5 minutes to sum up the points covered and close the panel.

    Online Participation

    Usage of IGF Official Tool.

    Key Takeaways

    The AI ethical framework in the Global South relies on both hard and soft law. Countries like Brazil, Chile and China are further along in developing hard law, while in India and across Africa the soft-law approach predominates. In any case, AI is closely tied to development and innovation agendas, and regulations need to consider ethical guidelines, human rights, diversity and multistakeholderism.

    Call to Action

    Government: be more transparent and inclusive, considering the most vulnerable groups in the debate. Civil society: keep strengthening underrepresented voices and raising issues related to the impact of AI use and development on human rights.

    Session Report

    The moderators Cynthia and Alexandra opened the panel by introducing the regulatory context of artificial intelligence. They then presented the panelists and explained the dynamics and objectives of the workshop, which, in short, was intended to explore how the regulatory landscape for AI has been developing in the Global South.

     

    The panel's initial question was asked by moderator Alexandra: “What steps has your State taken in creating a regulatory framework for AI? Are there any legislation, bills, policies and/or national strategies seeking to establish rules or recommendations for the development and use of artificial intelligence? If so, what are their main features?”

    The first panelist to respond was Smriti Parsheera, from India, representing civil society. She is a Fellow with the CyberBRICS Project at Fundação Getúlio Vargas.

    Smriti responded that the focus of discussion in India has been on promotion, innovation and capacity building. She also mentioned liability and regulation, noting, however, that these have not been the main focus. She argued that the main processes are not legislative and binding but rather soft-law mechanisms. She also mentioned that government committees have been created to look at privacy and security aspects. She cited the 2018 “AI for All” document, which sets out principles for responsible artificial intelligence, and stressed that people are already beginning to talk about the need for risk-based regulation aligned with principles such as those enshrined by UNESCO. Despite the existence of discussions and proposals for regulation, she stated that the focus is still on compliance and self-regulation, with a long way to go before binding legislation emerges in India.

     

    The next panelist was Wayne Wei Wang, from China, representing the technical community, affiliated with the University of Hong Kong and Fundação Getúlio Vargas.

     

    Wayne argued that China has a governance model for AI and data protection, mentioning that Oxford held a conference called “The Race to Regulate AI” in mid-2022, where three approaches to AI regulation were discussed. He pointed out that regulation is often not AI-specific and can be applied together with data protection legislation. He argued that Chinese regulation encourages the large population to participate in the digital transformation, citing, for example, the Made in China 2025 plan and the Internet Plus initiative, both launched in 2015. In 2017, a formal document, the New Generation AI Development Plan, emerged, defining the commercialization of AI as a market goal. And in the last two years, China has established a government AI committee that, although centralized, allows multistakeholder participation. Wayne summarized that China regulates AI through hard-law mechanisms, such as trade and data protection legislation, as well as soft law, mentioning national incentives and strategies. He ended by mentioning that China has introduced specific provisions governing internet information services.

     

    Next, panelist Thiago Moraes, from Brazil, representing government through the Brazilian Data Protection Authority, replied that Brazil has a national AI strategy (2020) and a bill in progress, also mentioning the importance of OECD guidelines in this process.

     

    He mentioned that the national strategy is based on the horizontal axes of legislation, regulation and ethical use; AI governance; and international aspects, in addition to six vertical axes with related themes. In the legislative field, he mentioned Bill 21/20, for which a Commission of Jurists with 18 specialists was created to prepare a substitute text based on debate with public hearings and a multisectoral approach, in order to reflect the socioeconomic reality of Brazil. In the field of supervision and governance, he argued that the idea of multiple authorities coordinated by a central authority is a possible proposal for Brazil.

     

    The panel continued with Bobina Zulfa, from Uganda, representing civil society through Pollicy.

     

    The panelist pointed out that she would give an overview of what is being discussed in Africa as a continent, since unfortunately in Uganda there is still not much regulation on the subject. She mentioned that in Africa few countries have written about the subject and that progress in this field is still slow. For example, only about six countries have national AI strategies, and only one country, Mauritius, has AI legislation. Much of the regulation stems from data protection and soft-law discussions, such as the Malabo Convention of 2014 (which only thirteen countries have signed) and Resolution 473 of 2021, which aims to study AI and robotics in terms of benefits and risks. For now, attention is being paid to the principles being developed in other regions, in the hope that these will reach people on the African continent in a positive way. On the other hand, she mentioned that there is still a lot of opacity in these discussions, and more transparency and participation are needed.

     

    The last panelist was Juan Carlos Lara G., from Chile, representing civil society through Derechos Digitales.

    Juan Carlos pointed out that technology has been seen in his country as an opportunity for development and for participation in the global dialogue with countries at the forefront of this implementation process. Chile has a national artificial intelligence policy for the years 2021-2030 and an action plan to implement it, issued by the Ministry of Science, Technology, Knowledge and Innovation. The Chilean approach has been to assess the country's capacity and stage in the implementation of AI, in an optimistic and economic view that says very little about the boundaries of the technology, being much more focused on assessing potential than on ethical and accountability challenges. There is still an ethical gap in the discussion of risks, impacts, accountability and harms. However, recommendations are being made, and it is important to include new voices and participants in the debate, as well as to deepen it in order to understand local needs.

    Moderator Cynthia highlighted the use of hard law by some of the countries presented and soft law by others, as well as the need for inclusion, participation and ethical guidelines. Next, she asked Smriti and Thiago: “Are diversity and multistakeholderism taken into account in the regulatory debates, and how? (race, gender, territory, vulnerable groups, academia, civil society, private sector, public sector, specialists)”

    Smriti responded that diversity and multistakeholderism can be analyzed at various levels. The first level is who has a seat at the table when the discussion is taking place; the second is who participates in the debate with deliberative capacity; and the third is who is producing knowledge in this process. She argued that India has a very diverse social context, which heightens the concern with non-discrimination and bias.

    In this context, she stated that the government has involved the private sector and part of academia as stakeholders in the discussion, and that Centers of Excellence have been implemented in technical institutes around the country, where startups, entrepreneurs and academia are invited to dialogue about innovation. Furthermore, the National AI Portal is being developed as a result of government collaboration with the industrial sector, and aims to be an institutional repository on AI in the country. She also mentioned government committees, which include people from academia and industry. However, she concluded that the discussion is still not open to all who represent the diversity of academic perspectives and that the participation of civil society is critical, as it has been little heard. Therefore, it is necessary to improve the transparency and participation of the process.

    Moderator Cynthia emphasized that this gap is a point in common with Brazil, given the difficulty of including some groups in debates that are dominated by the private sector. She highlighted the relevance of the discussion for ensuring the participation of affected vulnerable groups who are not included and who need space for deliberation.

    Thiago recalled the importance of multistakeholderism in Brazil, as in the case of the “Comitê Gestor da Internet”, in which all participants must be taken into account, and in the drafting processes of the “Marco Civil da Internet” and the General Data Protection Law. He highlighted the challenge posed by Brazil's great diversity and noted that indigenous peoples are still little heard, despite being an extremely important part of the country. Thiago pointed out that in Brazil there is an effort towards racial and gender diversity, but many challenges remain. He noted that Bill 21/20 is an interesting experience because it was proposed in 2020, during the pandemic, when the discussion could not be deepened, so the private sector took on a lot of prominence in the debate. Only in 2021 was the Commission of Jurists proposed, which broadened the range of voices involved.

    Next, a question came from the on-site audience about the Chinese case: how has oversight been carried out, considering the legislation on algorithmic transparency and recommendation systems? The civil society representatives were also asked about experiences of participation and diversity.

    Responding to the latter question, Juan Carlos said it is important to highlight public consultations that rely not only on individual responses and external experts, but also invite people from civil society, who do not necessarily have technical knowledge, to contribute and create together. On the other hand, he noted that participation took place on digital platforms that lacked accessibility and were not offered in other languages, including indigenous ones. Furthermore, 70% of the people who answered the consultation were men. Processes to overcome such inequalities are therefore still lacking.

    Wayne, in turn, replied that in China there is the Cyberspace Administration of China (CAC), which adopts a routine of supervisory activities called the Clean Cyberspace Campaign. Other supervisory authorities, such as the Companion-Life Enforcement Activities and the Ministry of Industry and Information Technology, also adopt this type of campaign. These authorities examine, for example, applications in terms of data protection, security, etc. China has also released a guideline on algorithm registration systems.

    Another question was asked by the on-site audience: are there specific examples of how the government has engaged target groups in discussions?

    Bobina responded that she is not aware of specific legislation in Africa, but civil society groups and academia, along with bodies such as the African Commission on Human and Peoples’ Rights, have tried to broaden the debate at initial levels.

    Juan Carlos, for his part, responded that Chile has the example of its national cybersecurity policy for the years 2015-2021, for which some groups focused on these issues were heard. It was still a restricted initiative, but one guided by the government. He added that such initiatives are not just up to the government but can also be promoted by civil society. This collaboration can come from promoting training, petitions or invitations to participate in procedures, and it is fundamental to think of ways to collaborate and also to cultivate academic knowledge.

    Smriti pointed out that technology policy must be transparent with civil society.

    Next, the remote audience asked panelist Thiago about transparency and how it works in terms of consultation and engagement.

    Thiago replied that in Brazil there is an effort towards transparency and that there are challenges, but also some examples of what might work. He cited, for example, the Access to Information Law, which is about ten years old and can help with these transparency challenges. This legislation deals with the collection of, access to and requests for information held by the government. He pointed out that it may conflict with data protection in some cases, but there is still optimism about how it functions. The central question is what degree of transparency is achieved, combined with limited technical capacity and financial resources, which are usually concentrated in the private sector, a difficulty that other countries may face as well.

    Another question came from the on-site audience, this time asking Bobina how the debate on facial recognition in public safety has been conducted and what the main concerns related to the topic are.

    Bobina replied that facial recognition technology for public safety purposes has already been used on the African continent, in Zimbabwe, Uganda and elsewhere, and that such mechanisms have ended up serving as instruments of mass surveillance, with many researchers emphasizing the harm resulting from this use.

    Juan Carlos said that the issue touches on the public interest and that in Latin America, regardless of any regulatory debate, facial recognition in public safety is inadequate from a fundamental rights perspective and is frequently being challenged in the courts. He also argued that the systems are neither technically robust nor legally authorized, and that this has been happening in cities like Santiago and São Paulo.

    The last question came from the on-site audience and asked what would need to be done and discussed in the future regarding the AI regulatory framework in the Global South.

    Wayne replied that one of the biggest challenges to be resolved, and one that has shaped the Chinese discussion, is the paradox between transparency and commercial secrecy, which makes it difficult to assess accuracy while protecting secrets. He also mentioned the co-regulation model and the expansion of stakeholder participation. Finally, he raised the point of “ownership” of data and algorithms, which have been commercialized in China.

    Thiago replied that there are many steps to be taken and that it is necessary to think about intelligent regulation that can actually be applied. He mentioned that hard law often fails to keep pace with the development of innovation, so it is necessary to think of alternatives. He also spoke about forms of partnership between sectors that can foster dialogue and that have been promoted over the last five years, such as hackathons, innovation hubs and regulatory sandboxes. All of them have specific characteristics and give public administrators the opportunity to engage with the regulated field. But there is still a need to think about designs for transparency, diversity and amplifying voices.

    Moderator Cynthia concluded by pointing out that the use and development of artificial intelligence is usually accompanied by the rhetoric of innovation, but it is also necessary to talk about the risks and impacts on fundamental rights. It is a challenge, and the panel ended by raising several questions and reflections. Finally, the use of facial recognition in public safety was mentioned, regarded as a discriminatory measure that lacks safeguards and harms vulnerable groups in Brazil.

    The panel ended with a reflection on the balance between the protection of rights and innovation. Moderator Cynthia reminded the audience that they can send questions through the institutional contact of the Laboratory of Public Policy and Internet, and moderator Alexandra closed the panel.

    GENDER INFORMATION - In the online audience, about 7 women were present, apart from the moderator and the rapporteur. In the on-site audience, two women were present, apart from the moderator. About 7 men were present in the online audience and about 8 men in the on-site audience.