Session
Roundtable
Duration (minutes): 90
Format description: A 90-minute roundtable is instrumental in enabling meaningful interaction with the large number of stakeholders participating in the session. The onsite moderator will set the scene and engage with local stakeholders while introducing the speakers and subject matter experts, who will then engage in a roundtable conversation. The session will focus in particular on global majority perspectives, bringing together a diverse group of speakers from different geographies and different stakeholder groups. The session will be divided into two parts; after each part, the onsite and online moderators will open the discussion to the audience and facilitate the conversation. At the end, each speaker will be given an opportunity to summarize the major takeaways from the discussion.
This is a joint session of the Data and AI Governance Coalition (DAIG) and the Dynamic Coalition on Data and Trust (DC-DT). The session will launch the DAIG Annual Report on "AI from the Global Majority", focusing on the rapidly evolving landscape of data and Artificial Intelligence (AI) governance and fostering the inclusion of global majority perspectives. Acknowledging that data and AI governance must encompass a heterogeneous spectrum of viewpoints and experiences, the session will offer a platform for representatives of stakeholder groups to share their insights, concerns, and proposed strategies. From gender to ethnicity, nationality to socioeconomic status, the session strives to amplify the voices often underrepresented in discussions of data and AI governance. Participants will foster a holistic approach to AI governance, exploring a variety of issues that are particularly relevant to the inclusion of global majority perspectives, such as Equitable Development and Access, Data Privacy and Security, Transparency and Accountability, Regulatory Frameworks, and Cultural Implications. The following questions will guide the debate:
1) How can AI governance frameworks ensure equitable access to, and promote the development of, AI technologies for the global majority?
2) How can data privacy and security be effectively safeguarded, including by fostering the collective protection of rights with regard to personal data processing by AI systems?
3) What regulatory strategies can promote and ensure meaningful transparency in AI decision-making processes and hold stakeholders accountable for their actions?
4) What regulatory frameworks and enforcement mechanisms can serve as examples and inspire policymaking processes in the majority world?
5) What strategies could be used, and what good practices exist, to foster the inclusion and understanding of cultural sensitivities and to promote diversity and respectful engagement with global majority communities in the development and implementation of AI technologies and AI governance processes?
6) What mechanisms are most effective for ensuring that global majority needs are reflected in the technology standards that define industry best practice?
1) How will you facilitate interaction between onsite and online speakers and attendees? We will leverage a hybrid event platform that provides real-time communication channels. For onsite attendees, virtual participants and their questions and comments will be projected onto the screen so that both groups can engage with each other. Additionally, a moderated Zoom chat will allow online participants to interact with onsite speakers and vice versa.
2) How will you design the session to ensure the best possible experience for online and onsite participants? The session will be designed with both online and onsite participants in mind and structured with interactive segments, such as Q&As and debates, that cater to both groups.
3) Please note any complementary online tools/platforms you plan to use to increase participation and interaction during the session. We plan to use a shared online document so that participants can contribute their thoughts in a common digital space. We will also use social media platforms for pre-session and post-session engagement, such as Twitter/X for live updates.
- Luca Belli, Center for Technology and Society at FGV Law School, Brazil
- Bianca Kremer, CGI.br and Center for Technology and Society at FGV Law School, Brazil
- Regina Filipová Fuchsová, Industry Relations Manager, EURid, Czech Republic
Setting the scene
Luca Belli, Professor at FGV Law School, Rio de Janeiro
First segment with in-person speakers (5 MINUTES EACH)
- Ahmad Bhinder, Policy Innovation Director, Digital Cooperation Organization
- Ansgar Koene, EY Global AI Ethics and Regulatory Leader
- Melody Musoni, Policy Officer at ECDPM, former Data Protection Advisor of the South African Development Community Secretariat
- Bianca Kremer, Board Member CGI.br, Researcher and Visiting Professor at CTS-FGV
- Liu Zijing, PhD candidate at Guanghua Law School of Zhejiang University; and Ying Lin, PhD researcher at Cyber and Data Security Lab (CDSL) of Vrije Universiteit Brussel
- Rodrigo Rosa Gameiro, Researcher at the Massachusetts Institute of Technology; and Catherine Bielick, MD, MS (Data Science), Researcher at MIT and Instructor at Harvard Medical School
Second segment: Brief and punchy online interventions (3 MINUTES EACH)
First slot: Regional approaches to AI
- Sizwe Snail ka Mtuze, Adjunct Professor, Nelson Mandela University, Visiting Professor at FGV Law School Rio de Janeiro
- Stefanie Efstathiou, EURid Youth Committee Member; PhD candidate on AI and Arbitration, LMU Munich
- Yonah Welker, EU Commission projects, MIT, Former Tech Envoy EU/MENA, Ministry of AI
- Ekaterina Martynova, Ph.D. candidate, Lecturer at the School of International Law of Higher School of Economics, Moscow.
- Rocco Saverino, LSTS, Vrije Universiteit Brussel (VUB).
Second slot: Social challenges of AI
- Rachel Leach, Research Assistant and Undergraduate Student at the University of Virginia.
- Avantika Tewari, PhD Candidate at the Centre for Comparative Politics and Political Theory, Jawaharlal Nehru University, New Delhi
- Amrita Sengupta, Research and Program Lead, Centre for Internet and Society.
Third slot: Global Majority facing AI
- Elise Racine, MPA, MSc, Doctoral Researcher at the University of Oxford; BlueDot Impact AI Governance Fellow
- Hellina Hailu Nigatu, PhD Student at UC Berkeley
- Isha Suri, Research Lead at the Centre for Internet and Society, India
- Guangyu Qiao-Franco, PhD. Assistant Professor of International Relations, Radboud University, the Netherlands; Senior Researcher of the ERC-funded AutoNorms Project.
Feedback from participants: 10 MINUTES
Luca Belli, Center for Technology and Society at FGV Law School, Brazil
Regina Filipová Fuchsová, Industry Relations Manager, EURid, Czech Republic
3. Good Health and Well-Being
4. Quality Education
5. Gender Equality
8. Decent Work and Economic Growth
9. Industry, Innovation and Infrastructure
10. Reduced Inequalities
16. Peace, Justice and Strong Institutions
17. Partnerships for the Goals
Targets: The ways in which data and AI governance are framed have a direct impact on a wide range of SDGs, including 4. Quality Education, 5. Gender Equality, 8. Decent Work and Economic Growth, 9. Industry, Innovation and Infrastructure, 10. Reduced Inequalities, 16. Peace, Justice and Strong Institutions, and 17. Partnerships for the Goals.
Report
Participants agreed that the discussion on AI from the global majority is key and must be further explored.
Participants highlighted that AI and data intensive technologies can have considerable impact on the full enjoyment of human rights, on democracy and on the rule of law, and can affect cybersecurity, safety, equity, and non-discrimination.
Participants agreed to develop further work, building on the previous outcome reports of the DAIG coalition, to connect the various initiatives explored.
Given the short period of time before the next IGF, participants agreed to elaborate a shorter outcome document in 2025, connecting the previously explored debates on AI sovereignty and AI from the global majority.