Policy Network on Artificial Intelligence: Global AI governance & questions of interoperability, gender, race and environment: How to translate recommendations into action?

Time
Tuesday, 10th October, 2023 (23:30 UTC) - Wednesday, 11th October, 2023 (01:00 UTC)
Room
Main Hall
Description

Artificial Intelligence’s (AI) development and impact are global and transcend national boundaries and local interests. International multistakeholder dialogue and joint action are therefore needed. In the past months, the IGF’s Policy Network on AI (PNAI) has explored AI and related aspects of data governance from the Global South perspective. The resulting PNAI report delivers recommendations from the global multistakeholder community. The IGF 2023 PNAI session builds on the report and debates ways to put its recommendations into action.

Topics to be discussed include:

  • Interoperability of AI governance. Countries and regions around the world are making plans and pursuing their strategies to regulate and govern AI. PNAI has explored interoperability of AI governance at the global level. Interoperability is often understood as the ability of different systems to communicate and work seamlessly together. PNAI goes beyond the most cited examples of national and regional activities in governing and regulating AI and illustrates the most prevalent types of policies, practices, and issues internationally and in the Global South. How could the different initiatives to regulate and govern AI across the world work together?
  • AI and Gender/Race. When developed and deployed responsibly, AI systems have the potential of helping to improve gender and racial equality in our societies. AI systems biases can also reinforce or generate new ways to operationalize racism, sexism, homophobia, and transphobia in society and harm marginalized groups. What are the most efficient ways to mitigate the impact of gender and race biases in AI and data governance? What can be done at the global level to ensure that AI promotes equity and inclusion?
  • AI and Environment. Advances in AI, including the recent leaps in generative AI, show significant potential for environmental conservation. At the same time, enormous amounts of energy are needed to train and support user queries, resulting in increased greenhouse gas emissions. Without robust data governance, AI can amplify intersectional inequities, particularly for the Global South. What collaborative efforts are needed to address the complex challenges at the intersection of AI, data governance, and the environment in the Global South?

Speakers

  • José Renato Laranjeira de Pereira, Founder and Advisor, Laboratory of Public Policy and Internet (LAPIN) and Expert Member of the Brazilian AI Strategy, Civil Society, GRULAC
  • Professor Xing Li, Tsinghua University, Civil Society, APAC
  • Owen Larter, Director of Public Policy, Office of Responsible AI, Microsoft, Private Sector, WEOG
  • Dr. Sarayu Natarajan, Founder, Aapti Institute, APAC
  • Jean-Francois Bonbhel, AI and Emerging Technologies Regulatory Advisor, ARPCE Congo, Africa
  • Maikki Sipinen, Policy Network on Artificial Intelligence

Moderators

  • Onsite Moderator – Prateek Sibal, Programme Specialist, UNESCO
  • Online Moderator – Shamira Ahmed, Executive Director, Data Economy Policy Hub

Rapporteurs

  • Mx. Umut Pajaro Velasquez, ISOC Gender Standing Group
  • Ms. Rosanna Fanni
Report

Moderator - Prateek SIBAL (Programme Specialist, UNESCO)

The Policy Network on AI (PNAI) is a newly established policy network that addresses matters of AI and data governance. It originated from discussions held at the IGF 2022 in Addis Ababa and has recently released its first report.

The first report produced by the PNAI is a collaborative effort and sets out to examine various aspects of AI governance. It specifically focuses on the AI lifecycle for gender and race inclusion and outlines strategies for governing AI to ensure a just twin transition. The report takes into account different regulatory initiatives on artificial intelligence from various regions, including those from the Global South.

One of the noteworthy aspects of the PNAI is its working spirit and commitment to a multi-stakeholder approach. The working group of PNAI was formed in the true spirit of multi-stakeholderism at the IGF, and they collaborated closely to draft this first report. This approach ensures diverse perspectives and expertise are considered in shaping the policies and governance frameworks related to AI.

Panelist – Maikki SIPINEN (Policy Network on Artificial Intelligence)

The PNAI is a relatively new initiative that focuses on addressing policy matters related to AI and data governance. The network’s report, which was created with the dedication of numerous individuals, including the drafting team leaders, emphasises the significance of the IGF meetings as catalysts for new initiatives like PNAI.

One of the key arguments put forward in the report is the need to introduce AI and data governance topics in educational institutions, so that both citizens and the labour force acquire the knowledge and skills required to navigate the intricacies of AI. Another argument concerns the importance of capacity building for civil servants and policymakers in the context of AI governance. The report suggests that this aspect deserves greater attention within broader AI governance discussions, because capacity building is intrinsically linked to, and indispensable for, the successful development and implementation of responsible AI policies and practices. The integration of capacity-building recommendations throughout the report further underlines the vital role it plays in shaping AI governance.

Diversity and inclusion also feature prominently in the report's arguments. The emphasis is on the need for different types of AI expertise to work collaboratively to ensure inclusive and fair global AI governance. By bringing together individuals from diverse backgrounds, experiences, and perspectives, the report suggests that more comprehensive and equitable approaches to AI governance can be established.

In conclusion, the PNAI report is a valuable resource that highlights the significance of policy discussions on AI and data governance. It emphasises the need for AI education in educational institutions, capacity building for civil servants and policymakers, and the inclusion of diverse perspectives in AI governance discussions. These recommendations contribute to the broader objective of establishing responsible and fair global AI governance frameworks.

 

Call to Action
Build a global education initiative on AI literacy and awareness accessible to all groups of society, including children.
Enhance fairness, accountability and transparency across the AI value chain and evaluations of AI-generated content in global governance frameworks.
Session Report

Panelist – Nobuhisa NISHIGATA (Government of Japan)

Nobuhisa provided insights into the ongoing Hiroshima Process focused on generative AI. He highlighted that discussions within the G7 delegation task force centre on a code of conduct for the private sector. Flexibility and adaptability are key in global AI governance: AI is a rapidly evolving field that necessitates governance approaches able to accommodate changing circumstances and allow governments to tailor their strategies to their specific needs.

Collaboration and coordination between organisations and governments are seen as crucial in AI policymaking, skills development, and creating AI ecosystems. Nobuhisa suggests that international collaborations are needed to foster beneficial AI ecosystems and capacity building. With regard to PNAI, he suggests that the recently released report can inform G7 work by bringing in Global South perspectives, for example the report’s findings on interoperability of AI governance and how it can be implemented in each region of the world. In the case of Japan, technical standards are seen as key for alignment.

Panelist – Owen LARTER (Director of Public Policy, Office of Responsible AI, Microsoft)

The analysis highlights several noteworthy points about responsible AI development. Microsoft is committed to developing AI in a sustainable, inclusive, and globally governed manner and has established a Responsible AI Standard to guide its AI initiatives, demonstrating its commitment to ethical practices. Microsoft has also established Responsible AI Fellowships, which aim to promote diversity in tech teams and foster collaboration with individuals from various backgrounds. This focus on inclusivity and diversity helps ensure that AI systems are fair and considerate of different perspectives and needs.

Additionally, open-source AI development is highlighted as essential for understanding and safely using AI technology. Open-source platforms enable the broad distribution of AI benefits, fostering innovation and making the technology accessible to a wider audience. Microsoft, through its subsidiary GitHub, is a significant contributor to the open-source community. By embodying an open-source ethos, they promote collaboration and knowledge sharing, contributing to the responsible development and use of AI.

However, Owen emphasised the need to strike a balance between openness and safety/security in AI development. Concerns exist about the trade-off between making advanced AI models available through open-source platforms versus ensuring the safety and security of these models. Owen suggests a middle-path approach, promoting accessibility to AI technology without releasing sensitive model weights, thereby safeguarding against potential misuse.

Owen also mentioned the need for a globally coherent framework for AI governance, understanding and addressing the risks associated with AI, investing in AI infrastructure, conducting comprehensive evaluations, and developing social infrastructure. Capacity building and bridging the gap between technical and non-technical stakeholders are crucial for addressing the sociotechnical challenges posed by AI. By embracing these principles, stakeholders can ensure the responsible and ethical development, deployment, and use of AI technology.

Panelist – Xing LI (Tsinghua University)

In terms of AI governance, Xing suggests learning from internet governance, which features organisations such as IETF for technical interoperability and ICANN for names and number assignments. He also argues that allowing academics and technical groups the space to explore and experiment is crucial for advancing generative AI. Striking a balance between regulation and fostering innovation is of utmost importance.

Regarding education, Xing emphasises the need for new educational systems that can adapt to the AI age: outdated systems must be revamped to meet the demands of the digital era. Four key educational factors are identified as important in the AI age: critical thinking, fact-based reasoning, logical thinking, and global collaboration. These skills are essential for individuals to thrive in an AI-driven world. Ultimately, the establishment of a global AI-related education system, also advocated by Stanford University Professor Fei-Fei Li, is seen as a significant step akin to the creation of modern universities hundreds of years ago. It aims to equip individuals with the necessary knowledge and skills to navigate the complexities and opportunities presented by AI.

Panelist – Jean Francois ODJEBA BONBHEL (AI and Emerging Technologies Regulatory Advisor, ARPCE Congo)

Considering and mitigating potential risks while maximizing the advantages offered by AI is a key concern for Jean Francois. He stresses the need for mechanisms that hold AI systems accountable for their actions, thereby preventing misuse and unethical behavior.

Education is also emphasized as a key aspect of AI development and understanding. He cites the establishment in Congo of a specialized AI school spanning all educational levels as evidence of the importance placed on educating individuals about AI. This educational focus aims to give people a deeper understanding of AI and equip them with the skills to navigate a rapidly evolving technological landscape. A program specifically designed for children aged 6 to 17 has been implemented to develop their cognitive skills with technology and AI. The program’s focus extends beyond making children technology experts; it aims to equip them with the understanding and skills needed to thrive in a future dominated by technology.

Panelist – Sarayu NATARAJAN (Founder, Aapti Institute)

Generative AI, a powerful technology that enables easy content generation, has resulted in the widespread production and dissemination of misinformation and disinformation. This has negative effects on society, as false information can be easily created and spread through the internet and digital platforms. However, the rule of law plays a crucial role in curbing this spread of false information. Concrete legal protections are necessary to address the issue effectively.

Sarayu advocates for a context-specific, rule-of-law approach to misinformation and disinformation: addressing the problem requires understanding the specific context in which false information is generated and disseminated, and implementing legal measures accordingly. The labour-intensive task of AI labelling, crucial for the functioning of generative AI, is often outsourced to workers in the Global South. These workers primarily label data based on categories defined by Western companies, which can introduce bias and reinforce existing power imbalances. This highlights the need for greater inclusivity and diversity in AI development processes to ensure fair representation and avoid perpetuating inequalities.

Efforts are being made to develop large language models in non-mainstream languages, allowing a wider range of communities to benefit from generative AI. Smaller organizations that work within specific communities are actively involved in creating these language models, which represents a positive step towards inclusivity and accessibility in the field.

While AI developments may lead to job losses, particularly in the Global North, they also have the potential to generate new types of jobs. Careful observation of the impact of AI on employment is necessary to ensure just working conditions for workers worldwide.

Online Moderator – Shamira AHMED (Executive Director, Data Economy Policy Hub)

Shamira highlighted the importance of data governance at the intersection of AI and the environment. She advocates for a decolonial-informed approach to AI, emphasizing the need to acknowledge and rectify the historical injustices that have shaped global power dynamics around the technology. Adopting such an approach, she argues, can help address these injustices and achieve a more equitable and just AI landscape.

In addition, Shamira underscored the importance of addressing historical injustices and promoting interoperable AI governance innovations. She emphasizes the significance of a representative multi-stakeholder process to ensure that the materiality of AI is properly addressed and that all voices are heard.

Furthermore, Shamira highlights the concept of a just green digital transition, which is essential for achieving a sustainable and equitable future. This transition leverages the power of AI to drive responsible practices for the environment while also promoting economic growth and social inclusion.

Panelist – José Renato LARANJEIRA DE PEREIRA (Founder and Advisor, Laboratory of Public Policy and Internet (LAPIN) and Expert Member of the Brazilian AI Strategy)

José underlines that representatives from the Global South stress the importance of gaining a deeper understanding of movements and policies within their regions, which is crucial for fostering an inclusive approach to technology development and governance. The intricate link between labour issues and advancements in the tech industry is a further point of concern. In Brazil, for instance, there has been a rise in deaths among drivers on delivery platforms, attributed to the pressure exerted by new platforms demanding different delivery times. This highlights the need to address the adverse effects of tech advancements on workers’ well-being.

The impact of the tech industry on sustainability should also be better assessed. For instance, stakeholders are concerned about the interest shown by tech leaders in Bolivia’s minerals, particularly lithium, following political instability. This raises questions about responsible consumption and production practices within the tech industry and the environmental consequences of resource extraction.

The use of biometric systems for surveillance purposes comes under scrutiny as well. In Brazil, the analysis reveals that the criminal system's structural racism is being automated and accelerated by these technologies. This raises concerns about the potential for discriminatory practices and human rights violations resulting from the use of biometric surveillance.

José also highlights the risk of a race to the bottom driven by geopolitical competition and countries pushing their own narratives. Global collaboration and cooperation become even more important to ensure the ethical and responsible use of AI technologies. As countries from the Global South argue for the need to actively participate and push forward their interests in the governance of AI technologies, forums like BRICS and the G20 could become platforms to voice these concerns and advocate for more inclusive decision-making processes.

Comment by the summary report authors: Text generated with the support of the DigWatch Hybrid Reporting Tool: https://dig.watch/event/internet-governance-forum-2023/policy-network-o…

Key Takeaways

The Policy Network on AI report can inform G7 work by bringing in Global South perspectives. Capacity-building should enable multiple groups to engage with different types of AI technology in different contexts. Calls to action based on the session discussion:

  1. Establish a public-private global education initiative on AI literacy and awareness accessible to all groups of society, including children.

  2. Enhance accountability across the AI value chain and evaluations of AI-generated content in AI governance frameworks, including the G7 Hiroshima Process.