IGF 2025 WS #111 Geopolitical Barriers to AI Development

    Organizer 1: Civil Society, Asia-Pacific Group
    Organizer 2: Civil Society, Western European and Others Group (WEOG)
    Speaker 1: Peixi XU, Civil Society, Asia-Pacific Group
    Speaker 2: Yik Chan Chin, Civil Society, Asia-Pacific Group
    Speaker 3: Brenden Kuerbis, Civil Society, Western European and Others Group (WEOG)
    Format
    Roundtable
    Duration (minutes): 90
    Format description: The session focuses on a specific digital governance topic, i.e., geopolitical barriers to AI development, with speakers who are diverse in perspective, gender, region, and stakeholder group. The topic is focused but highly complex, and will require in-depth, interactive deliberation and debate among the participating experts; a roundtable format best supports this exchange.
    Policy Question(s)
    1) Is the development of AI applications a zero-sum game in which the winner takes all, or a cooperative game in which the general public can benefit from globally coordinated activity?
    2) Are current competitive policy measures by the U.S. and China retarding or advancing the science and technology of machine learning, and are they improving or restricting access to AI-based services?
    3) What alternatives to a geopolitical AI race are feasible?
    What will participants gain from attending this session? Participants can directly question the rationales for international cooperation or confrontation on AI. They will also learn from engaged experts how export controls and other policy measures affect the development of AI applications. The session will create opportunities for citizens of the adversarial nations to see the issue from the other side's perspective. It will also help AI companies and users affected by the adversarial relationship understand how to position themselves in these controversies.
    Description:

    The Paris “AI Action Summit” in February 2025 recognized “the need for inclusive multistakeholder dialogues and cooperation on AI governance.” However, the Summit also made it clear that many governments, especially the U.S. and China, see machine learning application development as a competitive “race.” The “AI race” concept leads to policies that attempt to handicap other countries and limit cooperation and trade, through export controls, restrictions on market access, limits on knowledge sharing, and similar measures.

    One example is the Biden administration’s AI Diffusion Rule, which caps the export of essential American AI components to many fast-growing and strategically vital markets. One leading AI company, Microsoft, has argued that these rules could be counterproductive: the rule “imposes quantitative limits on the ability of American tech companies to build and expand AI datacenters” in countries such as Switzerland, Poland, Greece, Singapore, India, Indonesia, Israel, the UAE, and Saudi Arabia. Similarly, China’s DeepSeek application is being blocked in many countries over fears that the data it collects will undermine national security. Both the U.S. and China are also imposing restrictions on foreign investment in AI technologies.

    This panel examines how geopolitical competition affects efforts to develop machine learning applications, and explores how better cooperation and trade can contribute to the accessible development and proper governance of the technology. It focuses on the U.S. and China, the two countries with the leading industries and scientists in AI development, but also includes perspectives from the UK as a leading country in AI security governance. The panel will assess the impact of geopolitical competition on research, development, and trade in the digital goods and services that contribute to artificial intelligence.
    Expected Outcomes
    1) Deliberate on and identify the rationales for international cooperation or confrontation on AI.
    2) Deliberate on and identify the export controls and other policy measures that affect the development of AI applications.
    3) Deliberate on and identify the security risks of global AI cooperation and market development.
    4) Develop a framework for a collaborative, multistakeholder response (not only governments), including the technical community, civil society, the private sector, and other specialists, to provide a meaningful platform for tackling geopolitical barriers to AI development.
    5) Produce policy recommendations and a key messages report for the UN, regional and national IGF communities, and other relevant epistemic communities.
    Hybrid Format:
    1) The workshop will have an onsite moderator and an online moderator. Both moderators will ensure that all speakers and attendees have an equal opportunity to speak, raise questions, and engage in each part of the workshop.
    2) The online and onsite moderators will open the session by giving participants an overview of the policy questions to be discussed. The moderators will then invite each speaker to express their views on a set of questions and guide the debate among speakers. Next, the moderators will invite questions from the onsite audience and online participants; the question time will last about 30 minutes to allow sufficient interaction among the speakers, the audience, and online participants.
    3) The online moderator will take the online training course provided by the IGF Secretariat's technical team to ensure the online participation tool is used properly and smoothly during the proposed session.