IGF 2025 WS #123 Responsible AI in Security: Governance, Risks and Innovation

    Organizer 1: Intergovernmental Organization, Asia-Pacific Group
    Organizer 2: Private Sector, Asia-Pacific Group
    Speaker 1: Alexi Drew, Intergovernmental Organization, Western European and Others Group (WEOG)
    Speaker 2: Jamal Hassan, Government, African Group
    Speaker 3: Jingjie He, Civil Society, Asia-Pacific Group
    Format
    Roundtable
    Duration (minutes): 60
    Format description: It is proposed that this session will include the following elements:
    - 10 minutes of presentation by UNIDIR on its work related to multistakeholder engagement on AI in international peace and security, including defence. Prior to this, UNIDIR will use online deliberation tools to collect the audience's views on this issue.
    - 25 minutes of dialogue between the speakers, notably building on the responses shared by the audience through the online deliberation tools.
    - 20 minutes of dialogue with the audience, allowing the organizers and speakers to also learn from the IGF community.
    - 5 minutes of wrap-up, with each speaker sharing one key takeaway.
    The duration provides sufficient room for insights from the speakers and an interactive dialogue with the IGF community. The roundtable setting reflects the need for inclusivity and the importance of cross-sectoral and cross-regional engagement, and will enable a smooth dialogue.
    Policy Question(s)
    1. How can international legal frameworks and governance mechanisms effectively regulate AI in security and military contexts? Where do gaps and uncertainties remain?
    2. What are the primary risks associated with the deployment of AI in security applications, and how can responsible innovation and multilateral cooperation help mitigate them?
    3. How can multistakeholder initiatives like RAISE contribute to bridging the AI governance gap between civilian and security applications, fostering global norms and best practices?
    What will participants gain from attending this session?
    - A deeper understanding of real-world applications of AI in international peace and security, including opportunities and challenges.
    - Insights into uncertainties in existing international law and governance mechanisms related to AI in security, and how they can be addressed.
    - Best practices and policy recommendations for mitigating AI-related risks in security and military applications.
    - Strategies for multistakeholder cooperation to ensure AI transparency, accountability, and ethics in national security decision-making.
    Description:

    The rapid advancement of artificial intelligence (AI) in international peace and security is reshaping national, regional and global security landscapes. Yet, while the discussion on AI ethics and governance in civilian contexts has gained momentum, the security domain lags behind, with limited multilateral engagement on the responsible development and deployment of AI in international peace and security, including defence. Under the IGF 2025 sub-theme of [Building] Sustainable and Responsible Innovation, this session will explore how AI is being deployed in security settings, and the emerging and applicable norms, policies, and legal frameworks aimed at mitigating risks while ensuring accountability.

    This session is organized by the Roundtable for AI, Security, and Ethics (RAISE), a UNIDIR-led initiative dedicated to fostering dialogue and cooperation on AI governance in international peace and security. RAISE brings together leading developers, academic experts, civil society, international organizations, and policymakers, transcending geopolitical divides, to identify risks associated with AI in security, support national, regional and multilateral AI governance, and promote AI’s role in strengthening global security.

    In collaboration with the Chinese Academy of Social Sciences, the International Committee of the Red Cross (ICRC), the Kenyan Ministry of Defense, and Microsoft, this workshop will examine real-world responsible applications of AI in security. Panelists will assess the intersection of AI innovation, international law, and multilateral governance, highlighting opportunities to align AI development with global security norms, prevent misuse, and enhance international cooperation.

    The discussion will build on the inaugural Global Conference on AI, Security and Ethics, hosted in Geneva in March 2025 by UNIDIR, where approximately 500 policymakers, industry leaders, and civil society experts addressed AI’s security implications. Drawing on these diverse perspectives, this IGF workshop will provide participants with concrete insights on shaping responsible AI governance in security, ensuring that AI innovation contributes to stability rather than exacerbating risks and inequities.
    Expected Outcomes
    - Heightened awareness of AI’s role in security and the critical need for responsible innovation.
    - Actionable policy recommendations for international organizations, governments, and industry stakeholders to strengthen AI governance in security applications.
    - Stronger multistakeholder collaboration through RAISE, enabling continued dialogue between policymakers, academia, industry, and civil society to shape AI norms, standards, and regulatory frameworks.
    - A session output that will contribute to broader international discussions on AI governance, including within the UN, and support the development of future regulatory frameworks for responsible AI in security settings.
    Hybrid Format:
    1. The use of online deliberative tools at the beginning of the session will ensure that the voices of online participants are well taken into account and will shape the conversation.
    2. The onsite and online moderators will liaise in real time to ensure that online and onsite participants receive equal consideration.
    3. One third of the session will be dedicated to an interactive dialogue between the speakers and both onsite and online participants.