Session
Organizer 1: Civil Society, Asia-Pacific Group
Organizer 2: Civil Society, Asia-Pacific Group
Organizer 3: Civil Society, Asia-Pacific Group
Organizer 4: Civil Society, Asia-Pacific Group
Speaker 1: Anoosha Shaigan, Civil Society, Asia-Pacific Group
Speaker 2: Yasmin Afina, Intergovernmental Organization, Asia-Pacific Group
Speaker 3: Jimena Sofia Viveros Alvarez, Intergovernmental Organization, Intergovernmental Organization
Roundtable
Duration (minutes): 60
Format description: The sensitive nature of AI in warfare demands a format that fosters open and frank discussion, and a 60-minute roundtable provides that environment.
- Face-to-Face Exchange: A roundtable places participants on an equal footing, facilitating direct dialogue and encouraging all voices to be heard; nuanced and challenging perspectives can be expressed constructively.
- Mutual Respect & Trust: By enabling direct, respectful conversation, the roundtable format fosters collaboration and maximizes the potential for finding common ground on this critical issue.
- In-Depth Exploration: A roundtable allows for deeper discussion of complex issues than a traditional panel; participants can delve into specific concerns and potential solutions, ensuring all viewpoints are considered.
1. How can international law frameworks be adapted to address the unique challenges of AI-powered weapons, ensuring both national security and adherence to ethical principles such as proportionality and civilian protection?
2. What international cooperation and collaboration are needed to ensure the responsible development and deployment of AI in the military sphere?
3. Who should be held accountable for actions taken by AI-powered weapons?
4. Can explainable AI (XAI) technologies be effectively applied to autonomous weapons systems (AWS) to ensure human oversight and understanding of how targeting decisions are made?
What will participants gain from attending this session? Participants will gain a deeper understanding of the intersection of AI, warfare, and international law. They will explore the potential benefits and risks of AI in this domain and discuss ethical frameworks for the responsible use of AI in the military, as well as ways to leverage AI for upholding the law during conflict, monitoring human rights violations, and supporting investigations. Attendees will also examine the AI laws and policies that grapple with these issues. The session will serve as a platform for shaping the future of AI in warfare.
Description:
The rapid development of Artificial Intelligence (AI) in warfare, with global military expenditure reaching $2,443 billion, has raised serious ethical concerns regarding autonomous weapons systems (AWS). While some argue that AWS increase efficiency and reduce casualties, the potential for civilian harm and the lack of human oversight remain pressing concerns. This workshop explores the paradoxical possibility that AI could also be a force for good, upholding international law and human rights during conflict. We will examine:
- AI and Adherence to Law: Can AI be programmed to understand and adhere to the complexities of international law governing warfare?
- AI for Monitoring Violations & War Crime Investigations: Can AI-powered tools be used to monitor potential human rights violations during conflict, identifying patterns and gathering evidence for investigations?
This session will bring together diverse stakeholders (governments, military, legal experts, civil society) for a multi-faceted discussion, fostering innovative approaches to:
- Responsible AI Development: Promote best practices and international collaboration for the responsible development and deployment of AI in the military sphere.
- Ethical Frameworks: Identify the legal and ethical frameworks needed to ensure transparency and accountability and to minimize the risks associated with AI in warfare.
By fostering a solution-oriented dialogue, this workshop aims to pave the way for a future where AI can serve as a tool for upholding international humanitarian law (IHL) and protecting human rights amidst the complexities of autonomous warfare.
- Improved Understanding: Increased awareness of the challenges and opportunities presented by AI in warfare, particularly regarding international law and human rights.
- Actionable Recommendations: Identification of best practices and recommendations for the responsible development, deployment, and oversight of AI in the military sphere.
- Multi-Stakeholder Engagement: Collaboration between governments, military, civil society, and the tech sector to address ethical concerns and promote responsible AI use in warfare.
The session will produce a summary report outlining key takeaways and recommendations, which will be shared with participants and relevant stakeholders. It will also act as a launchpad for advanced research in this domain, contributing to ongoing discussions on shaping the future of AI in warfare and promoting its use for positive outcomes.
Hybrid Format: To facilitate interaction between online and onsite participants, we'll leverage Zoom's features for seamless Q&A, live polls, and chat. A dedicated online moderator will monitor the chat and relay questions, ensuring all voices are heard. We'll encourage active participation from online attendees through polls and Q&A, use live captioning tools for accessibility, and record the session for later viewing by those who can't attend live.
Report
The Role of AI in Warfare: Legal, Ethical, and Governance Challenges
This discussion brought together experts from various fields to explore the complex issues surrounding the use of artificial intelligence (AI) in warfare and its implications for international law and ethics. The speakers, including representatives from the United Nations Institute for Disarmament Research (UNIDIR), the Global Commission on the Responsible Use of AI in the Military Domain, and the International Committee of the Red Cross (ICRC), addressed the challenges and responsibilities associated with AI in military applications.
International Law and AI Governance
Yasmin Afina from UNIDIR emphasized that international law should be a core component of AI governance in the military domain. She introduced UNIDIR’s RAISE program (Responsible AI in Security and Ethics) and mentioned an upcoming global conference on AI security and ethics. Afina stressed the importance of translating legal requirements into technical specifications for AI systems and advocated for a “compliance by design” approach.
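To make the idea of "compliance by design" concrete, the sketch below illustrates one generic pattern it could take: machine-checkable gates and an audit trail applied before any AI-generated recommendation is even surfaced to a human operator. This is a hypothetical illustration, not something presented at the session; the field names, categories, and thresholds are invented placeholders, and no set of coded checks can substitute for legal judgment or human responsibility.

```python
# Schematic sketch only: one hypothetical shape a "compliance by design" gate
# could take. All names, categories, and thresholds are placeholders; legal
# assessment remains a human responsibility, not a property of the code.
from dataclasses import dataclass


@dataclass
class Recommendation:
    target_category: str          # e.g. "military_objective" vs "civilian_object"
    estimated_civilian_harm: int  # the system's own estimate, itself reviewable
    model_confidence: float


def passes_design_gates(rec: Recommendation) -> tuple[bool, list[str]]:
    """Decide whether a recommendation may be shown to an operator at all,
    and log the reasons for refusal to support later review and accountability."""
    reasons = []
    if rec.target_category != "military_objective":
        reasons.append("distinction gate: category is not a military objective")
    if rec.estimated_civilian_harm > 0:
        reasons.append("precaution gate: non-zero estimated civilian harm")
    if rec.model_confidence < 0.99:
        reasons.append("confidence gate: below the review threshold")
    return (len(reasons) == 0, reasons)


ok, audit_log = passes_design_gates(
    Recommendation(target_category="civilian_object",
                   estimated_civilian_harm=3,
                   model_confidence=0.97))
print("surface to human operator:", ok)
print("\n".join(audit_log))
```

The design point, under these assumptions, is that the legal requirement is expressed as explicit, auditable constraints built in from the outset rather than checked after deployment, with every refusal logged for human review.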
Jimena Sofia Viveros Alvarez, Commissioner at the Global Commission on the Responsible Use of AI in the Military Domain, argued for a broader, coherent global AI governance framework addressing both civilian and military applications. She highlighted the shift of discussions from the Group of Governmental Experts (GGE) to the UN General Assembly and called for binding treaties aligned with international law to govern AI use in warfare.
Anoosha Shaigan, a technology lawyer with a background in human rights law, discussed specific legal issues such as liability, command responsibility, and developer liability in the context of AI in warfare. She emphasized the importance of international humanitarian law principles like distinction, proportionality, and necessity. Shaigan also mentioned the Outer Space Treaty in relation to AI-guided satellites and suggested developing an international military AI tribunal.
Ethical Considerations and Challenges
The discussion delved into several ethical challenges posed by AI in warfare. Anoosha Shaigan raised concerns about data bias and model drift in AI systems, using the example of potentially discriminatory targeting based on appearance. She also addressed the challenges posed by generative AI, deep fakes, and disinformation in military contexts.
Privacy concerns in conflict zones were addressed, with speakers noting the challenge of balancing military needs with civilian privacy rights when deploying AI technologies. The concept of explainable AI for autonomous weapons systems was introduced, emphasizing the importance of human understanding and oversight of AI decision-making processes in warfare.
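The report does not record which explainability techniques were discussed. As a hypothetical illustration of the kind of per-decision output an oversight interface could give a human reviewer, the sketch below computes simple attribution scores for a single prediction of a toy scikit-learn classifier by substituting each input with a baseline value; the feature names and data are placeholders, not anything drawn from a real system.

```python
# Minimal, illustrative sketch: baseline-substitution feature attribution for
# one decision of a toy classifier. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["sensor_a", "sensor_b", "sensor_c"]  # placeholder inputs

# Toy training data standing in for whatever a real system would use.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)


def attributions(model, x, baseline, names):
    """Score each feature by how much replacing it with a baseline value
    changes the predicted probability for this single decision."""
    p_full = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = {}
    for i, name in enumerate(names):
        x_masked = x.copy()
        x_masked[i] = baseline[i]
        p_masked = model.predict_proba(x_masked.reshape(1, -1))[0, 1]
        scores[name] = p_full - p_masked
    return p_full, scores


prob, scores = attributions(model, X[0], X.mean(axis=0), feature_names)
print(f"model confidence: {prob:.2f}")
for name, delta in sorted(scores.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {delta:+.3f}")
```

In this simplified setting, the ranked scores indicate which inputs drove the decision, which is the kind of information a human would need in order to question or override an automated recommendation.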
Accountability and Human Control
A significant point of agreement among the speakers was the necessity of maintaining human control and accountability in AI-powered warfare systems. Mohamed Sheikh-Ali from the ICRC stressed that human oversight and control are essential for weapons systems, particularly for life-and-death decisions. This view was strongly supported by other speakers, who emphasized the need for human responsibility and accountability in the use of AI in military contexts.
The discussion touched on the complex issue of liability for AI actions in warfare. Anoosha Shaigan highlighted the need to clarify who bears responsibility when AI systems make mistakes or cause harm, whether it be the operator, commander, developer, or the state itself.
Multi-stakeholder Engagement and Corporate Responsibility
Yasmin Afina underscored the importance of multi-stakeholder engagement in shaping AI governance in the military domain. This approach calls for input from industry, civil society, and academia, in addition to government actors.
The role of private sector companies developing AI technologies for military use was emphasized by both Anoosha Shaigan and Mohamed Sheikh-Ali. They agreed on the need to engage tech companies from the design stage and ensure corporate accountability for military AI suppliers. Sheikh-Ali specifically mentioned the ICRC’s engagement with technology companies in Silicon Valley and China.
Future Developments and Recommendations
Looking towards the future, the speakers offered several recommendations:
1. Develop binding treaties aligned with international law to govern AI use in warfare (Jimena Sofia Viveros Alvarez)
2. Create specific standards for military AI that incorporate legal and ethical considerations (Anoosha Shaigan)
3. Engage technology companies from the early stages of AI development for military applications (Mohamed Sheikh-Ali)
4. Implement a “compliance by design” approach, incorporating international law considerations from the outset of AI system development (Yasmin Afina)
5. Establish an international military AI tribunal to address legal issues arising from AI use in warfare (Anoosha Shaigan)
Conclusion
The discussion underscored the complex challenges of balancing technological advancement with ethical and legal considerations in the use of AI in warfare. While there was a high level of consensus on core principles, such as the importance of international law and human control, the speakers differed in their specific approaches and areas of emphasis. This reflects the multifaceted nature of the issue and highlights the need for continued dialogue and collaboration among various stakeholders to develop comprehensive and effective governance frameworks for AI in warfare.
The urgency of addressing these challenges was evident throughout the discussion, as speakers called for increased awareness of the current use of AI in conflict situations and the pressing need for effective regulation and oversight. As AI technologies continue to advance, the international community faces the critical task of ensuring that their use in warfare remains within the bounds of law, ethics, and human control.