IGF 2024 WS #31 Cybersecurity in AI: balancing innovation and risks

    Organizer 1: Igor Kumagin, Kaspersky
    Organizer 2: Yuliya Shlychkova, Kaspersky
    Organizer 3: Jochen Michels, Kaspersky
    Organizer 4: Dmitry Fonarev, Kaspersky

    Speaker 1: Sergio Mayo Macias, Technical Community, Western European and Others Group (WEOG)
    Speaker 2: Melodena Stephens, Technical Community, Asia-Pacific Group
    Speaker 3: A Wylde, Technical Community, Western European and Others Group (WEOG)

    Moderator

    Yuliya Shlychkova, Private Sector, Intergovernmental Organization

    Online Moderator

    Jochen Michels, Private Sector, Western European and Others Group (WEOG)

    Rapporteur

    Dmitry Fonarev, Private Sector, Eastern European Group

    Format

    Theater
    Duration (minutes): 90
    Format description: The session will combine a panel discussion with a round table, lasting approximately 90 minutes. Great emphasis will be placed on discussion with participants, both onsite and online. In addition, short surveys will be included to engage participants further and obtain feedback on individual questions.

    Policy Question(s)

    A. What are the essential cybersecurity requirements that must be considered while developing and applying AI systems, and how can we ensure that AI is inherently secure by design?
    B. What are the roles and responsibilities of the various stakeholders engaged in AI system development and use?
    C. How can we engage in a permanent dialogue and maintain an exchange on this issue?

    What will participants gain from attending this session? The goal of the discussion is to identify core principles of cybersecurity-by-design for the development of AI. These principles can serve as a basis for further technical governance models.

    Description:

    The technological landscape has recently witnessed the emergence of AI-enabled systems at an unprecedented scale. However, nascent technologies go hand in hand with new cybersecurity risks and attack vectors. The concept of security in the development of AI systems has been thrust to the forefront of various regulatory initiatives, such as the EU AI Act and the Singapore Model AI Governance Framework for Generative AI, which aim to minimize the associated cyber-risks. Despite these regulatory strides, a gap remains between the general frameworks and their practical implementation at a more technical level. In the forthcoming multi-stakeholder discussion, we seek to explore which fundamental cybersecurity requirements should be considered in the implementation of AI systems, and how policymakers, industry, academia, and civil society can contribute to the development of new standards. Our initial thoughts are:

    (1) AI systems must undergo thorough security risk assessments. This involves evaluating the entire architecture of an AI system and its components to identify potential weaknesses and threats, ensuring that the system's design and implementation mitigate these risks.
    (2) Cybersecurity for AI systems should not be an afterthought but should be integrated from the initial design phase and maintained throughout the system's lifecycle (cyber-immunity).
    (3) Cybersecurity measures must address the AI system as a whole, reflecting a holistic approach that ensures all its parts are secure and resilient to multiple types of cyberthreats.
    (4) Cybersecurity measures must be continuously reviewed and improved so that they keep pace with new technological advancements and emerging cybersecurity threats.
    (5) An institutional process for sharing information about AI incidents should be established so that industry is informed about the latest attacks and prepared to mitigate them.

    Expected Outcomes

    Following the session, an impulse paper titled “Balancing innovation and risk: fundamental security requirements for AI systems” summarizing the results of the discussion will be published and made available to the IGF community. The paper can also be sent to other stakeholders to gather additional feedback.

    Hybrid Format: The moderators will actively involve participants in the discussion through short online surveys (1-2 questions) at the beginning and end of the session, as well as after the initial statements. The survey tool can be used by both online and onsite participants via their smartphones. This will generate additional personal involvement and increase interest in the hybrid session. During the roundtable discussion, onsite and online participants can also take part, as we encourage all attendees to contribute their ideas actively. Both onsite and online participants will have the same opportunities to get involved.

    Planned structure of the workshop:
    • Introduction by the moderator
    • Survey with 2 questions
    • Brief impulse statements by all speakers
    • Survey with 2 questions
    • Moderated discussion with the attendees onsite and online (roundtable)
    • Survey with 2 questions
    • Wrap-up