Session
Organizer 1: Technical Community, Asia-Pacific Group
Organizer 2: Civil Society, Asia-Pacific Group
Organizer 3: Technical Community, Western European and Others Group (WEOG)
Organizer 4: Civil Society, African Group
Speaker 1: Farzaneh Badii, Civil Society, Asia-Pacific Group
Speaker 2: Harish Pillay, Private Sector, Asia-Pacific Group
Speaker 3: Melissa Munoz Suro, Government, Latin American and Caribbean Group (GRULAC)
Speaker 4: Turra Daniele, Technical Community, Western European and Others Group (WEOG)
Format
Roundtable
Duration (minutes): 60
Format description: A 60-minute hybrid roundtable structured as follows: 5 minutes of opening remarks and speaker introductions; three segments (10 minutes each) focused on technical frameworks, open source definitions and controversies, and policy challenges; two interactive audience interventions (5 minutes each) after key discussion points; and a final 10-minute Q&A session. The session leverages digital tools such as Mentimeter to facilitate seamless interaction between onsite and online participants. Moderators will pose pre-prepared, targeted questions on the technical, ethical, and governance aspects of open source AI, keeping the added burden on organizers minimal. Digital polls and real-time Q&A will integrate diverse viewpoints from both onsite and remote participants. This format encourages focused dialogue and draws on existing open source community insights without requiring extensive additional resources.
Policy Question(s)
1. What should policymakers and AI practitioners consider when defining and employing “open source AI” given its complexity compared to traditional FOSS?
2. How can open source AI frameworks and communities promote rapid development while ensuring transparency, safety, and ethical governance?
3. How can stakeholders balance the benefits of open access (e.g., lower costs, rapid innovation) with the risks (e.g., misuse, bias, data security)?
What will participants gain from attending this session? Participants will gain insights into how open source AI is reshaping the tech landscape, learn from real-world case studies of community-driven projects, and engage in a balanced discussion on the challenges of defining and governing open source AI. They’ll leave with actionable ideas for fostering innovation without compromising security or ethical standards.
Description:
The DeepSeek moment wiped over a trillion dollars off the market value of 'Big Tech' companies, upending conservative notions about the potential of open source AI. But is it truly open? And how can an empowering technology like LLMs, one that carries immense destabilization risks, be governed by engaging the various stakeholders in the ecosystem? This session tackles these issues by first addressing the debates highlighted by the Open Source Initiative's draft definition of Open Source AI, which underscores that AI systems are inherently more complex than traditional free and open source software, making it easy to abuse the label 'open source AI' by open-washing proprietary technologies. Unlike conventional software, AI models involve not only source code but also model weights, proprietary training data, and development methodologies that must be accounted for. This session will critically examine whether recent breakthroughs such as DeepSeek R1 and Alibaba's Qwen 2.5 represent true progress or simply a reconfiguration of established themes. We will explore the implications of these innovations for transparency, ethical governance, and security, with the aim of developing governance frameworks that involve the multiple stakeholders connected to this space. Ultimately, the conversation seeks to clarify how emerging definitional issues could influence policy and practice in the rapidly evolving AI landscape. This session builds on last year's 'IGF 2024 WS #208 Democratising Access to AI with Open Source LLMs' by diving into the contentious issues that emerged during that discussion on the evolving definition and governance of 'open source AI'.
Expected Outcomes
1. A policy brief summarizing key insights and recommendations on supporting open source AI innovation responsibly.
2. Practical guidelines for integrating open source AI tools while ensuring security and transparency.
3. A shared understanding among stakeholders of the nuanced definition of open source AI and its implications for governance.
Hybrid Format: To ensure an engaging hybrid session, we will use Zoom for seamless interaction between online and onsite attendees. A dedicated online moderator will ensure remote participants are actively involved.
Mentimeter will be used for live polling and Q&A, while Google Docs will allow for asynchronous collaboration. A large screen will display online participants, making them visible in the discussion. Organizers will also ensure that all remote speakers have reliable internet connections and proper audio/video setups, bridging the gap between online and onsite engagement.