Session
Holistic AI
Ella Shoup, Holistic AI, Private Sector, Western European and Others (WEOG)
Siddhant Chatterjee, Holistic AI, Private Sector, Asia-Pacific Group
The speakers for this event will be confirmed closer to the date of the IGF. They will include members of Holistic AI’s policy and data assurance teams, along with representatives from Civil Society and Government.
Ella Shoup
Ella Shoup
Ella Shoup
5.b
16.7
16.b
Targets: SDG 5: As with AI systems in general, generative AI risks exacerbating long-standing biases against women and girls in institutions at all levels. Through discussion of how to mitigate such risks, this session will offer reflections on how technical solutions can support non-discriminatory policies.
SDG 16: To develop effective, accountable, and transparent institutions, it will be critical to integrate AI governance into an organization’s core practices. This workshop will help stakeholders across the IGF think through the steps they can take to begin or continue this process. The use of AI systems heightens the risk of bias, and consequently of discriminatory practices, within an organization or institution. Audits and assurance for all types of AI systems can help minimize this risk.
Classroom
At the start of the session, the organizers will send out a poll using Mentimeter to both online and onsite participants. The poll will survey the audience on the generative AI risks specific to their organization or sector. The speakers will present the results of the poll to the audience and discuss common themes and differences.
We will then ground our discussion of these risk types in the framework developed by Koshiyama et al. (2021) (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3778998). This seminal five-vertical risk framework originated in academia and has since been applied in practice, offering a valuable example of how socio-technical research can be implemented across sectors. The framework broadly places risks into the following verticals:
Robustness: Risks associated with AI systems being susceptible to external adversarial attacks.
Bias: Risks arising from AI systems generating biased outputs due to flawed training data, inappropriate contextual application, or limited inferential capabilities.
Privacy: Risks linked to AI systems inadvertently exposing sensitive or personal information.
Explainability: Risks stemming from AI systems producing opaque decisions that are incomprehensible to developers, deployers, and users.
Efficacy: Risks of AI systems failing to meet performance expectations relative to their designated use cases.
The results of the poll will be mapped onto these five verticals, showing how risks from different sectors often present similar challenges and how frequently individual risks are interrelated.
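To make this categorization step concrete, the sketch below shows one way free-text poll responses could be tagged against the five verticals. It is purely illustrative: the keyword lists and the simple substring matching are placeholder assumptions for this example, not part of the Koshiyama et al. framework or of the session's actual tooling.

```python
# Illustrative sketch: tagging free-text poll responses with the five risk
# verticals from Koshiyama et al. (2021). Keyword cues below are hypothetical
# placeholders, not drawn from the framework itself.
from enum import Enum


class RiskVertical(Enum):
    ROBUSTNESS = "robustness"
    BIAS = "bias"
    PRIVACY = "privacy"
    EXPLAINABILITY = "explainability"
    EFFICACY = "efficacy"


# Hypothetical keyword cues per vertical; a real session would refine these
# from the actual Mentimeter responses.
KEYWORDS = {
    RiskVertical.ROBUSTNESS: {"attack", "adversarial", "security", "jailbreak"},
    RiskVertical.BIAS: {"bias", "discrimination", "fairness", "stereotype"},
    RiskVertical.PRIVACY: {"privacy", "personal data", "leak", "confidential"},
    RiskVertical.EXPLAINABILITY: {"opaque", "black box", "explain", "transparency"},
    RiskVertical.EFFICACY: {"accuracy", "hallucination", "performance", "reliability"},
}


def categorize(response: str) -> list[RiskVertical]:
    """Return every vertical whose keyword cues appear in a poll response."""
    text = response.lower()
    return [v for v, cues in KEYWORDS.items() if any(c in text for c in cues)]


if __name__ == "__main__":
    sample = "Our chatbot leaks personal data and its answers are a black box."
    print([v.value for v in categorize(sample)])
    # -> ['privacy', 'explainability']
```

In practice, the moderators would cluster responses by hand or with richer language tooling; the point of the sketch is simply that a single response often maps to more than one vertical, which is exactly the interrelation the discussion will draw out.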
This session will examine the specific risks associated with generative AI and explore socio-technical strategies that individuals and organizations can adopt to address them.
While 2023 marked the arrival and mass adoption of tools like ChatGPT, Midjourney, and Stable Diffusion, 2024 is witnessing a rapid-fire release of increasingly powerful generative AI models, many of which are becoming integrated into our daily lives. The swift deployment and uptake of these tools underscores a critical imperative for organizations using AI: ensuring the integrity, safety, security, fairness, and reliability of these systems.
Concurrently, research investigating the potential risks associated with foundation models, Large Language Models (LLMs), and other forms of generative AI has steadily expanded over the past year. Nevertheless, the collective understanding of these risks remains in flux and may vary among stakeholders. Within this landscape of uncertainty, it is worth examining both the academic literature on the safety and ethical implications of such models and the collective insights and lived experience of stakeholders across geographies and fields. Doing so can help cultivate a shared understanding of the potential risks involved, supporting the inclusive ethos of the multistakeholder approach espoused by the IGF. Our session aims to further this approach through a blend of practical and theoretical perspectives, encompassing legal, social, scientific, and technical considerations.
Our discussion will prioritize lessons learned on how stakeholders of varying capacities can effectively identify, manage, and ultimately mitigate these risks. Furthermore, we will explore how such mitigation strategies align with the broader goal of fostering transparent and accountable decision-making.
The IGF this year presents a distinctive opportunity for organizations and individuals to reach consensus on the nuances of generative AI risks and mitigation methodologies. This work will ultimately bolster the overarching objective of establishing a unified standard for responsible AI governance.
Prior to the IGF, the organizers will use the session’s page on the IGF website and social media channels to share preparatory materials that help the audience contextualize the topic. Online and onsite moderators will facilitate the Mentimeter poll and field questions during the Q&A portion of the workshop.
The Mentimeter poll will be particularly valuable for fostering engagement with the topic, both at the start of and throughout the session, for online and onsite audiences alike.