Holistic AI
    Ella Shoup, Holistic AI, Private Sector, Western European and Others (WEOG)

    Speakers

    Zekun Wu, AI Researcher, Holistic AI, Private Sector, Asia-Pacific Group
    Ella Shoup, Holistic AI, Private Sector, Western European and Others (WEOG)

    Onsite Moderator

    Ella Shoup

    Rapporteur

    Ella Shoup

    SDGs

    16.5
    16.b
    17.6

    Targets: SDG 16: To develop effective, accountable, and transparent institutions, it will be critical to integrate algorithm auditing and assurance into an organization’s core practices. This lightning talk will help stakeholders across the IGF think about the steps they can take to begin or continue this process. The risk of bias, and consequently of discriminatory practices, within an organization or institution is heightened by the use of AI systems. Algorithm audits and assurance can help mitigate this risk.

    SDG 17: By reaching a shared understanding of ‘trustworthy AI’ and the accountability mechanisms that operationalize such a principle, international cooperation on AI will be better prepared to respond to risks that may arise from the deployment of sophisticated AI systems.

    Format

    The lightning talk will include 25 minutes of presentation by the speakers, which will include slides, followed by 5 minutes of Q&A from the audience.

    Duration (minutes)
    30
    Description

    As stakeholders across sectors – including policymakers, civil society, and industry – undergo rapid digital transformation, Large Language Models (LLMs) will inevitably become an integral part of their work. The versatile adaptability and growing sophistication of LLMs will lead to their deployment in a variety of domains, including healthcare, education, environmental conservation, and more. With such a diverse set of actors using these systems, ensuring accountability across institutions will be challenging. Left unchecked, LLMs can inadvertently amplify biases, generate false information, or be manipulated for malicious purposes. Both public and private sector institutions will face the challenge of developing and maintaining accountable and transparent processes in response to their use of AI. The adoption of trustworthy AI mechanisms, such as LLM auditing and assurance, will therefore be more important than ever. This lightning talk will help stakeholders across the IGF think about the steps they can take to adopt or scale this process.

    To establish a foundation for the discussion, we will first question whether relying solely on regulation is sufficient to mitigate the risks associated with these systems. This inquiry will set the stage for our exploration, emphasizing the importance of agile socio-technical measures that complement governance efforts. We will then turn to implementing ethics and safety in LLMs, focusing on two primary approaches: model evaluations and algorithm audits, and examining the types, characteristics, and limitations of each. In the final segment, we will discuss the practical implications of integrating these interventions, and in particular how their effective application and deployment can guide the responsible release of these models into the wider world.

    Prior to the IGF, the organizers will use the session’s page on the IGF website and social media channels to share informative and preparatory materials that help the audience contextualize the topic. Holistic AI will take questions during the Q&A from both the in-person and online audience. The online moderator will be active throughout the livestream to manage incoming questions and point online participants to further resources.