Session
The Galway Strategy Group
Jim Prendergast, The Galway Strategy Group, Private Sector, WEOG
Nidhi Hebbar, Google, Private Sector, WEOG
Samantha Dickinson, Lingua Synaptica, Private Sector, APAC
Charles Bradley, Adapt, Private Sector, WEOG
Nidhi Hebbar, Google, Private Sector, WEOG
Charles Bradley, Adapt, Private Sector, WEOG
Charles Bradley
Jim Prendergast
Samantha Dickinson
4. Quality Education
9. Industry, Innovation and Infrastructure
16. Peace, Justice and Strong Institutions
17. Partnerships for the Goals
Targets: SDG 4: Quality Education: By improving our understanding of the sources and processes involved in generative AI, we can enhance educational opportunities related to AI and data science. This knowledge can be disseminated to train the next generation of AI practitioners, ensuring they have the necessary skills and an understanding of the relevant ethical considerations.
SDG 9: Industry, Innovation, and Infrastructure: Provenance in generative AI supports innovation by fostering transparency and accountability in AI systems. This can lead to the development of more reliable and trustworthy AI technologies, which in turn can drive economic growth and support sustainable industrialization.
SDG 16: Peace, Justice, and Strong Institutions: Understanding provenance in generative AI promotes accountability and fairness in AI systems. By tracing the origins and processes behind AI-generated content, we can mitigate potential biases, discrimination, and misinformation, thus contributing to the promotion of just and inclusive societies.
SDG 17: Partnerships for the Goals: Collaboration between academia, industry, government, and civil society is essential for advancing our understanding of provenance in generative AI. By fostering partnerships and knowledge-sharing initiatives, we can accelerate progress towards building more responsible and sustainable AI systems.
Roundtable
Welcome and introductions - 5 minutes
Introduction of the paper “Pairing inferred and assertive provenance to support a healthy information ecosystem” - 20 minutes
Moderated Q&A and discussion - 30 minutes
Wrap up - 5 minutes
As tools powered by generative AI become more accessible and widespread, debates around the trustworthiness of content – especially synthetic images, video, and voice – have become more acute. While concerns around misinformation and disinformation are not new, generative AI capabilities bring a new dimension to the conversation, one that will require careful consideration and discussion.
Nidhi Hebbar from Google will launch a paper, currently in development, titled “Pairing inferred and assertive provenance to support a healthy information ecosystem” (link to be provided closer to the event).
This paper focuses, first, on situating the issues of synthetic content within the broader context of information literacy and information quality, for which a deep and rich evidence base already exists. The question “Is this generated?” is not equivalent to “Is this trustworthy?” The two questions can overlap, but they often do not, and additional contextual information is frequently needed to make an accurate assessment of a piece of content’s trustworthiness.
Second, this paper will outline the benefits and drawbacks of both assertive and inferred provenance, both of which will be required as part of a holistic solution that empowers users to make sound decisions about the content they encounter in the information ecosystem. Assertive provenance, which features heavily in current policy debates, focuses on providing labels, watermarks, and/or metadata that can indicate whether a piece of content has been generated. Inferred provenance, meanwhile, focuses on empowering users with contextual information that can help them determine where a piece of content came from, what claims are being made about it, and who is responsible for those claims.
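To make the pairing concrete, the sketch below is a minimal, illustrative Python example; it is not drawn from the paper or from any existing provenance standard, and every name and field in it (ContentItem, asserted_manifest, inferred_context, provenance_summary) is hypothetical. It simply shows how an asserted signal, when present, can be surfaced alongside inferred contextual signals rather than replacing them.

    from dataclasses import dataclass, field
    from typing import Optional

    # Hypothetical data model: a piece of content plus whatever provenance
    # signals happen to accompany it.
    @dataclass
    class ContentItem:
        url: str
        asserted_manifest: Optional[dict] = None               # e.g. an embedded metadata/watermark payload
        inferred_context: dict = field(default_factory=dict)   # e.g. first-seen source, related coverage

    def provenance_summary(item: ContentItem) -> dict:
        """Pair assertive and inferred provenance in one user-facing summary.

        Assertive signals answer "what is claimed about how this was made";
        inferred signals answer "where did this come from and who is saying
        what about it". Neither alone answers "is this trustworthy?".
        """
        summary = {"url": item.url, "asserted": None, "inferred": item.inferred_context}
        if item.asserted_manifest is not None:
            summary["asserted"] = {
                "generator": item.asserted_manifest.get("generator", "unknown"),
                "is_synthetic": item.asserted_manifest.get("is_synthetic"),
            }
        return summary

    # Example: one item carries an asserted manifest, the other relies on inferred context only.
    labelled = ContentItem(
        url="https://example.org/a.jpg",
        asserted_manifest={"generator": "SomeImageModel", "is_synthetic": True},
    )
    unlabelled = ContentItem(
        url="https://example.org/b.jpg",
        inferred_context={"first_seen": "social media post, 2024-03-01"},
    )
    print(provenance_summary(labelled))
    print(provenance_summary(unlabelled))

In practice the inferred side would be far richer than a single dictionary; the point is only that both kinds of signal feed a single, contextualised presentation to the user.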
The goal of our session is to show that by addressing the challenge of provenance in the age of generative AI, we can help mitigate the risks associated with fake content, preserve trust in digital media, and uphold the integrity of information ecosystems.
Using Zoom will allow both onsite and online participants to see and hear each other. We will ask all participants, both in person and remote, to be logged in so we can manage the question queue in a neutral manner; when in doubt, we will defer to remote participants, as they can be more difficult to spot. Our onsite and online moderators will be in constant communication to ensure that we can facilitate questions and comments from both onsite and online participants.
We will also consider the unique challenges and opportunities that remote participants face, such as time zone differences, technical limitations, and differences in communication styles.
We will urge our speakers to use clear and concise language, avoid technical jargon, and provide context for all information discussed during the session to ensure that both onsite and online participants can follow along and understand the content.
Finally, we will explore the use of a polling tool to ask questions and gather feedback from both onsite and online participants in real time.