IGF 2023 Day 0 Event #21 Under the Hood: Approaches to Algorithmic Transparency

Sunday, 8th October, 2023 (06:40 UTC) - Sunday, 8th October, 2023 (07:40 UTC)
WS 6 – Room E

Human Rights & Freedoms
Non-discrimination in the Digital Space

Zoe Darme, Google, Private Sector, WEOG

Jim Prendergast, The Galway Strategy Group, Private Sector, WEOG

Farzaneh Badii, Digital Medusa, Civil Society, APAC


Zoe Darme, Google, Private Sector, WEOG

Charles Bradley, Adapt, Civil Society, WEOG

Farzaneh Badii, Digital Medusa, Civil Society, APAC

Onsite Moderator

Farzaneh Badii

Online Moderator

Jim Prendergast


Rapporteur

Samantha Dickinson


16. Peace, Justice and Strong Institutions

Targets:

SDG 8.2 -- "Achieve higher levels of economic productivity through diversification, technological upgrading and innovation, including through a focus on high value-added and labor-intensive sectors." With proper levels of algorithmic transparency and trustworthy use of AI, both business and government can help drive economic growth and productivity.

SDG 10.2 (Reduced Inequalities) -- "By 2030, empower and promote the social, economic and political inclusion of all, irrespective of age, sex, disability, race, ethnicity, origin, religion or economic or other status." Greater understanding of AI systems, through transparency and explainability, will help counter biases and the "black box" nature of some of these systems.

SDG 16 (Peace, Justice and Strong Institutions) -- AI can be used to improve public safety by analyzing crime data and predicting where crimes are likely to occur. AI can also be used to monitor and analyze social media for hate speech and other harmful content.


Tech demo/interactive experience, followed by audience engagement to gauge reactions and a discussion of the key policy questions.


Civil society and policymakers are increasingly calling for greater algorithmic transparency and AI explainability for a range of reasons: concerns about filter bubbles and echo chambers, seemingly "black box" demotions, and bias. The OECD AI Principles stress the importance of "transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes." Greater understanding of the details and nuances of this topic will help foster more developed policy conversations and approaches.

This interactive session will explore the complexities of algorithmic transparency and the following key policy questions:

1. What do algorithmic transparency and AI explainability mean? Why are they important?
2. What are tech companies doing to meet calls for algorithmic transparency and AI explainability?
3. What are key considerations for policymakers when drafting legislation about algorithmic transparency and AI explainability?
4. How do users benefit from greater algorithmic transparency and AI explainability?

This session is designed to elicit multistakeholder participant feedback on practical proposals for algorithmic transparency. By reviewing already-established efforts to provide more transparency around algorithms – from transparency reporting to in-product features – we hope to discuss with participants what efforts are already underway, while using their feedback to determine whether more is needed, and why. The EU Digital Services Act requires audits, calling for auditors who have "the necessary expertise in the area of risk management and technical competence to audit algorithms." The UK Online Safety Bill requires the release of "information about systems and processes which a provider uses to deal with illegal content." Other bills and laws propose similar requirements, but these are often high-level and non-specific.
Zoe Darme (Google) will give an "under the hood" peek at the complex systems that power tools like Google Search. Participants will workshop a new experience called "Life of a Query," a streamlined version of the training given to all new Search engineers. We will then engage in a frank and open discussion about technical complexities and mitigations for abuse. Whether we are end users, policymakers, academic experts or software engineers, we share the same goal – increased trust and accountability in the use of algorithms in decision-making processes. This session will help us develop a better understanding of the needs, limitations, challenges and opportunities.

Using Zoom will allow both onsite and online participants to see and hear each other. We will ask all participants, both in person and remote, to be logged in so we can manage the question queue in a neutral manner; when in doubt, we will defer to remote participants, as they are sometimes more difficult to spot. Our onsite and online moderators will be in constant communication to ensure that we can facilitate questions and comments from both onsite and online participants. We will also consider the unique challenges and opportunities that remote participants face, such as time zone differences, technical limitations, and differences in communication styles. We will urge our speakers to use clear and concise language, avoid technical jargon, and provide context for all information discussed during the session, so that both onsite and online participants can follow along and understand the content. Finally, we will explore the use of a polling tool, such as Mentimeter or Poll Everywhere, to ask questions and get feedback from both onsite and online participants in real time.

Key Takeaways

"Algorithmic transparency" is difficult to define: it means different things to different people. In most cases, a call for algorithmic transparency comes down to identifying what information you actually want.

Call to Action

After walking through a demonstration of "Life of a Query," participants were asked to provide feedback to help fine-tune the presentation. Many participants thought repeating this exercise in the future would be beneficial.