IGF 2025 - Day 2 - Plenary Hall - Main Session 2 - The Governance of AI

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> KATHLEEN ZIEMANN: Welcome to the main session on the governance of AI. My name is Kathleen Ziemann. I lead an AI project at the German Development Agency, GIZ. The project is called FAIR Forward. I will be moderating the session today together with Guilherme Canela Godoi.

>> GUILHERME CANELA GODOI: Good morning, everyone.  My name is Guilherme Canela.  I am the Director of UNESCO in charge of inclusion policies.    It's a real pleasure to be here with Kathleen and this fantastic panel.

>> KATHLEEN ZIEMANN: So, I am excited to have representatives from different regions and sectors here on the panel that will discuss AI governance with us, and dear panelists, thank you so much for coming. 

Let me briefly introduce you.  So to our left, we have Melinda Claybaugh, Director of Privacy Policy at Meta.  And next, Jovan  Kurbalija, Executive Director of DiploFoundation based in Geneva.  And next to you Jhalak Kakkar, welcome. 

Happy to have Jhalak Kakkar, who is the Executive Director of the Centre for Communication Governance in New Delhi, India. We are happy to welcome you. Mlindi Mashologu, you are filling in for the Deputy Minister from the Ministry of Communications and Digital Technology. You are the Deputy Director General for Digital Society and Economy at the ministry.

Thank you all for coming, and we are very sad that Mondli couldn't come. He was affected by the recent events involving Israel and Iran, and his flight could not get through.

Well, everyone, thank you for coming. Before you set the scene from your perspectives, I would love to give a brief introduction on what we currently understand by AI governance, and also an idea of how to discuss it further. As this IGF's theme is Building Digital Governance Together, we want to discuss how we can shape AI governance together, as we still observe different levels and possibilities of engagement across sectors and regions.

I would say that currently the AI governance landscape is blooming. 

We have AI governance tools like principles, processes and bodies emerging globally, and I think we can somewhat lose track in that blooming landscape. Just to name a few: in 2019, the OECD issued its AI Principles, followed by the UNESCO Recommendation on the Ethics of AI in 2021.

In 2023, companies such as OpenAI, Alphabet and Meta made efforts to implement watermarking of AI-generated content, and last year the EU AI Act came into force as the first legal framework for governing AI.

Additionally, existing fora and groups are addressing AI and its governance. For example, last year the G7 launched the Hiroshima AI Process, and the G20 has declared AI a key priority this year. And I think we will be hearing more about that from you later. And we have various declarations, endorsements and communications issued by many, like the Africa Declaration on AI signed in Kigali, for example, or the Declaration on Responsible AI that was signed in Hamburg recently.

And as a core document for 193 Member States, the UN Global Digital Compact calls for concrete actions for global AI governance by establishing, for example, a global AI policy dialogue and scientific panel on AI.  So when we look at all of these efforts, it seems like AI governance is not only a blooming, but also a fragmented landscape with different levels and possibilities of engagement.

So how do you, dear panelists, perceive this? 

What are your perspectives, but also your ideas, on current AI governance? What should be changed? What is missing? We would love to start with your perspective, Melinda, from the Private Sector. Feel free to use the next three to four minutes for an introductory statement. There you go.

>> MELINDA CLAYBAUGH: Thank you so much. And thanks for having me. It's a pleasure to be here. Just a little perspective to set the context from where Meta sits in this conversation. At Meta, I think everyone is familiar with our products and services, social media and messaging apps, but in the AI space we sit in two places. One, we are a developer of large language models, foundational generative AI models called Llama, and many may be familiar with them or with applications built on top of them. So we are a developer in that sense, and we focus largely on open source as the right approach to building large generative AI models.

At the same time we build on top of models and we provide applications and systems through our products.  So we are kind of in both camps just to situate folks.

In the last couple of years, the number of frameworks, commitments, principles and policy frameworks has really been incredible. It's head-spinning at times, having lived through it.

And so I think it is really important to remember there is no lack of governance in this space. But I do think that we are at an interesting inflection point. I think we are all wondering: what now? We set down these principles, we have these frameworks. Companies like Meta, for example, have put out a frontier AI framework that sets out how we assess for catastrophic risks when we are developing our models and what steps we take to mitigate them.

And yet there are still a lot of questions and concerns. And I think we are at this inflection point for a few reasons. One, we don't necessarily agree on what the risks are, whether there are risks, and how we quantify them.

We also see that different regions and countries want to focus more on innovation and opportunity, while others want to focus more on safety and the risks.

There is also a lack of technical agreement and scientific agreement about risks and how they should be measured.  I think there is also an interesting inflection point in regulation.  The EU, for example, was very fast to move to regulate AI with the landmark AI Act, and I think it's running into some problems.

There is now kind of a consensus amongst key policymakers and voices in the EU that maybe it went too far, and that it isn't clear whether it is really tied to the state of the science or how to actually implement it. Now they are looking to pause and reconsider certain aspects of digital regulation in Europe. And a lot of countries are looking for what to do and for solutions for how to actually adopt and implement AI.

And so I don't think I have an easy answer, but I think we are at a moment to kind of take stock and say, okay, we have talked about risk.  Can we talk about opportunity?  Can we talk about enabling innovation?  Can we broaden this conversation about what we are talking about and who we are talking with, and make sure the right voices, the right representation from little tech to big tech, from all corners of the world are represented to have these conversations about governance.

>> KATHLEEN ZIEMANN: Thank you very much. Mlindi, I would love to continue with you and hear the perspective of the South African Government: what is important to you at the moment?

>> MLINDI MASHOLOGU: From the South African Government's perspective, I think it is general knowledge that AI is a general purpose technology, the same as electricity or the Internet, and that it affects various sectors of our economy. But we also see that with such transformative power comes responsibility: we want to ensure that AI systems are not only effective, but also ethical, inclusive and accountable. I think that is one of the first things we want to do.

Also, to govern AI effectively, we are trying to use a common vocabulary and a foundation of principles, as reflected in initiatives like the OECD principles and the UN High-Level Advisory Body on AI. But we also need to focus on making sure that we have the required sector-specific policy interventions that are technically informed and locally relevant. Regulating AI in financial services will be different from regulating AI in agriculture.

We are trying to come up with different methods of regulating AI. Another area we are focusing on as a Government, from a regional point of view, is making sure that our approach is grounded in data justice, which puts equity and environmental sustainability at the centre of AI. We recognize that technological advancement must serve the public good and not reinforce historical inequities. That is one of the concrete proposals we are looking into. The other area we are focusing on is sufficient explainability as a requirement for AI decisions.

It is one of the areas we are advocating for, especially for decisions that impact human lives and livelihoods.

Those decisions need to be transparent and interpretable. If you are looking at areas such as credit scoring or healthcare diagnostics, people need the right to understand how these decisions have actually been reached by the AI systems that make them. Beyond that, another area we are pointing to is human-in-the-loop systems, where we see the need for human oversight of AI systems: from design through deployment, humans must guide and, when needed, override automated systems.

This includes reinforcement learning with human feedback and clear thresholds for intervention in higher-risk domains. The last point I want to focus on is that participation in global AI governance has been uneven.

The potential and the agency are there. From our side as a country, in terms of the policy we are currently developing, we are looking to leverage frameworks that have already been developed, including the African Union Data Policy Framework. So we are building models of governance rooted in equity as well as innovation. But we see that AI is not just a technology tool. We don't want AI to replace humans; we want AI to work with humans and assist us with some of the most pressing needs of our society.

>> KATHLEEN ZIEMANN: Thank you very much. The local relevance of AI governance in particular will be discussed in our next round, so that is a very important point you made. Thank you very much. Jhalak Kakkar, we would love to continue with you. You are working at an academic institution, but you are rooted in cooperation with civil society, so if you could bring the two perspectives together, that would be appreciated.

>> JHALAK KAKKAR: Thank you, Kathleen.

I think when we think about AI governance, one is what is the process and input into the creation of AI governance either internationally or domestically, and then actually what is the substance of what we are structuring AI governance as.

And if I can first just take a couple of minutes to talk about the process, I think if we learn from the past, it's very important to have multistakeholder input as any sort of governance mechanism is being created because different stakeholders sitting at different parts of the ecosystem are able to bring forth different perspectives, and we end up in a more balanced environment.

I think one of the things that we have increasingly seen is a shift towards multilateralism, and the IGF is a perfect place to talk about the need to focus on multistakeholderism and enabling meaningful participation, not participation that is done as a matter of form, but participation that actually impacts outcomes and outputs.

I think the second piece that I wanted to talk about, when I talk about process, is the increasing need to meaningfully engage with a broader cross-section of civil society, academia and researchers, so not only those bringing perspectives from the Global North, but also those bringing valuable and informed perspectives from the Global South.

The way it works in the United States versus Japan or India is pretty much the same, but AI as a technology will be shaped, in the way it functions and the way it impacts, very differently in different contexts.

And I think it is very important to enable perspectives from a cross-section of countries and a cross-section of civil society across the global majority. And we can maybe talk later in this conversation about some of the challenges that have been preventing that currently.

I think if we talk about the substance of AI governance, one piece is how we really, truly democratize access to AI. A lot of technology development has historically been concentrated in certain regions. At a moment when we are talking about the WSIS+20 review, I want to go back to something articulated in the Tunis Agenda, which spoke about facilitating technology transfer to bridge developmental divides.

While it has happened to some degree, with ICANN and ISOC supporting digital literacy and training, there have been less substantial moves to operationalize equitable access to technology. So I think in this context it's very important to think about how, from the get-go, we enhance the capacity of countries to create local AI ecosystems, so that we don't have a concentration of infrastructure and technology in certain regions. We should talk about mechanisms such as open data set platforms, some kind of AI commons, and think about how we democratize access to this technology so that we have AI for social good, contextually developed for different regions and different contexts.

And I think the last point I want to make is that regulation and governance are not bad words. Very often I hear conversations saying we have talked about risk, let's focus on innovation now. I think that's creating a false dichotomy. I think they have to go hand in hand, and in the past the mistake we have made is not developing governance mechanisms from the get-go.

And it doesn't have to be heavy governance and regulation from the get‑go.  I think at this stage we don't understand what the risks are, so we need to be documenting risks.  We need to be carrying out AI impact assessments.

This has to be done from a socio-technical perspective so that we really understand the impacts on society and on individuals. Otherwise we keep going around in circles saying we don't know what the risks are, what the harms are, how it's going to impact us. So let's start setting up mechanisms, whether it's sandboxes, AI impact assessments or audits. I know we come back to the conversation that there is a regulatory burden to this, that it's going to slow down innovation, but are there ways we can start to think about operationalizing these in light-touch ways, so that we can start to understand what harms and impacts are coming up, and so that we don't create dependencies for ourselves later on where we end up applying band-aid solutions?

Rather, we should be able to shape the evolution of these technologies so that they are beneficial to our society and to individuals, rather than landing in a space where they have developed in a direction we didn't quite anticipate, with unintended consequences we didn't realize would come from shaping them in a particular way. I will stop here and come in with more points later.

>> KATHLEEN ZIEMANN: Thank you very much. We will also come back to the role of the IGF, and to how we could use tools for collaboration and explore AI governance on a more concrete level. Jovan Kurbalija, you are from an academic background, I would say, but you have a lot of practice in AI. You call yourself a Takumi master in AI. We would love to hear more of your perspective, on the IGF's role in AI, but also on how AI is governed in Europe.

>> JOVAN KURBALIJA: Thank you, Kathleen, it's a great pleasure to be here today. When I was preparing cognitively for the session, I asked myself how we can make a difference. One point which is fascinating is that in three years the AI landscape has changed profoundly.

Just three years ago, when ChatGPT was released, it was magical technology. It could write your poetry, it could write your thesis, whatever you wanted, and you remember the reactions at that time: let's regulate it, let's ban it, let's control it. There were knee-jerk reactions, let's establish something analogous to the nuclear agency in Vienna for AI, and there were so many ideas.

Fast forward to today, we have realism, and for those colleagues from Latin America, it could be that AI governance is magical realism. Like Joseph, Markus and others, you have the magic of AI, as with any other technology, and many of us in this room are attracted to the Internet, AI and digital because of this magical element.

But there is a realism.  And I will focus now on this realism.

The first point is that AI became a commodity. I think the statistics are similar for countries worldwide. Therefore, AI is not something reserved for a few people in a lab. It's becoming an affordable commodity.

It has enormous impact. One impact is that you can develop an AI agent in five minutes. Our record is four minutes and 34 seconds. That was basically unthinkable only a few years ago; it was a matter of years of research.

That's the first point.

The whole construct about risks is basically shifting towards this affordable commodity. The second point is that we are now on the edge of having AI on our mobiles. And then the question we can ask is: today we will produce some knowledge here in our interaction. Should that knowledge belong to us, to the IGF, to our organisations, or to somebody else?

This is the second point, about bottom-up AI. We will be able to codify and preserve our knowledge, individual, group or family knowledge, and that will profoundly shift AI governance discussions.

And the third point in this context which I would like to advance in this introductory remark is that we have to change our governance language.

If you read the documents, both Tunis and Geneva, the key term was knowledge, not data. Data was mentioned here and there. Now, somehow, 20 years later, and it will be reflected in WSIS+20, knowledge has been completely cleaned out.

You don't have it in the GDC. You don't have it in the WSIS documents. You have only data. And AI is about knowledge; it's not just about data. That's an interesting framing issue. In the discussion, I hope that we can come to some concrete issues about, for example, ways of sharing knowledge and how we can protect knowledge, especially from the perspective of developing countries, because we are on the edge of the risk that knowledge can basically be centralized and monopolized. We already had that experience in the early days of the Internet, where the promise was that anyone could develop a digital solution, an Internet solution, and at the end of the day just a few can do it.

And that should help us in developing AI governance solutions, and we can discuss concrete ideas and proposals.

>> KATHLEEN ZIEMANN: Thank you very much, Jovan, also for the references to the whole history of the Internet.  I think that's great to have here as expertise on the panel at IGF.

Thank you all for setting the scene. I think we already got an idea about the different perspectives we have here, and also the possibilities for synergies, but maybe also for conflict. And it is also a bit of our role as moderators to bring out the different positions of all of you on the panel.

We would love to start now with an open round of discussion. We have prepared questions for you and we will start with those, but we also hope that a discussion evolves between you, that you can refer to each other and respond to some of the points that have already been put in the room here.

But first of all, we will start with you, Mlindi. You already spoke about the local relevance of AI and how to insert that into global processes, and South Africa is currently holding the G20 presidency. How will you make sure, within your functions, that the local relevance of AI, and the AI frameworks that South Africa has established, will be included in the global dialogue?

>> MLINDI MASHOLOGU: I think it's important to note that AI is a priority in terms of our G20 presidency. The reason we put it there is that we picked up that how we govern it will determine how equitable, inclusive and just our societies will be tomorrow. So in our approach, what we have tried to do is ground the governance in two complementary dimensions, one being macro foresight and the other micro precision. From the macro foresight point of view, we look at AI with a long-term view, and we recognize its impact on society and in shaping our economy over a much longer period.

But also, from our G20 agenda, we are championing the development of a toolkit which will try to reduce the inequalities connected to the use of AI. This toolkit seeks to identify the systemic ways in which AI can both amplify and address inequality, especially from the Global South. But we also see that this foresight requires geopolitical realism, because AI cannot be dominated by a handful of countries or Private Sector actors; it has to be multilateral, multistakeholder as well as multi-sectoral.

That is why we are working on expanding the scope of participation, bringing more voices from the continent, from the Global South, and from underrepresented communities into the global AI governance dialogue. If I can also mention the micro precision dimension, there we are looking at the ability to address specific granularities. We see that there is no one-size-fits-all when it comes to AI. So from there we advocate for innovation instruments, which include sandboxes and human development mechanisms, but also adaptive tools that can be calibrated to sector-specific risks and benefits.

One of the areas we are focusing on as well is to ensure that we do capacity building, develop local talent and build ethical oversight mechanisms, because we believe that AI governance must be owned by all sectors of our community, from rural areas to the cities. But also, from our presidency, we aim to bridge governance with the regional frameworks, so we align with the African Union's AI strategy, with science, technology and innovation frameworks, as well as with regional digital policy harmonization through SADC. We see that this innovation at the regional level is not peripheral, but foundational in terms of the global governance agenda.

Finally, in terms of our G20 presidency, we would like to call on our partners in the international institutions to support distributed AI governance architectures, so that we can all be inclusive and equitable, and make sure the benefits of AI are meaningful for our society, while we also address the associated risks related to AI. I thank you.

>> GUILHERME CANELA GODOI: So Melinda, moving to you now. Actually, Jhalak stole my thunder when I was preparing the follow-up question for you, because she touched on a point that I'm sure several people in the audience and online have thought about when you were speaking: what she called the false dichotomy between innovation and protecting human rights. In the end, the objective of governance, if it is done in alignment with international human rights law, is to protect human rights for all, not only for the companies.

So how do you respond to this? You framed it, of course very briefly, as if there were an antagonism between those two things. At the same time we know that all companies, including yours, are investing in human rights departments and reports, and, when there are specific issues like elections, in how to deal with these technologies and their risks. And yet there is a lot of skepticism regarding the way the Private Sector, not only your company, is dealing with these situations. So could you go a bit deeper on what Jhalak was describing as, in her view, a false dichotomy between those two things?

>> MELINDA CLAYBAUGH: I guess I would agree, to be provocative back. I think what I'm trying to say is that we need to look at everything together. And to be clear, by AI I'm talking about advanced generative AI. I think we tend to talk about AI kind of loosely, but the conversations to date at the international institution level, and at the framework and commitment level, have really been about the most advanced generative AI.

Those conversations have largely been focused around risk and safety. And that's an important piece, of course, and we have implemented an AI safety framework to address concerns about catastrophic risks.

However, on the conversation around harm and risk, two things. One, I think we need to be very specific about what harms we are trying to avoid, and, as you point out, a lot of the harms we are trying to avoid are harms that already exist and that we have already been trying to deal with.

So people talk about misinformation, people talk about kids, people talk about all of the things that are existing problems and that have existing policy frameworks and solutions, to varying degrees, that differ in different places.

What I am trying to convey is that we also need to be talking about enabling the technology. That is not to say we should ignore risk or not have that conversation, but we are missing a key element if we are not talking about everything together.

Because otherwise it becomes overweighted in one direction, and I don't think there is global consensus around the idea that advanced generative AI is inherently dangerous and risky. I think that's a live question that a lot of people have opinions about.

But there is a lot of interest and opinions about the benefits and advances of AI, and so I think that all needs to be brought together into a conversation.

I will also say that there are existing laws and frameworks already in place that predate ChatGPT. We have laws around the harms that people are talking about: copyright, data use, misinformation, safety and all of that. We have legal frameworks for them, and so I would like to see attention on whether those legal frameworks are fit for purpose or not with the new technology, rather than seeking to regulate the technology itself.

>> KATHLEEN ZIEMANN: Thank you.  That's a very interesting aspect that Jhalak was touching on a bit, especially about the idea of whether we can use the existing laws and frameworks in the context of this new technology. 

Jhalak, how do you perceive this?  Do we have the rules already, and if not, what is missing?

>> JHALAK KAKKAR: I think there has been a lot of conversation around whether there is existing regulation that can apply to AI, and whether there is need for more regulation. There are several existing pieces of legislation that would be relevant in the context of AI, just to name a few: data protection, competition and antitrust law, platform governance laws in different countries, consumer protection laws, criminal laws. So, yes, I also agree with Melinda's point that we need to think about whether some of these laws are fit for purpose. Do they need to be reinterpreted, reimagined, amended to account for the different context that AI brings in? If I can give an example: consider the way we have seen traditional antitrust and competition law need to evolve in the context of digital markets. When Internet platforms came in, you could have said we have existing competition law, we have existing antitrust law, and that is going to apply. And we have seen over the last couple of decades that it is not fit for purpose to deal with the new realities of network effects, data advantage, zero-price services and multi-sided markets that have come with the advent of Internet platforms.

Similarly, we already see a heated debate happening around intellectual property law: whether copyright law is well positioned to deal with the unique situation that has arisen, where companies are training their LLMs on a lot of the knowledge and data available on the Internet, relying on the fair use exception. What was the fair use exception for? It was so that big publishers could not amass all of the knowledge themselves, and so that people like you and me have access to use that knowledge, reference it and build on it.

It's an interesting situation where you have large companies now leveraging fair use. I think we already have courts around the world dealing with this issue, and I'm sure legislators are going to deal with it. And it's a question that I think we as a society have to think about: yes, there are new things that these companies are doing, and maybe there is fundamentally a transformation happening when they build on this, but what are we losing out on? What are the advantages? And we need to weigh all of that.

I think coming back to the false dichotomy point, I want to go back to that.

I think, yes, we know a lot of the harms that have already arisen in the digital Internet platform context. We are well aware of those, and civil society, academia and researchers are looking out for them as we watch AI, and, if we are talking more specifically, LLMs, develop.

But those are existing harms we are looking for. There are a lot of harms that we don't yet know may exist. Just to give an example, I don't think 15 years back we thought about the kind of harms social media platforms would have on children. It just wasn't something that was envisaged. Maybe somebody could have envisaged CSAM content, but the mental health impacts, the cyber bullying, the extent and nature of it, a lot of that was unintended and unenvisaged, and it will remain so unless we are scrutinizing systems. And it's not only a question of catastrophic risk.

We have to think about individual-level impacts and societal-level impacts, and unless we are engaging with these systems and understanding them from the get-go, those impacts, implications and negative consequences will only surface five to ten years from now. While it's wonderful to see companies heavily investing in human rights teams and trust and safety teams, as a space we didn't have trust and safety ten years back.

So it's a new space that has grown. You have so many professionals coming into the space with specialized skill sets, and it's great to see that. But we have also seen that companies have never been particularly adept at working only under the realm of self-regulation. And this is across industries; I'm not only pointing to tech. We have seen that time and again over the last 150 years, when we look at the industrial regulation that has come through.

So I think we have to move beyond the sense that companies will self-regulate. Very often they don't disclose harms that are apparent to them, and we need external regulators, we need communities to be engaging in a bottom-up approach, civil society to be engaging, multilateral institutions to be coming in. We need the development of guidance and guidelines to operationalize the AI principles we have all been talking about and working on over the last five, seven, eight years.

So I think we have to move forward into the next phase of AI governance.

>> GUILHERME CANELA GODOI: Thank you. Very interesting. So now what's going to happen is, I will ask a follow-up question, but then we are going to open it up to you, so if you want to start queuing at the available mics, you are welcome to do it. Jovan, let's get back to the magical realism and the issue of bringing knowledge back into the discussion. It's a very interesting point you raised. You probably remember that at the time of the Tunis round of the World Summit, UNESCO published a very groundbreaking report called Towards Knowledge Societies.

It's very interesting that, until today, every week that report is one of the most downloaded reports in the UNESCO online library, which shows that independently of what we are discussing here in these very limited circles, people overall are still very much interested in the knowledge component of this conversation.

So, with this preamble, I want to ask you to go a bit deeper. How do we bring knowledge back into this conversation, connecting, of course, with the new topics? Data is, of course, a relevant issue; we can't ignore the discussion of data governance. But the South African presidency has three main topics, correct me if I'm wrong: solidarity, equality and sustainability. And if you read that UNESCO report of 20 years ago, connecting with the challenges of the then information society, you will see those three words appear in different ways, so people like Manuel Castells and García were saying those things.

So what is your view of how we get back to this important part of the conversation when we are looking to the AI governance frameworks?

>> JOVAN KURBALIJA: Sure. It's good that you brought this up; by the way, it's an excellent report. Two reports are excellent: the UNESCO report and the World Bank report on digital dividends. Those are landmarks. I studied it, and I didn't want to bring this up, but since you brought it up, and you don't mind controversy: even UNESCO, which set the knowledge stage with that excellent report, backpedalled on knowledge in the subsequent years, which is part of the overall policy fashion. Even in the ethics recommendation, data is more present. That is the first point.

The second point: why do people download it? They react intuitively. They can understand knowledge; data is abstract. Knowledge is what we are now exchanging, creating, developing. And my point is that the common sense element is extremely important. And through that, through bottom-up AI, through preserving the knowledge of today's discussion, maybe the excellent questions we will have, this is knowledge that was generated by us at this moment. And this is also, back to Markus and the other magical realism, something you have to grasp at the moment: it's financially affordable and ethically desirable, if you want this trinity.

But on your question, let me just reflect on two points of the discussion. There are many false dichotomies, including in the quest for knowledge. I can list them: multilateral versus multistakeholder, privacy versus security, freedom versus public interest.

And we can label them as false dichotomies, but I think we should take a step forward. Ideally we should have both: multistakeholder and multilateral, security and privacy. But for some things you have to make tradeoffs, and this is critical: that the tradeoffs are made in a transparent way, so that you can say, okay, in this case I am going with a multilateral solution because Governments have their respective roles and responsibilities. You can find this in many other fields.

Back to your question about bringing the discussion to common sense, and the references that colleagues made: I would go back not only 150 years, I would go back to Hammurabi, 3,400 years ago. There is a provision in the law that if you build a house and the house collapses, the builder of the house should be punished with a death sentence. That was the time. Harsh.

>> GUILHERME CANELA GODOI: We don't want that.

>> JOVAN KURBALIJA: We are reporting from this session. Let's take a hypothetical situation: our AI system gets confused and says that the two of you, or all of us, said something which you didn't say. You go back to Paris and your boss says, by the way, did you say that? And you say, no, I didn't say it, but Diplo reported it. Who is responsible ethically and politically? I am responsible; I am the Director of Diplo. Nobody forced me to establish and develop an AI system. Otherwise we are losing a common sense which has existed from Hammurabi until today: somebody who develops an AI system and makes it available should be responsible for that AI system. The core principles are common sense principles. In that sense, people, by downloading knowledge, are reacting with common sense. I think in AI governance we should really get back to common sense and be in a position to explain to a five-year-old what AI governance is.

And it is possible. I would say this is a major challenge for all of us in this room, and for the policy community: to make AI governance common sense, bottom up, and explainable, explainable to anyone who is using AI.

>> KATHLEEN ZIEMANN: Thank you very much. I don't see a queue behind the mics yet. Ah, there is someone. That is great. Welcome. We are happy to take your questions for the panel now. It would be great if you could say who you are and from which institution, and also to whom you would like to direct your question.

>> AUDIENCE: Thank you. My name is Diane HewardMills. I'm the founder of a global data protection office called HewardMills. For those that don't know, under the GDPR certain organisations are mandated to appoint a data protection officer, an individual or an organisation that has responsibility for independently and objectively reviewing the organisation's compliance when it comes to data protection, cybersecurity, and increasingly AI.

So I'm a U.K. qualified barrister, I have been working in governance for 35 years, data protection focused and privacy focused governance, and so I have been running this organisation for seven years, which I'm very proud to do as a sort of female founder.  I know I'm a very rare beast.

But importantly, I decided five years ago to go for the standard called the B Corp standard. I don't know if you are aware, but B Corp is a standard for organisations that can demonstrate high standards in environmental, social and governance performance, ESG.

So my comment or recommendation is this: we oversee carbon offsets and the efforts of organisations to demonstrate ESG. I had a thought: would it be an idea if organisations could also demonstrate their social offset? So, for example, if you are a tech business or a health business using AI, would it be an idea that you document the existing risks, think about foreseeable risks, and think about how you could offset those risks in an objective way, with an independent overseer of that type of activity?

I just thought I would throw that out there to the panelists, because we are thinking about creative ideas for making AI governance tangible and explainable. And I wondered, for example, if that sort of requirement to demonstrate a social offset had existed 15 years ago for social media platforms, what sort of world we might be in today.

>> KATHLEEN ZIEMANN: Thank you very much.  I think it was not specifically directed to someone on the panel, so whoever wants to take that question, I'm looking at you, Melinda, but I think it might be relevant for others as well.

>> MELINDA CLAYBAUGH: I'm happy to take a first stab at it. What you are talking about is really a risk assessment process that is objective, transparent and auditable in some fashion. You are right, that is the basis of the GDPR and the accountability structure that so many data protection laws have been built on.

I think increasingly we see it in the content regulation space, particularly in Europe as well that there are risk assessments and mitigations and transparency measures that can be assessed by external parties, and interestingly, we are seeing that in some early AI regulation attempts.  I speak most fluently about what's going on in the U.S., but we are seeing very similar structures around documenting, identifying, documenting risks, demonstrating how you are mitigating them, and then in some fashion making that viewable to some set of external parties.

I do think that is a proven and durable type of governance mechanism that makes a lot of sense. I think we still come to the issue, however, of what the risks are and how they are assessed. And I say that because it is a particularly thorny challenge in AI, particularly in the AI safety space, and there are healthy debates around what risks are tolerable or not.

But I do think that as a framework it makes a lot of sense, and there are a lot of professionals who already work in that way, and companies already have those internal mechanisms and structures. So I would be surprised if we didn't land in a place like that, and, in fact, that's what the EU AI Act proposes as a structure.

>> GUILHERME CANELA GODOI: But in that case, even if there is no consensus about what the risks are, the transparency that you say you also agree with is part of the solution, right? Companies don't need to be forced to agree on the risks, but they need to be transparent in telling stakeholders which risks they consider and how they are mitigated. The problem is saying "this is a risk, you need to report on that"; when the requirement is to report on how you do risk assessments, then it's a different ball game.

>> MELINDA CLAYBAUGH: I'm thinking about it through the lens of an open source service provider and this is a tricky area of AI governance and regulation.  How you govern closed models and open models may be very different.  And so we provide, we do all kinds of testing and risk assessment and mitigation of our model, and then we release it for people to build on and add their own data to, and build it for their own applications.

We don't know how people are going to use it, or how they end up using it. We can't see that, and we can't predict how the model will be used. So I think there are nuances as we think about this in terms of who is responsible for what. I do think some of it is common sense about who is using it, but that's part of the value chain issue that people talk about.

>> KATHLEEN ZIEMANN: I see that Mlindi wanted to react to the question.

>> MLINDI MASHOLOGU: That is why, as policymakers, we want everybody to play fair when it comes to AI. There are areas where self-regulation from organisations will be there, but it is equally important to make sure that we can look at the risks emanating and deal with them, from both the Private Sector and Government. As Government, we don't want to be seen as doing hard regulation and all of that, which might end up stopping innovation. We want to make sure that everybody can be protected, while from the Private Sector point of view you can still derive the value that you want from AI systems. I think that is what is important.

But also the area that I touched on before, explainability, is very important, because these models might make decisions that can be very harmful to human lives, and that is why we say those decisions need to be explainable. It also means that whenever the model makes a decision, it needs to have considered a broader data set covering various demographics, to make sure you don't look at only a few demographics and assume the model can make a good decision based on the small amount of data you trained it on.

>> MELINDA CLAYBAUGH: That's a big achievement of the open source community to explore explainability of what is happening within the data and the models.

>> KATHLEEN ZIEMANN: We would love to move forward with the online questions as well, but first we give the floor to this microphone.

>> AUDIENCE: My name is Kunla. I am from Nigeria, and I'm president of the Internet Society chapter there, and we are into advocacy and things like that, so to say. So my concern is this: I note it's just about the right time for us to start discussing AI governance. There is no naysaying that. However, there are issues that we need to look at critically. One of those issues has to do with the way data is being collected.

I listened to Jovan when he was emphasizing the issue of knowledge. I agree, because the end product of Artificial Intelligence is knowledge, so to say. However, how we gather this data is, I think, very important. Why am I saying that? Because we are looking at an AI that is going to be inclusive, that will be able to have value for every community, so to say. And you will agree with me that this data gathering is being done by experts, and every individual has their own bias.

So I believe that whatever data you gather is as inherently flawed as the bias of the person that gathered the data in the first place. So we need to start looking at how we are going to bring inclusivity and how we bring all of this data together, considering all of the stakeholders. I think that's very important. That is on one hand.

And for me, I think it will get to a stage where even this AI we are talking about is going to become a DPG, a digital public good. I'm saying that because it's going to be available to everybody, and everybody should be able to use it for whatever purpose they want. But before we go there, how do we ensure that we put everybody on the same pedestal, in the sense that we need to have a framework that is universal? I listened to Melinda when she was talking about frameworks, and I began to see different frameworks coming from different stakeholders. So we need to sync them and bring the frameworks together so we can have a universal framework that will speak to the issues that bother everybody, and AI at the end of the day is going to be universal and able to take care of everybody's concerns. So I want the panelists to react to this; I think Jovan and Melinda should be able to react. Thank you very much.

>> KATHLEEN ZIEMANN: Jovan.

>> JOVAN KURBALIJA: Two comments, both controversial, but the first is more controversial. We have had a lot of discussion about cleaning biases, and I'm not speaking about illegal biases, biases which basically insult people and people's dignity. That's clear; those should be dealt with, even by law. But leave that aside.

But we should keep in mind that we cannot build machines that are clear of bias. I am biased. My culture, my age, my, I don't know, hormones, whatever, are defining what I am saying now, or what questions you ask. Therefore this obsession with cleaning bias, which is now calming down but existed, let's say, one or two years ago, was very dangerous. Yes, illegal biases, biases that threaten communities, definitely. But beyond that point, we have to bring more common sense into this again. The second point that you mentioned is about knowledge. Knowledge, like a bias, should have attribution, whether financial or legal.

The question is that your knowledge is built on your understanding and other things. The problem currently in the debate is that we are throwing our knowledge into some pot, I call it the AI angle, where it is disappearing, and then we are revisiting it. I am testing big systems; at Diplo we are testing what happens when we put in specific contextual knowledge, and we realize it is taken and repackaged, not sold yet, but maybe in the future. That's a critical issue. Your knowledge, local and community knowledge in Africa, Ubuntu, written knowledge, belongs to somebody or should be attributed: shared within a universal framework, definitely, but attributed. That's the critical issue when it comes to knowledge, and also to your previous question of what we should do with knowledge.

And, again, the instruments are there, and the risk is that by confusing the AI governance discussion with anything and everything, the magical realism, we basically miss the core points. It is like a baby crying: instead of answering the question with existing tools, we are giving toys to the baby, which is the discussion on ethics and philosophy, and I love philosophy, but there are some issues we can solve with existing instruments. That relates to your question: the question of bias and the question of knowledge.

>> KATHLEEN ZIEMANN: Melinda, before you react, I look at Jhalak's face and I see that you may not agree with all of the points, especially possibly the one that bias in data can't be neglected? Is that something you are thinking about?

>> JHALAK KAKKAR: I mean, I don't disagree with you, actually.  I think there is a reality that there is a level of bias in all of us.  And it's not that the world is completely unbiased.  It's not that when judges make decisions, there is no bias over there.

And ultimately AI is trained on data from this world, and biases will get embedded into it. It is trained on existing data sets which capture societal bias.

I think the difference is that with human decision making, in many contexts, we have set up processes and systems and there has to be disclosure of the thinking and reasoning going into a decision, and that can be vetted if someone raises an objection. With AI systems, that's the challenge: explainability has, in many contexts and for various kinds of AI systems, been challenging to establish.

And I think that's a question that is still being grappled with. And I think disclosure of the use of AI systems in various contexts, whether someone knows that an AI system is being used and that they are being subjected to it, and then the kind of bias that comes into decision making that impacts them, I think that's the other piece of it.

>> KATHLEEN ZIEMANN: Thank you. Melinda?

>> MELINDA CLAYBAUGH: Two quick thoughts. I think it is critical that AI works for everyone, and part of that is making sure that we have the data, that there is a way of either training a model or fine-tuning a model on data that is as representative as possible. I think that's a foundational key concept.

I also think that there needs to be a lot of education around AI outputs, so that when people are interacting with AI, they understand that what they are getting back may not be the truth. What is it? It's actually just a prediction about the next right word.

So I think we are at the very early stages of this in society, and our expectations of what it is, what it should be, and what these outputs should be relied on for, are still very much evolving. I do agree that when AI is being used to make decisions about people or their eligibility for services or jobs, there is an extra level of concern and caution, and requirements should be added in terms of a human in the loop or transparency around the decision that's made. I absolutely understand the concerns around that. So as a society we will get more experience and understand these tools more, and what they should and should not be used for.

I think these questions will get more sorted out.

>> KATHLEEN ZIEMANN: Thank you very much.  So at IGF we want to be as inclusive as possible.  That's why we have the online participation for people that can't be here and can't afford to travel here.  We have our online moderator Pedro behind the mic here. 

If you could give us two relevant questions from the online space that need to be addressed to the panel, that would be really great.

>> MODERATOR: Thanks. We have a question directed to Jhalak and Melinda. What are the panelists' views about the consensus gathered in the Council of Europe Framework Convention on AI, Human Rights, Democracy and the Rule of Law, the first international treaty and legally binding document to safeguard people in the development and oversight of AI systems?

We at the Center for AI and Digital Policy advocate for endorsement of this international AI treaty, which has 42 signatures to date, including European states.

>> KATHLEEN ZIEMANN: It is not coming through clearly. We have difficulty understanding you. Can you give us the main points that need to be discussed? Was the first one about the EU AI Act?

>> MODERATOR: The Council of Europe Framework Convention on AI, Human Rights, democracy and the rule of law.  The comments from the panel.

>> KATHLEEN ZIEMANN: I think that went through okay.  Yes?  Jhalak, do you want to react?

>> JHALAK KAKKAR: Are they asking about the Framework?

>> KATHLEEN ZIEMANN: Yes.

>> JHALAK KAKKAR: So I think there has been a lot of conversation globally around what is the right approach to take.  You know, Melinda was saying we need to think about what systems need more scrutiny versus others, systems that are impacting individuals and people directly versus those that are not.

There has been a whole conversation, which we have referenced earlier in this dialogue, around innovation versus regulation, and what is the right level of regulation to come in with at this point. What is too heavy? What is not enough?

And I think I don't have the answer to that.  I think in different contexts it's going to be different.  In countries which have a high regulatory capacity and context there is more that they can do and implement.  In countries that don't, we have to frame regulation and laws which work for those sort of regulatory and policy contexts.

But what I think is really important is that occasions like, for instance, the India AI Impact Summit are an opportunity, because India is trying to emerge as a leader in the global majority, to really bring together thinking from civil society, academia, researchers, industry and Governments, particularly from the global majority, to talk about the right way forward. Would it be borrowing from ideas that have developed in another context, and perhaps there are ideas that are relevant to pick up from there, or what is contextually and locally useful and relevant within the contexts we come from?

I mean, places like India and South Africa may have a lot of AI that was developed elsewhere, say a health diagnostic tool developed at Sloan-Kettering, which is brought in and deployed in the Indian context, but the demographics are different. The kinds of testing and treatment available in secondary and tertiary healthcare systems are different. So there are a lot of differences. So how do we think about something like that, which may not really be a topic of discussion in other parts of the world?

So I think in India and places like South Africa we may have slightly different challenges to grapple with, and I think it's very important that those conversations happen as well.

>> MLINDI MASHOLOGU: From the South African point of view, as my colleague has highlighted, one of the areas is human rights. They are enshrined in the Constitution, so whatever you do from the technology point of view, you need to make sure that it does not impact human rights or the Bill of Rights. So one of the things we are trying to do is make sure that whenever these types of technologies are put in place, they do not infringe on people's rights. But you will find that we also have other laws, like our Protection of Personal Information Act. It says you can't just use my information, but then how do we make sure that we can use your information for the public good?

So now these two laws are competing. One is trying to use information for the greater good, but the other is saying you can't just use my information. So I think it's going to be quite a balancing act: what are the things we can use to make sure we drive innovation, and what are the things we need to do to make sure we don't infringe on human rights as well as on people's information?

>> KATHLEEN ZIEMANN: Thank you very much.  I see there are further questions from the floor.  Jovan, you will be reacting briefly.

>> JOVAN KURBALIJA: The EU AI Act and the Council of Europe Convention. Those are interesting points. The EU moved fast, and probably too far: as we are hearing from Brussels, there is a bit of revisiting, especially on defining high-risk models. The Council of Europe is an interesting organisation. They adopted the Convention on AI, and they are interesting because under one roof you have the Convention, but you also have human rights coverage. There is also the Human Rights Court. You have cybercrime: the Council of Europe is the host of the Budapest Convention.

You have science.  It is therefore one of the rare organisations where the interplay between existing silos, when it comes to AI, could basically be bridged within one organisation.  Those are just two points on the EU AI Act and the Council of Europe.

>> KATHLEEN ZIEMANN: Let's pull in the last two questions from the floor.  I see two people standing behind the mic over there.

>> AUDIENCE: Yes.  Thank you.  Well, my name is Pilar Rodriguez, I'm the youth coordinator for the Internet Governance Forum in Spain.  And I wanted to follow up a little on what Ms. Jhalak was saying about how countries can achieve AI governance and AI sovereignty, and whether this leads to AI fragmentation.  I'm not just thinking from a regulatory perspective, because we have the AI Act in Europe, we have the California AI regulation, China has a regulation.  So doesn't that lead to more fragmentation?  And coming from the youth perspective, is there a way to ensure that we have, let's say, a global minimum so that future generations can be, let's say, protected?

>> KATHLEEN ZIEMANN: Thank you very much.  Let's take the next question from the person behind you.

>> AUDIENCE: Hi, Anna from R3D in Mexico.  It's going to sound like I'm making a comment more than a question, but I promise there is going to be a question, because I was very concerned to hear how the risks of AI were being underestimated, making it sound like something hypothetical when it has actually materialized in several examples around the world.

And Jovan was mentioning this topic of knowledge and education while at the same time speaking about alleged biases, when I think that in reality there have been several examples of how classism, racism and misogyny affect how people can access basic services around the world, or how police predict who is or isn't a suspect.

So we shouldn't misinform people about the actual risks.  My question to Melinda relates to the environmental crisis we are living through.  Since she mentioned that companies such as Meta are doing these risk assessments, I wonder how Meta is planning to self‑regulate when, for example, it hasn't done environmental or human rights assessments, when it has established hyperscale data centres in places like the Netherlands, where public pressure stopped them from being constructed, so then you move them to Global South countries, or to Spain in that case, so that all of the issues with extractivism, with the water crisis, with pollution arrive in other communities where there hasn't been any consultation, but you are claiming that there has.  That would be my question.

>> KATHLEEN ZIEMANN: Thank you very much.  One point is fragmentation and the other is, basically, global AI justice.  Melinda, do you want to react first?

>> MELINDA CLAYBAUGH: Sure.  I mean, I can't really speak to the data centre piece.  I think your question was essentially about the energy needs for AI and where data centres are placed.  I can't really speak to that.

I can say that I think we all know that the AI future is going to require a lot of energy, and I think there are a lot of questions about where the energy needs are and where the solutions to those energy needs are going to come from, but I can't speak in detail about how particular decisions are made.

>> KATHLEEN ZIEMANN: In terms of fragmentation, that was part of the first question, right, the fragmentation of AI governance, with so many initiatives and so many different stakeholders.  That basically raises the question: how could you, coming from different sectors and regions, cooperate more in that area?  Is there an idea here on the panel of how that could look?  Who would like to react to that?

>> JOVAN KURBALIJA: Is the question how to avoid it?

>> KATHLEEN ZIEMANN: How different sectors and regions could cooperate even better on AI governance, how to counteract the fragmentation?

>> JOVAN KURBALIJA: We have to define what fragmentation is.  Having rules adapted to the Indian, South African, Norwegian, German or Swiss context is basically fine.  But the communication or exchange should probably be handled through standardisation around weights.  Weights are basically the key element of AI systems.  We might think about some sort of standards to avoid the situation we have with social media: if you are on one platform, you cannot migrate your network to another.  Now, the Digital Services Act in the EU is trying to mitigate that, but the same may apply to AI: if my knowledge is codified by one company and I want to move to another platform or company, there are no tools to do that.  My advice would be to be very specific and to focus on standards for the weights, then to see how we can share the weights and, in that context, how we share the knowledge.

>> KATHLEEN ZIEMANN: So joint standardisation.

>>  MLINDI MASHOLOGU:  I think from the continent, we started this governance work around 2020, 2021, when we developed an AI approach appropriate for the continent.  From there the African Union also worked on developing its AI strategy, but the individual member countries are also developing their own policies and strategies.  So I think there is not much fragmentation; it's just that at the grassroots level you find that each country has particular priorities it would like to focus on.  But generically, if we look at all of the published policies, strategies and legislation, you will find that they normally address the core principles: issues of ethics, bias and risk.  From the South African point of view, we are actually advancing some of these aspects as well.  So at the country level you will not find exactly the same thing everywhere, but it's not that they are disjointed; they just have different priorities.

>> GUILHERME CANELA GODOI: Do you want to add anything?

>> JHALAK KAKKAR: Yes.  I think there is a concern that in the drive for innovation, there is a race to the bottom in terms of adherence to responsible AI, ethical AI and rights frameworks.

We have several existing documents, ranging from the UDHR to the ICCPR, which can be interpreted, and through which norm building by international organisations can set a certain baseline.  As the WSIS+20 review happens, I think the IGF should be strengthened to really help not only with agenda setting for the action lines, but also as a feedback loop into the CSTD, the WSIS Forum and other mechanisms, so that holistic input from multiple stakeholders goes into these processes.  That would account for many of the concerns that have been raised, ranging from environmental concerns and the impact of extraction in global majority contexts to questions of data labelling for AI and worker-related concerns.  So I think all of this needs to be surfaced, and these conversations need to feed back into the agenda setting as well as the final outcomes, because that level of international coordination, at both the multilateral and the multistakeholder level, is important.  We have to come together and work together to find ways to set this common baseline so that, in the race for getting ahead, we don't lose focus on the common values we have articulated in documents like the UDHR.

>> GUILHERME CANELA GODOI: Thank you.  So now we are walking towards the end.  Does the online moderator have a very straightforward question?

>> MODERATOR: Yes.  We have one from Michael Nelson about two sectors: the two sectors spending the most money on AI are finance and the military, and we know very little about their successes and failures.  So he would like to hear from the panelists, especially from Jovan and Melinda: what are their fears and hopes about those two sectors?

>> GUILHERME CANELA GODOI: The question is about AI.

>> MODERATOR: Especially finance sector and the military.

>> GUILHERME CANELA GODOI: Fears and hopes.  Then I will give the floor to each of you for one minute: comment on that if you want, and within that minute, also give your key takeaway from the session.

>> MELINDA CLAYBAUGH: I don't have an answer on the finance and military sector hopes and fears, to be honest.  We are very focused on adding AI personalization to our family of apps products.  I will leave finance and the military to others.  On the key takeaway from the session, I think it really is interesting to take stock of where we are at these meetings.  I have been at the last couple of IGFs, and I think the pace of discussion and the developments in the space are really fast moving.

So I'm encouraged, and I would encourage us all to keep having these conversations.  I think multistakeholder will be the word that everyone is going to say here, but it is a unique and important role that the IGF plays in bringing people together.  I know we have a lot of Meta colleagues here.  We take everything we hear here back home, talk to people, and let it inform our own direction.

So let's keep having these conversations.  I think the convening power, bringing these particular voices together, is the most important contribution right now in this space.

>> GUILHERME CANELA GODOI: Thank you.

>> JOVAN KURBALIJA: On military and AI, it is unfortunately taking centre stage with the conflicts, especially Ukraine and Gaza, together with the question of the use of drones.  There are discussions in the UN on autonomous weapons, or killer robots, and the Secretary‑General has been very, very vocal for the last five years about banning killer robots, which is basically about AI.  What is my takeaway?  Diplo is building an apprenticeship programme which explains AI by developing AI.  People learn about AI by developing their own AI agents.  I would say: let's demystify AI, but still enjoy its magic.

>> JHALAK KAKKAR: I think my final thought would be that we need to learn from the successes of the past, things like the multistakeholder model and the successes we've seen in international cooperation, but we also need to learn from the mistakes that have been made around governance and technology, and not repeat those.

And I think we need to continue to work together to build a robust, wholesome and impactful digital ecosystem.

>>  MLINDI MASHOLOGU:  From my side I want to say that AI needs to be grounded in human rights.  We need to make sure that the technology empowers individuals, but also, when it comes to innovation, we need to do that responsibly by looking at an adaptive governance model which includes regulatory sandboxes.

I think the last point I want to touch on is the issue of collaboration: aligning national, regional and global efforts to ensure that the benefits of AI are spread across everybody in our society.  Those are my final thoughts.

>> GUILHERME CANELA GODOI: Thank you very much.  I have the difficult task of trying to summarize, which would be impossible, but just a disclaimer: whatever I'm going to say now is the full responsibility of Guilherme Canela Godoi; it's not the institutional view.

I think there is an interesting element in this conversation.  Many years ago, when I was involved in similar debates on AI governance and so on, the first thing that appeared was bias.  Bias appeared very late in our panel, which is a good sign, because the first things that appeared were the processes, even if we disagree on them: the dichotomy between innovation and risk, and the key words we used, risks, innovation, public goods, data governance, bringing knowledge back.  Those are more structured frameworks than looking only into the real but very specific issues of bias, disinformation, conspiracy theories and so on.

So I think this is a good sign for all of us, even if we disagree, as you noticed: we are looking at something we can take to the next level of the conversation from a governance point of view.  When we concentrate too much on specific pieces of content rather than on processes, the conversation becomes very difficult, because it turns on polarization and on specific opinions, which everyone has the right to have, about what is false and what is not, what is dangerous and what is not.  When we concentrate instead on transparency, the public good and so on, all of these key words come with interesting knowledge behind them on how to transform them into concrete governance, which does not mean only governmental governance; it can be self‑regulation, co‑regulation and so on.  But, for obvious reasons, we also left things out of the conversation that need to be part of governance frameworks.  For example, the energy consumption of these machines should be part of governance frameworks, and it appeared very late today because of time constraints.  Still, I do think the panel did a good job of also surfacing some of the divergences in this conversation, which is part of the game.

The last thing I want to say, and this is not on the shoulders of the panelists or my co‑moderator: I invite you to think that being innovative means to "Leave No One Behind" in this conversation.  When Eleanor Roosevelt was holding the Universal Declaration of Human Rights in that famous photo, that was the real innovation: how we came together and put those 30 Articles together in a groundbreaking way, addressing problems that are not solved even today.

So what we really require is an innovation that includes everyone, and not only the 1%.  Thank you very much.  Thank you, my co‑moderator.  It was a pleasure.

>> KATHLEEN ZIEMANN: Thanks to all of you.  Thank you.