IGF 2025 - Day 3 - Conference Hall - Open Forum #30 High-level review of AI governance including the discussion

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> YOICHI IIDA: Good morning, everybody, and good morning, good afternoon, or good evening to our online participants, wherever you are.

My name is Yoichi Iida, former assistant vice minister of the Japanese Ministry of Internal Affairs and Communications, and also Chair of the OECD's Digital Policy Committee.

Thank you very much for joining us. Today we are discussing the current situation and also some foresight on global AI governance.

And we have very excellent speakers on my left side. So let me introduce my speakers briefly before they take the floor and make their own self-introductions.

So at my end, first, Dr. Ansgar Koene, Global AI Ethics and Regulatory Leader at EY. Next to him, Mr. Abhishek Singh from the Indian Ministry of Electronics and Information Technology. Thank you very much, Abhishek.

And next to him, we have Lucia Russo, economist at the OECD's AI and Digital Emerging Technologies Division.

Next to her, we have Ms. Melinda Claybaugh, Director of Privacy and AI Policy at Meta.

Thank you very much for joining us. And last, of course, but not least, we have Dr. Juha Heikkila, Adviser for International Aspects of Artificial Intelligence at the European Commission.

Thank you very much for joining.

So, AI governance. As all of you know, we are seeing big changes not only in technologies but also in policy formulation. The Japanese government started the international discussion on AI governance as early as 2016, when we proposed an international discussion on AI governance at the G7 and also at the OECD.

This proposal led to the agreement on the first international, intergovernmental principles, the OECD AI Principles, in 2019, and the G7 discussion also led to the launch of the Global Partnership on AI (GPAI) in 2020.

UNESCO also started the discussion on its recommendation on the ethics of AI, and the European Commission started discussion on an AI governance framework, which led to the agreement on the AI Act in 2023.

Over these years, we saw rapid change in AI technology, in particular the rapid rise of ChatGPT at the end of 2022. And we saw a lot of new types of risks and challenges brought by the new AI technology.

That was the background to why we started the discussion at the G7 on the Hiroshima Process.

We wanted to respond to the new risks and challenges brought by generative AI, and near the end of the year the G7 agreed on the Guiding Principles and the Code of Conduct under the Hiroshima AI Process.

And this effort led to the launch of the reporting framework for the Hiroshima Process Code of Conduct in 2024, and this year we saw 20 reports by AI companies published on the OECD website on the 22nd of April.

In the meantime, the UN also started the discussion on AI governance, and we saw agreement on UN resolutions dedicated to AI: two resolutions, one led by the U.S. and one led by China.

The UN also started the discussion on the Global Digital Compact, which concluded in September 2024. We are now in the process of the GDC follow-up, and also at the beginning of the discussion on the WSIS+20 review.

So this is the short history of the AI governance discussion over the last several years. Against this background, I would like to discuss with these excellent speakers what the priorities and points of emphasis in these discussions are for the different stakeholders in the AI ecosystem, and what their perspectives are.

So let me begin with Lucia from the OECD. What do you think your priorities and emphasis are in promoting international or global AI governance, and what international initiatives and frameworks do you consider most significant, at present and for future discussion, for countries, for international organizations, and for other stakeholders? What is your view?

>> LUCIA RUSSO: Thank you, Yoichi, and thank you to my fellow panelists for this interesting discussion.

As Yoichi mentioned, we started work at the OECD with countries like Japan and multi-stakeholder groups on international AI governance back in 2019. And we have continued that work throughout the years to move from these principles that were adopted by countries into policy guidance on how to put them into practice.

And the role of the OECD has been, since then, to be a convenor of countries and multi-stakeholder groups, and to provide policy guidance and analytical work to support an evidence-based understanding of the risks and opportunities of artificial intelligence.

So I think, in terms of the role for the OECD, there are three main strategic pillars. The first is moving from principles to practice, and that is undertaken through several initiatives that range from a broad expert community supporting our work to providing metrics for policymakers.

And this is done through our OECD AI Policy Observatory, which provides trends and data, but also a database of national AI policies that allows countries to see what others are doing and to learn from experiences across the globe.

And third, to promote inclusive international cooperation. In that regard, a key milestone was achieved in July 2024, when the Global Partnership on AI and the OECD merged and joined forces to promote safe, secure, and trustworthy AI, which would, again, broaden the geographic scope beyond OECD members. We now have 44 members of the Global Partnership on AI, and this includes six countries that are not OECD members, including India, Serbia, Senegal, Brazil, and Singapore.

And so the idea is that this broader geographic scope will increase further as we proceed, and that will foster even more effective, inclusive conversations with these countries.

And in terms of the priorities that we see, the Hiroshima AI Process was of course mentioned, and that is an initiative we see as very prominent, because it allows for a common standardized framework for these principles, which were adopted by the G7 but are, of course, open to countries beyond the G7.

But more than that, the transparency element is also very important. Because it's not only about committing to these principles; it's also about demonstrating that companies are acting upon these principles, and sharing in a transparent way which concrete actions they are taking to put them into practice.

And this is really where we see the learning experience, both for countries and for companies themselves, which can share these initiatives and show what they are doing in practice to promote the different principles that we see in the framework.

So looking forward, these are the areas where the OECD will continue working: evidence, inclusive multi-stakeholder cooperation, and guidance on policies.

>> YOICHI IIDA: Okay, thank you very much. Actually, the OECD AI Principles in 2019 laid a robust foundation for national and international AI governance. Our government was very supportive, and we also learned quite a lot from this.

And Japan enacted a new AI law only last month, and there are a lot of reflections of the OECD AI Principles in our own AI law.

So thank you very much.

So I would like to invite the two speakers from governmental bodies. Now I turn to Abhishek. Thank you very much for joining us.

From the government perspective, what do you think your priorities and emphasis are in developing AI governance, and how do you evaluate the current situation?

>> ABHISHEK SINGH: Thank you, Yoichi, and thank you for highlighting this very, very important issue of AI governance and how we can work with the global community, especially with the work done at the OECD and various forums, whether the UN High-Level Advisory Body or the G20 initiatives in Brazil and South Africa.

The whole world together is trying to address a common issue: how we can leverage the power of this technology, how we can use it for larger social good, how we can use it for enabling access to services, and how it can lead to empowering people.

So that's been the principal mantra of what we have been doing in India. We are a large country, and we do believe that AI can be a kinetic enabler for empowering people: for access to education and health care across the country in various languages, and for enabling voice interfaces that empower people.

To do this, we need a balanced, inclusive approach towards developing the technology. We need to ensure that access to AI compute, datasets, algorithms, and the other tools for building safe and trusted AI is more equitable.

Currently the state of the technology is such that the real power of AI is concentrated in a few companies and countries. If we have to democratize this, if we have to ensure that the countries of the Global South become stakeholders in the conversations around it, we need to agree on this in all forums.

I happened to mention this at the GPAI sessions we chaired, and following last year's summit in Serbia and the coming one this year in Slovakia, we have the framework that we came up with for GPAI: we need to be inclusive and bring in the Global South [?], and the Global Digital Compact is welcome in that regard.

But what about the practical steps? How do we make it happen? How do we ensure that a researcher in a low-income country has access to the compute that someone in Silicon Valley has? We need to create frameworks. At the French summit, which was co-chaired by India, there was the Current AI initiative, which involved financial commitments to build up an institutional framework for funding such initiatives and for adopting AI technologies.

That's something that we need to continue. As we move from the French summit to the India summit that we're hosting next year, we'll need to work with the entire community to ensure this. We are making compute available at a low cost in India; it's available at less than a dollar per GPU hour. Can we build up a similar framework so that researchers in low- and middle-income countries get access to something similar?

Can we build up a sharing protocol for the datasets on which models are trained? Datasets in various languages, whether from Asia, Africa, or anywhere else, can they be included when we're training the models? Can we have the cultural context added to the training?

If that is done, the entire development of AI will become much more inclusive, and we'll be able to address real problem statements, for example diagnosing diseases such as breast cancer and tuberculosis. These solutions can then be shared across other countries.

We have a model for this in the DPI ecosystem, the Digital Public Infrastructure. There's a global repository and [?], so that's something we need to work on when we're working on frameworks.

And there are tools. How do we do privacy-enhancing anonymization of data? How do we ensure that we can counter the harm that [?] can cause? How do we tackle misinformation on social media, where AI sometimes becomes an enabler?

Can we develop tools for watermarking content? Can we develop frameworks so that social media companies become part of this whole legal system, so that we prevent the risks that democracies face?

If we create a governance framework in which all such issues are addressed, including building capacities across the world, we'll be able to build an AI ecosystem that's more fair, balanced, and equitable.

We are working with the global community towards this, and I hope this will further contribute to such enabling frameworks.

>> YOICHI IIDA: Thank you very much for the comprehensive remarks. I cannot agree more when you say we want to make use of the power of this technology. And I believe the ultimate objective of governance is to make use of this technology as much as possible, but without concern.

So this is a point we need to share, and also the common objective in building up a global governance framework.

Having said this, Juha, what are the priorities or requirements of the EU?

>> JUHA HEIKKILA: Thank you, Yoichi, for this invitation. It's useful to understand that the AI Act does not regulate the technology itself; it regulates uses of AI.

So we have a risk-based approach, and it only intervenes where it's necessary.

So there are these statements that it regulates AI. It doesn't; it regulates certain uses of AI which are considered either too harmful or dangerous, or risky enough that there need to be some safeguards in place. It's innovation friendly because, according to our estimate, 80%, maybe 85%, of the AI systems that we see around us would be unaffected by it.

And it applies equally to everyone placing AI systems on the EU market, whether they are European, Asian, American, you name it. So in that sense, it creates a level playing field. And it prevents fragmentation.

So we have uniform rules in the European Union; we don't have a patchwork of rules. And it isn't as if we would have no regulation without the AI Act, because Member States would have proceeded to regulate individually.

But regulation is just one aspect of our activities. It's a common misconception that we only do regulation. We actually invest a lot in innovation. We've been doing that a lot over the years, we've only increased our investment in it, and we have made several announcements about that in recent months.

And the third pillar, in addition to trust, meaning regulation, and excellence, meaning innovation, research, et cetera, is international engagement. Because some of the challenges related to AI, or many of them, know no boundaries. They are global. So we think that cooperation is both necessary and useful.

So we want to be involved, and we engage bilaterally and multilaterally to support the setting up of a global level playing field for trustworthy, human-centric AI. And we have colleagues who share those objectives. We want to have AI for the good of us all. We want to promote responsible stewardship of AI. But we also look at technical aspects, for example cooperation on AI safety, support for innovation, and its uptake in some key sectors.

We do this bilaterally with a number of partner countries, which is increasing. But we're also involved in all the key discussions: the G7, so the Hiroshima Process already mentioned, the Hiroshima Friends, the G20, the Global Partnership on AI. The European Union is a founding member of the Global Partnership on AI, so we've been involved in that from the very beginning.

That is now an integrated partnership with the OECD, and at the OECD, of course, we're involved in all the key working groups which relate to AI.

We are a member of the network of AI Safety Institutes, and we've been actively involved in the summits as well: Seoul, Paris, and of course the upcoming summit in India, where we'll also be involved.

And of course we are also, via the Member States, involved in the UN processes, especially the Global Digital Compact and its implementation, which is now in a critical phase.

And basically we do this from two perspectives. On the one hand, we do it to promote our goals, which I listed. On the other, we do it to ensure that whatever conclusions, declarations, and statements result from these initiatives and events are compatible with our objectives and our strategy, and also with our regulation, so that we don't end up in a situation where we have international commitments which conflict with our strategy in general and our regulation in particular.

So this is basically the rationale for our engagement and our involvement. Thank you.

>> YOICHI IIDA: Thank you very much for your very detailed explanation. And we really understand the AI Act's objective of an innovation-friendly environment across the EU region.

And we also discussed in the G7 that different countries and different jurisdictions have different backgrounds and different social, cultural, or historical conditions. So the approaches to AI governance differ from one another.

But still, that is why we need to pursue interoperability across different jurisdictions and different frameworks, and I'm personally impressed by the European Commission's approach in the discussion of the Code of Practice under the AI Act, which has been very open to all stakeholders.

Our private sector people were also very much impressed when they joined the discussion and submitted their comments, which were well reflected in the current text. And we are expecting a very good result from the discussion of the Code of Practice as part of the AI Act.

Now I turn to the other stakeholders. Melinda, from the perspective of an AI company, how do you evaluate the current situation of global AI governance? And what are the priorities or requirements of a private company in the governance framework, and what do you expect?

>> MELINDA CLAYBAUGH: Thank you so much for the question and thank you for the opportunity to be here.

As you were giving the opening remarks and listing all of the frameworks and the acronyms and all of the principles and bodies that are involved here, it was really remarkable to hear the work that has gone on in the last couple of years in the international community on AI governance.

And there has just been an incredible proliferation of frameworks and principles and codes and governing strategies.

And I think at this moment it's really important to consider connecting the dots. We don't want to continue down the road of duplication and proliferation and continued putting down of principles. I think we've largely seen similarity and coherence of approach across the various frameworks that have been put out at a high level.

And I think it's really important at this point to think about how we connect these frameworks and these principles. Because if we do not think about that, then we are at risk, as was mentioned, of fragmentation.

And from a private company's perspective, the challenge of developing and deploying this technology, which is global and doesn't have borders, as we're all familiar with, is the risk of a fragmented approach. And so I think it's really important to think about what we have in common and how we draw connections between these principles.

Another priority is really moving from principles to practice. And I've been encouraged to see this as a theme in conversations throughout the past few days here on AI governance.

We have the principles, but how do we put them into practice?

And I mean that in a few different ways. Of course, from a company's perspective, what does it mean? And I'm encouraged by the work of trying to translate some of these things into concrete measures.

But I think also from a country's perspective, countries that want to implement and deploy and really roll out AI solutions to public challenges, how do they do that? What is the toolkit of measures and policies and frameworks at a domestic level that is important to have in place?

Things like an energy policy, scientific infrastructure and research infrastructure. Data, compute power, all of those things are really important.

How do countries make sure they have the right elements in place to really leverage AI?

And then, of course, from the perspective of policy institutions: how do they set out toolkits and frameworks to make sure that all stakeholders have the opportunity to adopt AI? So I'm also encouraged, as we think about moving from principles to practice, that there seems to be a broadening of the conversation.

In terms of the focus: some of the early conversation in the AI governance space was focused solely on advanced frontier risks. I think the Hiroshima AI principles and process were important in addressing how to minimize the risks and what that means, and now the question is how we expand the conversation beyond risk to make sure it's benefits-based, and that it includes stakeholders who haven't been part of the conversation to date.

Looking ahead to the India AI Impact Summit: how do we include as many stakeholders as possible in the conversation, civil society, everyone from the Global South? How do we expand that conversation, and how do we make sure we're moving to tangible, concrete impacts?

>> YOICHI IIDA: Thank you very much. Two very important points: coherence, that is, avoiding fragmentation and improving interoperability; and moving from principles into action. This is very important and exactly what we are seeing now.

For example, I understand the OECD is making efforts [?] the toolkit for the AI Process. And the Hiroshima AI Process, thank you very much for mentioning that.

We have the 20 reports from companies, which describe what the companies are doing internally when they assess the risks, take countermeasures, and publicize what they are doing.

So all that information is on the OECD website now, and there are a lot of learnings from the practical information. But still, we found those reports a little bit difficult to read and understand. So this is another challenge for practicality.

But I believe we are making progress. So, having listened to this, Ansgar, what is your opinion, and how do you evaluate the current situation?

>> ANSGAR KOENE: Sure, thank you very much. And thank you for the invitation to be on this panel.

So, reflecting on this space around AI governance, both from how we within EY are looking at this and from what we're seeing amongst our private sector and public sector clients, whom we are helping to set up their AI transformations and the governance frameworks around them: we are seeing that more and more of these organizations are moving from exploring possible uses of AI in test cases towards actually building it into mission-critical use cases, where failure of the AI system will either have a significant direct impact on consumers or citizens, or significantly affect the ability of the organization itself to operate.

It is becoming very critical for organizations to have the confidence that they have a good governance framework in place, one which will allow them to assess, measure, and understand the reliability of the AI system, the use cases for which it truly operates, and the boundary conditions within which it should and should not be used, as well as the kind of information that people within the organization and people outside need to have in order to be able to use the AI systems correctly.

And so if we reflect from that point of view, the need that organizations have for a good governance framework for the use of AI, onto these global exercises and global initiatives, I think there are effectively two dimensions in which these global initiatives are important.

One is the direct one: things like the OECD AI Principles helped all organizations to have a foundation they could reflect on as they think about the key things they need to have in their governance thinking.

The G7 Code of Conduct has helped elaborate that further, and it has helped to pinpoint in more detail what goes into questions such as what good transparency is, or how to think about inclusiveness, for instance, of the people who need to be reflected on when developing these systems.

And now the Global Digital Compact also helps to provide a broader understanding of the way to think about AI governance within the broader context of good governance itself.

But then there's also the indirect way, from the point of view of companies, in which these global instruments help to make sure that different countries have a common base from which to approach creating either regulations or voluntary guidelines, whatever works best within their particular context.

But it gives a common baseline, so that businesses that want to operate across jurisdictions will have a common basis on which to engage in those different [?].

>> YOICHI IIDA: Thank you very much. Exactly as you said, we need to improve interoperability and coherence across different governance frameworks. And we have to admit there are differences in approaches, but we need a common foundation, probably in the OECD principles and democratic values, including transparency, accountability, and data protection.

So thank you very much for the comment. We believe our approaches, and the world, are proceeding in the right direction, by sharing experiences and knowledge and trying to improve coherence and interoperability.

Then we have different frameworks going on. So, the second question: what do you think you need to do as a stakeholder, what is your role and what is your strategy in the coming years, and, in particular, what do you expect from the UN Global Digital Compact, which is now discussing global AI governance?

So at this time, I would like to start with Abhishek.

>> ABHISHEK SINGH: As I mentioned, our strategy for AI implementation is to ensure that we use this technology to enable access to all services, for all Indians, in all languages, in a way that really empowers people.

What do we expect from the Global Digital Compact to make this a reality?

We have a lot of expectations, because we are catching up with the rest of the [?] technology. How do we enable access? The first request that we had, to the U.S., is compute: how do we ensure that compute is available in India? 90% of the hardware is controlled by one company. We need to ensure that we have access to at least 50,000 GPUs in India. That becomes one practical requirement that we have.

Second is to ensure that the models, which are developed primarily in the West, and DeepSeek in China, become more inclusive in how they're trained, with datasets from across the world. That becomes our second request.

And the third, which is the most important part, is building capacities. We talk about setting up a [?] initiative. How do we ensure that skills and capacities in all countries are developed and enhanced further, to be able to take advantage of the evolving technologies?

And then we also need to build safeguards, like the [?] that are there for responsible AI, for ensuring safe, trustworthy development of AI. But to ensure that, one would need tools, and even regulators. Especially being in government, when we feel that there's a need to regulate, how do we enhance the regulatory capacity?

Even if you want to test a particular solution, whether it meets the standards and the benchmarks, do you have the capacity to test that? Enhancing that, and enhancing cooperation on that, will become very, very critical.

I would say my ask for the Global Digital Compact would be at the operational level. Everybody talks the same language at every forum, but how do we translate the talk into action? That would be the real requirement that we have. And we are happy to work with the global community in making this a reality, not only for India, but for the entire Global South and the world community.

>> YOICHI IIDA: Thank you very much. Inclusiveness will be one of the keywords in the coming months in the global AI governance discussion. And we have a lot of expectations for India's AI Impact Summit next year.

So thank you very much for the comment.

Now I invite Melinda for your views.

>> MELINDA CLAYBAUGH: Thank you so much.

So under the theme of moving from principles to practice, three ideas.

One is continuing to build policy toolkits, which I think the OECD is really well placed to do, for countries that want to advance their AI adoption.

Two, I think, is libraries of resources, along the lines of evaluations, benchmarks, and third-party testing of AI that has been done, really putting that in one place. There are a lot of entities engaged in this, and I think building the knowledge base will be important.

And third is continuing the global scientific [?]. On that point, this is where I lead into the Global Digital Compact.

The UN scientific panel on AI, as a scientific body, can continue research and conversation and make sure that we're having the best scientific voices coming together. And then there's the global dialogue on AI governance through UN forums. I think the convening power there is what's really important, bringing the right stakeholders to the table.

>> YOICHI IIDA: Thank you very much. Three very important points.

Melinda mentioned the OECD, so now I would like to invite Lucia for your comment.

>> LUCIA RUSSO: Yes, indeed. We started this project to build a toolkit to implement the OECD principles, and it comes exactly from this demand for more actionable resources that would guide countries on how to go from these agreed principles to concrete actions.

And it was agreed by the Ministerial Council Meeting at the OECD just at the beginning of June. So what is this toolkit going to do, and how is it going to be built?

It will be an online interactive tool that will allow users, we expect mostly government representatives, to make use of these resources by consulting and interrogating the large database that we have on policies. But it will be a guided interaction that allows countries to understand where they need to act, and that concerns both the more values-based section of the principles and the policy areas that include, as we have heard, issues around compute capacity, data availability, and research and development resources.

And it will guide countries through understanding their needs and what the priorities may be. It will then provide suggestions: policy options and practices from other countries at a similar level of advancement, or in the same region as the country navigating the toolkit, that have already been put in place and proven effective.

So on the one hand, we want to build this user experience. On the other hand, the objective is to enrich the repository of national policies and strategies that we already have, covering 72 jurisdictions, on the OECD database of national strategies.

And that is one of the priorities we see that we need to build further upon, as we heard, with the global outreach of the UN. The OECD and the UN have established a memorandum of understanding, and we now have increased cooperation on themes such as this one.

And the idea is to build this toolkit, again, through co-creation with countries. For that, we are organizing workshops in different regions. One such workshop will be, for instance, with ASEAN countries, in cooperation with Japan, so that we understand the needs.

Because, as we've heard, I think everyone agrees on the broader actions, but when it comes to practice, we need to better understand what the challenges are. And that is where we want to work with countries, around these challenges.

And yes, I think we have heard from several speakers that the opportunities are where we want to put the focus. We've also been advancing work in understanding AI uptake across sectors. This is in view of moving from this very broad conversation to concrete applications, and understanding better what the bottlenecks are and what the pathways are to increase adoption when it comes to agriculture, health care, or education, for instance.

And perhaps just to close on that point: when it comes to the Hiroshima reporting framework, it's interesting to see that the framework doesn't only talk about risk identification, assessment, and mitigation. The last chapter also talks about how to use AI to advance human and global interests.

And it's interesting to see that in this first reporting cycle by 20 companies, there are initiatives reported on how companies are actually engaging with governments and civil society on projects that, indeed, foster AI adoption across these key sectors.

So once again, these would be the priorities and we see these as the key actions moving forward.

>> YOICHI IIDA: Thank you very much. Actually, the OECD [?] Hiroshima Process, all those initiatives are backed up by the OECD Secretariat. So we look forward to working closely in the future.

Time is limited, but first I invite Ansgar. What is your point?

>> ANSGAR KOENE: Sure. Well, I'd very much like to echo the point that was made regarding the need to move from principles to practice, as well as the point around capacity building.

Within those, I would also like to highlight the work the OECD is doing around the AI incidents database, which is really helping to get a better understanding of where real failures with AI are occurring, as opposed to hypothetical ones.

But I also think it's very important for us to support and encourage broader participation in standards development in this space. Standards are often a key tool that industry uses to understand how to actually move towards implementation, and they are a good reference point, so that industry feels the wider community agrees this is a good approach.

However, in order for all of these things to really achieve their intended outcome of providing end users with confidence and trust in these kinds of systems, they will also require reliable, repeatable assessments of how these systems, and the governance frameworks around them, are being implemented.

And in order to have these, we need greater transparency about what the assessments are intended to achieve and how we're doing them, so that we have expectation management and users understand how to interpret what an assessment has actually tested for.

We need greater capacity building within the community to build an ecosystem of assessment and assurance providers in this space. We've already seen some interesting work around that happening in some jurisdictions, such as the UK, and the OECD is helping in this space as well.

And effectively, we just need the community to provide clarity on what a good governance framework is and how to approach it, hence the standards, and on how to assess whether it has been achieved and done in the appropriate way, through things like assessments.

>> YOICHI IIDA: Thank you very much.

The engagement of all communities, including civil society, is very, very important, and a multi-stakeholder approach is definitely essential.

So we believe that the role of the IGF in AI governance is increasingly important.

So, sorry for the limited time remaining, but Juha, what is the role of Europe, and how do you think Europe will be working with the world?

>> JUHA HEIKKILA: So we are, of course, very much involved in the discussions of the GDC, the Global Digital Compact that I mentioned earlier. And to echo what Melinda said, we think the independent scientific panel is quite a crucial component of this, and I think the GDC text is very useful.

I think what was agreed last year in that regard was very successful, and we hope that it will be translated into implementation the way it was expressed, in the spirit of the text.

And in this regard, also for the dialogue, the governance dialogue, we think it's important that it doesn't duplicate existing efforts, because there are quite a lot of them. That's why the GDC text mentions that it would be held on the margins of existing UN events, and I think that would be very useful.

I think overall there is some call for streamlining in terms of the number of events, initiatives, and forums that we have in the international governance landscape in the area of AI.

I think that this kind of multiplication is not necessarily sustainable in the long run. We have made partial steps forward with the integrated partnership that was formed between the Global Partnership on AI and the OECD.

We welcome that, because we had some overlap between the expert communities. And I think that initiative now has a better sense of purpose, backed by the structures of the OECD, which makes it more impactful from our perspective. We look forward to how it will develop further. It will also have a role in taking these discussions to a greater audience and membership.

One thing that I wanted to mention just very briefly is that despite this multiplication of efforts and its almost chaotic nature in some respects, to exaggerate a bit, there are some constants.

And one of these constants is that they go in the same direction. One aspect which has been included in many of them is the risk-based approach, which, as I mentioned, is the foundation of the AI Act, but is also, for example, reflected in the Hiroshima AI Process Guiding Principles and the Code of Conduct.

It's also reflected elsewhere, in some of the statements that have been made at summits. So we have some common ground. But I think it would be desirable over the long run to try to seek some convergence and streamlining.

>> YOICHI IIDA: Okay, thank you very much.

So there are a lot of efforts going on, and the GDC is one of them, as is WSIS+20. The role of the UN will be very important, but we need to avoid duplication, and we need to streamline and focus our efforts in the most efficient way.

So in the development of AI governance discussions, I hope the role of the IGF will be very important, and that this will be the place where people get together to discuss, in a multi-stakeholder way, not only Internet governance but also AI governance and the governance of all digital technologies, here at the IGF.

Thank you very much. And I wanted to take one question, but I'm not sure I'm allowed. Okay, please. Yeah, please. But maybe you need a microphone.

>> ABHISHEK SINGH: Go there and ask.

>> YOICHI IIDA: Yes. I'm sorry.

>> Thank you very much for the great discussions. My name is [?], from the University of [?] in Japan.

I'd like to understand AI governance compared to Internet governance. There were various challenges in supporting people around the world, but in particular, while the Internet was not controlled by any single entity, in the case of AI many organizations, including tech companies, have become key players.

As we now implement AI governance, what do you see as the key differences compared to the time of the Internet expansion?

>> YOICHI IIDA: Okay, thank you very much for this complicated question. Okay, Juha.

>> JUHA HEIKKILA: It's a complicated question. I'll comment on one aspect, and of course I'll let my fellow panelists comment.

But broadly speaking, I heard a comment the day before yesterday that AI is on the Internet and that Internet governance is therefore suitable for it. But there is more to AI than what is on the Internet.

Think of embedded AI, for example: robotics, intelligent robotics, autonomous vehicles, et cetera. So not all of AI is on the Internet.

There may be some inspiration AI governance can take from the principles of Internet governance, but I think there are numerous issues specific to AI which cannot be, if you like, taken over from Internet governance, issues with characteristics for which you don't find any matching aspects in Internet governance.

So I would personally see those as broadly different with potentially some inspiration for AI governance taken from Internet Governance.

>> YOICHI IIDA: Thank you very much. Who else wants to --

>> ABHISHEK SINGH: I would broadly agree with him. The only thing I would say is that AI and the Internet are two different things. AI includes a lot more than the Internet, as he mentioned. And AI is controlled by a few corporations.

In order to make it more [?] and bring those principles into AI principles, it would have to be multi-stakeholder, and we have to ensure that the way we approach managing AI governance is more inclusive, involving people who are technology providers and also people who are technology users.

And when we're able to strike that balance, we'll be able to make it more fair, more balanced, and more equitable. And this will require a lot more global partnership than what Internet governance has done so far.

But the frameworks and the mechanics and proposals of the Internet Governance Forum can be a good guiding light for working on the AI governance principles.

>> ANSGAR KOENE: Maybe I can add something that links closely to what Juha mentioned: the risk-based approach. In AI, the risk depends on the use case, because AI is a kind of technology that you can use in so many different applications and application spaces, whereas the Internet is more of a uniform kind of thing.

>> YOICHI IIDA: Any more comments? Okay. Thank you very much. Our time is up. But I hope you enjoyed the discussion, and please give your applause to the excellent speakers.

(Applause)

>> YOICHI IIDA: This is too excellent to close now, but our time is up. Thank you very much. You wouldn't believe they were given the questions only at midnight yesterday. Thank you very much. Thank you.