The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> ANNOUNCER: Ladies and gentlemen, welcome to the Opening Ceremony of the 18th Annual Meeting of the Internet Governance Forum. May I draw your attention, please: simultaneous interpretation service in the six United Nations languages and in Japanese is available via the receivers. Please turn on the power button and switch to channel one for Arabic, channel two for Chinese, channel three for English, channel four for French, channel five for Russian, channel six for Spanish, and channel seven for Japanese. At the end of the session, may I kindly ask you to hand over the receiver to the staff at the exit doors. Thank you.
Please also note that the Annual Meeting of the Internet Governance Forum will be streamed online. Lastly, if there is an emergency, please follow the instructions of staff, including security services. The opening performance is scheduled to start shortly, from 10:30 a.m. Please kindly remain seated for a while. Thank you for your patience.
>> MODERATOR: Excellencies, Distinguished Delegates and participants, I would like to welcome you to the 18th Annual Meeting of the Internet Governance Forum. My name is Ribeka and it is my honour to serve as emcee for the Opening. Prior to the Opening Ceremony of the 18th Annual Meeting of the Internet Governance Forum, we now present as the opening act a reconceptualization of traditional Japanese Kabuki theater, "Two Lions", with a San‑sui media installation by Naoko Tosa. Kabuki is a novel and unusual performing art that appeared in Kyoto at the beginning of the Edo period. In this new work, a heroic lion spirit appears in a climactic scene of Renjishi, a masterpiece of Kabuki. The scene in which the lion smells a flower and then makes a violent movement known as madness was performed by a professional Kabuki actor, creating a digital Kabuki performance suitable for an international IT conference. So, please, enjoy this opening performance.
(Applause).
Now, I would like to welcome Mr. Junhua Li, Under Secretary‑General of the United Nations, and His Excellency, Mr. Kishida Fumio, the Prime Minister of Japan, to the stage. So please welcome Mr. Li and His Excellency Mr. Kishida.
Ladies and gentlemen, thank you very much for joining us today for the 18th Annual Meeting of the Internet Governance Forum, organized by the United Nations and hosted by the Ministry of Internal Affairs and Communications. Now, I would like to start the Opening Ceremony. First of all, we would like to welcome Mr. Antonio Guterres, Secretary‑General of the United Nations, to the screen. His video message will be introduced by Mr. Junhua Li, Under Secretary‑General of the United Nations. Mr. Li, please proceed to the podium.
(Applause).
>> JUNHUA LI: Good morning. Your Excellency, Mr. Kishida Fumio, Prime Minister of Japan, Excellencies, distinguished participants, I have the honour to introduce the Secretary‑General of the United Nations, Mr. Antonio Guterres, who will deliver a video address.
>> ANTÓNIO GUTERRES: Excellencies, ladies and gentlemen, I'm pleased to greet the Internet Governance Forum as you gather in Kyoto. Let me begin by thanking you for your invaluable efforts bringing together Governments, the private sector, civil society and the technical community for the essential task of advancing an open, safe and global Internet. For nearly two decades this multi‑stakeholder cooperation has proven remarkably productive and remarkably resilient in the face of growing geopolitical tensions, proliferating crises and widening divisions. Your work is now more important than ever. We need to keep harnessing digital technologies enabled by the Internet to help deliver on the SDGs, take climate action and build a better world.
I see three areas for action. First, we must work together to close the connectivity gap and bring the remaining 2.6 billion people online, in particular women and girls in Least Developed Countries. Second, we must work together to close the governance gap, including by elevating and better aligning the work of the IGF and other digital bodies across the UN system and beyond.
Third and fundamentally, we need to reinforce a human rights and human‑centred approach to digital cooperation. It is imperative that the Internet, including the physical infrastructure that underpins it, remains open, secure, and accessible to all.
This means that the Internet's long‑established multi‑stakeholder institutions need more support, not less. The Leadership Panel I have established for the Internet Governance Forum is aimed at providing strategic guidance, supporting stable funding and amplifying the impact of your important work. To help advance the search for concrete governance solutions, I'm appointing a High Level Advisory Board on Artificial Intelligence, which will provide preliminary recommendations by the end of this year, and the Global Digital Compact, proposed for adoption in 2024, aims to set out principles, objectives and actions to secure a human‑centred digital future.
Governments, the private sector and civil society must come together to ensure that the commitments enshrined in the Compact are followed up. We cannot afford a retreat into silos. We must work to prevent gaps from emerging in new digital technologies, avoid duplication and address emerging risks effectively. I look to the gathering in Kyoto to provide critical input to advance our collective efforts. Together we can realize the ambition spelled out in the theme of your forum and build The Internet We Want, Empowering All People. Thank you.
(Applause).
>> JUNHUA LI: Thank you, Mr. Secretary‑General. Please allow me to join the Secretary‑General in extending our gratitude to the Government of Japan for hosting us. Excellencies, Distinguished Delegates, as the Secretary‑General reminded us, the challenges confronting the global community in reaching the 2030 Agenda for Sustainable Development are vast and complex. The Internet will play an integral role in navigating these complexities, moving us towards a better and more resilient future.
But this requires responsible policies that unlock the benefits of digital technologies while mitigating the risks.
The Internet Governance Forum must respond through the UN's convening role to bring every country and every individual together, regardless of their stakeholder group or background. The IGF needs to further strengthen its role as the global digital policy forum in finding points of convergence and consensus and in identifying digital solutions for reaching the 2030 Agenda. In this connection, I welcome the focus of the forum on urgent and relevant digital issues and look forward to your contributions and recommendations on a way forward.
Ladies and gentlemen, 18 years ago in 2005 the IGF received its mandate through the World Summit on the Information Society. In 2025, the United Nations General Assembly will review this mandate. Member States will consider the impacts and outcomes of the forum and determine its future. With this in mind, I invite you to consider three questions during this year's IGF.
First, has the IGF delivered on its mandate and purpose?
Second, how can the Internet better support and accelerate achievement of SDGs?
Third, how can the IGF best support both the preparations of and the follow‑up to the Global Digital Compact and Summit of the Future?
This forum is aimed at empowering all countries to deliver better digital policies to support more open, inclusive and safe access to digitization for all people. We must ensure that it is delivering on that aim. We call for closer collaboration and partnership among stakeholders, greater digital innovation for accelerating SDG implementation, and, of course, technical assistance to the Global South to bridge the digital divide. Together, let us leverage the inclusive multi‑stakeholder approach of the Internet Governance Forum to build The Internet We Want. I thank you.
(Applause).
>> MODERATOR: Thank you very much, Excellency. Next, I invite His Excellency Mr. Kishida Fumio, the Prime Minister of Japan, to deliver his remarks. His Excellency, Mr. Kishida Fumio, please proceed to the podium.
(Applause).
>> FUMIO KISHIDA: I am Fumio Kishida, Prime Minister of Japan. Good afternoon, everyone, and welcome to Japan. I would first like to express my respect to all of the forum participants, both here in person and online, for continuing discussions on how to make the Internet better, and for your relentless efforts to make this a reality.
I pay my sincere respects to all of you. The basic philosophy of the Internet Governance Forum, which values open, democratic and inclusive processes, is truly in line with the fundamental values of my own country.
We are very pleased to be able to welcome you here for the first time as the host nation of the Internet Governance Forum Annual Meeting. Digital technologies such as the Internet are the engine of economic activity and of growth for people around the world.
The Internet functions as a free and diverse forum for expression that enables access to information and services that transcend time and space. It has not only become essential to our daily lives and socioeconomic activities, but it also forms a critical foundation for democratic societies.
A free and unfragmented Internet is also essential for solving humanity's challenges such as development, health, and security, as well as for the further development of humankind. On the other hand, it is also true that the Internet has given rise to the proliferation of unlawful and harmful information, including disinformation, cyber-attacks, and cybercrime, which threaten our safety and free socioeconomic activities.
We cannot afford to turn our backs on these challenges. I am convinced that we can maximize the benefits of the Internet while reducing its risks by bringing together participants from all over the world in various positions and with different perspectives to share their wisdom through the multi‑stakeholder approach.
I believe the overall theme of this year's meeting, which is The Internet We Want, Empowering All People is a powerful expression of our determination to realize an inclusive Internet that leaves no one behind and to pave the way for a sustainable future for humanity.
As the host nation, my Government believes it is our important responsibility to contribute to this discussion. We believe that the Internet must remain open, free, global, interoperable, secure, and trustworthy in order to promote Data Free Flow with Trust, DFFT, and to continue its contribution to human development.
And Japan remains committed to supporting Internet Governance by diverse multi‑stakeholders. Last but not least, our host City of Kyoto is imbued with rich history and traditions. So I hope that as you engage in lively discussions about the future of the Internet and network with other participants, you also enjoy the culture, food, and hospitality that Kyoto and Japan have to offer.
I do hope that this meeting in Kyoto will be meaningful and fruitful for the future of the international community and for each and every one of you. With that, I would like to conclude my opening address. Thank you very much.
(Applause).
>> MODERATOR: Thank you very much, Excellency!
Please give a round of applause once again to the speakers.
(Applause).
So now the stage will be rearranged for the next session. Please kindly remain seated. The High‑Level Panel V, Artificial Intelligence, will start shortly from 11:00 a.m.
Thank you for your patience.
Thank you very much. Now, I would like to welcome the guests of honour and speakers for the High‑Level Panel V, Artificial Intelligence, to the stage, and we will begin with the photo session.
First, our official photographer at the front will take a photo.
Next, I would like to allow time for the press at the back. Please stay in place while they take photos. Thank you for your cooperation.
Thank you very much. Excellencies, please proceed to your individual seats on the stage. Thank you.
Ladies and gentlemen, may I draw your attention, please. We will now start the High‑Level Panel V, Artificial Intelligence, of the 18th Annual Meeting of the Internet Governance Forum. I will now invite the guests of honour to deliver keynote speeches. First, I would like to welcome His Excellency Mr. Kishida Fumio, the Prime Minister of Japan, to deliver a keynote speech. His Excellency, Mr. Kishida Fumio, please proceed to the podium.
>> FUMIO KISHIDA: On behalf of the host country, I would like to welcome you to the special session on AI at the Internet Governance Forum Kyoto 2023.
As the potential and risks of rapidly developing Generative AI are being debated around the world, it is gratifying that the topic of global AI governance is being discussed by representatives from diverse fields today in Japan.
I would like to thank you all for taking part in this session. Generative AI has been called a technological innovation comparable to the Internet. Just as the Internet brought about remarkable democratization and socioeconomic development by connecting people beyond the constraints of time and space, Generative AI is about to change the history of mankind.
This year I, myself, have participated in discussions with young researchers and AI developers and came to realize the unlimited possibilities that Generative AI holds. Generative AI not only improves operational efficiency, but also accelerates innovation in various fields such as drug discovery and the development of new treatments, thereby bringing about dramatic changes in the world.
We expect the world will be changed dramatically. The Japanese Government is planning to compile an economic policy package by the end of this month that includes support for strengthening AI development, such as for building computational resources and foundation models, as well as support for AI adoption by SMEs and for medical applications. We will incorporate strong support for both AI development and utilization in that package.
On the other hand, risks of sophisticated false images and disinformation that cause social disruption, or other threats to society, have been pointed out. A wide range of stakeholders need to play their roles in the development of AI. For example, in order to promote the distribution of reliable information, it would be effective to develop and promote the spread of technologies that can prove and confirm the originator of information, that is, provenance technologies.
The international community as a whole must share this understanding and deal with these issues in solidarity. It is important that we now gather the wisdom of mankind to strike a balance between promotion and regulation, taking into account the possibilities and risks of Generative AI, in order to reduce the risks it poses to the economy and society while maximizing its benefits to all of us.
With this in mind, at the G7 Hiroshima Summit, I proposed the creation of the Hiroshima AI process to further international discussions towards the realization of trustworthy AI, which was agreed upon by the leaders, and the G7 leaders instructed their ministers in charge to deliver the results within this year.
The Hiroshima AI process is to develop, by the end of this year, international guiding principles for all AI actors as common principles indispensable for the realization of trustworthy AI.
In particular, as a matter of urgency, we are working on international guiding principles and a code of conduct for organisations developing advanced AI systems, including Generative AI, in preparation for the G7 Summit online meeting to be held this fall.
Generative AI is a cross‑border service and therefore concerns people all over the world. For this reason, the Hiroshima AI process will also take advantage of this IGF opportunity to incorporate a wide range of views through multi‑stakeholder discussions including Government, academia, civil society, and the private sector.
By being informed by the opinions of diverse stakeholders beyond the G7 who are participating today, we will drive the creation of international rules that will enable the entire international community, including the Global South, to enjoy the benefits of safe, secure, and trustworthy Generative AI and to achieve further economic growth and improvement of living conditions.
Before closing, I would like to express my hope that this special session on AI will be a landmark meeting where meaningful discussions will be held among representatives of international organisations, Governments, AI developers, researchers and civil society that will later be remembered as a turning point in the discussion on Generative AI.
With this, I would like to conclude my remarks. Thank you very much for your kind attention.
(Applause).
>> MODERATOR: Thank you very much, Excellency! Next, I would like to welcome Ms. Maria Ressa, CEO and President of Rappler, Inc. and 2021 Nobel Peace Prize winner, to deliver a keynote speech. Ms. Ressa, please proceed to the podium.
(Applause).
>> MARIA RESSA: I'm so sorry I'm short. I will tiptoe. Thank you so much. Thank you to the host country, to Japan, and to the Internet Governance Forum. I am new to the Internet Governance Forum and so I bow to your collective wisdom. I really hope to be a voice to urge you to think about where we are today and to urge you to act. Thank you for the initiative on Generative AI, but let me just remind you of the problems we face right now.
Today truth is under attack. We are engulfed in an information war where disinformation, the bullets of information operations, spread like wildfire to obscure and to change reality. What power uses to consolidate power is technology, social media, the first human contact with AI. In 2018, and this has probably changed since then, MIT released a study that said lies spread six times faster on social media than these really boring facts.
And what Rappler data has shown is that it spreads even faster when it's laced with fear, anger, hate. Every human being, all of us, has two systems of thinking: thinking fast, our emotional, instinctive side, and thinking slow, our rational side. This rational side is where conversations like this one happen, where rule of law, journalism, democracy happen.
Technology hacked our biology to bypass our rational minds to trigger the worst of who we are and to keep us scrolling in our information economy. Attention, that is the prize. Your attention is commodified, changing how you feel, what you think, and how you act.
That fundamental design choice, and this is the first social media contact, right, that lies spread faster, surveillance capitalism or surveillance for profit, turned our world upside down. And here, I'm sorry to be irreverent: Netflix's Stranger Things, if you watched it, you know how they go into the Upside Down? We are literally living in the upside down. And while it seems deceptively familiar, everything is covered with goo, and there are monsters in every corner.
Because that design of the new gatekeepers to our public sphere was exploited by authoritarians. If you can convince people lies are facts, then you can control them. And the same three sentences I've said since 2016: without facts, you can't have truth. Without truth, you can't have trust. Without these three, we have no shared reality, no rule of law, no democracy.
So I have two minutes left to tell you what we should do. And actually The Internet We Want has those five values. I thank the Secretary‑General for appointing the Leadership Panel. We each have two years. It's extremely honest and open, and we hope to urge you to act, but I will leave you with two last thoughts. One is the impact beyond the individual. This is what I've laid out for you, right, the behavioral aspect for us.
If you don't have integrity of facts, you cannot have integrity of elections, and 2024 becomes a critical year for elections, which is part of the reason everyone in this room, from civil society, parliamentarians, Government officials, NGOs, journalists, we each have a role to play.
I keep saying we are in the last two minutes. If you play basketball, last two minutes for democracy. In my last minute, I just want to tell you about an initiative that aligns with the Internet Governance Forum that was launched last year at the Nobel Peace Summit in DC. This year over 300 Nobel laureates, civil society groups, the same multi‑stakeholder arrangement, we need to come together. We launched a 10 point Action Plan that has three buckets, and these would be the same that you would need to operationalize in every single one of our agreements.
The first, stop surveillance for profit. Give us back our lives.
Two, stop code bias. If you are a woman or LGBTQ+, you are further marginalized in the virtual world, and we want a secure, safe and trustworthy Internet.
Third, journalism as an antidote to tyranny.
Thank you so much!
(Applause).
>> MODERATOR: Thank you very much, Ms. Ressa.
Next, I would like to welcome Mr. Ulrik Knudsen, OECD Deputy Secretary‑General to deliver keynote speech. Mr. Ulrik Knudsen, please.
>> ULRIK VESTERGAARD KNUDSEN: Thank you very much. It seems I have the opposite challenge compared to the previous speaker so I will not be tiptoeing. What an honour it is to speak after a Prime Minister and a Nobel prize winner, and what an honour it is to join this High‑Level Meeting on global AI governance and Generative AI convened in the context of the G7 Hiroshima AI process led by Japan. Thank you very much.
Rapid technological transformation is heralding a brand new era of boundless opportunity and, at the same time, great risks; some even talk about existential threats. My organisation, the OECD, was founded over 60 years ago. It is organised on a simple premise: that international cooperation is essential for economic growth and social prosperity. In the decades gone by, we have leveraged evidence‑based policy, expertise, mutual exchange, data and analysis to keep ahead of global cross‑border challenges.
Key examples include the codes of liberalization, the guidelines for multinational enterprises and the Inclusive Framework on tax, with almost 140 tax jurisdictions around the world. To sum it up, through international cooperation and shared values, the OECD's goal has been to drive forward better policies for better lives.
Let me be as frank as I can: digital policies are no exception to that. The OECD has delivered landmark standards, for example, last year's declaration on government access to personal data held by private sector entities. These standards and many others, in areas like broadband connectivity, data governance and digital security, provide guidance to support countries in reaping the benefits of digital transformation, fostering innovation while addressing and mitigating risks, advancing responsibility, and promoting trust.
In the last decade we have increasingly dedicated our attention to AI. With AI and in particular with the public availability of Generative AI applications, humanity is facing what is really a watershed moment. Our wellbeing, our economic prosperity, and our very identity, perhaps, even as humans will be affected by the collective action we take today.
AI already now demonstrates its revolutionary potential for productivity, for scientific discoveries, healthcare, education, climate change. However, AI also carries significant risks, including to privacy, safety, autonomy, and, to some extent at least, jobs.
As G7 members have underlined under the Japanese presidency, Generative AI creates a real risk of false and misleading content threatening democratic values and social cohesion, the upside down world of Stranger Things.
Generative AI raises complex questions of copyright, and the computing power it requires highlights issues of supply chain access and divides.
What we need now, ladies and gentlemen, is a global effort for the governance, the safe development and deployment of AI. The OECD has helped lead the way on policy making with the landmark 2019 OECD Recommendation on AI, the very first intergovernmental standard on AI. We are now gathering the evidence on AI through the OECD AI Policy Observatory, the framework for classification of AI systems, the catalogue of tools and metrics and, latest of all, the AI Incidents Monitor.
These achievements have gained traction and influenced AI policy making around the world. But with technology developing now at break‑neck speed, we need to make collective decisions to ensure this technology will be safe and beneficial to societies. Unfortunately, as you all know, there are many, many questions, and not too many answers.
Do we need hard rules about the design of AI systems? How do we marshal the innovation, governance and regulation of AI? Do we use existing approaches and frameworks that have proven effective, from, for example, airplanes to food safety, or do we need radically new approaches? How do we prepare society for this transition? How do we make sure powerful technology doesn't rest solely in the hands of a few, be that countries or companies?
And perhaps most importantly, how do we make sure that we seize the boundless opportunities for people and planet in a just, equitable and democratic manner, and that we don't answer the questions I raised above with policies that hamper progress?
I don't have the answers to all of those questions, but I do think I know one thing, the decisions we make in response to these questions require international cooperation and coordination, and it's the ambition of the OECD to work with international partners to provide the forum and convening power for these discussions.
The G7 has a key role. We are here under the auspices of the G7 presidency, and Japan has been a visionary in identifying the policy importance of AI. Japan's 2016 G7 presidency kick‑started development of the AI principles, which then served as the basis for the G20 AI principles in 2019.
In this vein, the Hiroshima process sets a necessary objective: a code of conduct for organisations developing advanced AI systems.
The OECD is very proud to be informing this process in many ways, not least, later this year, by launching the global challenge alongside key partner organisations like UNESCO and others, and we also look forward to providing comprehensive guidance across different axes and different aspects of AI. Before I end, let me say that we cannot advance the global effort on AI governance without effective stakeholder engagement. Multi‑stakeholder participation has long been the OECD approach to policy development. Examples include our ONE AI expert group, with over 400 international experts from Governments, industry, academia and civil society.
The recently launched Global Forum on Technology is another example of this way of building outreach and engagement. Only with your involvement can we develop policies that work for all parts of society.
Prime Minister, ladies and gentlemen, Stephen Hawking defined intelligence, and I quote, as "the ability to adapt to change", unquote. Let us continue working together to ensure that our intelligence, both human and artificial, will keep pace with developments and continue to guide us. We cannot afford not to. Thank you.
(Applause).
>> MODERATOR: Thank you very much.
Unfortunately, His Excellency Mr. Kishida Fumio will not be able to stay. Please give him a round of applause. Thank you.
(Applause).
Now, the panel discussion shall commence. I would like to ask Ms. Ema Arisa, Associate Professor at the Institute for Future Initiatives, the University of Tokyo, to moderate the session. Ms. Ema Arisa, the floor is yours.
>> MODERATOR: Please wait for a moment while we rearrange the stage. Thank you very much for your patience.
So good morning, ladies and gentlemen. My name is Ema Arisa; Ema is my family name. I'm an Associate Professor at the University of Tokyo, and it's a great honour for me to moderate this great panel session.
So, first of all, I would like to introduce the panelists. I will go from my side to the other end. The person next to me here is Mr. Nick Clegg, President of Global Affairs, Meta. The second person is Mr. Luciano Mazza de Andrade, Director of the Department of Science and Technology and Intellectual Property at the Brazilian Ministry of Foreign Affairs, Brazil. The third person is Ms. Denise Wong, Assistant Chief Executive, Data Innovation and Protection Group, Infocomm Media Development Authority (IMDA), Singapore.
The next panelist is Mr. Nezar Patria, Vice Minister, Indonesia. The next panelist is Mr. Kent Walker, President of Global Affairs, Google and Alphabet. On the right‑hand side, we have His Excellency Mr. Junji Suzuki, Minister of Internal Affairs and Communications, Japan.
Next is Mr. Vint Cerf, IGF Leadership Panel Chair and so‑called father of the Internet.
(Applause).
Next to Mr. Vint Cerf we have Professor Murai Jun from Keio University, known as the father of Japan's Internet. Last but not least, the panelist at the other end is Ms. Doreen Bogdan‑Martin, Secretary‑General of the International Telecommunication Union.
So we have an excellent lineup of panelists, but before we jump into the panel discussion, I would like to invite Minister Suzuki to share with us a brief overview of the current state of the Hiroshima AI process led by the Japanese Government. So Minister, the floor is yours.
>> JUNJI SUZUKI: Good morning, everyone. I am Junji Suzuki, Minister of Internal Affairs and Communications. I would like to extend my gratitude to all those attending the Internet Governance Forum Kyoto 2023. I would also like to thank Ms. Maria Ressa and the OECD Deputy Secretary‑General for their very insightful keynote speeches. Now, I would like to introduce the status of discussion on the Hiroshima AI process, to set the stage for the multi‑stakeholder panel discussion to be held in this session.
With the rapid development of Generative AI, it is now a matter of responsibility for us, the international community, to maximize its benefits to humanity while mitigating its risks to the economy and society. The G7 ministerial meeting held this year agreed to discuss the opportunities and risks posed by Generative AI and to utilize the OECD and the Global Partnership on AI to establish fora for international discussion on Generative AI, covering issues such as AI governance and the promotion of transparency.
And in the G7 Hiroshima leaders' communiqué, it was decided to continue the discussion as the Hiroshima AI process. Subsequently, in September this year, the G7 digital and tech ministers formulated and agreed on the following points.
Point one, issues such as ensuring transparency. Two, establishment of international guiding principles for all AI actors and a code of conduct for organisations developing advanced AI systems. Three, project‑based cooperation, including promotion of public research that contributes to countermeasures against disinformation. Four, the importance of exchanging views with stakeholders beyond the G7 Governments.
In today's session, we would like to receive opinions on the contents of the international guiding principles and the code of conduct for organisations developing advanced AI systems, which are under consideration. The international guiding principles for organisations developing advanced AI systems bring together the principles that all AI developers are expected to follow to realize safe, secure and trustworthy AI. The code of conduct provides a set of concrete actions. Its first point is to mitigate the risks that advanced AI systems pose to society.
This includes measures to identify and mitigate risks before bringing AI to market, as well as measures to address risks even after market placement.
So what types of risks should AI developers bear in mind when implementing measures? The second point is to disclose information on the risks and proper use of advanced AI systems and to share such information amongst stakeholders. To ensure users can use their systems with confidence, businesses should clarify the assessment of capabilities and limitations of their AI system.
This includes developing and disclosing their own policies on AI privacy and AI governance, while establishing mechanisms to develop and share best practices among various stakeholders.
The third point is to promote research on, and investment in, technological measures to mitigate risks posed by AI, for example, the development and introduction of mechanisms that enable users to identify AI-generated content, such as digital watermarking.
The fourth point is to prioritize the development of advanced AI systems to tackle global issues such as climate change and global health, and to ensure the appropriateness of the data fed into advanced systems. We believe that today's special session is a valuable opportunity to hear ideas from people of diverse backgrounds. I would appreciate your frank opinions. I hope that this session will be meaningful, not only for the panelists, but also for all of you who are listening to the discussions in the audience and online. Thank you very much for your kind attention.
(Applause).
>> MODERATOR: Thank you very much for this informative presentation, Minister Suzuki. I would like to invite the panelists to share their views on some of the important aspects of the Hiroshima AI process from their perspectives. So I would like to ask my first question: what types of AI systems, in particular advanced AI models such as foundation models, are you developing or placing on the market? How are they used? What solutions and benefits do they seek to offer?
What do you see as the major risks and challenges associated with the advanced AI systems you are developing, and how are you addressing those risks and challenges? I hope the answers to these questions will give us an overview of the current situation of Generative AI and foundation models.
I would like to ask this question to two speakers from global AI developers, Google and Meta. So, first, President Kent Walker, what is your view on this? You have five minutes.
>> KENT WALKER: Thank you very much, and thank you for the chance to be here today. The power of AI is vast, and that is exactly why we think it needs to be developed both boldly and responsibly. AI goes well beyond a chat bot. It has the potential to change the way we do science, the way we develop technology. I thought it was very nice that the rounds of applause for today's panel went to our two technologists joining us here today, because that will be the foundation of the next great technological advance we have.
Of course, you have been using AI for a dozen years if you have used Google Search or Translate or Maps, but it's going to go well beyond that now. We are seeing dramatic advances that are going to change quantum science, materials science, precision agriculture, personalized medicine, and bring clean water to people around the world. The potential is extraordinarily exciting. Just one example: our team has helped fold proteins, that is, understand how proteins express themselves, for the 200 million proteins known to science; that would have taken hundreds of millions of years for biologists to do.
It's as though you took every man, woman and child in Japan and trained them to be biologists, and then had them do nothing but fold proteins for three years. As a result, these tools are now being used by more than a million researchers around the world to help advance the study of medicine. There are many more advances like that coming, but at the same time, we recognize that the opportunity agenda must be balanced by a responsibility agenda and a security agenda, and it's not for one company or even one group of companies to do alone.
We have worked together across the industry through groups like the Frontier Model Forum, the Partnership on AI, MLCommons and more to develop frameworks and norms for the right kinds of research we need to do and the right standards that need to be applied.
And beyond that, we need the role of Government, and of all of you in civil society, for the frameworks that are going to matter to everybody on the planet. This is why we salute and appreciate the leadership of Japan and the Hiroshima process to drive forward an innovation agenda that recognizes the opportunity, but also the need for thoughtful balance and the hard tradeoffs that democracies will need to exchange ideas about: how do you balance security versus openness? How do you balance the various notions of efficiency and equity in these different tools?
These are fundamental and important questions, and we welcome the participation of groups like the IGF, who have brought wisdom to those debates over the Internet, as we apply them to the latest and greatest new round of technology.
>> MODERATOR: Thank you very much. Then I would like to invite President Nick Clegg. You have five minutes.
>> NICK CLEGG: AI is not new. It's been talked about since the 1950s, and companies like Google and Meta, research-heavy organisations, have been conducting research using AI and integrating it into their products for many, many years. But clearly this latest development of these large language models is a qualitative and quantitative leap forward. So we are asking ourselves in forums like this and many other forums: is it good, is it bad, how is it going to reshape the world? There have been breathless, hyperbolic predictions about what might happen in the future, and I would venture three points at this stage.
Firstly, and I think the Deputy Secretary‑General asked this: is this technology going to be for the many or for the few? Where possible, it is desirable in my view that this technology should be shared; it should be open innovation, open sourcing as much as possible of these foundation models.
In Meta's case, and I don't want to speak for Google, over the last decade we have open sourced over a thousand AI models. Where possible, the more this technology can be shared the better, because otherwise the risk is that it really is technology which is only developed by a very small number of highly resourced institutions, public and private, around the world, those with deep enough pockets, enough GPU capacity and enough large-scale data to get their hands on.
That's why we, for instance, have open sourced our large language model, and we have around 30 million uses of it already from researchers, innovators and developers around the world, including here in Japan. So that's the first point. The second point is that it is human nature to worry about the worst.
But I think it's also worth remembering that AI is a sword, not just a shield. If I look, for instance, at the work of Meta in social media and the constant adversarial work we have to do to try to minimize bad content and hate speech: the prevalence of hate speech, and this is publicly available, audited data, on Facebook now stands somewhere between 0.01% and 0.02%. So if you were scrolling on Facebook for far too long, you would find maybe one or two bits of hate speech for every 10,000 bits of content you might view. I wish it were zero, but it's never going to be zero.
Here is the point: that is down by about 60% just over the last 18 to 24 months, for one reason alone, which is AI. So AI, yes, of course, imposes new challenges, but it is a fantastic tool for us, as the Prime Minister himself said, to minimize the bad and amplify the good. And then the final thing I would say is this: as we grapple with risks, yes, there has been lots of talk about long‑term, potentially existential risks, the prospect of so-called autonomous AI or general AI which will develop an autonomy and agency of its own, but there are things we need to do now, as was mentioned earlier.
We need to have some kind of agreement across the industry, and with Governments and stakeholders, on how we identify the provenance of and detect AI-generated content; not text content, but certainly visual content. The more you can have uniform standards developed quickly, the safer all of those elections that people have talked about taking place next year will be.
So I think it's important to focus on the here and now, not just on the theoretical tomorrow.
>> MODERATOR: Thank you very much, Mr. Clegg. I believe that everyone has lots of questions; there are lots of questions I have today. So now I move on to the second question. In the previous question, we heard that AI companies are developing highly advanced AI systems and various applications. At the same time, they are making efforts to respond to the risks and challenges brought by Generative AI. The guiding principles and code of conduct for organisations developing advanced AI systems set out how organisations should take measures and actions against risks and challenges prior to model release and market placement, and how they should continue to work on addressing vulnerabilities and mitigating risks after release. What risks and challenges do you think are most important for those organisations to address in their efforts? What technical measures and actions do you think would be most effective? I would like to invite Mr. Nezar to answer this question and give your insight. You have three minutes.
>> NEZAR PATRIA: Excellencies, distinguished speakers, ladies and gentlemen, good afternoon. First of all, allow me to thank the organizers of the session for the opportunity to share this stage with the honourable speakers today and to speak about artificial intelligence, a hot topic recently.
The development of AI has greatly improved efficiency across commercial sectors. In 2021, AI added 26.7 million workers in Indonesia, equivalent to 22% of the workforce. Yet we must acknowledge that AI also comes with various risks, such as privacy and intellectual property violations, potential biases, as well as hallucinations, that require our attention.
Against such a backdrop, Indonesia believes we must intensify our approaches to mitigating the risks of AI at both the policy and practical levels. One milestone evidencing such commitment was made four years ago in Japan, when we supported the G20 AI Principles during Japan's G20 presidency to set a common understanding of the principles of AI.
With the recently issued G7 Hiroshima process as presented by the Minister, the effort to involve different stakeholders, even beyond G7 members, is applaudable. The urgently growing need for governance to mitigate the risks of AI, specifically Generative AI, has demanded that we, the global community, act promptly but prudently. Indonesia is not waiting in silence. We began the development of our AI governance ecosystem in 2020 through our first policy, namely the National Strategy on Artificial Intelligence, aligning the development of the AI governance ecosystem in Indonesia.
Secondly, the classification of business lines for businesses developing AI‑based programming, and provisions for personal data protection under our Law on Personal Data Protection. Though not specifically addressing AI‑based personal data processing, it provides a foundation for more complex personal data processing activities.
Last but not least, we are also in the process of developing an AI ethics framework that we hope will embody principles from global references, infused with our local wisdom, to respond to the demand for AI governance.
Ladies and gentlemen, we understand that the Government cannot act alone. As such, in the process of improving our governance, we invite various stakeholders to contribute to the development of our policies as well as the ecosystem. Specifically, we are in the process of exploring use cases and potential risks, as well as approaches and technologies to mitigate the risks of AI utilization.
We also realize that AI governance itself is not sufficient to mitigate the risks and threats of AI. We still need additional measures to ensure the positive impact of AI for everyone. This includes the implementation of supportive policies encompassing areas such as content moderation, ensuring fairness and non‑discrimination in the market, as well as digital literacy efforts. Indonesia is ready to further the discussion of global AI governance, especially to play the role of bridge-builder between countries with different levels of AI maturity, to ensure that AI advances the wellbeing of our society now and in the future. Thank you very much.
(Applause).
>> MODERATOR: Thank you very much. I would like to ask the same question to Mr. Mazza.
>> LUCIANO MAZZA DE ANDRADE: Thank you very much. First of all, I think one of the main things we must recognize as a challenge is how we can bring more voices from Developing Countries to this debate. That's hard because, given the true complexity of the issues at hand, it is not something that is simply done. So while thanking the Japanese Government very much for the kind invitation to be here today, I want to commend it for the efforts to make this process as open and inclusive as possible. I think that's very important.
I think we must be realistic about the huge asymmetries in the AI landscape, and how they affect the way different countries and actors approach this issue when it comes to discussing risks and mitigation measures.
Large language models have been developed by few companies based in very few countries, of course, mostly the G7 countries. The Hiroshima process may be of particular relevance considering that status quo. In any event, we are talking about a very concentrated market. That may change in the future, hopefully it will, but that's the reality today.
From our perspective, as was touched upon before, organisations, particularly those operating in the developing world, should be mindful of the need to bring a sense of local ownership to the countries and communities where they operate. The main issue will be, I think, the adaptation of those models to local realities, and crucially here there is the question of how to adjust the training of the models so that it is more reflective of local circumstances. That is a main topic that must be addressed.
Also essential, in our view, is to incentivize local ecosystems in order to allow for the development of a growing number of applications by domestic companies. Countries should strive to have dynamic AI ecosystems even if they are not able to have their own OpenAI‑style companies, because we know that would be unrealistic to expect.
So we believe that this effort to incentivize local ecosystems would be a possible way forward, one which would democratize this market that is very concentrated today. Another topic I wanted to raise: when it comes to risks and the mitigation of risks, we think it's important to widen a little bit the scope of what we understand by risks.
We should not lose sight of the big risk that AI could exponentially amplify digital divides between developed and Developing Countries; this should be counted as a risk. We have seen for some time now that there is a concept of safety by design that is well accepted by many actors in this field who are working on it. We should work on the notion that new technologies, including AI models, should be inclusive by design, in such a way that social and Digital Inclusion is not an afterthought but at the forefront of our considerations.
Thank you very much.
>> MODERATOR: Thank you very much, Mr. Mazza. I would like to move on to question three. We heard that the draft guiding principles and draft code of conduct include principles and actions for AI developers to responsibly share information on the security and safety risks posed by their models and the measures taken to address these risks, to publish transparency reports, and to establish and disclose privacy policies and AI governance policies. What information do you think those organisations should be encouraged to share, and with whom? What elements do you think should be included in transparency reports?
How can information sharing best be done along the value chain, especially with downstream developers who further develop and fine-tune models? I would like to invite Chair Vint Cerf to give your answer to this. You have three minutes. The floor is yours.
>> VINT CERF: Thank you very much. First of all, I want to say I am very, very grateful to the Prime Minister for his opening observations about AI and the Internet Governance Forum. I found them most hopeful and very encouraging. I also would like to point out to you some parallels. First of all, the Internet is simply a very large software artifact. So are artificial intelligence and machine learning. As a young programmer, I became fascinated by the idea that you could use software to create your own little universe and it would do what you told it to do.
Then I discovered that it does what you told it to do, but not necessarily what you wanted it to do. And the difference between those two is called a bug. And I discovered how easy it was to create bugs and how hard it was to find them and fix them in the software. So why is that relevant? I think all of the things that you are hearing about artificial intelligence and machine learning apply generally to software.
And so we should be thinking about rules not just for AI and ML development, but for software generally. We have become intensely dependent on software. It is by far the most powerful and adaptable technology ever created, and I would argue that the machine learning world has taken a step beyond that. But with dependency comes risk, and you have heard that theme repeatedly.
The result is that the risks are a function of the application to which the machine learning and AI models are put. And this leads to the question about single points of failure, and the side effects of becoming increasingly dependent on these pieces of software. That leads to a very important point about responsibility, and the responsible development and use of software. It leads to questions of ethics.
In research and academia, what kind of research do you perform, and under what conditions? How does business apply and use machine learning tools and software in general? And finally, how are these systems governed? We have been hearing about major and important initiatives. Now, to come to your specific question about information sharing, there are several obvious things we would want to share. The first one is the source of the training material. Where did this content come from?
When these machine learning systems are actually used, it's important to have some idea of how the source material was actually applied, and so we can have some sense of judgment about the quality of the resulting system.
We also need to be able to understand under what conditions these systems will misbehave. That has become more and more difficult to predict because the systems are so complex, and their function is less like the if/then/else kind of software that I grew up with and more like a highly probabilistic system that has a probability of being correct and a probability of being incorrect. So if we are going to share information, we should be able to share our experiences, and we should be able to alert the consumers and users of these applications to the potential hazards that they might encounter.
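As a toy illustration of the contrast Mr. Cerf draws (hypothetical code, not from his remarks): classical software follows explicit branches and always gives the same answer for the same input, while a learned system gives an answer that is correct only with some probability.

```python
import random

# Deterministic, rule-based software: the same input always yields
# the same output, so behavior can be traced back to a rule.
def rule_based_label(temperature_c: float) -> str:
    if temperature_c >= 38.0:   # explicit if/then/else logic
        return "fever"
    return "no fever"

# A learned system behaves probabilistically: it produces an answer
# that is correct only with some probability, which is part of why
# its failure modes are harder to predict.
def model_based_label(temperature_c: float) -> str:
    p_fever = min(max((temperature_c - 36.0) / 3.0, 0.0), 1.0)  # toy confidence score
    return "fever" if random.random() < p_fever else "no fever"

print(rule_based_label(38.5))    # always "fever"
print(model_based_label(37.5))   # sometimes "fever", sometimes "no fever"
```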
I would like to applaud the European Union's effort to grade the risk factors of applications. There are some high-risk applications, like healthcare, health advice, and medical diagnosis, where the software that's used to provide those services should get considerably more scrutiny, whereas if it's just entertainment, perhaps the risk factor is lower.
I suspect I have run way over my time, as I can see our moderator wielding her microphone. I will stop there, and thank you for your time.
>> MODERATOR: Thank you very much, Mr. Vint Cerf. I wish we had more time. Now, I would like to invite Ms. Wong to respond to the same question.
>> DENISE WONG: Singapore has always cared a lot about AI governance. We had an AI governance framework, which we updated in 2022, and we are working on the next update. In June of this year, we launched a global platform for discussion on AI governance issues, and we also wrote a discussion paper highlighting some of the risks and issues, as well as practical solutions, to deal with Generative AI, its risks, and a potential pathway forward.
Specifically on this question, we do think that there is space for policy makers and industry to co-create a shared responsibility framework as a first step, in order to clarify the responsibilities of all parties in the model development lifecycle, as well as the safeguards and measures that they respectively need to undertake.
Now, there is some useful information that can be shared, especially by model developers: for example, information about how the models are developed and tested, as well as transparency on the types of training data sets used. Specifically for the end user, information can be provided on, for example, the limitations of model performance, and on how and whether data input by a user into the model will be used by developers to further enhance the model.
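As a purely illustrative sketch of the kind of standardized model disclosure Ms. Wong describes (every field name and value below is hypothetical, not any existing standard or product):

```python
# Hypothetical "model information" record a developer might publish
# for deployers and end users along the value chain.
model_info = {
    "model_name": "example-llm-7b",            # hypothetical model
    "developer": "Example Labs",               # hypothetical developer
    "training_data": ["licensed corpora", "filtered public web text"],
    "evaluation": {"toxicity_rate": 0.004, "factuality_benchmark": 0.71},
    "known_limitations": [
        "may hallucinate facts",
        "weaker performance outside English",
    ],
    "user_data_policy": "inputs are not used for further training",
}

def render_transparency_report(info: dict) -> str:
    # Flatten the record into a human-readable disclosure for end users.
    lines = [f"Transparency report for {info['model_name']}"]
    lines += [f"- limitation: {item}" for item in info["known_limitations"]]
    lines.append(f"- user data: {info['user_data_policy']}")
    return "\n".join(lines)

print(render_transparency_report(model_info))
```

Standardizing such fields is what would let downstream deployers compare models and make the risk assessments she mentions.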
We do think that such a shared responsibility framework, which is common in the world of software development, will allow us to parse out the different responsibilities, admittedly with a layer of complexity added because of the foundational nature of these models. We do think that, for clarity, establishing standardized information to be shared about a model will allow deployers and users to make proper risk assessments.
We do agree that labeling and watermarking of AI-generated content will allow consumers of content to make more informed decisions and choices, and there is certainly much to commend in globally and internationally aligned efforts, with many stakeholders involved in this process. Thank you.
(Applause).
>> MODERATOR: Thank you. I would like to move on to the next question. The guiding principles and code of conduct include principles and actions for AI developers to invest in and develop security measures, as well as technical measures for content authentication, such as watermarking, and content and data input control measures. What types of measures do you think would be most effective for organisations to invest in or develop? Now I would like to invite President Walker again to respond to this question.
>> KENT WALKER: Thank you. The large language models we are seeing today come out of a problem in search. Originally in search you are trying to take a word and search the Internet for matching words, and then you realize you need to search for synonyms, and then for related concepts: how does the king of England relate to the queen of Spain?
Research that was being done about a dozen years ago mapped every word in English, and ultimately in many languages around the world, in mathematical terms, as vectors. Then, about five or six years ago, further published research helped identify something called transformers, an architecture which allowed us to understand all of the richness in human language, soon across a thousand languages around the world.
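As a rough illustration of the word-vector idea Mr. Walker alludes to (a toy sketch: these 3-dimensional vectors are hand-written for the example, whereas real embeddings are learned from text and have hundreds of dimensions):

```python
import numpy as np

# Toy, hand-written word vectors; real embeddings are learned, not authored.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.8, 0.1]),
    "woman": np.array([0.1, 0.1, 0.8]),
    "apple": np.array([0.2, 0.2, 0.2]),
}

def cosine(a, b):
    # Similarity of direction between two vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def nearest(target, exclude):
    # Return the known word whose vector points most nearly the same way.
    candidates = {w: cosine(v, target) for w, v in vectors.items() if w not in exclude}
    return max(candidates, key=candidates.get)

# The classic analogy: king - man + woman lands closest to queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # queen
```

Relationships between words become directions in the vector space, which is what lets models relate "the king of England" to "the queen of Spain".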
Now, we have learned many things in working with content on the Internet that will carry over to these new challenges of security and content authenticity. So, for example, when it comes to security, we believe we need to work collectively. We have proposed something called a safe AI framework, SAIF for short that establishes an ecosystem approach to making sure that model weights and other core information are kept secure when necessary, but made open and available when possible.
There are a number of efforts we are progressing. We have an effort called SynthID that identifies video and images available at the pixel level, so even if they are transformed or turned upside down or changed in different colors, you can still authenticate where they came from.
A second effort has to do with about this image in search, allowing you to understand the provenance of when an image was first uploaded to the Internet. And finally, we have adopted a new policy that requires the disclosure of the use of Generative AI for election ads in ways that are misleading or that could change the results of elections.
These efforts, and many more like them across the industry, will be an important part of answering the question of how we make sure we can trust the products of AI. But at the same time, I must say that something can be authenticated and still be false. And so we collectively, all of the people around the world, need to educate ourselves. We need to become digitally literate, AI literate, about the new tools, so we understand the underlying meaning and what we can and cannot trust.
>> MODERATOR: Thank you very much. Now, I would like to ask the professor to respond to the same question.
>> JUN MURAI: I remember visiting people who did philosophical research using a computer. That was the 70s: typing in all of the philosophy books, analyzing them, and trying to understand what human beings were thinking about, that type of thing. That was very much the start of AI based on language information, but that language was very trustworthy: books, philosophy books and other such things.
What is different today, working on Generative AI and other things, is that the input is generated from other people's social networking content, from IoT sensor data, and from information on the Web generated everywhere, and that is basically the security challenge for AI today.
And then there is the question of the sources, the accuracy of the data, and how trustworthy that information is. So in Japan we started an industry effort called Originator Profile, in which information on the Web carries the identity of its originator and can be authenticated for a particular purpose. To achieve that, IDs for the sources of information, and traceability back to the exact data, are also important. And it is not only text messages: the input to AI now includes accurate numbers generated by sensors all around us, which are very important learning sources for global warming and other environmental studies.
So that kind of accuracy is going to be monitored, discussed, and shared as wisdom among the AI players.
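As a rough illustration of the provenance idea mentioned above, the Python sketch below tags content with an originator ID and an integrity check. It is not the actual Originator Profile specification; a real deployment would use public-key signatures so that anyone can verify, whereas this dependency-free sketch uses an HMAC that only holders of the shared key can check:

    # Illustrative content provenance: bind content to an originator ID so
    # tampering or misattribution can be detected. Sketch only.
    import hashlib
    import hmac
    import secrets

    secret_key = secrets.token_bytes(32)     # held by originator and verifier

    def publish(content: bytes, originator_id: str):
        tag = hmac.new(secret_key, originator_id.encode() + content,
                       hashlib.sha256).hexdigest()
        return {"originator": originator_id, "content": content, "tag": tag}

    def verify(record) -> bool:
        expected = hmac.new(secret_key,
                            record["originator"].encode() + record["content"],
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, record["tag"])

    article = publish(b"Sensor reading: 17.2 C", "news.example.jp")
    print(verify(article))                    # True
    article["content"] = b"Sensor reading: 99.9 C"
    print(verify(article))                    # False: tampering detected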
>> MODERATOR: Thank you very much, Professor Jun Murai. I would like to move to the next question. The draft code of conduct includes a requirement for AI developers to promote the development of advanced AI systems to address the world's greatest challenges, such as the climate crisis, global health and education. What fields do you think those organisations should prioritize in their activities and investment, other than those described in the previous questions? And what are some proactive measures, including kinds of incentives for companies, that we could embed in a code of conduct to enable innovation, as opposed to only mitigating risks?
So I would like to invite Ms. Doreen Bogdan‑Martin to respond to this question. The floor is yours.
>> DOREEN BOGDAN-MARTIN: Thank you.
Good morning. It's great to be here. Let me start by thanking the Government of Japan for putting this topic so high on the agenda here at the Internet Governance Forum. To answer your question, the private sector really is the driving force behind AI innovation, and I'm happy to see how much it is stepping up to address some of the world's greatest problems. Nick, I think you mentioned this: minimizing the bad and, of course, trying to amplify the good.
And I guess, Vint, that would be getting rid of the bugs so we can amplify that good. The private sector is also a key constituency in the ITU membership, and I'm happy that two of the companies here, Meta and Google, are part of the ITU family, because that kind of engagement also helps us understand what they are looking for, when it comes to providing insights and when it comes to our engagements with policy makers and regulators.
We have found that a combination of incentives is important, ranging from economic incentives to explicit recognition of contributions at the national and international levels, to effectively motivate the private sector to invest in initiatives that ultimately benefit society.
Of course, that includes innovative public‑private partnerships; I think that's key. You mentioned healthcare, education, and climate, definitely. Perhaps I would give an example of something that stands out for me: we have been very focused on school connectivity. That is linked to the WSIS process, which had a target to connect every school by 2015. We didn't get there.
But we do have an initiative together with UNICEF and many private sector partners, and we are using AI to actually find schools, using AI techniques for mapping. We are also using AI techniques to look at different connectivity configurations so that we can ultimately bring down costs. And perhaps to share another example: disaster management, which is a key priority also for the Government of Japan.
I think AI has shown lots of potential in that space. As part of the Early Warnings for All initiative, working closely with Japan, WMO and UNEP, we are looking at ways to use AI when it comes to data collection and handling, when it comes to natural hazard modeling, and, of course, when it comes to effective emergency communications.
So I think really there is probably nothing that we cannot do if we actually manage to leverage multi‑stakeholder partnerships to drive positive change. Thank you.
>> MODERATOR: Thank you very much, Ms. Doreen Bogdan‑Martin. I would like to ask the same question to Mr. Nezar.
>> PATRIA NEZAR: We are concerned about misinformation and disinformation, actually, because next year we will have elections. We are trying to issue regulations on the spread of information through digital platforms using AI, and we collaborate with multi‑stakeholders, working closely with global digital platforms like Google and Meta as well. Hopefully we can handle it, because this is really a big test of how AI will be used in the next election for political campaigns, and hopefully we will have fair and safe elections next year. Thank you.
>> MODERATOR: Thank you very much. So what about you, Professor Jun Murai?
>> JUN MURAI: Thank you very much for raising disasters, and not just earthquakes. That is a very important issue for this country; we are always facing major disasters and the recovery from them. Every time we encounter an earthquake, digital data and networks provide a great deal of support, and that saves people's lives, which is very, very serious. Now AI, with more precise and trustworthy data, would be a benefit in preparing for the next one, so Japan needs to be preparing for that.
Another big issue for Japan is the very serious challenge of an aging society. When people get older, there are a lot of healthcare issues, and those include hospital and medical data handling, which has never been processed in a proper way for the past, I should say, 30 years.
And not only in Japan: everywhere, anywhere. Anyway, we started to work in those areas, and it is very interesting that in such critical areas, data privacy, data accuracy, the handling of data, the amount of data to process and the amount of hardware resources to do so are going to be very serious issues. Therefore I think healthcare, and those disaster management areas, are very important, because responsibility there is shared among many parties, and they need to work together. So self‑assessment will be important, third parties will be important, and Government involvement, of course, is going to be very important. These are exactly the very important examples of the multi‑stakeholder model for approaching AI and the future.
>> MODERATOR: Thank you. I would like to move on to the next question. How do you foresee AI developing over the next few years, and what do you think organisations developing advanced AI systems should do in order to realize trustworthy AI across society? First I would like to ask this question to President Clegg.
>> NICK CLEGG: The problem with the future is that it's very difficult to predict, particularly with technology evolving as fast as it is, but I think some things are relatively predictable as far as the development of these large language models is concerned. One thing I think you will see fairly soon: a lot of these models, as the name implies, are large language models, so they were focused on language, and then you had separate models based on visual content.
I think those things will merge, so that you will have models which are what they call multi‑modal: they operate both in terms of text and visual content, and that will introduce significant additional versatility to those models. I think the issue of which languages are used in the training data is a very important one. A lot of these large language models, particularly the ones emanating from the big U.S. tech companies, were originally trained in English.
That doesn't mean, by the way, that developers can't take the models and redeploy them in their own language. So, for instance, here in Japan, a company called ELYZA has taken Llama 2, which is open source, and actually developed a high‑performing large language model in Japanese. But I think you will see the models of the future being trained, if I can put it that way, at a foundational level in multiple languages at the same time.
It's very difficult to talk about these things when you have two godfathers of the Internet on the stage; I defer to them completely, and I'd want to know what they think. But I think there has been this assumption that these models will just keep getting bigger, and it's not actually clear that it's going to turn out like that.
Firstly, there is going to be an incentive to be more efficient: to use less data, less computing power, less money. And also, the applications of these models, particularly in their fine‑tuned form, will be most impactful not necessarily because they are bigger, but because they are fine‑tuned to deliver particular objectives. So this assumption, which has certainly been there in the public debate, that they just get exponentially bigger all of the time, I'm not sure that's going to be the case.
There are only so many times you can reconsume and redigest all of the public content on the Internet; you can only do that a few times, and after a while you run out. I don't think size is the only determinant of capability here, and nor do I think risk is only associated with size either.
>> MODERATOR: Thank you very much.
I would like to ask the same question to the father of the Internet, Dr. Vint Cerf.
>> VINT CERF: I'm sure that everyone will recognize that just because I've had a lot to do with the Internet doesn't necessarily mean I know anything about AI, so you should be careful about my answers. I will tell you something I have learned from a colleague at UCLA named Judea Pearl. He is one of the winners of the Turing Award, which is the top award in computer science, for his work in machine learning and AI. He has written two books: one is called Causality, and the other is, if I remember it right, The Book of Why. What was his point?
His point was that large machine learning models are all about probability. They deal in probabilistic performance. They don't necessarily deal with causality, so you can't conclude anything from them unless you have a causal model to go along with the correlations that these large machine learning models incorporate.
I'm using the term machine learning here, rather than large language model or artificial intelligence, very deliberately. If you don't appreciate causality versus correlation, you will appreciate this story: some parties, looking only at the statistics, would conclude that flat tires cause babies.
The reason is that there is a high correlation between the number of flat tires occurring near a hospital and the number of babies that are born there. You can quickly appreciate that the real reason for the flat tires is that someone was racing to get the mother to the hospital so the baby could be born there and not in the car, and the result of the fast driving is sometimes a flat tire.
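A small simulation makes the point concrete. In the Python sketch below, with invented numbers, daily births (the hidden common cause) drive both flat tires near the hospital and babies born there, producing a clear correlation even though neither causes the other:

    # A confounder creates correlation without causation: births drive both
    # fast hospital trips (hence flat tires) and babies born. Toy numbers.
    import numpy as np

    rng = np.random.default_rng(0)
    days = 365
    births = rng.poisson(lam=20, size=days)           # babies per day
    # Suppose 30% of baby runs end in a flat tire, plus unrelated flats.
    flat_tires = rng.binomial(births, 0.3) + rng.poisson(2, days)

    r = np.corrcoef(births, flat_tires)[0, 1]
    print(f"correlation(flat tires, babies) = {r:.2f}")  # clearly positive
    # Yet intervening on tires (say, puncture-proof tires) would not change
    # the number of babies; only a causal model can tell you that.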
To give you one other example where causality is really important: at Google, you can imagine, we consume a lot of power cooling our data centres, because running all of those computers generates a lot of heat. Once a week we used to have an engineer adjust the valves to try to figure out how to minimize the amount of power required to cool the data centre.
We trained a machine learning system to perform that task, and it saved 40% of the power requirement compared to what we had been able to achieve manually. So causality is going to be our friend here, and we need to incorporate it into the way in which we train and use these models.
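In miniature, that kind of system looks like the Python sketch below: learn from logged data how cooling power responds to a control setting, then pick the setting the model predicts is cheapest. This is a toy stand-in under invented assumptions, not Google's actual controller, which handled many interacting variables:

    # ML-assisted setpoint tuning: fit a model to logged (setting, power)
    # pairs, then choose the setting with the lowest predicted power.
    import numpy as np

    rng = np.random.default_rng(1)

    def measured_power(valve):               # unknown to the optimizer
        return 100 + 30 * (valve - 0.6) ** 2 + rng.normal(0, 1.0, np.shape(valve))

    # Historical logs: settings an engineer tried, and the power drawn.
    settings = rng.uniform(0.0, 1.0, size=200)
    power = measured_power(settings)

    # Fit a simple quadratic model and minimize it over the valid range.
    coeffs = np.polyfit(settings, power, deg=2)
    grid = np.linspace(0.0, 1.0, 1001)
    best = grid[np.argmin(np.polyval(coeffs, grid))]
    print(f"recommended valve setting: {best:.2f}")  # close to the true 0.6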
>> MODERATOR: So I would like to move on to the next question. Do you think there should be consideration given to developing tools or mechanisms to help organisations monitor the implementation of the guiding principles and code of conduct, and to hold organisations accountable for their progress in doing so?
So I would like to invite Professor Jun Murai to answer this question.
>> JUN MURAI: I'm sorry, I missed the order.
>> MODERATOR: Question number seven: how should we think about monitoring the implementation of the guiding principles and the code of conduct?
>> JUN MURAI: That's partly a repeat, but for monitoring, self‑assessment is an important element, carried out by any entity processing things under the code, on its own responsibility, and including the individuals who have been involved; that is going to be very important.
And also, apart from yourself, monitoring by a third party or an independent entity, and sharing the wisdom about the process, are going to be really important. The public sector needs to address what it should do, but investing in researchers and education is going to be the role of Government and the public sector, raising the quality of the monitoring of AI processes.
>> MODERATOR: Thank you, Professor. So I would also like to invite Dr. Cerf again to respond to the same question.
>> VINT CERF: The key question is what we measure and what objective function we are trying to achieve. It takes a great deal of creativity to figure out how to assess the quality of large language models and machine learning models. Concrete objective functions, like the one mentioned earlier of reducing the cost of cooling data centres, are a pretty obvious kind of measurement.
It's a much more complex question to answer how well a large language model responded to your prompt and produced its output. Was it usable or not? I don't have a deep notion right now of how to apply an objective function to those kinds of applications. The one thing I will say is that if we can assess the quality of the responses coming back in high‑risk environments, that might be a top priority: to make sure that, where high‑risk applications are in use, we measure safety as the most important metric of success.
>> MODERATOR: Thank you. I would like to ask the next question to Ms. Doreen Bogdan‑Martin. As the head of the UN agency working hard to bridge the digital divide, what needs to be done to ensure that the Global South is not left behind on AI development while doing so responsibly? What are some of your recommendations to the Hiroshima process in this regard?
>> DOREEN BOGDAN-MARTIN: Thank you.
I think, Luciano, you in part answered this before, so I will pick up from where you left off. You can't be part of the AI revolution if you are not part of the digital revolution. So this is a reminder about the 2.6 billion people that are still offline today, actually digitally excluded. A clear message may be: let's not lose sight of the fundamentals, the fundamentals of universal meaningful connectivity and its building blocks, from infrastructure to digital skills, which, Kent, I think you mentioned before, to affordability, cybersecurity and much, much more.
In terms of specific recommendations, perhaps I would share three. The first is the role of universal meaningful connectivity: to embrace that in the context of the guiding principles and the code of conduct, including perhaps targeted commitments from companies in different areas like capacity development. The skills piece would be great, also focusing on the gender gap: we have that big gender gap, so I would perhaps suggest that.
The ITU is very focused on that capacity development piece in the space of AI. We are working to incorporate it in our capacity development offerings together with other UN partners, from UNDP to UNESCO and others. My second recommendation would be in the space of technical standards, and, Your Excellency the Minister, I think you laid that out.
It was point 10: to ensure that technical standards are actually a prerequisite when we look at effective implementation of the guidelines. Again, on ITU's side, we will do our part, working with other UN agencies, in technical standards areas.
And the last piece, and this is a plea picking up on the UN Secretary‑General's comments in the opening, is linked to the governance gap: use the UN as a catalyst for progress in this context. I think that's really important. This morning the Vice Minister from Japan pleaded with us to ensure collaboration among these different discussions, and I think that's really important.
Many things are happening; many countries are taking different approaches, and it's important that we share experiences and work together. The ITU has the AI for Good Global Summit, where we work with some 40 different UN agencies and many of the partners up here on stage; I think that's a good space to exchange experiences and best practices. And, of course, there is the upcoming AI advisory body that we heard mentioned by the Secretary‑General; I think the UN Tech Envoy was in the crowd. That's another important element, because that group will lay out recommendations that we can take forward in the context of the Global Digital Compact and the Summit of the Future.
So three pillars: universal connectivity, technical standards and, of course, seeing the UN as a process that can be leveraged. Thank you.
>> MODERATOR: Thank you very much, Ms. Doreen Bogdan‑Martin. I would like to ask the next question to Vice Minister, Mr. Nezar. How can we engage a wide range of stakeholders on the guiding principles and the code of conduct?
>> PATRIA NEZAR: Yes, this is still a big question for us in Indonesia as well, because, as you know, artificial intelligence has become a hot discussion among countries and at the global level. We are still seeking best practices that can inspire us to regulate AI in our country, but we know UNESCO is also working on it, and we share insights with UNESCO and try to set fundamental norms and guidelines for the implementation of artificial intelligence. Thank you.
>> MODERATOR: Thank you very much.
Next, I would like to ask the same question to Mr. Mazza.
>> LUCIANO MAZZA ANDRADE: Thank you.
As I mentioned before, I think it's important to make sure the discussions include as many participants as possible and allow a diverse range of voices and constituencies to be heard. The full engagement of other organisations and different stakeholders is essential to ensure the long‑term sustainability of this effort. We believe it's important in particular to ensure that this process is carried out in dialogue with, and consistent with, efforts being developed in other organisations and other fora, so that the exercise is effective in the long term.
We believe it's important that, in due course, those discussions are expanded to multilateral spaces, so that we can make the effort more sustainable. We have a concern about fragmentation: we think institutional fragmentation goes hand in hand with fragmentation of the digital world, so it's important to look for consistency and cohesion in these discussions, both in terms of the overall narrative about challenges, risks and opportunities, and in terms of policies and regulatory approaches.
We also believe that multilateral engagement will be necessary to reduce asymmetries in capabilities and information between developed and developing countries, and so to help countries acquire the expertise and capability they need to navigate this landscape with autonomy and ownership of the process, so that they can fully enjoy the benefits AI can bring to everyone.
I think Secretary‑General Doreen mentioned this, and I think that's the way forward. We encourage countries to double down on multilateralism and give it a chance: we have important debates at the UN in the context of the Global Digital Compact, and, looking forward, in how we engage in the renewal of the mandate of the WSIS. So I think we have a chance to place our efforts, again, in the multilateral system, and a great opportunity to give everybody a sense of ownership in that debate. That would be my comment.
Thank you.
>> MODERATOR: Thank you very much. I would like to ask the same question to Ms. Wong.
>> DENISE WONG: Thank you.
I agree with the comments so far. A consultative process is important both for developing the principles and code of conduct, as well as the technical standards that will eventually hold companies and organisations accountable. It would be useful to hear from other thought leaders and countries outside of the G7 and key groupings.
For example, we have the Association of Southeast Asian Nations, which a number of us are part of. There is also the Forum of Small States, which is able to bring some of the voices of the Global South into this conversation. This will allow the principles, the code of conduct and the standards to be richer, more textured and better able to account for the rich cultural diversity that we have in the globe.
The other experience I would share, from Singapore's perspective, is working closely with industry peers, such as those on the stage as well as others, and with other international and multilateral stakeholder bodies, on concrete projects to test out this technology and understand firsthand, getting your hands dirty, what responsible AI really means in context‑specific, domain‑specific applications.
Drilling down into the details is very important in this multi‑stakeholder process. We are very supportive of the effort. Thank you.
>> MODERATOR: Thank you, Ms. Wong.
So this is the last question. In addition to our work on the guiding principles and code of conduct for organisations developing advanced AI systems, the Hiroshima AI process will seek to elaborate guiding principles for all AI actors and promote project‑based cooperation with international organisations.
Do you have any views you wish to share on potential outcomes for these future streams of work? What is the most urgent deliverable? I would like to ask this question to President Walker.
>> KENT WALKER: Thank you. I think today's discussion has illustrated the incredible importance of highlighting both the responsibility side and the opportunity side of this tool.
As Vint says, we probably shouldn't call it AI, it's computational statistics, but what an amazing tool it is proving to be. It is giving us ways to predict the future: we can now forecast the weather a week away as well as we used to be able to predict it a day away. And for issues like earthquakes: today, I understand, Japan experienced a tragic earthquake, and in Afghanistan just in the last couple of days thousands of people were killed.
Imagine if we could provide just a little bit more warning for events like that. We are already predicting forest fires and where they might spread, and we have flood forecasting tools that now cover 80 different countries around the world. So Governments working together to understand how they can implement those tools and make them available to their citizens is an important agenda.
There are hard tradeoffs, of course, between openness and security, and on transparency: how do we get more explainability for these tools? How do we define which tools should be regulated, and how different models should be classified?
Governments are at the forefront of trying to figure out how to get this right, and then there are additional steps we need to take: understanding how to invest in research, making both the research tools and the computation broadly available around the world, and imagining the future of work. Many countries, like Japan, desperately need more productivity for their citizens, but that also means that jobs will change.
How do we help workers throughout the world imagine a new AI‑enabled future where they are more productive and live better, healthier, wealthier lives? Collectively, through efforts like the G7, the OECD's work and the ITU's work on AI for Good, we are confident that we can actually achieve that potential, and we encourage the international community to take an optimistic, forward‑looking view in doing just that. Thank you.
>> MODERATOR: Thank you. Next I would like to invite President Clegg for the same question.
>> NICK CLEGG: The most impactful deliverable? Well, I think, in the broadest sense of the word, transparency. One of the reasons the debate has swiveled around over so many months is that a mystique has built up around this technology. It is very powerful.
It will be very transformative in many respects. But in other respects, what did you call it, computational statistics? My crude version is that it is in many ways like a giant autocomplete system, particularly the large language models: they are just guessing what the next word or the next token should be in response to a human prompt, by processing huge amounts of data across vast numbers of parameters.
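The giant-autocomplete picture can be made concrete with a toy bigram model in Python: count which word follows which, then repeatedly sample a next word in proportion to those counts. Real large language models condition on long contexts with billions of parameters, but the predict-a-token-and-append loop is the same; the corpus here is invented:

    # Next-word prediction in miniature: a bigram model built from counts.
    from collections import Counter, defaultdict
    import random

    corpus = ("the internet connects people . the internet connects machines . "
              "people build machines .").split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1            # how often nxt follows prev

    def generate(word, length=6, seed=0):
        rng = random.Random(seed)
        out = [word]
        for _ in range(length):
            options = counts[out[-1]]
            if not options:               # no known continuation
                break
            words, weights = zip(*options.items())
            out.append(rng.choices(words, weights=weights)[0])
        return " ".join(out)

    print(generate("the"))  # e.g. "the internet connects people . the internet"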
But I think sometimes in the debate we have anthropomorphized the technology and conferred on it power and intelligence which, oddly enough, it doesn't possess. These systems don't know anything inherently; they are just extremely good at guessing and predicting, along the probabilistic logic described earlier. So we need to make it as transparent as possible: transparent in terms of how big companies like Meta and Google develop these models in the first place.
How is the data being used? How do the model weights operate? What red teaming do we do to make sure it's safe? How do we make it accessible to researchers? But also transparency for users, which is why I stressed earlier the work in the draft Hiroshima code of conduct on provenance, detectability and watermarking: you can't trust something if you can't detect it in the first place. There is a lot of very difficult technical detail in that, because quite a lot of content in the future will be a hybrid of AI and human creativity. So how do you identify that? And once you have detected that something was generated by AI, how does that signal travel from one platform to another? The Internet is not divided into silos; content flows across the Internet around the world.
So: transparency, transparency, transparency, to give people greater comfort that the technology is there to serve them, and that they are not there to serve the technology.
>> MODERATOR: Thank you, Mr. Clegg. That was our last question, and as moderator I should summarize this discussion; however, I think I would need an additional hour to do that, so I will just make one or two comments. Once again, I really enjoyed this panel discussion. We have discussed the guiding principles and code of conduct, but what I heard is that, beyond such guiding principles, there are many initiatives: each company, each international organisation and each nation has its own legal frameworks and its own culture, and they are developing their own measures towards this newly developed technology, Generative AI, and not only the technology but the whole AI system, the services, including not only machines but also the interaction with human beings. That is really important, and while it makes things very confusing, it is also a very important thing.
So today, as Mr. Clegg mentioned, transparency is important, and the other key word we repeatedly heard is collaboration. This Internet Governance Forum lasts a few more days, so we can continue to discuss how important this topic is, and how we can be responsible, as developers, deployers or actual users, in facing this new technology and making society better.
Today's panel discussion will provide very effective guidance for the Hiroshima AI process, but also for all of us who are interested in this topic.
I will stop here, otherwise I will just keep on speaking. Lastly, I would like to invite Mr. Suzuki, Minister of Internal Affairs and Communications, for the closing remarks.
>> JUNJI SUZUKI: Thank you very much for your valuable discussions today. As was mentioned by Prime Minister Fumio Kishida in his keynote address, Generative AI provides services transcending national boundaries and touches the lives of people across the world. It is all the more beneficial that we were able to engage in discussions at the Internet Governance Forum, where stakeholders from the world over have gathered. Generative AI entails possibilities as well as risks, and it is a technology that will transform society in a major way. I'm convinced that today's discussions will deepen our awareness of the risks of Generative AI and will become a step forward in sharing the possibilities of Generative AI, transcending regions, standpoints and positions. As for the valuable opinions offered by international organisations, Governments, AI developers and corporations, researchers and representatives of civil society, we will aim to reflect them in the Hiroshima AI process going forward.
Moreover, under the Global Partnership on AI, we plan to newly establish an AI export support centre to tackle the challenges of AI and broaden its possibilities through project‑based initiatives. With regard to these project‑based initiatives to resolve social issues, we received hopes and expectations from the Governments of the Global South yesterday, on day zero, in the session hosted by the Ministry of Internal Affairs and Communications. Today's discussions were most meaningful, and as we continue our discussions on AI governance, it will be important to listen to the views of the various persons concerned, and we will make sure to take such initiatives. Thank you very much for your presentations and for your attendance.
(Applause).
>> MODERATOR: Thank you very much. This concludes the Opening Session on global AI governance and Generative AI. So please give a last round of applause for all of the panelists. Thank you very much.
(Applause).
>> MODERATOR: Thank you very much. Ladies and gentlemen, we have now come to the end of High‑Level Panel V, Artificial Intelligence. I would like to extend our appreciation for your presence here today, and to thank all of the speakers. Thank you so much.
(Applause).