IGF 2022 Day 2 Open Forum #71 From regional to global: EU approach to digital transformation

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.



>> HOST: Welcome to those in the room and online.  This is From Regional to Global: EU Approach to Digital Transformation.  This session is an opportunity to engage with and learn from partners around the world with regard to Digital Transformation and, where necessary, the regulation of the Internet and the Internet space.  We have chosen for Europe an approach to the digital transition which is very much based on the principle of an Internet that is open, free, global, interoperable and secure.  Those may be pious principles, but they are ones we seek to put into effect and practice.  In line with our own Treaties, we wish to put people and the individual at the center of the Internet and ensure both are protected.

In fact, it is a work in progress in the European Union as it is elsewhere.  In fact, the Ministers responsible for this area in the European Union will meet and will sign the Treaty and this is a sign of what is happening in many Regions of the world where we need to look at, adjust, and sometimes take account of the transformative power, but also the challenges raised by the developments of digital technologies.

That is why we found it useful to make some presentations but also to have a discussion very much based on the concept of The Brussels Effect, developed by Professor Anu Bradford, whom I will introduce in a moment.  It seeks to look at the European Union approach as one of many that can be compared and that can hopefully enrich one another.  In the process launched, from the Global Digital Compact to the leadership Summit to the WSIS, it is important to discuss and identify the key principles behind the Digital Transformation.  We hope they will be further developed and implemented.

We, for example, have been working with a number of like-minded partners on the Declaration for the Future of the Internet, which of course is in support of a strong Global Digital Compact and seeks to set out principles in line with what the stakeholder community here at IGF has worked on for a number of years, but which also seeks to draw Governments of U.N. countries further into that framework: the principles of the multistakeholder approach, and also principles for a human-centric, open and interconnected Internet, where the like-minded will state what they will do and what they will not do.

I will now pass the floor to Professor Anu Bradford, a trusted commentator on the European Union and the global economy.  She has written extensively about the influence, some would say outsized influence, that European regulation has had on the Internet space.  She is the author of The Brussels Effect: How the European Union Rules the World.  As an expert in this area, on international trade, regulation and antitrust law, we felt it would be useful to ask Professor Bradford to moderate this session.  I will hand over to her.  She will introduce our speakers so we can get this discussion started.

So over to Professor Bradford, I hope you are online and can hear us. 

Sorry, I think you may be muted.  Thank you, we're being helped.  We need to unmute you apparently. 

I'm sorry, can I ask tech support would you please unmute and give permission to Anu Bradford who is on the online participation list.  She will be the moderator and needs access rights for the entirety of the session. 

>> MODERATOR: Good morning, good afternoon, good evening.  Can you hear me now? 

>> HOST: We can see you are unmuted.  We don't hear you.  Perhaps the online support can turn up the volume, please? 

>> MODERATOR: Hello, can you hear me? 

Can you hear me now? 

Apparently online you can hear me but the room cannot.

>> HOST: Now we can hear you.  Sorry for the technical hiccup. 

>> MODERATOR: No problem at all.  It is a pleasure to join you.  Thank you for the opportunity.  Good morning, good afternoon, good evening.  I don't know if I can have the video, but as long as you can hear me, we are set to go with the conversation.  Thank you for the generous introduction.

I thought I could set the stage with a few words on The Brussels Effect you mentioned and the European regulatory framework, including digital regulation in particular.

I'm working on a second book on digital regulation, comparing the American, European and Chinese approaches to regulating the digital economy.  So I feel extremely strongly about the importance of the conversation we're having today.

I would say we are at a point in history when Governments around the world can hardly agree on anything, yet they are increasingly agreeing on the importance of regulating large technology companies.

So there is a growing consensus on the need to regulate, but not yet a full consensus on how we ought to regulate the digital economy.  I would say that there is an understanding that the European Union is in the leading position in terms of articulating values and entrenching those values into concrete legislation.  I cite as an example the General Data Protection Regulation, the GDPR, the regulation of data privacy.  That has had a tremendous impact not just in the European Union but around the world, shaping the business practices of companies that adopted the European standard as their global standard, whether we talk about Meta or Google or Microsoft.  The GDPR has also shaped legislation around the world.  By some accounts, 130 or more countries have adopted a variant of the GDPR for privacy regulation.

We see that regulation emanating from the European Union has the potential to really shape the global regulatory environment.  If you look at what has happened since the GDPR, the European Union has engaged in extensive regulation of the digital economy, with regulations such as the ones we will focus on today, including the Digital Services Act, to regulate online content; the Digital Markets Act, which focuses on competition in the digital economy; the Data Act; the Artificial Intelligence Act; and the regulation of platform workers, just to name a few examples.

Just to explain why the European Union would have such a tremendous influence globally.  (Background noise).

This is the short version of what The Brussels Effect entails.  I basically describe the mechanism of exerting global influence as the EU's ability to influence the marketplace.  The EU is one of the largest markets in the world.  There are few companies that can afford not to operate in the EU.  We often see companies conclude that it is in their interest to apply the European regulation as their global standard because they want to avoid the cost of complying with multiple different regulatory regimes.

By obeying the most stringent standard, which often is the European standard like the GDPR, companies can apply a single standard across markets around the world.  (Background noise).

By regulating its own market, the EU has been able to rely on market forces and the incentives of companies to translate that policy into a global policy.  This is different from the EU imposing standards or somehow coercing or relying on the cooperation of other countries.  It is basically the companies' decision to use the European standard as the leading standard.  We also see tremendous influence through legislative activity, where Governments around the world are turning to the EU as the model.

In large part because the EU is the most active, and today probably the most experienced and ambitious, regulator of the digital economy, it provides the template that other Governments have turned to in trying to regulate their own digital economies.

Now the important question is whether this subsequent legislative activity, including the DSA and DMA, the Data Act and the AI Act that we will be focusing on today, also has the capability of transforming the global digital economy and not just affecting the practices of companies and other actors in the European Union.  Before I turn to the speakers, I want to mention a couple of things.  I think of the European Union as having an opportunity to shape the global marketplace, but also the responsibility of getting it right.  The Brussels Effect is capable of exporting good and bad regulation alike.

I sometimes worry about this changing geopolitical environment, where there is a temptation to move toward digital protectionism or a quest for self-sufficiency, not least because of the tech war that is unfolding between the U.S. and China and what it does in terms of fragmenting and destabilizing global digital supply chains.

I urge the European Union to keep firmly in mind that the goal really is to promote an open, global and interoperable Internet, and a human-centric Digital Transformation that serves Europeans and serves other citizens as well at the same time.

My other call for Europeans is to make sure the regulations also work in practice.  That they are implemented with the same ambition with which they are being legislated, so we can actually see a true change in market outcomes.

So the stakes of the Europeans getting the legislative agenda right are extremely high, given the geopolitical environment within which Europe is setting its norms for the digital economy.

Another challenge for Europe is that the European way of governing the digital economy is not the only way out there.  I mentioned that there seems to be a growing recognition that the traditional American market-driven, Private Sector-led view of the Internet is really waning.  There is less and less faith in leaving the digital economy to the tech companies alone, and growing recognition that we need democratic Governments to be in charge and help establish practices that safeguard, for instance, users' privacy and other rights that are fundamental and important in liberal democratic societies.

But there is also a growing appeal around the world for the kind of vision that China and some other authoritarian Governments are promoting.  That is not one that relies on an open, global Internet.  It is much more a digital authoritarian way, with the use of surveillance that compromises the Civil Liberties of individuals.

That vision has a lot of demand in a world that is turning more authoritarian.  Against that, we see like-minded countries and allies that are committed to the principles stated at the beginning of this session: European principles, but also principles reflected in the Declaration for the Future of the Internet and in other fora that subscribe to the same principles.

With that backdrop, I want to turn to our session today and give the floor to our terrific speakers, who will help us examine three or four regulations in particular: the Digital Services Act, the Digital Markets Act, the Data Act, and the AI Act.  We will examine the extent to which these regulations contribute to a broadly open, interoperable Internet; how they potentially influence markets and the lives of individuals outside of the EU; and how they relate to various other initiatives, including the Global Digital Compact.

I will now introduce our speakers in the order they will be speaking.  We have two speakers helping to guide us through the conversation on the DSA/DMA.  We have Mr. Prabhat Agarwal of the European Commission, Head of Unit for Platforms and eCommerce, in charge of the Digital Services Act and the Digital Markets Act.

Before joining the Commission, Prabhat Agarwal worked in the Private Sector, so he has the perspective of both the public and private sectors.  We also have Guilherme De Souza Godoi, Chief of the Freedom of Expression and Safety of Journalists Section at UNESCO.

At UNESCO, they're building a co-regulatory framework designed to protect freedom of expression and information as a public good.  Then we have two speakers discussing the Data Act.  We have Veronika Vinklárková, who is at the Czech Permanent Representation to the EU.  The Czech Republic is currently holding the Presidency of the Council of the EU.

She's chairing the Working Party on Telecommunications and Information Society, and overseeing the negotiations and work on the Data Act, as well as the negotiations on the Declaration on Digital Rights and Principles.

We then have Maiko Meguro, the Director for International Digital Strategy and International Affairs at the Ministry of Economy, Trade and Industry of Japan, known as METI.  She leads the team that operationalizes the DFFT (Data Free Flow with Trust) initiative.  She has worked previously at the WTO and at the European Commission at DG Connect.

I also want to introduce Axel Voss, a member of the European Parliament who has been there for over a decade, shaping European legislation in that capacity.  He's a member of the political group of the European People's Party, and an expert on data and digitalization.  Right now he is a shadow Rapporteur on the important AI Act that is in the midst of the legislative process in the European Union.

Finally, I would introduce Marc Rotenberg, the founder and President of the Center for AI and Digital Policy, a global network of policy experts and advocates in over 60 countries.  I find particularly intriguing the Artificial Intelligence and Democratic Values Index that the Center publishes, which is the first report that ranks national AI policies and practices.  We'll have a chance to learn about that one as well.

With that, I want to immediately turn to our first topic, which is the Digital Services Act and the Digital Markets Act.  If I can ask Prabhat Agarwal to first take the stage and briefly introduce what the DSA and DMA are about.  Then I would like to ask a few questions on their global influence.

>> I don't know who is in the room; we had a last-minute change and Prabhat Agarwal had to cancel.  We are joined by a colleague who was intimately involved in the preparation of the DSA/DMA, so Meno will take the floor in response to your question.  Thank you.

>> MODERATOR: Excellent, Meno, over to you. 

>> Meno: Thank you for giving me the floor.  I apologize for Prabhat Agarwal's late cancellation.  I have worked on platforms for slightly longer than Prabhat Agarwal, so I should be able to cover this, although they are big shoes to fill.  I'm specifically dealing with the international aspect of the DSA/DMA, recognizing the strong interest in understanding the EU's experience.  That brings me directly to the global aspects of the DSA/DMA.  To begin with, we believe we share a global problem and a global opportunity.  The two regulations I will quickly present here also translate what we believe are universal values, namely to put the citizen back in charge of online content moderation, based on innovative digital services.

So, very quickly, to set the scene on what the acts do: I think what maybe helps is to remind people just how central digital services have become to our daily lives.  The statistic about what happens every Internet minute is interesting; you can read that by now there are more than a million swipes on Tinder every minute, and hundreds of hours of video uploaded to YouTube every minute.

This is fundamentally reflecting that we have incredible access to information.

The fact that this is only going to accelerate, and that it is a good thing, is also reflected, for instance, in the recent relaxation of export controls by the United States on technology, including online platforms, to supply Iranian citizens, who actually use online platforms to access information and organize, as do other human rights activists around the world.

So these rules deal fundamentally with online platforms, social media, online marketplaces and search engines, digital services that are central to our lives.  We organize our lives around them.

What they try to do in my view is put the citizen back in control of the services so they optimally serve our lives.  Technology should always serve our society and democracy.  And they hold tremendous opportunities and we're not naive about the fact that they're going to accelerate even further.  But we need to make sure that they do so in a sustainable manner.

So indeed, the Digital Services Act is very much about the issue of online content moderation: what content do citizens have access to, and what are the fundamental risks they face online?  While not being naive about the technology accelerating and the fact that it holds great opportunity, there are also great risks, and we're not naive about that either.  There is of course terrorist content being circulated, and there are specific approaches in a global coalition that has been launched to tackle that issue.  The range of illegal issues online is very wide: it goes from illegal short-term rentals, which we are also addressing in the European Union, to illegal wildlife trafficking and issues like child sexual abuse material.  And there are many issues to understand, such as what happens when content goes viral or is recommended, and looking at the advertising-funded business models of many players.

The Digital Services Act in this sense takes a general approach that applies to all sectors of industry and society, and it also takes a general approach in terms of applying to all types of illegal content and societal risks, including disinformation, which is not necessarily illegal but can be very harmful.

So it applies a procedural approach to make sure that we as a society, and that includes Government, Academia, Civil Society organizations, and citizens themselves, know what those risks are and how they are being mitigated; so that we can all decide which services do that well, which services we want to use, what the value propositions are, and how we can ourselves take action to improve things and make sure that we get the content we want.  Such that in the end we are all acting in a safe place, but also a place in which we get access to the content that we want and that is valuable to us.  In that sense we should try to go beyond this idea of facing a tradeoff between freedom of expression and safety.  It is really about optimizing all fundamental rights online.

>> MODERATOR: This is extremely helpful.  Could you say a few words on whether you see the Digital Services Act impacting non-European citizens and non-European companies?

>> Meno: Absolutely, my pleasure.  Sorry if that was long-winded already.  It is a vital topic.  You are right to say the EU needs to get this right.  For the immediate global effect, we look also at enforcement within the EU.  Elements such as transparency around recommender systems, and around how platforms deal with minority languages, will be of interest to the globe already.

Having enforcement experiment with understanding algorithmic amplification, and working with third-party auditors who have to audit the mitigation measures, will produce important findings for the rest of the globe.

Also in the EU, we will build up the auditing and enforcement capabilities.  There is a specific provision on data access for researchers, and, again, the research also needs to be made public.  The DSA is very much about transparency.  That benefits the globe.  That is a direct effect that comes before possible alignment or actions we can take at the international level.

>> MODERATOR: If I can give a hard example.  The DSA bans targeted advertising to minors.  Do you think it is sustainable for a company like Facebook to continue to target minors in other jurisdictions, or is that going to change expectations, change the global regulatory demand for replicating the practices elsewhere as well?

>> Meno: It is a very good question.  Very often we are threatened that the service quality in your jurisdiction will be lower.  If you take a societal approach, I would say that under these rules the service quality will be the highest.  I would find it hard to imagine that citizens in other jurisdictions would accept lower levels of very elementary safety provisions.

>> MODERATOR: Absolutely.  I will come back to you as well.  If I can bring in Guilherme De Souza Godoi here as well: give us a few words on how you believe the DSA contributes to a global, interoperable Internet, and on what you want and expect the global effects to be.

>> Guilherme De Souza Godoi: Thank you, pleasure to be here.  Thank you for the invitation.  Good morning, good afternoon, good evening for everyone. 

Actually, we at UNESCO are a global organization.  For us, when we are doing our work to protect and promote human rights in general, and in our area of freedom of expression in particular, we have the exercise of looking at the universal system of human rights but also at the regional systems of human rights, in this case the European one.  It could be the inter-American one or the African one, systems that have their own structures of courts, regulations and Commissions on human rights.

So when the EU decided to move forward with the DSA, we could remember the Audiovisual Media Services Directive or, before that, the Television Without Frontiers Directive.  Those instruments were important to previous discussions when we were looking at the regulation of the broadcasting system, for example.

So, on your question about the protection of children in relation to advertisement: I remember working on these things in the broadcasting area.  The path the EU took to discuss these issues was important, in a comparative perspective, to the universal system of human rights.

So what we think at the UNESCO level is that the lessons learned from this EU process are very much related to what Meno was saying on how we do this balancing act.  This balancing act, for instance when a judge needs to make a decision weighing privacy against safety, is particularly relevant for the global discussion, but obviously in a much more complex environment.

Because when you look at the institutionality that is needed to do these things in line with international human rights law, that institutionality includes a strong rule of law process and independent regulators.  When you try to export that, or have The Brussels Effect, to use the title of your book, it is not that simple.

We need to make sure that any attempt to copy, understand or implement similar regulations in other Regions and at other levels does not only look into the specific regulation, but into all of the structure that is needed to protect human rights: again, independent judges, independent regulators.  This would be the balance that we expect.  And in our case, as you mentioned in your introduction, the Director-General of UNESCO is convening a global debate about regulation in February of next year.  Of course, we welcome the discussion that you have already had on some of these issues, but we need to look into the plurality of the discussions in different parts of the world.

>> MODERATOR: Very interesting and helpful, Guilherme De Souza Godoi.  A quick follow-up question: maybe you can comment on whether you see the democratic world as aligned in its understanding of how we ought to regulate online content, how we balance the various concerns about the content with the concern about getting it right.  I would like to hear from you whether you see the DSA as very much in line with the Declaration for the Future of the Internet or the Global Digital Compact, or whether there are differences that you think ought to be further discussed.

>> Guilherme De Souza Godoi: I will start with the first part of the question; it is hard to go one, two, three, four.  I think about the other processes, because we as the U.N. are not regulators as the EU is.  What the different processes offer is guiding principles to the stakeholders: offering them the opportunity, if they want to do it well, to self-regulate; enabling Civil Society to hold the actors accountable; and providing multilateral policy guidance and multilateral policy instruments, always in line with international human rights standards.  This is the first part that is different: in the universal system of human rights, there is not a regulator that can enforce compliance and so on.

On the other part of the question, I have the impression we are more or less agreeing on the different problems we have, but we still have a long way to go in agreeing on the details, and they matter a lot here.  For instance, I'm sure everyone agrees that transparency is important.  But what exactly is transparency?  There we are not agreeing on all of the different aspects.  We all agree it is important to counter harmful content, but what that means is not that clear.

The last thing that UNESCO wants to underline: in many situations we have a reduced view of freedom of expression.  If we look at the Universal Declaration of Human Rights, freedom of expression is defined using three verbs.  The verb "impart" is the right to speak, free speech.  But there are also the rights to seek and receive information, culture, entertainment.  We need to understand that when the digital ecosystem is flooded with disinformation, misinformation, hate speech and so on, it is not only about protecting the right to impart; it is also about protecting the rights to seek and receive.

We need to do a balancing act not only in relation to other rights but internally, within the right to freedom of expression.  This kind of more sophisticated discussion of the right to freedom of expression is something we need to work on, connecting it with the idea that this is a public good and how to protect that.

>> MODERATOR: Terrific.  Thank you so much.  Can I get back to our first speaker.  We know the decisions on content moderation are made by a handful of companies that control the platforms.  Can you maybe help us understand what the DMA contributes to this conversation about regulating the companies that are, in many ways, in charge of governing the Internet.

>> Meno: We view the DMA as the other side of the coin of the DSA.  If you, as an approach, require platforms to take more responsibility and more action, you would also want there to be a vibrant pipeline of platform companies that can actually contest each other and compete on the quality of the services they provide to citizens.

This is not to say there isn't a role for the whole ecosystem; the DSA also addresses everything around platforms: Government, Academia, others.  But the platforms are an important part of the solution, without putting content moderation in the hands of a single party.  Platforms do important removal work, take-downs based on terms of service; they need to do that consistently, of course.  The DMA is about contestability.  It is a fairly new concept where you don't talk about competition in a certain market but observe that platforms benefit from data-driven network effects that are almost impossible for newcomers to overcome.  What you need is a sort of pressure on this central intermediary to continue to innovate and invest in the quality of services, including content moderation, which is fundamental to the business model because they created a safe space for users to interact.

Which the Internet absent platforms didn't provide in the same way.  So the DMA sets rules for gatekeepers only: the very largest platforms that are truly a gateway connecting end users to other end users or businesses, which can be content providers like media broadcasters.  The rules require them to act fairly and allow for what we call vertical interoperability and data interoperability, clear-cut items, so that there is innovation on the platform function itself: social media, payment services, everything that platforms tend to bundle up into a single service where they leverage their central position as a gateway to users and services.  It is different from, for instance, the competition rules, where platforms can argue efficiency defenses and the rules are more high-level and applied on a case-by-case basis.  These are rules that require compliance beforehand.

On the DMA, as on the DSA, we're working in parallel on implementing it, making sure the gatekeepers are identified and that they put in place measures to make sure that this contestability becomes reality.

>> MODERATOR: Excellent.  Thank you.  I hope we have a chance to return to some of the questions, because they're big topics and there is much more to discuss.  This is a helpful overview.  In the interest of time, I will now move to the Data Act.  I will invite Veronika Vinklárková to help us understand the main contribution of the Data Act to this process of transforming the digital economy in a way that is consistent with European values.  Maybe we can unmute Veronika Vinklárková so that we can also have her weigh in?

>> HOST: If I can ask tech to unmute Veronika Vinklárková as the participants online, please. 

>> MODERATOR: Maybe the tech will unmute Maiko Meguro who I will invite to speak after Veronika Vinklárková. 

Now we can hear you.

>> Veronika Vinklárková: Thank you for unmuting me and for inviting me to be part of this interesting discussion.  Thank you for introducing me before; I don't need to go into much detail.  I will say that in the Working Party on Telecommunications and Information Society we're dealing with data and AI; it is a vibrant working party where most of the digital initiatives are being discussed and EU Member States coordinate common positions.  I would be happy to talk on the topic of the Data Act and tell you more about what it is.

In a nutshell, the Data Act was presented in February this year.  The main goal of this proposal is to unlock the potential of the data economy, and this means that there are three main elements.  I am seeing in the chat that I can turn on my camera; to be honest, I can't do that myself, it needs to be done by the host.  If the host will allow me to turn on my camera, I would be happy to be seen as well.

So, in the meantime, coming back to the three main elements of the Data Act.  First of all, the Data Act is about unlocking the potential of how we store, access, and share nonpersonal data.  We are talking mainly about data generated by IoT products and smart devices.  This is something that was not done in any previous legislation.

So, moving to the effects, I think we can say the Data Act might heavily impact the future of IoT products.  It establishes data access rights for users, but also design obligations for the products and the manufacturers.  So there is already one very visible effect the Data Act might have, which is how products will be built and how related services will be offered to users together with the products.

The second very important part of the Data Act is having a tool for accessing data, and now we are talking about both personal and nonpersonal data, in cases of crisis situations and exceptional needs.  This is focused more on B to G scenarios.  B to G, yeah, I can be seen.  Nice to see you all.  This second element is about business-to-Government relations: how Governments can access data held by private companies.  This is very important because COVID showed us that sometimes there are no easy tools for accessing such important data.  So this is a very important element of the Data Act as well.

Last but not least, we should not forget about the Cloud market.  The Data Act talks about how the Cloud market works, introducing new rules on how users can change and switch between Cloud providers; in the Data Act they're called data processing service providers.  It is all about switching between providers and allowing greater interoperability so that the market works more efficiently.  This is a little bit on the Data Act; I would be happy to follow up on the effects.

>> MODERATOR: Veronika Vinklárková, maybe you can comment on the opportunities and challenges in implementing this particular act and say a few words on whether you expect it to set the model for other countries.  After you, I will turn to Maiko for the Japanese experience.  But from your perspective, what are the opportunities and challenges, and what are the global effects?

>> Veronika Vinklárková: The opportunity is about unlocking the potential.  There are a lot of data just lying around, not used efficiently.  This is the main opportunity: how to allow efficient use of such data in a manner that lets our economy develop further, so that we can have more and better services offered on the EU market.

But of course, this comes with some problems ‑‑ I would not even call them problems, but challenges in implementation, because it is a very technical proposal.  The first challenge is to get the regulation right.  We are at an early stage of negotiating.  We need to make sure that the regulations and rights we are introducing can be enforced in an effective manner.

I think that especially when we talk about this part of using nonpersonal data from IoT products, it is really important to talk to stakeholders and really know how industries work, and what problems we can address to make sure that it works efficiently.  It is not easy.  We cannot draft the regulation from the table in the Parliament and in the Commission and in the Council.  It needs to be discussed.  It needs to be consulted on, so that we know what the problems and concerns are and how to make sure that it is efficient and applicable in practice.  That is a main concern.

Another concern is timing.  In the Data Act we are introducing design obligations, obligations on manufacturers for how they should design and build products.  This will have effects far into the future, because we need to take into account that every product has a life cycle.  It will be a long run to make sure it all works in the end.  In a nutshell, I think this is it.

The second question is about the implications for non‑EU actors.  The product rules will be quite impactful.  There are a lot of non‑EU manufacturers placing products on the EU market.  They need to obey the rules and be in line with the Data Act.  I think there it is very visible; it will have a very diverse impact.  When it comes to non‑EU users, in the end there might be a spillover effect.  Once a product is designed in a certain way for users to access the data, it is easy to take the same product and offer it outside of the EU market, also giving non‑Europeans the same access rights, because it will be there by default.

>> MODERATOR: Terrific.  Very helpful.  May I now turn to you, Maiko Meguro, next, because I think we would all benefit from the experience the Japanese Government has in this regard: data free flow with trust.  Maybe you can help us understand what it is about and how it compares to the Data Act.

>> Maiko Meguro: Sure, thank you for the introduction and invitation to this interesting panel.  It is a pleasure to be here.

By the way, I am trying to put my video on, but I can't, so I think the administrator has to unlock it.  Great, now I'm on video.  Hello, thank you so much.  I'm working on the materialization of DFFT, data free flow with trust.  I think people know this from the G20 Summit Declaration, which states that data must flow while privacy, security, and any other important legitimate interests are protected.  A long time ago we felt that data free flow was like free trade in goods and services.  It turns out data is different, because it concerns privacy and national security and also comes with threats to intellectual property.

This is where this concept came in, data free flow with trust.  Basically, we need to unlock the potential, or the optimization, of the different important elements or fundamental values attached to data governance.  We can't really put one value above the others.

Privacy must be protected.  National security must be secured.  Then data flows.  That is how we perceive the trust concept.  But then this has to be implemented and materialized as an actual policy framework that can enable the data to flow and let us overcome the barriers.  So this is what we are working on as the Japanese Government for the priorities of the G7 Ministerial track for next year, because we're taking over the presidency.

We are facing a lot of barriers in moving to data free flow with trust at the moment, because people's ideas of what optimized data governance looks like are in quite different places.

Actually, I used to work at the European Union.  I understand how proud and how advanced European people and legislation are in terms of data governance.  Of course, other countries have completely different ideas about how data should be governed.

When I speak to many different countries and partners from the G7 and G20, I see tremendous gaps in views on how data should flow and what fundamental principles are attached to it.  This is why we're working on interoperability between different approaches to and governance of data.  This is where we are keen, or interested, to look at how data governance in the European Union approaches this question of interoperability, or, as I think was mentioned by Meno, what the relevant aspects of the DSA/DMA and other important European digital legislation are.

In any case, the international discussion is developing quickly at this moment.  The Japanese Government is quite keen to develop international cooperation that can bridge different approaches to data governance, including European ones.

From this perspective we are watching closely how the data free flow with trust initiative can work together with, and build constructive interoperability with, what each jurisdiction has expressed in key legislation such as the Data Act in Europe.  So for now, that is it from me.  I'm quite open to further discussion on this point.

>> MODERATOR: This is very interesting.  It is particularly important, since we're also observing a lot of pressure toward data localization in different parts of the world.  So trust has not always been the keyword; it seems it is not even the default for how data flows.  For some Governments, the idea is that data needs to be localized and much more must be done before data is allowed to flow.  It is interesting.  I am curious if you can explain how Japan in particular has coordinated with the European Union and how the dialogue between the two Governments has worked.

If you think about your work on the DFFT and the European Union Data Act.

>> Maiko Meguro: Sure, we have established the digital partnership.  Data is one of the prioritized areas we will work on.  We're speaking with colleagues from the European Union in a Working Party to understand what our data governance should look like, meaning what technological choices we have.  And we're seeking to see how we can connect the different layers of data governance and trying to see what the options for interoperability between us would be.

So of course, working toward common, or very similar, rules is one way.  Between the European Union and Japan, for privacy regulation, we have the adequacy decisions, which really help.

At the same time, many companies choose to rely on their respective contracts, because they are also working with many different parts of the world.  We're trying to find out what actual barriers companies ‑‑ and not only companies, but research institutes and others ‑‑ are facing in the transfer of data.  We established the Expert Group on data free flow with trust and did research with the different actors on what they actually see as barriers.

Interestingly, many actors mention that localization and differences in regulation are difficult for them.  But actually the priority barriers are more about transparency and interoperability on the technical side.  Many say they would like to develop applications or their own solutions to the barriers they face, but they run into a lot of issues: not only the differences in governance, but also the many different ways that companies and Governments build their own data governance on their technological choices.  So between the European Union and Japan, we have to look at the different layers of interoperability to enhance the data flow between the two jurisdictions.

>> MODERATOR: Terrific.  Thank you both, Maiko Meguro and Veronika Vinklárková.

I want, in the interest of time, to bring our third piece of legislation into the conversation and invite -- if I can have Axel Voss, who I understand to be in the room -- to speak.  Maybe you can walk us through briefly: what is the main goal of the AI Act the EU is legislating?  And give us a sense of the time frame in which we can expect this to emerge from the Parliament and become adopted legislation.  Axel, over to you, please.

>> Axel Voss: Thanks for the nice introduction, and thanks also to the Commission for creating this event.  I think it is very important to have this on an IGF panel as well.

The goal of the AI Act is that we're trying to build excellence and trust for our citizens in using algorithms and having these be part of day‑to‑day life.  We would also like to ensure human rights, because some of these algorithms or artificial intelligence products might be, in a way, harmful, so we need to strike a balance here.

We would like to strengthen our research and industry capacity, so we have a frame for our society for how we would like to deal with artificial intelligence and the devices that might contain it.  Here we have a lot of issues, but regarding the time frame, I will give you an idea of that too.

It is expected so far that we will come forward in the European Parliament by the end of March, probably voting on it in the Committee in May at the latest, hopefully.  And the Council is already a step further, having its own approach on this.  And then we can start the trilogue before the summer break or directly after the summer break.

Then hopefully we will finish this at the end of next year.  This is how we are thinking about it right now.  The importance of this legislative tool shouldn't be underestimated.  If we do not do this right, then all the wealth and competition and so on goes to other Regions of the world.  So we have to be very careful here not to end up on the wrong track ‑‑ or, let me put it this way, to come to the right track.

So this is all about the definition.  It is about the governance.  It is about the safeguards for the deployer and the safeguards for the system itself.

Especially, we have to focus here on what an AI high‑risk system is, because the AI Act itself would like to focus only on high‑risk systems.

We need to think about what we should do with the so‑called general purpose AI.  We would like to have trustworthy AI that is lawful, ethical and robust.  And then we come to a point that is important: sandboxes.  We have in mind two cases.

One: a company is new and has no data at all.  How can we provide data for this new business model or new idea?  Or: there is a company that already has data in place but is using it for a purpose other than the one it was collected for.  How can we manage this one as well?  Therefore, sandboxes are very important, especially if you are linking this to the GDPR, where the use of personal data is not forbidden as such but is very restricted.  And to come to algorithmic results that are, as intended, nondiscriminatory and nonbiased, you need massive amounts of data to create what we have in mind.

And we're creating a governance structure here, with an Expert Group that the Commission will then work with, so that we have the expertise to decide what is high risk and what is not in the end.

The question of liability is not integrated in this proposal so far; it is in another proposal that was adopted a couple of weeks ago.  But here, also, we would like to come to an end this term.

So meaning April 2024. 

>> MODERATOR: This is very helpful.  Thank you, Axel Voss.  In my work I have sometimes suggested that one of the reasons European legislation is so influential as a template is that it is the result of a compromise.  You have many different political views and Member States that need to sign on to the legislation.  In that sense, the legislative drafting results in the kind of legislation designed to work in different ecosystems, which is sometimes easier to export and implement in other jurisdictions, compared, for instance, to legislation in the United States.  I think the way you discussed some of the sticking points and important decisions shows that deliberation is part of the balance that results from the process.

When you are legislating, do you feel it is just for Europe, or are you conscious that it will have a potentially global influence?

>> Axel Voss: We have in mind that this has potential global influence, because we are also expecting that all AI systems working as a kind of service in the EU will respect these ideas and our values in the legislation.

But because we are sitting with around seven different political parties, with seven different opinions, this is not always on our mind when we are discussing the definition of sandboxes and so on.

If you step back and watch the whole situation, of course, you think this might be something ‑‑ a kind of key element for the future, and hopefully also for the like‑minded countries who might think in the same direction.

>> MODERATOR: Perfect.  That is a great segue for me to move now to Marc Rotenberg and have him share more of an American perspective on the AI Act.  I also want to talk about the AI and democratic values index.  Maybe you can briefly give us an American reaction to the AI Act?  What is it accomplishing?  What concerns might you have about what the Europeans are trying to accomplish?

>> Marc Rotenberg: First, let me thank the Commission for inviting me today.  It is fascinating to watch the reaction on the U.S. side.  It is fair to say there is a wide range of opinion.  Certainly, as a general proposition, the tech industry doesn't like to be regulated; there is a sense that if they can avoid regulation, they would like to do that.

On the other hand, there is growing recognition that the GDPR helped stabilize public concerns about privacy in the United States, because the U.S. Congress itself was having difficulty passing legislation and, as you have observed, companies in the U.S. that moved to comply with the GDPR were able to establish trust and confidence in their services.

So, almost unlike the hostile reaction to the GDPR, there is an openness and willingness to see how the AI Act might help build confidence in this instance.

That is on the tech industry side.  I think the U.S. Government itself likely views the initiative more favorably.  We have seen positive statements about the EU's efforts to move toward the AI Act from the Secretary of State, Antony Blinken, from the National Security Advisor, Jake Sullivan, and from Lynne Parker, speaking on AI policy.  And we have, from the Office of Science and Technology Policy, the Blueprint for an AI Bill of Rights.

To clarify confusion around that document: it was never intended as a legislative proposal.  Executive offices in that area don't have that authority.

As a white paper it is influential and sets out five key principles that do align with many of the pillars contained in the EU AI Act.

I think the U.S. is anticipating that the AI Act will be adopted and that it will have consequences for U.S. firms.  I don't think the opposition is as strong as it certainly was with the GDPR.  I do think some in the U.S. Government actually welcome the initiative on the EU side.

>> MODERATOR: Helpful.  Maybe we can extend the discussion to China.  I would like to hear your thoughts on whether The Brussels Effect will reach Beijing or whether Beijing will reach the rest of the world.  We know China is active in developing AI, but it also attempts to establish a Chinese policy view around AI.

Maybe you can describe a little bit your thoughts on what influence China will have in the future development of AI.

>> Marc Rotenberg: Right.  We have been fascinated studying the regulatory approach of China.  In November, China adopted the Personal Information Protection Law, largely modelled after the GDPR.  It is an influential and comprehensive approach to data protection and even introduces improvements.

There is a new regulation on recommendation algorithms that looks very much like Article 63 in the DSA.  And what we are beginning to see in China is an approach that combines a commitment to leadership, some would say dominance, in the tech field ‑‑ that is certainly an ambition for 2030 ‑‑ with regulation to create the circumstances that enable the continued presence of Chinese firms.  This is fascinating.  In our analysis, we might say we view the United States as leading on innovation and see the EU leading on regulation.

I think in this moment, we could fairly say that China has the ambition of trying to lead in both domains, creating a Beijing effect and certainly creating new challenges for the U.S. and EU. 

>> MODERATOR: It is fascinating to hear your views on China's ambition.  We are all watching whether that ambition will translate into concrete outcomes.

I would like you also to tell us about the research you are doing on the AI and democratic values index.  What are the key findings you can potentially share with us? 

>> Marc Rotenberg: Great, I will briefly share my screen, only to show you one page from our website.  You will find our website by just searching for the Center for AI and Digital Policy.  This page is from the second edition of our report, released in February of last year.  We were pleased to have Eva join us, along with Stuart Russell and others, to talk about our findings.

You will see on the left column, some of the key highlights.  We have conducted the first comparative study of national AI policies and practices (audio distorted).

And a series of metrics, so we have the ability to quantify, against 12 key metrics we have established, how countries are doing in terms of democratic values.

And it is a useful tool, because at a moment in time we can compare countries against one another and also see changes in countries' practices over time.  I would like to give a shout‑out to UNESCO, because last year we actually modified our methodology to acknowledge the support that countries expressed for the UNESCO recommendation on AI ethics.  That was a big step forward.  We're modifying the methodology further as we explore the implementation of the AI ethics recommendation.
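[Editor's illustration] The idea of scoring countries against a fixed set of metrics can be sketched in a few lines of code.  The yes/partial/no rating scale and the averaging below are hypothetical simplifications, not the actual index methodology described by the speaker:

```python
# Hypothetical sketch of a metrics-based index: each country is rated
# on every metric and the ratings are averaged into a single score.
SCORES = {"yes": 1.0, "partial": 0.5, "no": 0.0}

def index_score(ratings: list[str]) -> float:
    """Average a country's metric ratings into a score between 0.0 and 1.0."""
    return sum(SCORES[r] for r in ratings) / len(ratings)

# Two hypothetical countries rated on four metrics each; comparing their
# scores at one moment in time, or the same country's score across years,
# mirrors the comparisons described above.
country_a = ["yes", "yes", "partial", "no"]
country_b = ["partial", "partial", "yes", "yes"]
print(index_score(country_a))
print(index_score(country_b))
```

A single averaged number is easy to compare, which is what makes such an index useful for tracking change over time, though the real methodology weighs far more nuance than this sketch.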

>> MODERATOR: Helpful and valuable work.  I'm glad you shared that so we can all continue to watch the results.

Let me at this point turn the floor to the questions.  I want to make sure our audience will have a chance to engage with our speakers on the topics.  Since I'm not in the room, I would ask Pearse O'Donohoue to moderate.

>> Pearse O'Donohoue: We're opening the floor to those in the room and online.  I will ask: would anyone, physically or electronically, like to put their hand up and make a brief comment or ask a question?  Yes, please, introduce yourself and your organization.  Thank you.

>> ATTENDEE: Hello, everyone.  I won't say good afternoon or good morning because I don't know the time zones.  My name is Urdi.  I am from Rutgers international, and I am an Ethiopian, so welcome to Addis.  My question comes as someone not from the countries that were spoken of, and I was surprised by a lot of the documentation and legislation that has been shared, because these are things we always look forward to, especially living in countries where, when we talk about human rights, it is a question between the right to impart information and the right to life.

We're here trying to survive on a daily basis on information or misinformation that winds up taking the lives of so many.

My question would then be to the team, because there are so many amazing pieces of legislation.  As a person coming from a country that would love to have some regulation on the disinformation or misinformation happening across the world, are there tips you can share?  Just one from each of the speakers, maybe: something we can take back to our Governments and say these are things we would love to have regulated, from the countries that have done such great work, whether in the EU or Japan or the U.S.

Would love to get some practical tips.  One thing that could work that we can take away?  Thank you so much. 

>> Pearse O'Donohoue: I will relay your request.  Would any of the fine speakers like to take that challenge?  Please.

>> Marc Rotenberg: I will make one suggestion.  We have to be careful not to put too much burden on people to deal with these problems.  I agree with the spirit of the question: Governments need to conduct oversight and legislate, so that we're not left on our own sorting through this.

I will make a small technical suggestion.  I have a background in the evolution of the Internet.  I helped in the early days with the creation of the dot‑ORG domain, to promote noncommercial use of the Internet.

One of the things I took away is that the ability to see the string of characters associated with the website you are going to can be extremely helpful in determining its validity or authenticity.  One of my concerns over the last several years is the increasing difficulty of doing that: we hide our links behind text in email, and we go to platforms where we don't know which site we will end up on.  So I will say: if you can examine the URL, that will intuitively give you a lot of information about the website you are going to and the quality of the information you are receiving.
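[Editor's illustration] The tip about examining a URL can be made concrete with a small sketch using Python's standard library.  The helper name and the domains below are purely illustrative, not from the session:

```python
from urllib.parse import urlparse

def describe_link(url: str) -> str:
    """Summarize the parts of a URL that help judge where a link really leads."""
    parts = urlparse(url)
    host = parts.hostname or "(no host)"
    # The last two labels of the hostname identify the site operator;
    # everything before them is a subdomain that can be chosen freely,
    # which is how deceptive links imitate familiar names.
    domain = ".".join(host.split(".")[-2:]) if "." in host else host
    return f"scheme={parts.scheme} host={host} domain={domain}"

# The second link *starts with* a familiar name, but its hostname shows
# it actually leads to a completely different operator.
print(describe_link("https://example.org/news/article"))
print(describe_link("http://example.org.evil.test/login"))
```

Reading the output of the second call shows the effective domain is `evil.test`, not `example.org`, which is exactly the kind of check a reader can do by eye in the browser's address bar.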

>> HOST: Can I call on our colleague Meno that might comment on this question? 

>> Meno: Yes, a very important question.  One tip is difficult to give, because there are many.  I would say, specifically regarding platform regulation, it is important to see that a whole‑of‑society approach is the key.

So before moving to regulation, I would say Governments should focus on the ecosystem around platforms, including quality journalism, an independent judiciary, the building blocks we all know of, so you can embed your regulation in a fundamental rights approach, which then allows you to leverage the ecosystem to make societies resilient.  That is a tall order, but something that is really crucial, precisely to prevent creating a space for suppressing freedom of speech.

>> MODERATOR: I can see Maiko Meguro has her hand up.

>> Maiko Meguro: I would like to make a comment from the perspective of Japan and, in the spirit of diversity of opinion, a non‑Western perspective.  You would want to understand the ecosystem around what you are trying to regulate, and that ecosystem can differ from place to place.  So I'm not sure what the most relevant ecosystem analysis for your own society would be, but it is very important to understand what your society is actually built on.  And Government should take responsibility; it is sometimes very hard, but you have to draw the line on principles.

But beyond those principles, different Governments choose different approaches.  European society basically chooses regulation; they have very strong regulation, with a lot of effort put into drafting legal documents.  That works, particularly for a society like Europe, which consists of 27 different cultures and countries.

But Japan is a more homogeneous society.  We have an unspoken common sense, so we choose certain flexible approaches: agile governance.  We set strong principles, but beyond that we let the stakeholders actually work out how to implement them.  The Government also sets up a review system, and multistakeholders can join in to interact with Government on how they want to build their own governance to implement the principles.  That can also be another approach.

>> HOST: Guilherme De Souza Godoi I see you would like to take the floor. 

>> Guilherme De Souza Godoi: Briefly.  This is very important.  Unfortunately, we don't have simple solutions for complex problems.  On this issue of disinformation, I think we have now reached the point where we need to go into the specifics, because we have already discussed a lot.  We need to counter disinformation and misinformation, and what Meno said is important.  Countering this in an election process is not the same as countering it on climate change or on migration.  We need to go to the specifics.

What is the risk assessment that countries and other actors need to do when countering disinformation in an election, or on climate change or health issues?  There are important differences here, and different strategies need to be considered when we are talking about those different processes.

So we need to go further in this discussion to really reach the results we want, always using a basic human rights based approach.

>> HOST: Thank you.  Is there anyone else in the room who would like to raise a question or make a point?  Yes, please. 

>> ATTENDEE: Thank you very much.  I'm Sophia, a youth delegate.  It seems like in the beginning we had the idea to put the citizen back in charge.  However, it was said this does not mean full transparency, and that Government in the end should take care of that.

But if you want to put the citizen back in charge, you need to inform the public and also engage Civil Society.  My question is: how do you make sure to sufficiently inform the public about the legislation, in the EU and also on a global scale?  Thank you.

>> Axel Voss: So thanks a lot for this question, and also for attending the IGF here.  I would always say the European Union is the most transparent organization in the world, but of course you have to make your own selection from this enormous amount of information, because with every legislative act there is also specific information around it on what you can find and how it can be used.

Of course, all the legal tools also have in mind, if you generalize this a little bit, how we can survive digitally in a world that is developing in a more and more bipolar way.

This is very important for the EU.  And here, of course, this has to do with human rights, individual rights and so on.  We're trying to get this balance, but we also need competition, so we have to keep in mind that it should be balanced everywhere.  Because of the fast evolution of developments, we as legislators, I would still say, are not flexible enough.  We need to start thinking about different forms of being a democratic legislator in order to catch up with the developments.

Not years after a problem has occurred, when we then think about it and take our time discussing.  No.  This is probably not the majority opinion in the Parliament, but I still see a lack of adapting to new technologies immediately, as soon as they are out or as soon as we have an idea of them.

But this is our way of approaching it so far, trying also to get trustworthy elements into all of the legislative papers, and keeping in mind that this should serve our citizens.

But of course, industry aspects are also part of it. 

>> HOST: Thank you.  Marc Rotenberg I see you would also like to comment on this question.  You are on mute. 

>> Marc Rotenberg: In our democratic values index, two of the 12 metrics relate to the ability of the public to participate in the decision making of AI policies.  One question asks are the countries' policies for AI readily available to the public?  And another question asks, is there meaningful opportunity for public participation in the formulation of the national AI policies and strategies?  And actually the good news here, having done this detailed comparative analysis, is many countries have created open processes to seek public comment and incorporate public comment.

We have actually dedicated a page on our website, called the Public Voice, to call attention to those situations where the public is invited to express their views.  So I think this was a very important question.  As we're thinking about how decisions are made, we should think not only about the quality of the outcome, which is important, but also about the quality of the process, which I would say can be equally important.

>> HOST: Okay.  Thank you.  Meno, I will give you 30 seconds before I give the floor back to Anu Bradford so she can wrap up before we close the session.  To you, Meno.

>> Meno: The Digital Services Act balances that need for intervention in an interesting way.  It injects transparency for all actors in the ecosystem.  Trusted flaggers need to be transparent, academic research must be published, and auditing efforts that are done need to be published, so citizens can understand what is going on, inform themselves, and take action to play their part in the content moderation that we need to do as a society.

At the same time, the Government intervenes to actually vet researchers to make sure they're independent, to select third‑party auditors, and to make sure flaggers don't overstep their role.  So there is a strong role for Government.  It is important, maybe to finish on, that the DSA maintains this cardinal principle: there is no general monitoring.

So it safeguards our citizens from being subjected to general monitoring by online platforms, which could otherwise be imposed by Governments.

>> HOST: Meno, I'm stopping you there.  We have to give the last word to our moderator, Anu Bradford, before we close the session.  Back to you before we wrap up.

>> MODERATOR: I want to use my final minute to extend thanks.  This was a wide‑ranging conversation that was informative, sometimes disconcerting, and promising.  I take comfort in listening to how many thoughtful individuals are bringing their own expertise to try to solve a very difficult set of governance challenges.  I think just a deep dive into the four different policy domains reveals that there is no single piece of legislation or policy that will help us govern all of the different challenges.

But at the same time, it reveals there is no single actor who has all the answers.  It is dialogues like the one we have had today that help Europeans and the rest of the world bring this conversation forward, use their expertise to appreciate the challenges, and use the tools they have to move us toward the vision that has been articulated, so that all of us can recognize the significance and urgency of the Digital Transformation and the importance of getting it right.  Thank you for joining this conversation and for the rich insights you shared.  Have a good day.  My best regards.

>> HOST: On behalf of the European Union, I would like to thank the speakers for their thoughtful and useful interventions, and the two questioners for their pertinent questions.  Anu Bradford, thank you for guiding us and steering our discussion.  It has been enriching for us, and we want to continue the discussion.  I wish you all a successful continuation of this IGF week.  Thank you all.  Good‑bye.