The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> PAUL FEHLINGER: Perfect.  Welcome, everybody, to our town hall.  I'm very happy to see familiar faces.  And we have an absolutely amazing line-up of speakers today, both physically here with us and also online.  So this is the town hall called Towards Ethical Principles for Responsible Technology.  And it is the first town hall at the Internet Governance Forum of Project Liberty Institute.  I'm Paul Fehlinger, of that new organization.  And before I tell you more about what we do, I would very much like to introduce our speakers today.

     And maybe each of you can just quickly mention your name, where you come from, and what organization you are with.  I will start to the left, with Elizabeth.

     >> ELIZABETH THOMAS-RAYNAUD: Thank you, Paul.  My name is Elizabeth Thomas-Raynaud, and I'm with the OECD Global Forum on Technology.

     >> HIROKI HABUKA: I'm Hiroki Habuka, a research professor at Kyoto University's faculty of law, and I used to work for the Japanese government until last year.  Thank you.

     >> PAUL FEHLINGER: I think it takes a second if you speak.

     >> VIVIAN SCHILLER: I'm Vivian Schiller.  We focus on media and technology and their impact on society, and we are based in Washington, DC.  Happy to be here.

     >> PAUL FEHLINGER: Wonderful.  And I just got a message from our panelists that apparently the Zoom link that was on our event page led to a different session.  Maybe I can ask the technical team to give me the right Zoom link so I can share it with the co-chair and former CEO of ICANN who is joining us remotely from Australia.  And with me remotely today is also Sarah Nicole, our policy and research associate, who probably is in the same wrong session right now.  So if somebody could send me by e-mail the right Zoom link, that would be absolutely amazing.  Maybe somebody from the technical team could take care of this.

     We want to talk today about the future.  We want to talk about what is next in the innovation pipeline and how we can foster responsible innovation, and look at things like the economy, artificial intelligence, web3, neurotechnology, and all of this.  The underlying question today is: how do we create a fair, sustainable and innovative digital future?  So those are very big topics.  And so is the mission of Project Liberty Institute.

     Our mission is to advance responsible innovation and ethical governance for the common good.  We were founded by Frank McCourt, and we do three things.  We catalyse solutions-oriented research to foster evidence-based governance innovation, and we are fortunate to have three founding academic partners already: Stanford University in Silicon Valley, Georgetown University in Washington, DC, and a partner in Paris, France.

     Our second mission pillar: we bring together leaders spanning international organizations, governments, entrepreneurs, businesses, technologists, academia, investors and Civil Society, and we lead three initiatives, one of which you will hear a bit more about, on topics such as the good technical governance of web3 technical infrastructures, or creating for public and private innovation makers, or on ethical principles for responsible technology.  This is particular for an organization such as ours.  And we have DSNP, the Decentralized Social Networking Protocol, which allows developers to build social web applications that enable users to share their data, and also gives them the possibility and the infrastructure to economically participate in value creation with their personal data.

     So what is the intention for this session?  We want to talk about ethical principles for responsible technology.  And we want to look at the entire innovation cycle: how technology is designed, how we invest in new technologies, how we commercially deploy new technologies, and how we regulate new technologies.  Both the substantive side of ethical principles, but also processes for responsible innovation in the ecosystem.  So with this, I would like to start with the first round of questions.  But before this, could I ask if somebody has the right Zoom link for online participants?

     >> I sent it.

     >> PAUL FEHLINGER: You are amazing.  Thank you.  I hope that Paul can join us soon as well.  Thank you very much, Hiroki.  If we think about ethical principles, the first thing a lot of you realize -- and you are all experts, I see, here on the panel, and I recognise some of you in the room have all worked on very important sets of principles -- somebody from UNESCO said that for AI alone they identified more than 630 different principles.  So there is no shortage of ethical principles for innovative technology.  And there is this sort of tension: we have so many principles that actually nobody knows any more what to do.  And that is a big problem for our society, which is heavily digitalized.  This is slightly worrisome for where we are in the ecosystem.

     So my first question to our amazing speakers: what is the state of responsible innovation from your view?  And also, as a sort of sub-question, what do you think we should have learned, or already have learned, from the past 10 or 20 years of technology development?  Where are we today?  I would like to give each of you the floor and ask you to really give us first your ecosystem view.  And you will have an opportunity to talk about the work you are leading a bit later.

     But I want to start with: what is the state of responsible innovation today?  Maybe, Elizabeth, we could start with you.

     >> ELIZABETH THOMAS-RAYNAUD: Thank you, Paul.  I will just start out with a few thoughts that I had before, and I might embellish them a little bit.  I think the first thing is there is quite a strong consensus around the need for significant and timely work on these questions.  We have seen and are seeing a lot of different new initiatives that are cropping up.

     I think there are traditional approaches and also recent lessons that give policymakers and stakeholders an impetus to say we need to start thinking collectively further upstream, and finding development in line with respecting human rights, building in privacy and security and accountability by design -- not as sort of add-on features, trying to encase technology after certain concerns are raised or issues have developed.  Global cooperation has been cited from the early discussions as needing to be sort of an essential feature of the effort, given the borderless nature of many of the technologies we are talking about and the impact they have on citizens around the world.  The approaches need to be broader than national or regional.

     It is important that they are directed at a human-centric, values-based and rights-oriented development and use of the technology.  That can't be done at the end of the cycle, and it is something that is sort of forcing us to say: are we getting this right?  Are we waiting too long?  Do we need to think about this in line with each other sooner in the equation?  And then we need to factor in those technological implications but also the societal ones, and offer resolutions that are adaptive to people, both in their culture and their needs.  I'm not contradicting myself: we need borderless global approaches, but with informed, sensitive consideration of that.  There is a strong need for that.

     There are instances where we have seen this working, in some of the dialogues at the IGF on certain issues that then go into a technical activity or policy-making activity, and you get better action and more initiative that can have impact in the community because of the thoughtful discussions that took place earlier.

     And then, of course, we need to think about how to enable the ecosystem and the technologies, not just mitigate some of the concerns that we could have.  The last thing I will mention on this: there are a lot of spaces, and there are lots of actors contributing, and all of that is positive.  It could be easy to say there are too many things happening.  Focus on whether we are moving in the same direction, reinforcing and having complementarity in the efforts that we are making, and then sort of playing to the strengths and linking those up, rather than trying to create a one-stop shop for everything.

     >> PAUL FEHLINGER: Thank you so much.  I just recognised that Paul joined us.  Happy that it works.  Do you quickly want to introduce yourself?  And I will give you the floor after Hiroki.

     >> PAUL TWOMEY: I hope you can hear me okay.  I'm the co-chair of a group of 70-80 experts from about 29 countries concerned about data governance rules and technology and innovation, and I'm sort of a founding figure and CEO of ICANN, for those who have a long memory.

     >> PAUL FEHLINGER: Happy to have you with us remotely.  Hiroki, you used to be in the Japanese government, and you led important work on agile approaches to the governance of disruptive technology and cyber-physical systems.  I think the name was Society 5.0, which takes all of what we said, from XR to artificial intelligence, all of it together as a complex: how do you govern this?  You wrote a very important paper for the Japanese government, a strategy paper, in that regard.  I think you are very well placed to answer the question.

     So where are we with the state of responsible innovation today, when we look across the entire cycle?  Are we confident?  How do you assess the status quo?

     >> HIROKI HABUKA: Thank you very much for the extremely kind words.

     Yes, so let me talk about the history of AI governance, or AI regulation.  So as you mentioned, now we have something like 630 AI principles.  It started from around 2010, just after we started to implement the technologies.

     And according to my understanding, there are 600-plus principles, but the pillars are similar: fairness, safety, security, privacy, transparency, accountability.  We have all the things on the table.  But how to implement them is the real question.  And then after 2020 some countries started to make new regulations on AI, so apparently the EU AI Act would be one of the significant new regulations on AI.

     And also Canada is now discussing a new regulation on high-impact AI.  So there is a lot of discussion going on.  Now, Japan hasn't adopted a comprehensive approach; we take a more specific and more soft-law-based approach.  So let's see how it goes.  I think it should always be risk-based.  Whether you regulate or not, we already have some very good risk management frameworks for AI, such as the United States NIST AI Risk Management Framework or ISO standards.  Also Japan has some AI guidelines for companies, or the operators of AI, and they look similar: they say you should do impact assessment.

     And then the risks are assessed based on multistakeholder input and reviewed or monitored, sometimes by a third party.  Of course, the contents differ from each other.  Now we understand what we should do, and we should do the risk management processes in an iterative manner, or an agile manner I have to say.

     The real question is: what is the impact?  For example, what is the value that should be protected under generative AI, or how to balance the risks and benefits of AI?  For example, if you use AI in a camera in a public space, it will dramatically increase the privacy risk, but it will also dramatically increase the efficiency and safety of the public society.  How to balance those different values, which are in a tradeoff situation, would be the real question.

     So, you know, defining values, balancing values, or how to solve it -- I mean, the government doesn't have any idea about how to technically solve this question.  So we always need to, you know, talk with the tech people as well.  Those questions cannot be solved solely by the government.  And that's why we need multistakeholder dialogue in an agile manner.  The Japanese agile governance concept is based on that.  We always have to try to be more multistakeholder and agile in governance.  Governance doesn't only mean regulation, but also softer-law guidelines, Democratic processes, or corporate governance.  How to materialize the rule of law in a governmental and multistakeholder manner is the real question.  I don't think any single person has the correct answer.  All of us are struggling with that.

     >> PAUL FEHLINGER: Thank you very much.  That's interesting, because you highlight the fact that this is very much a process and a journey for all of the stakeholders to come to grips with.  I want to give the floor to Paul, who joined us remotely.  In your view, what is the state of responsible innovation in general today, and what do you hope we have learned from the first wave of technological innovation in the past one or two decades, if we look back?  What is your assessment of the status quo, Paul?

     >> PAUL TWOMEY: I'm optimistic and pessimistic.  So, as we would say in Australia, I'm going to have a bob each way.

     Let me say where I'm optimistic.  I think what we are seeing, interestingly, often from sort of larger or more established institutions, is a real sense of trying to think about ethics as it applies to the evolving technologies.  And I would point to a couple of things.

     You know, quite specifically, I think we are seeing people who are producing ethical frameworks -- initiatives in Silicon Valley, Santa Clara, and the centre for culture in the Vatican, of all things, working together on a road map for ethical technology development.  A practical road map.

     So there is an example of people who are sort of working on how you put this into effect.  A lot of companies that I'm aware of have also taken the lessons from our community and said: use multistakeholder processes to begin to identify issues early in the product development stage, so that they are not caught, as you said before, with having developed something and having to fix the problems afterwards.  How do you run a series of processes with various stakeholders that help identify issues?  That part is interesting.  AI is a good case study in some of this.

     I would say, to give an example of where I think I'm pessimistic, or where we have a challenge: it is one thing for established corporations to have a view about this.  It is another thing altogether for what happens in the startup space.  One example at the moment is facial recognition, where we have heard now through reporting that people like Google and others said: you should not do this.  And then Clearview turns around and breaches human rights and copyright or whatever, because it is a startup.  That dynamic is going to keep continuing.  And while I'm a fan of the destructive-creative aspects of capitalism and innovation -- it is all good -- there is a big ethical difference between boards and investors and VCs saying we want to upset an industry structure or supply chain and find new ways of innovating.

     Take, for example, the car-share and ride-sharing companies.  There is a big difference between that.  He said all of the students want to create businesses.  I said: sovereign risk.  The question is how long before the risk is going to hit you.  That is a route for the sovereigns to send signals to the VCs -- not just to the big corporates but to the VCs -- saying: be careful about the stuff you start throwing money at.  If it begins to breach human rights and other issues, they will come down on you hard.  I think that is an important issue.

     >> PAUL FEHLINGER: An amazing framing, from the international landscape to the view of one specific government on more agile processes.  Paul, thank you for mentioning the startups and the role of VCs, because those are roles we want to discuss.  I want to give the floor to Vivian, who has a very particular view as an accomplished journalist.  Where are we today?

     >> VIVIAN SCHILLER: My co-panelists have said so many smart things, I'm busy crossing things off because I don't want to be too repetitive.  I'm a journalist, and we are observers and reporters.

     A few observations.  We are too slow.  A lot of processes understand they need to be inclusive -- of course they do.  But the technology is moving so fast, and no matter how hard we try to future-proof what we are trying to govern, we are still always going to be behind.  And we need to think about that.  Related to that, I don't think we always have the right people in the room for the conversations.  There needs to be much more emphasis on technologists, in addition to all of the other key stakeholders looking at issues -- Civil Society groups and government, et cetera.  We need technologists.

     We also need big tech in the room.  I cannot tell you how many times I have been in rooms with big tech not represented, and when I ask, they go: well, they are the problem, so we don't want to let the fox in the henhouse.  But the fox -- I don't know how to continue with that metaphor.  First of all, they hold an incredible amount of power to make change, and they understand how a lot of the technology works better than anyone on the outside possibly could.  They need to be in the room.  That doesn't mean you necessarily have to have complete consensus in the room, including with big tech, about your decisions, but they need to be there.

     My next point is going to sound like it is the opposite of what I just said, but we also have to remember -- I have seen so much focus, in various contexts talking about AI, on: well, we have an agreement with Google and OpenAI and Microsoft, so we are set.  They are incredibly powerful players and need to be in the room.

     But there is an entire world of open source out there that is not represented, and that is just going to become more substantial.

     Just a couple of other points.  We are too quick to see regulation as the answer to everything.  Hiroki made this point: we need to think more broadly about governance.  And then I will also repeat something, because it is worth repeating, that Elizabeth said, which is that we need to be thinking much more upstream in the process.  The whole idea of safety or security by design is critically important.  Thank you.

     >> PAUL FEHLINGER: Thank you so much for the first framing round.  The reason why you are sitting here on the panel is because you all lead very important initiatives in the ecosystem around the use of new technologies, and I would like to give you the floor to explain a bit, to the people who follow us here in the room and online, what you actually do: how we design technology, invest in technology, build and deploy technology, and regulate technology.  Where in the innovation ecosystem do you fit in with the kind of work that you are leading?  Elizabeth, you are leading the OECD Global Forum on Technology.  What is the vision and role of this forum?

     >> ELIZABETH THOMAS-RAYNAUD: I feel like you just handed me the greatest ball to hit, so I will try to do it really well.  It helps me feel encouraged by the approaches that we are hoping to pursue with the Global Forum on Technology, which we like to call GFTech.  Those who know the OECD may be familiar with the fact that it has policy committees that work on policy topics and even agree on principles, and these are tables of predominantly government delegates from within the OECD membership, plus stakeholder communities in the digital policy area: the technical community, Civil Society, labour and trade.  They have the process for and work on the policy issues; the forum is not about that.  It is about opening up to a wider dialogue, with non-OECD members included, and with other stakeholders from other areas of expertise.  This was something that was launched during the ministerial on digital economy policy in Spain, in the Canary Islands, in December of 2022, and we had our first inaugural event alongside the ministerial event this June.  It is a venue for regular, in-depth, strategic and multistakeholder dialogue with the wider community: industry, academia, Civil Society.  We intend to do it through two tracks.  The first track is going to explore technologies that are identified as ripe for immediate work.  These technologies are going to be looked at through a lens of cross-cutting themes.  Sustainable development and resilience.

     Responsible and rights-based technologies to achieve human-centric technological transformations, and bridging digital divides.  The other track is horizon scanning, where we bring in a broader community, from an event or some activity like that, in order to explore longer-term opportunities or risks and figure out where the lights are landing on other technologies that may not be already explored and discussed, to identify and analyse the emerging technologies that may be of interest for work at a later stage.  We will probably end up talking a bit about the convergence issues.  If you talk about quantum, it is hard to do so without AI.  Synthetic biology: hard to discuss without the interplay of the technologies.  I gave it away, but the initial three technologies we will focus on in the forum are immersive technologies, synthetic biology and quantum technologies.  In order to go into those technology discussions, we are going to do what Vivian said we should do: bring the technologists into the room.  We have the government delegates working in the policy committees, and they will help us identify national experts, building up a community of broader scope.  But we are looking for real technologists, and we are also looking for experts in the ecosystem -- people who understand how the technology works in the ecosystem and what the implications of that are.  We are going to try to channel those insights.  And again, to the point of going too slowly, this is also an opportunity to use Focus Groups as a policy accelerator to orient the OECD and its partners towards the most relevant and needed policy work.  We take the priorities and do them first.  We share insights and perspectives and hopefully increase collective understanding of the technologies and the ecosystems I keep mentioning, and look to them to help identify gaps and bright spots and provide examples and data, and do what the OECD is known for: building up the evidence base to help inform policymakers.
     I will stop there.

     >> PAUL FEHLINGER: Thank you so much.  This is such a bold vision, I think, and so necessary in the ecosystem, to try to bring together the different threads and basically operate at this level.  We are often surprised by new technology, and I think this forum is an important initiative to get on top of the innovation curve.  Fascinating.

     Hiroki, you already mentioned you worked in the field of artificial intelligence.  Many might not have heard about what agile governance is.  Explain it, as they say, at the level of a 10-year-old.  What does it mean for responsible innovation?

     >> HIROKI HABUKA: The most simple meaning of agile governance is to make our governance systems agile and multistakeholder, in a distributed way.  The reason is clear based on our discussion so far: technology is changing so fast.  In Japan it takes at least two years to make a new regulation or even revise an existing regulation, and that is just too long a time considering the speed of change.

     We need to change our approaches.  What is the alternative?  The market.  But it doesn't necessarily work very well, for a lot of reasons -- for example, the huge information gap between the government and the private sector, in the sense that the government now has much less information than the private sector.  Or, you know, negotiating power: if you just want to use the service provided by the big tech company and don't have any options, you just click yes to the terms and conditions or privacy policies.  The market is not always perfect.  Then there are norms and ethics.  Again, good, but not always so helpful.  First of all, it is really hard to define the meaning of principles like privacy and security and safety, because AI is a system which moves just based on probability.  It doesn't give you a clear-cut answer; it always answers in a probabilistic manner.  So we need to define the values, which is really difficult.

     And also, we do not always correctly understand what the risks are, or how the technology works.  We easily get furious about easy-to-understand news about problems caused by the new technologies, but it is also difficult to understand the benefits of the new technologies.  For those reasons, none of those different mechanisms will work in a perfect manner.  We have to combine the different tools so that we can make our technology more trustworthy.  And the only solution, the only direction we can go, is to just try different approaches and see if they work or not.  And if something doesn't work very well, we should quickly update it.

     When we talk about regulations, the first question we should ask is: is this activity already regulated or not?  If yes -- if it is regulated, for example car driving or giving legal advice or medical advice -- then if you try to make AI do the same work, we need to ask the same question about an alternative regulation for AI.  Maybe it is better to have another regulation for it.  If the activity is not regulated for human beings, we have to ask why we need a new regulation just for the reason that it is done by AI.

     What the AI does is statistically analyse past data and give you the most probable answer, which is something that has been done by human beings for long years; AI can just do it more quickly and efficiently.  So we always have to consider these questions.

     And there will be no answer before you try.  You only understand the risks after you try.  That's why we really need iterative approaches.  In Japan, traditionally, we believe that the government shouldn't make mistakes; there should be no mistakes by the government.  But we just have to change our mindset.  We have to admit that the government can make mistakes, and so can the private sector and everyone else.

     That is what is necessary, just like in software development.

     >> PAUL FEHLINGER: It makes me think -- you yourself used the term mindset shift -- because it is almost counterintuitive, especially from a public sector point of view, to experiment and to potentially expose yourself to, or accept, a certain amount of risk or uncertainty.  But what I find very interesting in this approach is that it spells it out and says there is always uncertainty and risk.  You don't really know.  If you know what could go wrong but don't know to what extent, you call it risk.  An interesting sort of mindset shift to say: if this is what the reality looks like, what do we do?

     I want to give the floor to Paul, because he leads a very interesting initiative that he already mentioned himself -- yet a different puzzle piece in the international landscape, or the ecosystem.  An example of an initiative in the public interest working more at the infrastructure level.  Paul, can you explain a bit more: what do you want to achieve, and how does it enhance responsible uses of technology and responsible innovation?

     >> PAUL TWOMEY: I was on mute.  I will try that again.  Thank you.

     This is an international initiative that came about by accident.  It started with a conversation that I had with the then head of the Kiel Institute for economics, at the G20 meeting in Saudi Arabia, where I think the buses were late or something, and we had a long conversation about this issue.

     And we decided people should think about this and what the implications are.  The first product was thinking through the implications of the model we have got in the digital economy from an economist's perspective -- he is an international economist -- and then from people who come from the communications and Internet policy process, and what have you.

     And we have been joined by 70 or 80 of our closest friends, if you like -- I say that jokingly -- people who are experts in various areas around the topics that come up.  What have been the consequences of this analysis?  One is that we actually thought through the market structure for the digital economy.  You have users and consumers.  You have the digital service providers here, and that is the basic I-sell-you-something-and-you-buy-something sort of model.

     And for e-commerce, they interact with other producers and manufacturers.  If that was all the digital economy was, it would all be fine, and it works as a market in the sense that the individuals involved actually do have power -- partly because they have the power in the sales of goods acts.  One of the best indications this works as a market is that the margins are relatively thin and the consumer is king.

     It is about a $5 trillion market per annum.  We all know there is the dirty underground activity, which is the service provider saying: in return for my allowing you to participate in the international digital economy, you will get this, and I get to have information about you.  I would argue nobody has an idea how much data is gathered.  In thinking this through, we said: that data gets monetized, but in such a way that the market is all driven by advertisers and others.

     And the scale all sits in the market, but the people who should be the consumers are not consumers; they are the product.  What would be the implications if you changed that, such that the users were the consumers and could play a role in the market for personal data in particular?  Not necessarily talking about making data privacy rights, but saying: if people have the right to decide who has access to key parts of their personal data and under what terms, and have the right of association and representation such that that part of the market could actually work for consumers, we would end up with the liberalization and innovation that we saw in financial services over a long period of time.  As Dennis would say, you are taking what took place in the 1860s and 1870s in the industrial revolution -- the process that was bringing people together and empowering them, which ended up with the middle class.  There are a number of discussions with governments and entities where just making this shift in policy -- and there is a lot more detail behind it -- could end up addressing some of the things that have been misalignments in the digital economy.  At the moment it is driven by one incentive system, which is advertising, and that serves only one part of what is an ecosystem.

     >> PAUL FEHLINGER: Thank you very much, Paul.  An interesting initiative and a very important one, which in many ways follows a similar philosophy to the one that also led to the development of the Decentralized Social Networking Protocol whose governance we are stewarding.

     I want to give the floor to Vivian.  And you sort of have two hats on the panel.  You are with Aspen Digital.

     I just wanted to also give you the opportunity to tell us more about Aspen Digital: what are you doing, and how does it foster responsible innovation?

     >> VIVIAN SCHILLER: I will talk more broadly about Aspen Digital.  Our favourite topics are ones that are incredibly complicated and tangled and thorny and fast-moving, and where there is a lot of chaos and misunderstanding.  We love those.  The good news is there are a lot of issues like that out there, so we have plenty of work to do.  We are not researchers, but we bring groups together across sectors for information sharing and sense-making.  I know this sounds simple, but in many ways what we have discovered is that it fills a really incredible gap.  It never ceases to shock me that groups are not talking to each other when it comes to complicated issues.  Just take the U.S. government: people in different parts of the U.S. government are not talking to each other.  I cannot tell you how many times I have been in rooms with different representatives from the United States government meeting each other for the first time.  You are doing that?  So are we.  So we bring those together.  If we did nothing else, I think that would be worth it.

     Groups from government.  The private sector.  Not just private sector tech, but the rest of the private sector, the companies that are not tech companies and that get left out of the conversation.  Academics and researchers.  And we try to bring in, if not the public, at least representatives of the public.  And we do not drive the action but ignite the ability to drive action.  A couple of quick examples.  A lot of focus on information integrity.  We had something called the Aspen Commission on Information Disorder, which was a panel of people across sectors, where we identified issues with lots of people working on them but not a lot of action happening.  What we did there, which is our model for everything we do, is look at the people already doing the work.  We are not there to reinvent the wheel or to pretend we are the first in the space.  We elevate the best ideas, distill them with the group of experts, and drive action.  That is what we did when it came to the recommendations from this commission.

     Cybersecurity is a big area of focus for us.  Both U.S. domestically and globally, we bring groups together to again share critical information and then drive action.  Countless examples.  The tech accountability work holds companies responsible for the commitments they made but don't follow through on when it comes to diversity, equity and inclusion, both in the way that they recruit and hire, the way they upskill, but also the way they approach the development and release of products and how those impact underrepresented or vulnerable communities.  Everything is about AI.  We have been asked to come in and help different groups understand what is going on.  We convene members of Congress at their request to educate them, because so many people are in the weeds, or so many groups have specific interests.  We don't have a particular interest in any of them; we are unaffiliated, so we are able to bring in experts and help bring sense-making to them.

     We have done the same thing, and we continue to do the same thing, for journalists and news media.  We have convened, all around AI, heads of philanthropies and foundations to help them understand what is happening with AI and how that can impact philanthropic giving, and we are doing the same thing with private sector companies around how they think about AI and labour markets, and setting off on very, very substantial work around the intersection of AI, elections and trust.  I think most people in the room know that 2024 is going to be a gigantic year for elections.  An unprecedented number of consequential national elections.

     That is what we do.  What are you giggling about, Paul?

     >> PAUL FEHLINGER: A collector's item.  She has a collector's notepad from the second global conference hosted in 2018, and I was surprised to see it five years later.

     >> VIVIAN SCHILLER: Swag is very important.

     >> PAUL FEHLINGER: It has a lot of carbon footprint as well.  I think as a segue to tell you more about the global initiative that we are leading on ethical principles for technology, Mark, if I can put you on the spot.  You asked a perfect question in the chat that is the perfect segue to the first part of the discussion.  Ask your amazing question, which is exactly the provocation we need.  Please introduce yourself.

     >> AUDIENCE: Mark.  I'm an Internet governance consultant.  I work with a Dynamic Coalition here at the IGF on cybersecurity standards deployment.  Our coordinator is here at the back.  My question was really about going back right to the beginning, when you started to talk about ethical innovation.  That sounds like a wonderful ambition and starting point to address risks.  Innovation is inherently ground breaking.  From the Latin nova, new.  Everything is new.  And the designer of the product is focused on the new market opportunity.  History shows that innovation can actually lead to unintended consequences and impacts.

     And even, you know, inadvertently.  Things that were not anticipated.

     And sometimes, maybe when a product or service is maturing through the development phase, some of those things might become apparent, but the commercial instinct is to try to skirt around that.  The question in my mind is how you can be certain of pursuing ethical innovation when, you know, there are those risks of consequences that would derail such an ambition but which may be unstoppable.  So that was my question that I put in the chat.

     >> PAUL FEHLINGER: Thank you, I could not have invented a better segue.  There is a name for this, by the way.  This is called the Collingridge dilemma.  You cannot regulate something in advance.  Technology has a life cycle.  You don't know what happens with the technology, what people might or might not do with it.  And how can you regulate it after it has been mainstream adopted?  A conundrum that sounds impossible.  With the massive breakthroughs coming from computer-brain interfaces and quantum, AI and XR, which will transform our entire economies and social fabric, it is a question we have to ask ourselves even if it is a wicked problem.  This is why we initiated this initiative.  It is incredibly bold, but yet I hope we approach this with the necessary level of humility, because this is very, very difficult, and this is why we decided to consult the smartest people we can find on those questions.

     And this global initiative that we launched a couple of months ago I think has five criteria or elements that make it special or particular.  The first element is that we want to look at new technology at large.  Not just one vertical bucket like AI or XR.  We want to look at new technology because, over and over again, it is the same mechanics of disruptive things arriving and us not knowing what to do with them, with the economic and social factors involved.  The second is to take a 360-degree view in the work, throughout the entire innovation cycle.  A lot of efforts historically focused just on regulation at the very end.  But from the onset: how is technology designed and developed?  How is capital allocated, through venture capital and other modes?  How are things commercially deployed?  And ultimately, yes, how are things regulated, of course.  A third element is that we don't only want to look at principles.

     We don't want to have yet another, the 631st, set of principles.  We want to look at what I think especially Hiroki highlighted strongly: the processes of managing uncertainty and risk.  A mindset shift that we hear about from everybody we talk to.  Our ambition is that what comes out of the process is operational.  We don't need a 631st set of principles.  There are extremely good sets of principles already out there.  The question that everybody, no matter what stakeholder group you are in in the ecosystem, is wondering is: so what, how does this work in practice and apply to my work?  It is difficult, but we need to figure this out, and this is part of talking about ethical principles and enabling infrastructures, and how this interplays with ethics by embedding or enhancing them.

     With this, Vivian, I would like to kick the ball back.  Can you tell us a bit more about what we actually do, how we do it and what we have already done and what are the next steps?

     >> VIVIAN SCHILLER: Thanks for laying out the core principles that guide the work.  It is important and has been a wonderful project, and it has been great to collaborate with Paul and his colleagues at the Project Liberty Institute on this.

     We have been doing a series of consultations because we want to make sure we are building upon all of that.  We have been doing a series of consultations around the world.  We are bringing together leaders and advocates and builders and funders from various sectors.  We have had 10 different meetings so far.  In a couple of weeks it will be five continents.  We have included to date 200 people across the consultations.  RightsCon in Costa Rica, then in Kenya, then in Paris for a European consultation, and we are here now, and then the final set of consultations will be at Stanford in the U.S. in a couple of weeks.

     And everything that we have heard and learned through these consultations will influence and guide a draft document that we will be sharing for feedback in mid-December, with a final document that we hope will be finished in February of next year.  So we are on a very fast timeline.  It will have very specific proposed actions.

     As Paul mentioned, it will not just be a set of principles.  It will be a set of processes, and the so-what and the so-who as well.

     So, you know, I don't want to preempt what we have learned.  We are still synthesizing everything.  We learned a lot, but a couple of insights have come up on this panel today so far.  All of the 630 principles are out there, but the problem is still with us.  Our work is not done.  That is one thing that is very clear.  Another insight: regulation is a very powerful tool, but not the only tool.  Often, what we fail to take into consideration, and Hiroki, you alluded to this, is that we don't have to set a new law or regulation for every new technology that comes along.  Often there are existing regulations or laws that already apply.  Human rights or copyright or consumer protection or safety standards.  They are in place and time tested.  It is maybe not always obvious how they apply to new technologies, but we should start there.  The third, and you alluded to this earlier, is that these ideas cannot just be coming from the US and the EU but from all over the world.  The amazing things we heard in Kenya were really enlightening, for example.  We talked about this earlier on the panel.  We need to find ways to influence what is built before it enters the market.  To do that, we need more than regulatory approaches.  We need to have a sort of open development process that balances speed and innovation, because those are important and we cannot ignore them, with the public interest.  So that's where we are, but much, much more to come.

     >> PAUL FEHLINGER: This is why I would like to ask the question to everybody following us remotely and from the room and to our amazing speakers.

     So how should we develop, invest, deploy and regulate new technologies?  In a nutshell, what do you think is the important crux in the ecosystem that is not yet working but should work, from your point of view?  Some remarks?  Please take the floor and please introduce yourself.

     >> AUDIENCE: Thank you, Paul.  I am the coordinator of the Dynamic Coalition on Internet Standards, Security and Safety.  I think about 50% of our members are here at present.  At least 50% here in the room.  What I miss in this discussion is the following.  We have now several research reports out, and all talk to the same thing.  Governments discuss cybersecurity, but they don't procure secure by design.  All of the research that we have done shows that there is hardly any government in the world that has in its procurement documents something about cybersecurity, let alone the Internet standards that run the Internet.

     They don't recognise them in their legislation.  So when we talk about the public core of the Internet and protecting the public core, they don't even recognize what the public core is.  When we talk about ICT in whatever form, should there be a component where governments take a lead when they buy it?  Because that would be a major driver for industry and create a level playing field.  Everybody not complying would not be selected.

     Would that be an option?

     >> PAUL FEHLINGER: That highlights capital investment, procurement and ethics, and how you implement or roll out standards across the ecosystem.  Thank you for the question.  Let's collect a few more if somebody else would like to take the floor from the audience or online.  Otherwise I will maybe kick it back to Paul, who is following us from Australia.

     >> PAUL TWOMEY: Sorry, I'm going to target.  If there is somebody in the room from the OECD, I have to target them.  Here is a challenge to the OECD audience.  We presently, in most parts of the world, have a governance system for private companies which relies on the concept of a board having accountability for how the company works.  What we have increasingly, including in cybersecurity, is boards made up of lawyers and accountants, because those were the risk issues for the last 150 to 200 years in companies.  But nearly every company now is a digital company.  We talk about digitalization, and yet we don't have, sitting as a core part of the curriculum or the general multilateral governance around corporate governance, any signal which says the board should know something about technology and about data.

     There is a clear issue there around cybersecurity.  The other part is around the ethics of technology.  The lawyers and accountants are often there because they are trained in backgrounds concerned not only with accuracy but with ethics, with what is the right thing to do.  We should do the same thing, I think, trying to put a challenge to the corporate governance models we have inside companies.  That is one thing that could be a reinforcing mechanism.  When the companies themselves have board members that worry about these things, the vendors will change behavior.

     >> PAUL FEHLINGER: That is a very important point you raise on the bucket of deploying new technologies.  And I think we can even look at other sectors.  If we talk about sustainability and climate at the board governance level, it is not perfect, but compared to the technology sector it is much more developed in terms of the recognition that there is a need to act.  In technology those discussions are just starting now, thanks to artificial intelligence being a global, cross-cutting, massive issue.  We are at the very, very early stage.  This is a very important point.  I see Hiroki nodding a lot.  What do you think?  Across the innovation cycle, what is your recommendation?  What is missing?

     >> HIROKI HABUKA: First of all, what I believe is missing is the mindset of each stakeholder.  For example, the government should change the mindset; they cannot oversee everything and rule over everything.  They should take more of a facilitator role or incentive provider role rather than --

     >> PAUL FEHLINGER: Can I just interject?  Does this mean anything goes?  An ultra liberal approach to technology?

     >> HIROKI HABUKA: When I said incentive provider, I meant designing the liability and sanction mechanisms in a way that promotes more ethical behavior, or private parties' own initiative in setting their own ethics and implementing them.  For example, as it is now, if we make the regulation in a very specific manner, it will on one hand harm innovation.  Instead of that, regulation should be more principle-based or process-based rather than specific, prescriptive and rule-based.  But still there will be a gap between what the regulation says and actual operations.  So we need some intermediate rules.  These can consider the tech perspective and the Civil Society perspective and be updated in a more agile manner, and as soft law you can even achieve the goals in a different way.  This is the kind of regulation which we could consider.

     And also, if we consider liability systems, at least in Japan now, if you disclose bad information then you will be criticized more, because it has more news value, and the regulators just come to you and try to trigger sanctions.  Maybe we could make a new incentive mechanism that would reward companies that detected problems and reported them, and cooperated with the investigation, or maybe suggested new improvement measures.  We could reward those companies even after the accident, to incentivize companies to be more ethical after something bad happens.  This kind of design of regulation or liability systems is something we have to consider instead of trying to understand everything and trying to control everything.  So that is the governance part.

     And, of course, the citizens have to understand there is no perfection in regulation or in new technologies, so we always have to consider that there are tradeoffs, and focus not only on the risks but also on the opportunity costs, which will be large if we just miss the opportunities of new technologies.

     Maybe a lot of public services will not be delivered or happen because of that.

     >> PAUL FEHLINGER: I think you started with the mindset shift and finished by talking about almost the culture of innovation and regulation at large, because of how we react to mistakes.  There is more than just a mindset.  A fundamental shift in our risk-averse cultures, where everything is focused on safety, and this is counterintuitive to the mainstream way that we approach regulation.  Thanks for sharing that.  That is the kind of topic we need to discuss.  Martin, if you could introduce yourself and please give us your sort of hope and view.

     >> AUDIENCE: Thanks, Paul.  I'm Martin Boltoman.  A pleasure to be here and thanks for that.  Basically, it is about responsible disclosure and responsible technologies, right.  I think the principle of responsible disclosure is that you avoid more problems by being transparent and active, and you reduce the risk.

     And the good news about responsible disclosure is that you are not the only country trying to find its way in it.  There are many countries.  We can learn a lot from around the world.  Making responsible disclosure the ethical norm around the world would be a thing that would favour that.  We have seen examples where responsible disclosure has led to better acceptance of the company.  It is not hopeless, and I think it is the same thing for responsible technologies.  Paul, you said boards are responsible and have a responsibility.  Part of that may be ethical.  The problem with ethical responsibility is that it is also different, as law differs by jurisdiction.  Ethics is not the same around the world.  How can we come to a global understanding?  Because we talk about global technologies that may be deployed in one country and maybe used in 70 other countries, even if the service is originating from a third country or the same country.

     So that is why I think some kind of global guidance is important.  At least you can generate some global practice that sets the standard around the world for companies, and maybe governments will also take it into account when they are thinking of how to implement it.

     The biggest concern is also that governments don't only dive into their own jurisdiction, because there is law everywhere.  You may end up with what is impossible to deliver for the global companies that deliver the services and things.  The earlier we have some global understanding about good practice, and the project may well contribute to that, the better.

     >> PAUL FEHLINGER: Thank you so much, Martin.  I want to give the floor to Elizabeth to tell us, given the amazing opportunity with the ambitious mandate of the Global Forum on Technology, GFTech is the acronym I just learned, what is your hope for the public interest development of these technologies?

     >> ELIZABETH THOMAS-RAYNAUD: That's very ambitious.  I will be a little bit more local and look at what practical elements we can offer to the equation.  One thing that strikes me as we start to look at some of the technologies, perhaps not every single one of them, is that we have a lot of policy work there already.  If you take the immersive technologies, people talk about the metaverse.  There are a lot of things already there.  Some of the questions, like all of the biometric data, and the speed and intensity and what the implications of that are, are going into the discussions of the privacy commissioners: what elements need to be tweaked, or whether the OECD privacy framework will apply.  What are the nuances and details?  Beyond that, there are the component parts that already exist.  One of the things, when we think about how we can help in the OECD, is what already exists in the policy areas.  Security, connectivity, privacy.  But also things like competition and IP rights and trade, all of the things with a lever component.  Paul, good for you for being provocative.  There is the private sector piece, understanding what the levers are there, and the procurement idea.  What I'm hoping we are going to hear inside the Focus Groups is some of these ideas, and some of the things we can do is go away and understand the measurement side of that.  What happens when people are using, for example, procurement in this sector?  What can we understand, what are the unintended consequences of the ideas, and what ideas might exist that we haven't tapped to tweak the policy?

     The last thing I will say about that is that I think the principles get you started and orient and help everyone understand where north is.  But once you have north, you have to figure out what the different roads are that you are going to take to get there, and how to get around the roundabouts and other issues, to take that analogy a bit further.  I was going to use the term tool kit, but since I'm on a journey: what are the things you need?  The compass and backpack and granola.  You pull in the policies and develop the kind of guidance and understanding.  Take the AI example inside the OECD.  The principles were done, but there is this experts group, and they are working constantly to help understand things like compute, and the demand for compute, and what is happening because of certain policies that are in place for certain markets.

     And what are the implications of that on the way the technology is developing, and also divides and other things.

     So all of those questions and pursuits come together.  Finally, it is a composite of things that get brought together to help the understanding, rather than anything sort of very visionary and overarching, if I may.

     >> PAUL FEHLINGER: Thank you so much.  Last, but not least, and you might be slightly biased on this.  Vivian, what is your hope for how we address responsible innovation, and what can we achieve with the global process and all of the wisdom that people share with us?

     >> VIVIAN SCHILLER: Everything we have been talking about: marrying the principles in a practical way, to see the principles turn into action.  The processes, like Hiroki was talking about, and the global multistakeholder nature, as the other panelists have been talking about.  The dream is that we can actually see some of this, all of these smart people that we have been talking to throughout the consultations, see their incredible ideas actually come to pass and influence and take us on a better journey.  I will pick up your metaphor, Elizabeth.  We need to take a right turn here and a left turn there, and decide what passengers we will pick up along the way.

     >> PAUL FEHLINGER: Thank you for sharing this.  Let me sort of answer the question myself, as a final afterthought on this.  I hope the global consultative process can help to shed light on the known unknowns, as they call it in risk management.  We cannot address what we don't know.  And right now, in the ecosystem, with all of the hype on AI in tomorrow's headlines, and quantum, who is actually talking about computer-brain interfaces?  We are not talking enough about them, and they will again be a complete game changer in how we interact with technology; often more is different.  What are the things we need to discuss?  All of the things here today, from the kind of fora we need and who needs to be involved, to questions on the mindset shift and the cultures of regulation that exist up until now, and learnings that come from previous web 1 sorts of innovations.

     Things we have learned from how to deal with disruptive technologies.  For all of this we need alignment in the ecosystem, and I'm personally cautious, almost pessimistic.  I believe there is no perfect set of principles for how to address this.  This is extremely complex.  If this is the ecosystem we live in, how do we optimize for this?  We don't have the meta discussion on how it works in practice.  We can learn from other areas.  I want to end on a note of kindness to the ecosystem.  If we look at, let's say, bioethics: should you clone a human, yes or no?  And the rapid reaction when the sheep was cloned was that people said this is too much, let's not go there.  In health and medicine we have millennia of people thinking about what a good life is, what lives should be saved, and health and all of this.  Technology is 30 years old, so we are still learning, and it is part of where we are in the process.  It is very important that we focus the entire energy of the ecosystem.  Thank you all for the amazing and important work you are leading in the ecosystem.  Thank you.  I know we are standing between you and the German reception with traditional German beverages.  Thank you very much.  If you want to get engaged in the work of the Project Liberty Institute, please reach out, and enjoy the evening in Kyoto.  Thank you.