IGF 2024 - Day 1 - Room 6 - WS #45 Fostering Ethics by Design with Data Governance and Multistakeholder

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> SHESAN: Welcome, everyone.  Good afternoon to those here in our time zone, and good morning or good evening to those watching us online in other time zones.  My name is Shesan, from a research institute in Germany, but I am originally from Brazil and a co-founder of the Laboratory of Public Policy and Internet, a nonprofit organization based in Brazil.

   

   We are going to start now our workshop number 45 on AI Ethics by Design.  Our main goal here is to delve into how governance initiatives can foster ethics by design in data-oriented technologies, and AI technologies more specifically.  The aims of this panel are: one, to offer an overall understanding of ethical design and of the importance of ethics at the inception of technological development; two, to address the challenges of embedding ethics in AI and other digital systems; and finally, to discuss multistakeholder collaboration and its relevance in shaping ethical norms and standards, particularly in the context of the recent UN Resolution A/78/L.39, which underscores the importance of safeguards for AI systems.

 

   We have policy questions which will guide the panelists as they reflect upon these issues.  The first one is: how can policymakers effectively promote the concept of ethics by design and ensure the integration of ethical principles across AI and digital systems, while meaningfully including multiple stakeholders, especially the communities affected by these systems?

   

   The second policy question is: what are the primary challenges to embedding ethics in AI and other systems, and how can policymakers and civil society address them to ensure responsible digital development and deployment?  And finally: what strategies and mechanisms can be implemented to foster these collaborations in an ethical way, considering the diverging interests among these communities?

 

   Moderating this session are, first, Thiago Moraes, a PhD candidate in law at the University of Brasília.  He also works at the Brazilian Data Protection Authority, and is a co-founder and director of the Laboratory of Public Policy and Internet.  The other moderator is Alexandra Krastins, a senior lawyer at VLK.  She provides consultancy on privacy and governance, previously worked at the data protection authority as a project manager, and is a co-founder and councillor of the Laboratory of Public Policy and Internet.  I hope you all enjoy the session.  Looking forward to the great discussions that I'm sure we are going to have, and I pass the floor to Thiago.

   >> MODERATOR: Thank you, Shesan.  We are really excited to be here today, because this is not only a relevant discussion, but it is also important for us to better understand what is being called a more hands-on approach to the topic of ethics in AI, and that is why the design part of it is so important.  And we brought brilliant speakers today.  I will briefly introduce each one of them as they open with their introductory remarks.

   

   First, I would like to invite Mr. Ahmad Bhinder to speak.  Ahmad Bhinder is from the Digital Cooperation Organization, an intergovernmental organization.  With over 20 years of experience in public policy and innovation, he has driven economic activity and growth, and he is advancing the organization's mission of promoting digital prosperity for all.  So, Ahmad, many thanks for coming here.  Yesterday we saw your framework; it's very interesting.  And now we will have another interesting moment to learn how it relates to the questions that we are raising in this session.  Please, the floor is yours.

   >> AHMAD BHINDER: Thank you, Thiago, and thank you so much, everybody, for inviting me on behalf of the Digital Cooperation Organization.  Briefly, who we are: we are an intergovernmental organization headquartered in Riyadh, with countries from the Middle East, Africa, Europe and South Asia as our member states.  We started in 2020.

   

   In the last four years we have grown from 5 member states to 16 member states.  We are led by a council that has representatives, the ministers for digital economy and ICT, from our member states.  Our sole agenda is to promote the growth of the digital economy.

   This makes us a one-of-a-kind intergovernmental organization that is not looking at individual sectors but broadly at the economy.  And we have our office here, so we welcome you from Brazil, the whole group of you, to Riyadh.  I hope you are enjoying it.

 

   So, coming to the global AI landscape.  There are different issues that we address, which I will explain with the framework.  But broadly, AI development and governance is not a harmonized phenomenon across the globe.  We see two types of approaches.  One is led by the EU and some other countries, which we call risk-based approaches.  We see the EU AI Act, which has come into place, categorizing AI into risk categories, with very prescriptive rules then set for those categories that are high risk.

 

   And then we see the U.S., Singapore and a lot of other countries which have taken a so-called pro-innovation approach, where the focus is to let AI development take its course and to set rules which are broadly based on principles.

 

   So initially we called it a principle-based approach, but actually all the approaches are based on principles.  Even the prescriptive regulatory approaches are also based on certain principles.  Some call them ethical AI principles, some call them responsible AI principles, et cetera.

 

   We have also seen, across nations, different means of approaching AI regulation.  For example, there are domain-specific approaches: we have laws, for example, for the health sector, for education, and for a lot of other sectors.

   Those laws are being shaped and developed to take into consideration the new challenges and opportunities that AI brings to those domains.  And then we have framework approaches, where broader AI frameworks are being shaped in countries; these would either refine some of those laws or impact the current laws.

 

   So there's a broader framework.  And the third one is the specialized AI act.  Australia is working on an AI Act, and China has an AI law.  So I just wanted to --

   (Audio Difficulties)

   (off mic)

   (Audio Difficulties)

   Where certified and has conducted numerous training sessions for

   (Audio Difficulties).

 

 

   Maybe it would also be very useful to describe the challenges in the region.  Since 2019, we have invested more than 5.5 million USD in rigorous privacy-related programs.  And the team working on privacy matters, whether from a policy perspective, legal, et cetera, has expanded from a few hundred to more than 3,000 people.  So this is an overview of the team and the investment that Meta is making in privacy.  The challenge is that we are talking about a fast-moving technology and a wide-ranging area, so developing and enabling AI, and responsible AI, cannot be done by companies separately or individually; companies cannot work on this alone.  Another challenge that we find in the region is the disparity when it comes to legal frameworks.  If we are talking about privacy, some of the countries in the Middle East and North Africa region, and Turkey, for instance, have regulators and have enacted data protection laws; some of them have not.  Saudi Arabia has a legal framework when it comes to AI ethical principles; the U.A.E. as well.  But others don't have such a legal framework yet.

   

   So this disparity also creates an additional layer of complexity.  Ahmad mentioned earlier the risk-based approach, and the principle-based approach as well.  This is exactly what we advocate for when it comes to regulating AI.  We also advocate for technology-neutral legal frameworks that build on existing legal frameworks, to avoid creating conflicts between them.  And then, most importantly, collaboration between different stakeholders.

 

   The way we approach this collaborative work at Meta, when it comes to privacy or ethical standards in general, is that we rely mainly today on Open Source AI.  Some people will ask a very simple question: normally, wouldn't open-sourcing AI bring more complexities, because we are opening the doors to malicious actions and malicious actors?  So how come we are enhancing privacy with, or through, Open Source AI?  Actually, our vision of Open Source AI is that experts are involved.  First of all, we are opening the models.  When we talk about Open Source AI, it means we are opening the models to the AI community, which can benefit from these AI tools or models.  And everyone can use them.

 

   Now the impact of this is that when we open these models, experts can also help us identify, inspect and communicate some of the risks.  And it's about having a collaborative way of working on these risks and mitigating these risks within the AI community all together.

 

   Of course, all of this work is also preceded by a privacy review before the launch of products.  Technological assessments are done by Meta.  Fine-tuning, including safety fine-tuning, is done ahead of the launch of any product.  In addition to all of this, of course, we have the privacy center, and we can talk about the privacy-related tools that we have.

 

   But if we want to be specific on collaborative work: at Connect 2024, which is a developer conference organized by Meta annually, we announced the launch of Llama 3.2, one of our open large language models, which can be used by the AI community, of course.

 

   So, just to describe this: one of the tools that we use in Open Source AI is the Purple Llama project, a project which enhances privacy and safety by putting in place and sharing standards with users.  It is an open tool that developers and experts can use to mitigate existing risks they have investigated through

   (Audio Difficulties)

   Because it provides for both blue teaming and red teaming, and both are necessary in our opinion.  It puts standards in place.  We call this the Responsible Use Guide, and of course it is accessible to everyone.

 

   So this is when it comes to Open Source AI.  To conclude: for us, Open Source AI is a tool to enhance privacy; it's through Open Source AI that we can enhance privacy.

 

   Another project that is worth mentioning is the Open Loop project that we have at Meta, which is a collaborative, feedback-driven way of working.  So we gather

   (Audio Difficulties)

   when it comes to prototypes of AI regulations, ethical standards, or any issue that has been identified in a specific country.

 

   So there are steps that are put in place: gathering policymakers

   (Audio Difficulties)

   And starting from there, under real-world conditions, these prototypes, or these rules, are tested; testing rules, let's call them.  Then, starting from there, we can issue recommendations: learn from the lessons and then issue policy recommendations.  These are the four steps that we use.  Actually, last year, or the year before, we organized an Open Loop sprint, for instance on the EU AI Act in Europe, testing some of the provisions ahead of their official publication.  The Open Loop sprint is a very small version of the Open Loop project; it took place at the Dubai assembly last year, or the year before.

 

   In the region, the way we do it: as a privacy policy manager, for instance, I organize expert group roundtables ahead of the launch of any product, whether related to AI or not.  We gather our experts; we have a group of experts.  We share the specificities of the product and we get their feedback to improve our products, whether the feedback is legal or relates to safety, privacy, or human rights.  We take this feedback into consideration.

 

   We organize roundtables with policymakers.  Recently we had one around AI and the existing data protection rules: whether they are enough to protect within the AI framework or not, and what is necessary to do.  There was a discussion on data subject rights as well.  We also contribute to public consultations in the region, in some of the countries, not all of them, on regulation and the nature of the regulation.

 

   In Saudi Arabia, of course, the authorities have been very active on that front, completing the legal framework around data protection and putting in place AI ethical standards as well.  We shared our public comments, and we believe it has been a positive discussion with policymakers.  I look forward to your questions.  Sorry if I took more than 7 minutes.

   >> MODERATOR: It's okay.  The only challenge we have is time for the first round, so we can try to have some discussions.  But it's interesting to see the many different activities that Meta is involved in to try to bring a more collaborative approach.  Open Source AI is definitely a hot topic, and there are even some sessions here discussing it, so it's nice to know that there are initiatives like that at Meta as well.

 

   Without further ado, I should move on to the next speaker, who is online: Tejaswita Kharel.  She is a project officer at the Centre for Communication Governance in Delhi.  Her work covers data protection law, including data protection and privacy, and emerging technologies such as AI and blockchain.  Her work on the regulation of technology is guided by human-rights-based and democratic perspectives and by institutional work.  So, Tejaswita, thanks for participating with us; we are looking forward to knowing more about your work on these topics.

   >> TEJASWITA KHAREL: Thank you.  Can you guys hear me?  I just want to confirm.

   >> MODERATOR: Yes, we can.

   >> TEJASWITA KHAREL: Great.  Alright, so I'm Tejaswita, a project officer at the Centre for Communication Governance in New Delhi.  We do a lot of research work on the governance of emerging technology, and whether or not that governance is ethical is part of it.  In terms of what I want to talk about today: I know we have three policy questions, and out of these three I want to concentrate on number 2, the primary challenges to embedding ethics.  When we talk about embedding ethics into AI or any other system, it's very important to consider what ethics even means, in the sense that ethics is a very subjective concept.

 

   What ethics might mean to me may be very different from what it means to somebody else.  We can already see that in a lot of existing AI principles or guiding documents, where in one you might see transparency considered a principle, and it will be a recurring principle across documents, but privacy may not necessarily be one.  Which means there will be varying levels of what these ethical principles might be implementing.  So what this means for us is that when you are implementing ethics, there's a good chance that not everybody is applying it in the same way, or even that the principles themselves might be different.  To illustrate what I mean when I say people may not implement it in the same way, I will talk about fairness in AI.  When we look at fairness in AI, fairness as a concept is different when you look at it in, let's say, the United States, versus what you would consider to be fairness as an ethical principle in India.

   Right.  In India there would be various factors, such as caste and religion, which would be weighted very highly when you are determining fairness.

 

   Meanwhile, in the U.S. these factors may look different, specifically when you are looking at AI bias and at discrimination.  So, with that in mind, the first challenge when we are looking at embedding ethics is that ethics is different for everyone.  Even where the principles are similar, there will be a lot of varying factors, or differences, in how the ethical principles are understood.

 

   So, with that in mind, we need to solve this issue.  How do we deal with that?  To answer, I will get to point number three, which is: what strategies and mechanisms can be implemented?  One way we solve this problem is by ensuring that there is collaboration between multiple stakeholders, in the sense that, as civil society and policy thinkers, we often have very different ideas of what ethics means; but do the developers and designers of these systems understand what it even means?  Whether or not they have the ability to implement ethics by design in these systems is a very big question.

 

   The main way we can solve this issue is by identifying what these ethical principles are, and what they mean in each differing context.  I am of the belief that we cannot define ethics in one overarching context.

  

   We must understand that depending on the system and the context, there will always be differences in what ethics by design will look like.  And there must always be differences, because there cannot be a one-size-fits-all ethics by design; not everyone agrees on what ethics means.  So first we determine what ethics even means, and what these principles can be: whether or not, for example, we want to ensure that privacy is part of the ethical principles.

 

   Then we get into the question of what factors will be included within these ethical principles, like I said.  If it's fairness, are we looking at fairness in the sense of nondiscrimination and inclusivity?  Having one shared understanding of what is fair is important.  And then we get into understanding how developers and designers can actually implement this in their systems, whether it's by ensuring that their data is vetted before they start working, so that no bias comes into the system.

 

   So I think the main way we ensure ethics by design is by ensuring good collaboration between stakeholders.  This collaboration can be in the form of a coalition.  For example, in India what we have right now is a coalition on the responsible evolution of AI, with a lot of stakeholders: some of them are developers, some of them are big tech, and there's civil society participation.  In all of this we talk about, number one, the difficulties in terms of AI and its responsible evolution.

 

   And we also discuss how we solve this.  So the only way we can do this is by actually creating a mechanism, through collaboration between all of these different stakeholders, where we discuss and identify how we design it.  So this is my point, predominantly, in terms of how you can implement ethics by design.  Thank you.

   >> MODERATOR: Thanks a lot, Tejaswita.  It is interesting to know about the coalitions that are trying to engage different stakeholders on things such as fairness in AI.  I think this is part of the puzzle that we have to solve here.  What we are discussing, about what ethics by design really means and where we go from here, is definitely a challenge in which we have to consider many different perspectives, and one of those challenges is how to make these collaborations work and come to results.

 

   So thanks for giving a glimpse of that.  Hopefully we can have some time to come back to it.  But we will move now to our last but not least speaker, Rosanna.  She is a specialist at UNESCO, part of the bioethics and ethics of science and technology team, focusing on technology, governance and ethics.  She supports the global implementation of the Recommendation on the Ethics of AI and assists in shaping ethical AI policies.  Previously she coordinated an AI policy project and contributed to research at institutions including the European Parliament.  Rosanna has expertise in AI governance and policy.  So thanks a lot for being here with us, Rosanna; we are looking forward to knowing more about your work on the topic.

   >> ROSANNA: Thank you very much, Thiago, and thank you to the people online.  A lot of things have been mentioned that I hope not to repeat.  Instead I will try to offer some sort of synthesis, first tying together the remarks that we have heard today, and also offering some perspectives for the discussion.  I will outline that based on the work that we do at UNESCO to implement the ethics of AI around the world.

   

   And first, thanks also for organizing this session, because I think it's really, really important, when we think about new technologies and especially Artificial Intelligence, to look at the ethics.  Because ethics is what makes us human, what makes us come together, what makes us sit together in the room and discuss and interact and exchange different perspectives.

 

   So for us at UNESCO, ethics is not something purely philosophical, and also not something to be built in as an afterthought, so to say, when we look at AI.  It means, really, from the first moment, respecting human rights, human dignity and fundamental freedoms, and putting people and societies at the center of the technology.  We really believe it should not be about controlling the technology, but rather steering its development in a way that serves humankind, because we believe that the technology conversation, especially the conversation about AI, is in the end associated with a technology (?).  And this means that we must scale our governance and understanding of the technologies in a way that matches the growth of the industry and of the technology itself as it develops into every aspect of our societies.

 

   I don't think I have to mention the examples of where we already see AI at work today, and the risks that arise with it.  There was one point in the discussion when it came to AI regulation; let's think back a few years, when we didn't have AI governance yet in place, when we didn't have the discussion about the U.S. framework, nor other standards.

 

   There was still a moment, if you remember, when a lot of governments were saying: oh, but we see the technology is developing so fast, we can't really do anything about it; we don't really know how to steer it, and we need to leave the market to solve the problem on its own.  That was the moment when UNESCO started to implement its work on the Recommendation on the Ethics of AI.  UNESCO has actually long been working on the ethical governance of technology.  We hold inclusive debates regarding the development and implications of emerging technologies with the scientific communities that we have.

 

   This started off, actually, as a debate about ethics and human genome editing.  Since then, we at UNESCO have constantly reflected on the ethical challenges of emerging technologies.  This work eventually culminated in the observation by member states that there are actually a lot of ethical risks when it comes to the development and application of Artificial Intelligence.  And this is what led us to work on the Recommendation on the Ethics of AI.

 

   The Recommendation on the Ethics of AI, if you think about it today, is actually quite a fascinating instrument, because it was approved by all 194 member states, and it has a lot of ethical principles, values and policy action areas that everybody agreed to.  And, maybe directing this to my fellow speaker and former panelist: there is actually a global standard.

   (Audio Difficulties)

   I can maybe very quickly list the values we have: the respect, promotion and protection of fundamental freedoms and human rights; then environment and ecosystem flourishing, because when you look at ethics it is also really important to look at the environment; ensuring diversity and inclusiveness; and peaceful and interconnected societies.

 

   And then we have ten principles put into practice: for example, non-discrimination, safety, the right to privacy, of course, human oversight, transparency, responsibility.  You can read them online; I will not outline them all for the sake of time.

 

   And this recommendation, which was adopted in 2021, is now actually being implemented in over 60 member states around the world, and counting.  What does it mean to implement the recommendation?  It means implementing it through a very specific tool, the readiness assessment methodology.  I have only a French version, but here it is.

 

   The readiness assessment methodology is actually ethics by design for member states and their AI governance.  What does that mean?  It means that when member states work on their AI governance strategies, or before they start working on them, we offer them this tool.  It's basically a long questionnaire that gives member states a 360-degree vision of what their AI ecosystem looks like at home.

 

   It has five dimensions: social and cultural, regulatory, structural, and others.  It ensures that member states know where they stand and how they can improve their governance structures, so that ethics is really at the center of what they do when they work on AI policy and governance.

 

   We also have another tool, and I want to quickly spend a minute explaining this as well: the ethical impact assessment.  While the readiness assessment works on a macro level and looks at the whole governance framework, the ethical impact assessment looks at one algorithm, and at the extent to which that specific algorithm complies with the recommendation and the principles within it.

   That's really important when we look at AI systems used in the public sector, for example AI systems used for welfare allocation, or where children are involved.  These are sensitive contexts, so it is crucially important that these AI systems are designed in an ethical manner.  The ethical impact assessment does exactly that: it analyzes the systems against the recommendation, and this is done across the entire life cycle.  It looks at the governance, for example: how has the tool been designed?  Which entities have been involved?

 

   Then it looks at the negative impacts and the positive impacts.  I think that's something really important when you look at ethics by design: it's not just about mitigating the risks, but also about looking at the opportunities that exist in the use of AI systems, and there is always the contextualization of weighing the negatives against the positives.  So that's also something the ethical impact assessment looks at.

 

   Very briefly, because I think I'm almost over time, I will also mention that we work with the private sector -- so now it's fixed.  We work with the private sector as well, because we think that when it comes to AI governance nobody can do it alone, and the private sector is a key entity in ensuring that AI systems are designed and implemented in an ethical manner.  So we have teamed up with the Thomson Reuters Foundation to launch a voluntary survey for business leaders and companies to map how AI is being used across their operations, products and services.

 

   This is actually not yet live; it's going to be launched in June, but we have already launched the initiative, and the questionnaire will be available in the summer of next year.  The idea really is for businesses to conduct a mapping of their AI governance models and also to assess, for example, where AI is already having an impact, for example on diversity and inclusion, or on human oversight; the ethical impact assessment is also a feature there.

   By offering this tool to the private sector, we really want to support the sector in ensuring that its governance mechanisms become ethical, and that companies can disclose this to their investors and shareholders, but also to the public, and really make sure that ethics is at the center of their organizations.

 

   And last but not least is multistakeholder collaboration.  We at UNESCO see civil society as a really critical part of the discussion on the ethics and governance of AI.  But most often civil society is not properly sitting at the table when it comes to these discussions.  So we at UNESCO want to change that.

   Over the last year we have already been working on mapping all the different civil society organizations that work on the ethics of AI and the governance of AI.  And we are bringing them all together next year, first at the AI Action Summit in Paris, and then at the Global Forum on the Ethics of AI.  We will be bringing this global network of civil society organizations together for the first time at both events, and we invite all civil society organizations that would like to join us, to ensure that we bring these voices effectively into the major AI governance processes that are ongoing right now.

 

   And with that I will close.  I really look forward to the discussion.  I have many more points to share but, yeah, thanks a lot, and back to you, Thiago, or the moderator.

   >> REMOTE MODERATOR: Thanks for your speech.  I would like to engage those online: does anyone have questions, comments or observations of any kind?  Please use the standing mic.  Okay.  So I'm going to put some questions to our speakers; you can answer as you like.  How are you involving stakeholders from civil society and academia in the initiatives you have mentioned?

   >> I spoke last, so maybe I'll pass it on.

   >> Well, I will be quick.  We are all about collaboration.  First of all, we have member states with whom we hold and conduct discussions and workshops.  We have a very big network -- a growing network.  And for all the initiatives that we propose, we try to seek inputs from them and to improve and shape the dialogue.  We then want to position ourselves as a collective voice and advocate for best practices on their behalf.

 

   So yeah, this is from an intergovernmental organization's perspective.

   >> We have an initiative with an academia component.  A nonprofit community has been created, called Partnership on AI; it's a partnership with academics, civil society, industry and media, creating solutions so that AI advances positive outcomes for people and society.  Out of this initiative, specific recommendations are provided under what we call the synthetic media framework: recommendations on how to develop, create and share content generated or modified by AI in a responsible way.

 

   So this is one of the initiatives where Meta collaborates with academia but also with CSOs.  We have another project, the Coalition for Content Provenance and Authenticity, with the publication of what we call Content Credentials, about how and when digital content was created or modified.

   This is called C2PA.  And this is another coalition we have, not exclusively with academia or CSOs, nor limited to these actors.

 

   Another partnership is the AI Alliance, which was established with IBM.  This gathers creators, developers and adopters to build, enable and advocate for Open Source AI.

   >> REMOTE MODERATOR: Tejaswita, do you want to join us?

   >> TEJASWITA KHAREL: Yes, I would say, as someone from academia, I can give more input on how I think we get involved in these conversations.  Like I said before, in a lot of these coalitions or other groups, there is representation predominantly by industry.

   

   But I do think very often academia and civil society organizations are invited to give opinions -- to listen and understand more about what our beliefs are.  But I do believe that very often when this is done it ends up being a little bit of a minority perspective.  And it feels like you are not necessarily always taken very seriously.  Because it almost feels like -- it's a little bit like advocacy.  Because you know you are speaking about things that may not necessarily be what other people want to do.

 

   So I think even though academia and civil society representation exists, I don't think it's being done in a way that is actually useful.  Because it's almost like a tokenization of representation.  So I will be asked to do something or represent civil society and academia.  And I will do it.

  

   But I feel like at the end of the conversation I am there solely as a tick box.  We had representation and we heard from them.  But we will do what we believe should be done.  So I think it's more of a criticism on my part.

 

   That being said, I have another event, so I will not be able to stay for this event any further.  I really apologize.  I really loved listening to everyone.  I'm really grateful for this opportunity and for having been a part of this panel with these great panelists and the moderators.  Thank you very much.  I will be leaving now.  Thank you.

   >> REMOTE MODERATOR: Thank you so much for your participation.

   >> MODERATOR: I want to add one thing.  We have a mechanism called the accelerator programme, where we hold global roundtables on different issues.  So for this AI tool that we are developing, we gather on the sidelines of big events.  We gather the expert stakeholders, like we did yesterday, and we seek their inputs while we are shaping and designing our products.  So we went to Singapore, for example, and a couple of other places, and gathered the experts.

   

   And this is a mechanism not just for AI; it's a holistic programme for the DCO.  Please have a look at it on our website and feel free to contribute or join as well.  So this digital space accelerator programme is how we involve all the stakeholders in our initiatives.  Thank you.

   >> Yes, I will also add a couple of points, maybe directly picking up from a panelist who has unfortunately now left us.  It's very much true that we also observed there's a tick-box exercise when it comes to civil society and global governance processes on AI.

   And that's exactly why we want to set up -- or are setting up -- the global network of civil society organizations.  And maybe to give a bit more context, we will launch this at the summit hosted by France happening next year.  This is, as many of you know, a government-led summit, the first being the AI Safety Summit in the U.K. and the second one hosted by the Republic of Korea.  For us it is important that we not do it again as a tick-box exercise but that we bring civil society into the discussions and leverage their voices during the ministerial discussions as well.

 

   So it's something that the organizers have actually already announced; if you go to the website you will see that civil society will actually be a high priority.  And our idea is really to connect the dots and to make this network permanent, but then also offer it as a consultative body for future AI Action Summits or for other major governance processes on AI.  This is really at the heart of our endeavor, and we really thank the foundation which funds this initiative for their support in this project.  The other part that I quickly wanted to mention is the work with academics.  This is also a really crucial part of our work.

   

   And especially people from academia support us in implementing the Recommendation through the readiness assessment methodology that I mentioned and the 360 scanning tool for governments.  And we bring in these experts.  We are now conducting the readiness assessments in over 60 countries.  So that means we have experts engaged in each country.

 

   And every expert brings something that is a bit unique from their country to the discussions, and we assemble these experts in a network called AI Ethics Experts Without Borders.  So this AI Experts Without Borders Network is there to gather the knowledge that we find in governments at the country level, and maybe even at the regional and local level, on ethical governance of AI, and bring this, so to say, together at UNESCO.

 

   And what is really special about it is the exchange: what is your experience with, let's say, AI use in health care or AI use in another sector, or maybe there was an issue with the supervision of AI.  So the idea is really to bring in the expertise and leverage the knowledge of local experts.  And what I also want to emphasize, and it links to the civil society discussion, which is really important: very often the same issue happens with civil society as happens with countries from the Global South.  It really is more of a tick-box exchange.  We have someone from South Africa here, but the majority of the countries that do AI governance are mainly developed economies.

 

   So for us this is also very much linked to our work that we bring in these voices from the Global South, and not bring them in as, say, a tick-box exercise but really leverage their voices.  And that's why we are really working on this: out of the 60 countries, 22 are from Africa, and even more are from small island developing states.  And for us it's really important to bring in these actors that are normally underrepresented, and we really hope to continue this work at the IGF but also in many other contexts as well.

   >> REMOTE MODERATOR: Thank you.  We will ask you to bring us your final remarks.  But as part of those final remarks, if you could bring us some lasting insights about one question: what is the feedback you have received from stakeholders participating in those collaborative approaches?  And what were the challenges they shared in doing this collaborative work?  What were the key takeaways?  And thank you very much for your participation.

   >> Okay.  I think my last concluding thoughts are actually connected to this question.  So we have engaged with stakeholders across our member states and their governments, as well as civil society.  What we have noticed applies across our membership -- and that's a good sample, because we are very diverse, reflecting the global picture as well.

  

   There are varying levels of AI readiness across the member states.  So while some member states or some countries are struggling with the basic infrastructure, others are really, really at the forefront of how to shape AI governance.  There are diverse beginnings and diverse approaches to governance.  So the uniting factor, as Rosanna said, are the principles, which have been very widely adopted because they are not controversial.  But how to action those principles has been quite diverse.  Some countries are really, really looking at --

   (Audio Difficulties)

   But the principles are common.  So there's a huge potential for engagement, for harmonization, for synchronization of the policies.  Because for AI, and all the emerging technologies, regulation is not restricted to the countries themselves.  These are global actors; borders do not confine technologies, et cetera.

 

   So I think it's really, really important now, when we talk about multistakeholderism or multilateralism, to action it: to have those voices heard, to have these global forums and global discussions, and then to have the global rule-making or rule-setting bodies be more active and push the right set of rules for the nations to adopt.  And I think the dialogue we are having here and across these rooms is very important.  Thank you.

   >> Yeah, I would highlight --

   So one -- I cannot provide detailed feedback, because when we work, for instance, with experts, the experts provide their feedback depending on the product that we are asking them to review.  But in a very general overview of the comments that we received, sometimes we feel that there is some varying level of understanding of what AI is and of the risks that are on the table.  Are we talking about existing risks in general?  Or are we trying to have a more specific and more scoped approach, identifying a specific risk and trying to target this specific risk and mitigate it properly and in a very specific way?

 

   And sometimes we face misconceptions from the experts.  Because if you are talking with experts who have a human rights-based approach, then maybe in terms of privacy, or when it comes to AI specificities, sometimes there are some misconceptions.  So the educational work is absolutely indispensable.  And hence some of the privacy tools that we put in place -- for instance, the system cards when it comes to AI -- to explain to users, even the user who does not have technical knowledge.

 

   And if a user does not understand the AI model, how it works and why it behaves this way, it's very difficult to get the trust of this user.  And this is why, for instance, the system cards that we put in place explain, let's say, the ranking system in our ads: how our ads are ranked and how they are shown to users.  The ranking systems, the Privacy Center, and some other educational tools as well.

 

   It's very important to do this educational work.

   >> Yes, I will make it really, really short.  Implementation, implementation, implementation.  We hear from member states that they want to operationalize their principles.  They want to do something with AI.  They want to use AI, but at the same time they don't want to get it wrong.  They don't want to use it in the wrong manner.

   

   They want to have the benefits for everyone, for their citizens and for their businesses.  And I think implementation of the Recommendation, but also implementation of the other tools we have heard about today from other panelists, like the implementation of the Global Digital Compact.

   

   Now the question needs to shift.  Yes, we need ethics and ethics by design and global governance, but how do we do it?  How do we move from principles to action?

   

   And I think there's still a lot of need to build capacity in governments, to build capacity in public administration, but also in the private sector and in civil society, to be actionable and operational, and at the same time use AI for the benefit of citizens, of course, but also be aware of the risks and mitigate the ethical challenges that we have.

   >> MODERATOR: Thanks a lot.  It was amazing having this discussion.  And I think Rosanna just captured the main question: now that we have a consensus, where do we go from here, right?  How do we do it?  And we are looking forward to the initiatives that are being developed by different organizations.

   

   The AI Action Summit that is coming, and many others that have been shared in this event and this governance forum.  So thanks, everyone.  Thanks to our speakers for being here, and to the audience, and for the whole discussion.  We are looking forward to what is coming.