IGF 2024-Day 1-Village Stage-Award Event 109 Guidelines for the use of AI Systems - RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> CEDRIC WACHHOLZ:  Thank you all for coming to this crucial discussion on artificial intelligence in this session.  We will speak about your communities and

    (Speaking online in background).

>> CEDRIC WACHHOLZ:  The draft guidelines for the use of AI systems will be the centrepiece of today's discussion. 

    (Background talking)

    A key step in what people are discussing in the justice sector.  They are building on the Recommendation.

>> But most importantly   

                (Background talking online

    (Unable to hear moderator due to background talking)

    (Overlapping Speakers)

>> CEDRIC WACHHOLZ:  Quite aware of the possibility in AI systems. 

(Overlapping Speakers)

>> CEDRIC WACHHOLZ:  Only 9% have received any AI-related guidelines, so that is a real challenge.  And this is the basis of the framework, and the guidelines are in response to that.  Now, I could speak about UNESCO's work in different countries and its innovative partnerships.  We have partnerships around a lot of national guidelines. 

(Overlapping Speakers)

>> CEDRIC WACHHOLZ:  Impact of knowledge and experiences.  Now, I have mentioned these guidelines several times, and I am sure most of you want to learn more about them, so I am very happy to present them to you.  Online we can see Judge (?) already there, but also an associate at the School of Government, where he works on public policy and artificial intelligence.  He will be introducing them.

(Overlapping Speakers)

>> CEDRIC WACHHOLZ:  And introduce us.  I would also like to introduce, on the floor, the Honourable Judge, and also to (Too low to hear)

    So over to you now, and we will launch the presentation. 

>> JUAN DAVID GUTIÉRREZ RODRIGUEZ:  Hello.  Thanks very much for your introduction.  I hope that you hear me well.  I see that you're seeing my presentation.  So if the audio is okay, I will carry on.  Thank you for organising this event.  I have only five minutes to speak about the guidelines.  So I'll make the most of it. 

    You already heard about what preceded the creation of the guidelines themselves, but it's worth mentioning that our UNESCO survey in 2023 gave us insights on how AI is used in judicial contexts around the world, the perception of risks associated with the use of these tools in judicial contexts, the availability of institutional guidelines or training on AI, and perceptions on the need for regulations. 

    And it is worth saying that the great majority of the people who were interviewed said that it would be desirable to have mandatory regulations on how AI tools, particularly AI chatbots, are used in judicial settings.  And almost 92% of the people who answered the survey expressed the relevance of UNESCO publishing guidelines for using AI and Generative AI in the judicial sector. 

    Of course, that explains why UNESCO published this year a proposal of guidelines.  And the guidelines underwent public consultation between August and September 2024.  We received feedback from 92 individuals and organisations based in 41 countries, and also collective contributions sent by the judiciary and several events organised by UNESCO. 

    This allowed us to have a new version, which I'm sure UNESCO will publish very soon, of these guidelines, which are now a collective piece of work. 

    Now, I have a few minutes to tell you the four key questions that are answered by these guidelines.  Of course, I encourage you to download the guidelines that are currently available in English and Spanish and will be available in other languages. 

    But there are four key questions.  And the first question is about principles.  There are, of course, many AI principles expressed in different documents, including the recommendations from UNESCO. 

    But we wanted to materialise these general AI principles into the use of AI in the judicial context, particularly by courts and tribunals.  We set out a few principles (by a few, I mean more than ten) that could be considered to ensure that the adoption and use of AI systems in the administration of justice is not just ethical and responsible but also compliant with human rights. 

    So that's the first point. 

    The second question that is answered by the guidelines is how to select which AI systems could be adopted to perform certain tasks.  This is a crucial matter, because AI systems do not offer a one-size-fits-all solution for every task.  Different AI systems can carry out or perform different tasks.  And in this sense, that's the first issue that organisations or individuals have to ask themselves before using an AI tool: which AI tool should I use for the specific problem or task that I want to address?  And that, of course, is not just a matter of the functions of the AI tool but also a question of whether the tool will help me preserve independence and autonomy, which are key principles of judicial decision making, and whether it will allow me to preserve specific human rights, such as privacy, data protection, and due process. 

    The third question that the guidelines help to answer is how to be prepared for use of AI systems.  This is for both the organisations, let's say, the judiciary, particularly those organisations that govern the judiciary, and it's also aimed at individuals in particular. 

    To give you an example, organisations should help justices, magistrates, and in general, judicial staff to access training prior to the use of AI systems.  And, of course, the judiciary is well placed to carry out this task of providing access to training. 

    On the other hand, individuals should make their own effort of training themselves on    not just on the fundamentals of AI, but also in the specific use of AI tools. 

    Finally, of course, we want to avoid and manage risks that are associated with the use of AI, and in particular with Generative AI systems, which are widely available and which, of course, can help support different tasks that are part of judicial activity.  But certainly we want to avoid risks associated with the use of these systems, particularly when fundamental rights can be at stake. 

    With that, I think I've spent my five minutes.  I want to thank everyone.  I invite you to download the guidelines so you can explore the rest of their content on your own.  Thank you. 

>> CEDRIC WACHHOLZ:  Thank you, Juan David, for this introduction.  And we will now address

    (Moderator's mic is too low).

>> CEDRIC WACHHOLZ:  We have today.  So on my left, we have The Honourable Judge, who has been a judge in Tanzania since 2008.  He teaches cybersecurity law and intellectual property.  He's also found    thank you so much.  (Off Microphone) Project Manager at the centre.  She works on communication governance.  There are some issues around data privacy and (Too low to hear)

    Online we have Ms. Linda Bonyo.  Linda Bonyo is the founder of the Lawyers Hub, (Too low to hear) digital law and digital governance, working on the regulation of (?) in Africa.  Miriam Stankovich is in the US.  (Too low to hear)

    Capacity building of    and protection of human rights in emerging technologies across the globe. 

    We have the privilege of having the Honourable Judge of Tanzania.  And we would like to ask you: could you share with us where AI is being used?

>> ELIAMANI LALTAIKA:  Well, thank you very much.  Thank you for arranging this.  AI is a much-needed system.  Let me outline the settings (?) in which AI is currently being used in my country and others    it allows the general use of AI. 

    First, we are seeing AI being used in scheduling.  It is very difficult as a judge to know which case is coming tomorrow and what you are preparing for.  Since we have an electronic case management system in my country, life has become a lot easier.  I can talk about how, as justices, we were struggling two years ago.  Secondly, AI-powered tools are used to analyse texts.  In my jurisdiction, each high court judge and court of appeal judge is assigned a qualified legal research assistant.  These are lawyers who have completed their bachelor's or master's degrees.  Some of them are using AI.  And we are seeing that there is improvement.  I'll talk about some challenges later. 

    AI is also used in the judiciary for areas of relevance.  Sometimes it's easy to just get another perspective from (?) more importantly, AI is used with transmission.  I come from a very responsive country.  Our chief justice, Professor Ibrahim, is probably one of the most supportive when it comes to ICT.  He has pioneered the use of ICT around the judiciary of Tanzania to take us to a level that has no match in the (Off Microphone).  While some countries are shying away from ICT or Generative AI, our chief justice has encouraged us to explore these tools and make sure that we do not depart from the ethical values of a judge.  I know there are other jurisdictions where a judge was actually    the committee.

(Overlapping Speakers)

    Through ChatGPT.  That scared away many users.  Now everyone is using (?).  AI is also being used for predictive analytics.  I cannot say that it is in large-scale use.  But every judge knows that at some point they will gather some information from AI tools. 

    The other things we've seen so far include efficiency.  In my country we have Swahili, which is the national language.  We have 120 ethnic groups, each of them with their own language, and the language of the court is English.  So I have to put everything in English, but I have to address everyone in Swahili.  We have an AI translation system that is making life way easier, because you can point it at anyone speaking and have the entire testimony transcribed. 

    There are quite a few challenges that we are seeing.  At this point I want to really, really thank UNESCO.  Because, as I said earlier, I participated in an online training for judges, a few sessions about AI.  UNESCO came to me at a time when there were so many approaches to AI.  Some actually (Audio breaking up) say everyone is looking at you and you're delegating your powers as a judge to a machine.  People were pressing me to (Too low to hear) UNESCO came to me and said, yes, like any other, like deadlines.  This is what you    I think if we all cooperate with UNESCO, we may have some ways to customise these guidelines. 

    Because I know every jurisdiction is saying okay.  We need chief justices, and UNESCO is giving us a platform to start from.  Thank you very much. 

>> CEDRIC WACHHOLZ:  Thank you so much, Honourable Judge, for your kind words, but also for giving examples ranging from organisational aspects to research assistants, evidence analysis, and translation (Too low to hear) also, the need for a balanced approach to innovation.  That was really useful for all of us. 

    We'll go to (?).  She has worked with us on AI regulations and innovation.  Would you like to add more about some innovative usages of AI? 

>> MIRIAM STANKOVICH:  Thanks. Thanks, Cedric.  As the Honorable Judge already outlined and gave an overview of the use of AI in the judiciary, I will focus now for the sake of time on some compelling examples of the use of AI in the justice system. 

    So The Honourable Judge mentioned that AI systems are used for big data analytics where they analyse historical case data to forecast outcomes and estimate the time needed to resolve cases. 

    Now, this technology has proven to be particularly useful in countries with large case backlogs, such as India and other countries, for that matter, where AI tools have been deployed to prioritize cases, which ensures that urgent disputes like environmental or human rights violations are fast-tracked.  AI is also used in e-discovery and document review.  Again, here big data analytics handles massive volumes of evidence.  This has been used by stakeholders in the justice system and the court system, but also by lawyers and legal firms.  They use AI systems like Luminance, which are now commonplace and which enable attorneys to focus on strategic tasks while AI handles the tedious work of searching for relevant precedents and evidence, which is quite helpful. 

    AI tools are also used in online dispute resolution, integrated into these platforms.  For example, we have Canada's Civil Resolution Tribunal, which was a pioneer in this space, resolving disputes related to small claims, housing, and strata governance. 

    Another example is the European Union's platform, which facilitates cross-border disputes with the help of AI technologies.  AI, as already mentioned by the Honourable Judge, also helps with legal drafting and research; tools like CoCounsel and (?) can analyse statutes and procedural rules.  For example, in Singapore, AI-driven drafting tools assist courts in generating judgments, ensuring uniformity and reducing workload.  Similarly, in South Korea, AI systems support judges by summarizing evidence and identifying relevant legal principles. 

    The Honourable Judge also mentioned language translation and accessibility.  We have all heard that India's Supreme Court has deployed AI-powered translation tools to provide judgments in multiple regional languages.  Also, the European Court of Human Rights uses AI to translate case law and decisions, and this fosters greater understanding across Member States. 

    Needless to say, in order to accomplish all these functions, court systems and judicial systems need good-quality data infrastructure.  So that's number one.  Number two, as already mentioned by the Honourable Judge, justice system stakeholders need to be aware of the challenges that come with AI deployment, especially if they start using unsupervised machine learning algorithms. 

    And I think the key here is to always have a human in the loop.  AI tools are here to support judges and justice system stakeholders, not to completely take over the tasks and the work being done by them.  Over.  Thank you. 

>> CEDRIC WACHHOLZ:  (Off Microphone) (Too low to hear) online    many others also, some    now, we would like to go to the    some of us have    there will be an opportunity for participants to raise some questions.  And to ask a general question about some of the (Too low to hear) I would like to ask about the ethical and social uses of AI development based in   

>>  Thank you so much.  On behalf of the development work.  I think with AI systems, privacy is always something that's talked about.  When you're using AI (?) measures that require a higher threshold of rules, of maybe checks and balances.  There are many innovations that    especially when you look at the use of AI in judicial decision making, it takes a whole (?) in the context of South Asia specifically, it's very important that (Too low to hear) so law and the judiciary    a large part in the powers and discrimination against (?) there's a long history of courts taking in making (Too low to hear) we start with AI in the judicial system by acknowledging that    towards exacerbating bias there is a possibility that the trajectory of    can be an aid in social change.  So that's something that needs to be (Too low to hear) the society of    as something    when it comes to deployment of AI and AI systems which are, particularly, designed and (?) so they're not suitable for a context which has so many languages and cultures.  (Too low to hear) in these circumstances, I think, have a lot of impact    in the guidelines there is a good section on impact assessment, which I would strongly recommend, and which actually becomes    in South Asia, because you probably    it's important which one we want to deploy and what impact it will have. 

>> CEDRIC WACHHOLZ:  This is very enlightening.  Before we go on, I would just like to ask: we are already in the second section.  The first one particularly highlighted the benefits.  With the right    could you share some of the challenges stemming from your practice and experience? 

>> ELIAMANI LALTAIKA:  On the challenges: the right of AI    this will continue until some    for some institutions where they look at this more carefully.  But whether the tools that we use are easily accessible to the majority is another matter.  Because if we say that AI will make it (?), in a country where people struggle to access electricity and connectivity is still low, we still think that's a big challenge. 

    There is also the issue of trust.  In my capacity also as a visiting lecturer at a university, AI assisting the judge in (?) okay.  I think we would stand a better chance if these decisions were automatic, because no one will look at    I'm very sure that if I lost, I lost because I wasn't right. 

    If the machine was (Too low to hear) no.  I think I need the professor to mark it personally, because if I told them I was sick, they would probably take that into consideration.  So you need the human part.  The human part of addressing societal issues, as my colleague has said, is still needed. 

    The last challenge I want to talk about, brought about by the use of    is actually the opportunity to abuse the algorithms and perpetuate discrimination and injustice. 

    So we are hoping that, as we have a model from the host country and its leadership, equity and justice will continuously be monitored, and that the use of personal data is not done away with because of synthetic data.  You cannot have synthetic data or other scientific ways of (?) data and still use information that identifies a known human being.  And if I use that system to come up with a decision, it will always favour someone or create divisions (Too low to hear) so I think if we get this right, all of us will probably embrace AI, knowing that it will not lead us to injustice. 

    We need to make sure we protect everyone and respect social standards. 

>> CEDRIC WACHHOLZ:  Thank you so much, Honourable Judge, for the opportunities and how AI systems might correct    but also the risks of the reproduction of injustice and bias, or the risk of bias and thereby injustice.  (Too low to hear)

    Now, I would like to ask Linda Bonyo, please, to what extent governance mechanism play a role in addressing some of the risks? 

>> LINDA BONYO:  Thank you, Cedric.  I'd like to maybe just highlight a few of the comments that have been made before, to help figure out what the governance mechanisms are.  I think the world is grappling with    this year there were a lot of elections happening across the world; more than half the world went into elections.  We've seen the power of fake news and social media.  We've seen how AI has been used to create the right-and-left divide.  We have social media platforms that are essentially echo chambers, where people speak to the people they know and agree with, which is not the essence of a digital town hall.  A digital town hall should be where we express ourselves and views are heard. 

    So I think with AI, the challenge has been exacerbated.  Because suddenly the power that was held by, you know, countries and nations now lies in the hands of tech companies.  And the median age in Silicon Valley is 23; it gets younger and younger.  Now you have people who are calling the shots on elections, calling the shots    if you followed the news this week, we've had people in New Jersey and I think even California wondering what the drones at night are.  There are drones flying over their houses.  They're wondering: is this another kind of war?  We've seen how AI has been deployed.  That puts us all at risk.  It widens the gap between countries that can afford this particular machinery and countries that can't. 

    A few years ago, I think five years ago, we had a hackathon.  It was on killer robots and how we could figure out, within Africa, how to address killer robots and drones.  And it appeared to be a very new concept at that time.  I think we are being called at this point to consider safety and to consider whose hands these tools are in. 

    I would like to highlight the (?) case.  It's a US case.  They discovered that the COMPAS algorithm was discriminating against Black people.  It indicated that Black people had the propensity to be repeat offenders, which in reality they didn't.  So when we look at laws and legislation and figure out how we govern AI, I think the first thing is the admission of the global nature of artificial intelligence.  We can't do this alone.  We need to work globally    we've had interventions from the United Nations.  I think UNESCO's work on the ethics of artificial intelligence and its annual convening is a great point to bring people together to dialogue.  I think the Global Digital Compact at the UN General Assembly also indicates that there must be global dialogue on artificial intelligence, which is important in this case. 

    But after the dialogue, we need to admit there's a policy divide.  The judge alluded to this.  Not every country can afford to pass its own laws.  Especially with the Global North and South divide, rich nations are funding poor nations to come up with legislation.  But then it brings in an imbalance of power, with impacts on trade, on everyday living, and even on freedom. 

    I think with artificial intelligence, we must take a multistakeholder approach to coming up with legislation, making sure that people are not at the mercy of either big tech companies that fund these legislations or richer nations that come with    we've seen this with the common credit system and the conversations that are happening    there really is a divide, where Global South countries need to pass their own legislation.  I think we need to admit that and see what to do about it. 

    We need AI policies that are robust.  We're very different; I think this was alluded to in terms of culture.  Cultures are different.  Given the challenges that Africa is facing and its point of growth, when you look at Africa and India, Africa is closer to India than it is to Europe, even in terms of policy and challenges.  So policy around artificial intelligence could maybe look like that.  But we have copy-pasted legislation that is not fit for purpose and doesn't really help at all. 

    I also want to applaud    there needs to be a global AI fund, as I think is articulated in the GDC.  That should ideally go into making sure these policies    we need people funding policy in a multisectoral way.  We need algorithmic accountability.  If judges are using AI, we need to know.  These systems need to be open, and there are challenges where people are claiming proprietary rights over them.  Anything for the public good, including elections, needs to be open enough for us to query and see and be subject to this process and being subjected to    we've seen, with airports and facial recognition, that there need to be ways that people can opt out, so we don't further the divide between the rich and the poor.  Thank you so much. 

>> CEDRIC WACHHOLZ:  (Off Microphone) We spoke about governance and privacy.  We had a lot of questions; perhaps the public would like to ask the panelists any questions before we go to (Off Microphone).

>> AUDIENCE:  (Too low to hear) so I had an opportunity to (?) with judges, in particular    I have a question.  (Difficult audio)

    The one is suggest    so on the process that would be some efficacy.  Based on the document like the    it might not be supporting    they're asking about your experience: how it is being implemented, how to implement using AI, and how much    how much    thank you. 

>> CEDRIC WACHHOLZ:  Thank you for this question.  We have a second question.  (Too low to hear).

>> AUDIENCE:  Thank you.  Thank you so much.  This was very interesting.  I'm from Iraq, a senior advisor from (?).  I have two questions, actually.  One is challenging    places like valley, or also cases where there is a lot of confidential information that should not be shared through those chatbots, for example.  Sometimes they show that this model or this text is being used or shared with other parties.  How safe would it be, or what are the safety measures taken by those companies?  The second question is about reliability.  Because maybe in some contexts or jurisdictions there is a lot of information about laws, but when it comes to things like (?) in Iraq, for example, there's not much about that.  It's not reliable.  I just wanted to ask about that.  Thank you so much. 

>> CEDRIC WACHHOLZ:  (Audio breaking up) so, Eliamani, can you set us up. 

>> ELIAMANI LALTAIKA:  I will start with the counsel from Nepal.  First, we have the perspective of the (?) we see a separation between the drafting    we want it to be considered (?) if you say justice starts with drafting.  A lot of the    you start with scheduling, then you call, then you hear.  So everything points at more than one use of AI in the administration of justice.  It's only a technical person that can sit    and later say this was about drafting; that drafting was because the judge (?) the document, analysed the evidence.  Now, on accuracy, I want to be up front with you that things are changing very fast. 

    Between 2022 and 2024, things have changed dramatically.  All the stories people have about inaccuracy or something else are not true anymore.  ChatGPT, for example, was super, super inaccurate.  But two years later, you would be surprised how accurate the information has become.  However, judges, magistrates (Audio breaking up) because it's not tested.  Personally, I know of customised AI tools offered through licenses.  For example, with LexisNexis, my commercial law    they give you a platform where you can get reliable information.  This is very expensive, and only judges who are techies or in the Global North have this information.  If you write the ruling, you probably got    you wouldn't be able to see there is    but greener pastures, my brother.  AI is especially challenging for (Off Microphone) protection.  It's highly    (Too low to hear) online on the platform.  Someone may end up getting all that information by simply searching. 

    The algorithms that are used    the goal is to get better.  (Too low to hear) because information that gets    means a lot.  You can always be safe by asking questions.  That precaution is especially relevant with AI.  Thank you. 

>> CEDRIC WACHHOLZ:  Thank you, Honourable Judge.  The practice and how our response section time but also (Off Microphone) anyone online who would like to add anything?  Linda? 

>> LINDA BONYO:  I think the question on data was very important.  We must admit that data is fragmented, especially data on policy, so many policymakers are groping in the dark.  There are good initiatives; I would like to highlight AI policy.Africa, which puts together national strategies that countries can look at.  That's work that the Lawyers Hub is proud of, and we want to see how it's being used.  I think that admission of fragmentation helps us look for a common pool where we can put all these things.  UNDP has a great policy intervention; I can't remember the name of the policy accelerator, but it was launched last year, and it is a useful tool for policymakers.  I also want to say, on the question of algorithmic accountability, that we're seeing more and more safety institutes coming up, including in Global South countries like Kenya. 

    What that means, then, is that the work of really querying these models before deployment should sit within the AI safety institutes.  Lastly, I think there need to be sector-specific policies.  And that's how    I think it's a low-hanging fruit that judiciaries can pursue across the continent and across the world: to look at the specific issues in the judiciary that are being perpetuated by artificial intelligence, and how we as a sector get that done.  It takes time, it's expensive, and sometimes it's not very specific to your sector.  I think that will propel us forward within the judicial sector.  Thank you. 

>> CEDRIC WACHHOLZ:  Thank you so much, Linda.  (Audio breaking up) our service    the art of (Too low to hear) questions important.  I would like to continue on. 

    We have a second round of questions, if you don't mind. 

>> AUDIENCE:  Thank you so much.  I have something in mind about what we call privacy, which is (Too low to hear) what is behind privacy?  If people own the data, waiting to (Too low to hear) what if someone outside is going to take it, in the name of privacy    that this application is not going    because of privacy.  As individuals, it is our ultimate decision.  I'm asking this because I just encountered a serious (Too low to hear) how to dem graph a solution in    there are

    (Background talking)

    Who lost the benefit in the name of what we call privacy (Too low to hear). 

>> CEDRIC WACHHOLZ:  Thank you so much.  We have a very short response.  It is not always clear.

(Overlapping Speakers)

>> CEDRIC WACHHOLZ:  (Too low to hear) is there anyone who wants to respond.

>> (Too low to hear)

    (Background talking).

>>  Which means we have knowledge that    consent from each person sharing.  I think there are some problems.  And then the obvious workaround is around the input: that you have a framework for    you can preserve privacy while using public data. 

    (Background talking).

>> CEDRIC WACHHOLZ:  (Too low to hear) it's a constitution.

>> ELIAMANI LALTAIKA:  We struggle with this a lot of the time.  Your privacy. 

    (Background talking)

    (Captioning not possible with background talking and low microphones).

>> CEDRIC WACHHOLZ:  I know we have discussed many topics.  I think there is an opportunity to be able to ask some questions.  Now, I would like to do a very fast round, final round.  I know Juan, you wanted to add something.  Any comments you may have as closing remarks?  So you all have the floor for one or two final phrases.  Juan, I think we can start with you. 

>> JUAN DAVID GUTIÉRREZ RODRIGUEZ:  Thank you.  Thank you very much.  I wanted to point out the importance of developing tools that are specifically meant for the judiciary, rather than using AI tools that are generally offered to the wide public.  It's very important that the bodies that govern the judiciary are able to develop their own AI tools, which are aligned with the objectives of the judiciary. 

    In particular, I want to point out that although we've focused a lot on chatbots, as was mentioned earlier and also by Miriam, there are so many tools that are useful in the judiciary and are low-hanging fruits: tools that help to translate, tools that help to transform audio into text.  These tools should be widely available for judges and magistrates.  So that's my invitation to the organisations that govern the judiciary.  Thank you very much for your time and for allowing me to participate. 

>> CEDRIC WACHHOLZ:  Anybody else, please keep it to one or two phrases.  I want everyone to have a last opportunity.  Miriam, over to you. 

>> MIRIAM STANKOVICH:  Thank you.  I will be short.  I just want to add a note of caution: there was a discussion about the innovativeness and the opportunities that come with the use of AI tools in the justice system, but I would also like to underline that there are a lot of challenges.  As Juan mentioned, these tools should be adjusted to the functioning and operations of the judicial system.  I would also like to highlight the tension between open court data and data privacy, which is a very complex question.  But as already mentioned, data published in the public interest should not always override privacy protection and data protection.  There should be a delicate balance, because there should be    we should also adhere to governance principles. 

    We should not forget that there are different AI systems.  And when implementing and developing AI systems for the judiciary, we should always bear in mind that different types of AI come with different types of risks, and not forget to always have a human in the loop.  Over.  Thank you so much for the invitation.  It was a pleasure participating in this session. 

>> CEDRIC WACHHOLZ:  Thank you.  We all have so much to share.  One essential message from you, Linda. 

>> LINDA BONYO:  I think at this point in time, especially for the Global South, we need more incentives, policy incentives.  How do we access data?  How do we access (?)?  How do we access talent as well?  I think those three are key.  I must underscore that judiciaries, especially across Africa, are grappling with    we started with big tech companies that hold this data; they don't know how to switch from this to another.  We need to see ways in which the judiciary can actually be stable, access its own data, and be able to exercise agency over its own data.

>> (Too low to hear) contextualise a solution. 

>> CEDRIC WACHHOLZ:  Thank you for putting it in a different way. 

>> ELIAMANI LALTAIKA:  I would say: handle with care, not with fear.  This comes from AI attempts to help (?) introduction to the special issue on the governance of artificial intelligence, which calls on policymakers to read    this was an issue that was put together by    this puts in jeopardy and puts us in the issue related to governing AI   

>> CEDRIC WACHHOLZ:  Thank you.  I thank all the panelists, but also (?) and Chen, who have done phenomenal work.  Thank you to our participants (Too low to hear) thank you, all.  Please follow us online.  (Too low to hear) if you want to join.  I know    thank you so much. 

>> JUAN DAVID GUTIÉRREZ RODRIGUEZ:  Thank you.  Thank you very much.