The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
>> CORNELIA KUTTERER: Yes, now we can hear. There was just a bit of noise.
>> MODERATOR: Wonderful. Okay.
>> Yeah. Go and switch it on.
>> FRANCIS ACQUAH AMANING: Are you hearing me?
>> Yes, we hear you. The name shown is not my name. There's a problem with my Zoom. That's the reason why we had to enter with the account of one of my colleagues, okay?
>> MODERATOR: Yeah, we understand. And if you want to rename your account, you have to do it yourself. It can't be done from here.
>> I know, but I couldn't manage. We tried here, but we couldn't.
>> MODERATOR: No. But we know you are Teresa Ribeiro.
>> Yes, I am, even if the name is not the appropriate one.
>> MODERATOR: We manage. We manage.
>> Thank you.
>> MODERATOR: Thank you and good evening, everybody, and welcome to this town hall meeting with the title "Regulating Algorithms: what if, if not, and if so, how?" So, a little bit of a riddle.
But we are here to unfold a little bit the different human rights impacts of algorithms, for instance on freedom of expression, freedom of assembly, nondiscrimination, self-determination, privacy and the right to information.
We also want to discuss the responsibility of human rights protection from different stakeholder perspectives and different forms of policymaking and regulation highlighting advantages and disadvantages of each approach.
And we want to start elaborating recommendations for how the topic of algorithms' impact on human rights should be included in the Global Digital Compact, which is one of the objectives of the Internet Governance Forum here in Addis.
I would like to introduce today's speakers. We have with us online Teresa Ribeiro, OSCE Representative on Freedom of the Media. She represents an Intergovernmental Organization.
We have with us Cornelia Kutterer, from Microsoft Europe. We have with us Osei Baah, the International Cooperation Officer for the Cybersecurity Authority in Ghana.
And we have with me here in Addis, Shabnam Mojtahedi, Senior Advisor at the International Center for Not-for-Profit Law. I see her now. Welcome, everybody, also to our small audience in the room.
We have said that we will have two rounds of questions to the panelists. If there is time, and I hope there will be time, we can also take some questions from our audience.
Let's start with you, Shabnam. I would like to ask you, what risks do you see for human rights coming with the use of algorithms, and how do you assess public awareness of these human rights risks? I know that you have a lot to say, but if possible, stay within six to seven minutes. Thank you.
>> SHABNAM MOJTAHEDI: Sure. Thank you so much. And thank you, everyone, for joining us at this late session when others might be going to enjoy the festivities at the park.
Since we are a small group, I was wondering if you could raise your hand. Are you familiar with the topic of algorithms, AI systems? Is this a familiar topic for you already? Raise your hand. Okay.
How about the intersection with human rights?
And how about AI regulation, the issue of AI regulation. Now we have a sense of who people are and hopefully we can have an interesting conversation in the Q&A.
As was said in the introduction, my name is Shabnam Mojtahedi. I'm a legal advisor for digital rights at ICNL. At ICNL, we support an enabling legal environment for civic space and civil society around the world, and on the digital rights team we focus specifically on online civic space and the enabling environment, on how tech impacts civic space and civil society at large.
So, the question is really why should civil society care about this topic? Why is it on ICNL's agenda to begin with? And I will give three examples to highlight why this topic is important.
First, AI systems are used for surveillance, including surveillance of civil society organizations. So, understanding how algorithms play a role in that, and the governance of those algorithms, becomes quite relevant to civil society when they are the targets of surveillance systems.
AI systems also are used to suppress or manipulate online content by platforms. So, for civil society working on freedom of expression issues and fundamental human rights in civic space, again, this becomes quite relevant to their work.
And then (muffled audio) AI systems are also used for justice processes or other public services, when governments are procuring these systems and using them for governance. It's quite important for civil society to understand, have a voice and participate, to have transparency in place to understand these systems so that they can play their watchdog role.
So, what are the main risks? I categorize the risks twofold. First are the risks posed by the development of AI, the upstream risks. And then the risks of the adoption and use of AI, the downstream risks.
They are interrelated, but I think it's helpful to separate them out to understand the specific issues at play.
So, what are these risks, as was asked? Privacy rights specifically. When it comes to the development of algorithms, data, sometimes personal and sensitive data, might be used to develop these algorithms, and sometimes without the consent of the parties whose data are being used.
So, giving users more control and more privacy rights protections becomes quite relevant on the development side.
On the use side, again, like I said, privacy rights are violated when these systems are used to surveil and monitor outside the legal protections of due process and human rights law.
So, those are two risks on either side of the development and the use.
Another area of risk, when it comes to algorithms, is bias, and specifically the right to be free from discrimination. On that end, there is a risk in how the systems are developed, using data that might not be representative of the communities that will be impacted by the algorithm or the AI system.
And also on the use side, it might be used to perpetuate existing inequalities: if the data that was used reflects historical inequalities in a country or in society as a whole, and the system is then used based on that data, then it will perpetuate those historical inequalities.
So, again, interrelated, but different impacts based on the development and then the downstream use of AI.
And third, access to justice and redress. There is a lack of transparency into how these algorithms have been developed, and as you get more complicated AI systems using deep learning, there's the question of the black box, the explainability of the system itself. If the systems are then used and cause harm to an individual, how will individuals seek redress if they do not understand how the system works? So including more transparency, both in the development and the use of AI systems, becomes quite relevant.
Really, I hope that frames this discussion a little bit, because the question for regulation, if so and how, is about how to mitigate or prevent these risks, and at what stage the regulatory interventions are going to be put in place.
For example, should they be put in place for the downstream risks or the upstream risks or both? So that's the question really for regulators, and again, ICNL believes strongly that civil society should have a strong voice as regulation is developed, because their organizations and their constituents will be impacted by these systems and how they are regulated. Thank you.
>> MODERATOR: Thank you, Shabnam, for the very useful analysis of both the risks and, let's say, the entry points for policymakers.
With that, I would like to turn to you, Teresa. You speak for those who want to protect human rights. What can and should policymakers do to achieve this goal in the face of rapidly changing technology? That is an issue we have to tackle, since policymakers tend to always be a little bit behind.
And in that regard, can you please present some of the results of the policy manual that the OSCE Representative on Freedom of the Media has recently issued, and describe its intended use. Thank you.
>> TERESA RIBEIRO: Thank you very much and thank you very much for having me today. It's a pity that I cannot join you in presence, but in any case, it's really a good opportunity to present the work we have been doing regarding the impact of AI on media freedom.
And I would like to start by saying that new technologies bring about a transformative moment in time, with many benefits, including for the free flow of information. And I think it's very important to underline the benefits and not to demonize artificial intelligence. It's much more about the way we will be able to regulate it, and the way we will be able to use it.
But at the same time, we know that there are serious human rights concerns, such as surveillance, cybercrime or the spread of disinformation. That, of course, has an important impact on the way we seek, receive and impart information.
This also drastically changes the media as we know it. So, it's very, very important not only to address the societal harms of rapidly changing technologies, but also to consider ways to harness them for fulfilling the media's democratic role, going back to taking advantage of the benefits of AI.
First and foremost, states have the positive obligation to guarantee the exercise of fundamental rights like freedom of expression. Political will on the part of state authorities is, of course, a precondition for meaningful engagement, to support and strengthen national and international safeguards for freedom of expression and media freedom, also with respect to regulating the use of new technologies like AI and their use to shape information.
The sad reality, however, is that the political will to protect the media is being eroded amid hostility against the media in many parts of the world, something we are, unfortunately, all quite aware of.
In times like this, it's not only expedient, but it's really necessary to discuss and to strategize around a large number of emerging challenges brought by new technologies, which, of course, are growing by scale and complexity.
And this brings me to the OSCE policy manual on AI and freedom of expression. First of all, it is the culmination of over two years of research and the contributions of more than 120 of the most renowned experts, scholars and practitioners working in the field of media freedom, human rights, technology, but also security.
As you know, the OSCE is the largest regional security organization. The policy manual and our experts have focused on four key areas where the use of AI can have a negative impact on media freedom and also on security.
First is the use of AI in content moderation, with a particular focus on dealing with illegal content and security threats online.
The second area of focus is on moderating content that may be legal yet harmful, such as hate speech.
The third area of focus is on the use of AI in content curation and how this impacts media.
And fourth and the last one is its nexus to targeted advertising and surveillance capitalism.
So, all the recommendations provided in the manual are aimed at states, who should put human rights at the core of all regulatory frameworks. And I think this is the most important. We call for such initiatives to be evidence based and built on inclusive processes. The key recommendations provided by the manual compare and spell out in more detail the principles of transparency, accountability and public oversight. Without those, we cannot have freedom of expression in the digital age.
And I would say that in a nutshell, this is what the policy -- our policy manual is about and this is, I think, a great contribution to the discussions around AI. Thank you very much.
>> MODERATOR: Thank you. Now to you, Cornelia. Microsoft is very active in the IGF and other multistakeholder fora to present its views. But I would like to ask you, what is Microsoft's view on the risk of algorithms affecting human rights online, which role do human rights assessments of algorithms play, and how does Microsoft approach this in its own operations? Thank you.
>> CORNELIA KUTTERER: Thank you very much. Thanks a lot for the invitation. I am also equally very sorry not to be there in person.
I want to, of course, maybe say one word about the abilities of algorithms, because we usually look -- and this is important because it's not -- you always have to look at this very holistically. We are looking at the algorithms and how they perpetuate biases and by doing so, are increased risks for fundamental rights. But what we also see is that by using data analytics and algorithm, we can uncover biases that exist.
So, to give you an example: at Microsoft, we have a researcher who is looking at clinical datasets and who, by using those datasets and building models, has uncovered a number of different biases that exist, where we can act upon them, improve healthcare for more people and be more inclusive. I think it's important to always look at all perspectives on these issues.
Now, I think it is very clear that where there is no governance and no processes in place ensuring that the (?) and AI systems that are deployed, where they have an impact on important life decisions of people, on safety, on physical or psychological harm or on specific human rights, there will be a broad risk of infringing human rights. So, I think that is what regulators have been trying to approach through a number of legislative proposals.
In Europe, where my focus lies, that is of course the AI Act and then also the convention. I think the Council of Europe is an interesting organization as well, in that it is approaching fundamental rights much more specifically, eventually, in trying to find ways of minimizing the risk of infringements of human rights.
What is important is that there has been an understanding of the processes that are really important, that there are no one-size-fits-all solutions. There is no such thing as error-free data. In data itself, in the collection, in the creation of the data, there are risks in how fundamental rights can potentially be affected.
We see this, for example, in something that is discussed outside of this specific AI regulation currently in the making, also in other laws, like the Digital Services Act, which is more specifically looking at content, how (?) recommended to users, the potential risks associated with this, and the lack of awareness.
And I think that the list can go on and on.
So, the private sector is increasingly aware of it, either because they have understood that, as a business goal, being inclusive and ethical is the right solution because you are reaching more people, and they understand the business value in doing it right, or because they are starting to see that if they don't do this, then regulation is also coming. So, there are a couple of different motivations, eventually, for doing the right thing.
At Microsoft, we started very early on to develop ethical principles, very similar to many other organizations, enterprises across the world and civil society, and we have since then, basically, operationalized those ethical principles. And we had a number of learnings from doing this, one of which is that you need to spell out the specific objectives that you have, the certain requirements that you are putting forward in that process.
So, when you think through these principles that are set out, like transparency or accountability, underneath, you are demonstrating, you are telling the engineering groups and sales organizations what your objectives are.
To give you one example: the standard that we have now developed precisely spells out, for each of these principles, the objective that we are aiming to achieve, and then the specific requirements underneath that the specific groups have to follow through on. Then you are starting to build a governance model and a process that allow you to integrate these checks and balances into the system, into the lifecycle of an AI system.
One or two examples on accountability: underneath this objective, we need to be clear and we need to have data governance in place. We need to be transparent to our customers and provide them with the necessary information about the abilities of a system, but also about the shortcomings of a system, and where it can be used and where it should not be used.
This is important, I think, in particular when you think about fairness or bias. For fairness, the check that we are trying to achieve is to have a similar or equal quality of service for different demographic groups (muffled audio) it becomes very clear that for specific use cases you need to make sure --
(Speaker's screen froze)
>> MODERATOR: Do you hear us? I think we lost you?
While she is, perhaps, reconnecting, I will go forward to Osei. You are not only the International Cooperation Officer of the CSA Ghana, but you are also chairing the Digital Equality Working Group within the Freedom Online Coalition, so you are looking at this issue from both, you know, a national legislative perspective, but also, let's say, from the international coalition perspective.
Sorry, Cornelia. We just went on. But I will come back to you in a second.
>> CORNELIA KUTTERER: Sure.
>> MODERATOR: You can finish your statement.
So, what do you think: how could regulation help address the risks of algorithms, and what should be the elements and tools of regulation? And, as a second aspect, which role could the Freedom Online Coalition play in contributing to this discussion about the human rights impacts of algorithms?
>> OSEI BAAH: Thank you very much. Good morning, good afternoon, good evening, everyone. My name is Osei Baah from the Cybersecurity Authority of Ghana. It is the body under the Ministry of Communications and Digitalisation that is charged with (?) regulatory activities in the country. Thanks to the Government of Germany for inviting us to participate in this panel.
So, regulations can help address risks such as the data protection issues associated with algorithms that were mentioned earlier. Regulations made to ensure transparency of algorithms can check the type of data collected and go a long way in ensuring that the data is used for a particular or agreed-on purpose, an agreement between an end user and a client. The client could be the client company or the data collector.
Algorithms can be regulated to ensure that the information they collect is used just for what the user has agreed to, and for nothing else.
The information should also be secured from hackers and third parties that are always looking for people's information, personal details such as their bank and Social Security information and home addresses. And sometimes this information is sold on the dark net.
Also, regulations can help ensure quality of service or results. The data, and training is another key element for algorithms, so the data and (?) used in developing algorithms that are meant to provide information, such as search engines, or provide solutions, can be regulated to ensure that they are acquired from domain experts.
What do I mean here? For example, we can expect that algorithmic code related to health includes expert input from medical doctors or scholars in the field, just so that we don't get any result or any service that could be harmful or provide false information or results.
Regulations can also help to ensure a wide range of benefits. When algorithms are regulated to ensure inclusiveness, especially from the development stage, it ensures that they improve the quality of life of the global citizenry. It ensures that everyone, regardless of their race or their gender, enjoys the benefits that algorithm-related technologies have to offer. It also ensures that the same technology that is meant to benefit people is not biased or harmful against them.
So, talking about the elements of regulation needed for rights to be respected: there's a book titled "The Internet Value Chain and the Digital Economy," by Professor H. Sameh in Rwanda. In this book he notes that in 2018 there were 2,000 people who were wrongly matched as possible criminals in the city of Cardiff, in Wales. As a result, it incensed civil liberty groups, who cited the lack of regulation and human rights concerns. And there's a lawyer from the civil liberties group, Megan Gordon, who said this is just like taking people's DNA or fingerprints without their knowledge or their consent.
However, unlike DNA or fingerprints, there is no specific regulation governing how the police should use facial recognition or manage this kind of (?)
So, also, we should expect that regulations serve the best interest of all citizens, the global citizenry, and discriminate against none. The student Joy Buolamwini said that artificial intelligence has a problem with gender and racial bias. And she said this after finding out that some facial analysis software could not detect her face until she put on a white mask, because it seemed that the system was trained or developed for predominantly light-skinned people.
After (?) she found out that the error rates for lighter-skinned men were no more than 1%, while for darker-skinned women the errors soared up to 35%. It failed to identify even (?) Michelle Obama and Serena Williams. When technology cannot even identify the faces of these iconic women, then it's time for us to re-examine how these systems are built and (?)
On how the Freedom Online Coalition contributes to this discussion: some background on the FOC for people who are not familiar with it. The Freedom Online Coalition is a group of governments committed to working together to support internet freedom and protect fundamental human rights online, so we are talking about freedom of expression, (?), assembly and privacy online.
And the coalition establishes subsidiary entities in the form of task forces or working groups. Currently we have three such entities: the Task Force on Digital Equality, the Task Force on Artificial Intelligence and Human Rights, which is the task force most relevant to this panel discussion, and one working group, called the Silicon Valley Working Group.
The Task Force on Artificial Intelligence and Human Rights aims to promote human rights-respecting AI technologies through sharing and disseminating information and collaborating on joint initiatives. So, the task force works to advance the application of the international human rights framework to the global governance of AI, through engaging with ongoing international policy discussions and coordinating advocacy across different fora.
There's also the Silicon Valley Working Group, which has the aim of building new forms of cooperation between the FOC and the global technology sector, whose products or services potentially impact human rights.
These companies are headquartered in Silicon Valley in the United States of America. And by providing an avenue for continuous private sector engagement with governments, the Working Group strengthens opportunities for collaboration on internet freedom and tangible outcomes.
The FOC is playing a significant role around this topic of the human rights impacts of algorithms, mainly through the Task Force on Artificial Intelligence and Human Rights and the Silicon Valley Working Group. But nevertheless, I believe it can do more to update the objectives and aims of those two entities to focus more directly on algorithms. Or, let's say, the FOC can create a whole new subsidiary entity to focus on the subject matter, with objectives similar to those of the Silicon Valley Working Group, but which focuses directly on the human rights impacts of algorithms. Thank you.
>> MODERATOR: Thank you, Osei. Now, Shabnam, we have heard that the private sector, national regulators and international organizations are very determined to tackle these heightened risks and solve the problem.
So, why is it important for civil society to be engaged in these issues and what are the challenges and opportunities for civil society engagement?
>> SHABNAM MOJTAHEDI: Thank you for that question and everyone's comments. It was quite interesting to hear some of those specific examples of how the risks play out. Osei gave some really relevant examples of bias.
So, in terms of civil society, I think that civil society in many parts of the world is not completely aware of artificial intelligence. So, there needs to be more capacity building in place, to train on what AI is, how algorithms are developed, what the risks are, and what the impacts can be on civil society and human rights at large.
So, that, I think, is the first step. I had talked about some of the risks earlier, and I had meant to say that the three I mentioned are certainly not exhaustive. There are quite a few others. But for the sake of time, I limited it to three examples.
There are a lot of risks involved that different civil society organizations depending on what their focus areas are, would be interested in understanding the intersection with their work.
And then, in terms of the opportunities to get involved: I think, at a first level, we are seeing that many countries are putting forward national AI strategies, or digitization of public services strategies, it might be called different things, but essentially putting forth a strategy for how government will support the tech industry, how government will adopt these tools internally, and how they will invest in research and development.
So, at that basic level, when those policy initiatives are being formulated, we are seeing that civil society isn't really having a seat at that table. And our position is that they should be involved from the very outset of this policy development and these initiatives.
These national AI strategies are often also putting forward proposals for regulation. And those proposals aren't necessarily assessing the human rights risks. And that's where civil society can have a voice to talk about the risks and impacts for the communities they represent.
So, that's, kind of, the baseline.
And then, as policies are being developed: at the EU level, of course, there was quite a lot of civil society input back and forth. Our sister organization, ECNL, was quite involved in that process and continues to be. But we are not necessarily seeing the same level of engagement in other countries.
I will give one example, Brazil, where a very basic draft framework for AI regulation was put forward with no consultation whatsoever, mostly representing the interests of the private sector in Brazil.
So, it wasn't really addressing any of the primary risks involved, because there was no input from affected communities. But Brazilian civil society is quite robust. They advocated against, kind of, rushing forward with this very basic framework that was mostly focused on self-regulation rather than hard regulation of AI. And they were able to get it rolled back and put in place a more robust, while not perfect, process, with a commission that conducted consultations and is, hopefully, going through a more thorough process of drafting some sort of regulation in Brazil. It hasn't been proposed yet.
I would just say that, oftentimes, civil society or governments in other countries might look to the EU as the model. We saw that with the GDPR, where some countries took the GDPR wholesale. Really, I think the conversation needs to be more tailored to the specific context in each country, the tech environment within the country, and also how they are procuring services from abroad, because oftentimes these algorithms are being developed in other countries, not locally.
So, there may be different considerations in place there as well. So, just cautioning against taking an approach that adopts the EU model wholesale, and looking instead to tailor it to the specific communities and the context in which these algorithms will be used and deployed. And I think that's where civil society can play a really pivotal role.
There are amazing groups doing great work. I mentioned Brazil is one. There are many others. So I don't want to downplay that. But it's limited. There's not that many that have the ability to speak to these issues. And definitely not representing whole of society. So, one of the challenges there is not just ensuring that they have a seat at the table, but that they have the ability to contribute to the conversations.
And of course, there's other challenges and issues at play. But I think fundamentally, that covers it.
>> MODERATOR: Thank you, Shabnam.
I would like to go back to Teresa. You spoke about automated decision making as one of the use cases of algorithms. Could you elaborate a little bit more on which criteria automated decision making should fulfill to be compliant with human rights? And can you, perhaps, tell us a little from your experience whether private sector or regulatory approaches, one or the other or both, help to define and develop these criteria for automated decision making. Thank you. You have to unmute yourself.
>> TERESA RIBEIRO: Yeah. Thank you. Thanks a lot for giving me the opportunity, again, to intervene. But let me also build a little bit on what we just listened to. How should we approach this very complex issue of regulating artificial intelligence? I would say that maybe we can look at it from three different dimensions. The first one, as was mentioned before, is awareness raising. We need to be sure that citizens, individuals, are completely aware of the risks related to AI, but also of the possibilities offered by AI. So, yes, I think that a strong intervention by civil society is very important. But for that, we need awareness raising.
And why is it so important? It is because, you know, business companies want to do business, of course. But they know that reputational damage matters, and they cannot risk this kind of reputational damage. So, it's in their interest to work towards respecting what civil society, what the users, really want or should want. But for that, again, the precondition is also awareness raising.
And then the second dimension, I would say, or the second level, is the process, and it's very much linked to inclusiveness: the interest of the business companies, combined with the interest of the individuals, the citizens, together with the positive obligation of the states to protect human rights.
And then, the third is the regulation, the definition of the governance model. And there, yes, we need to be sure that the approach will be a values-oriented one.
So, I would say these are the three dimensions that I think we have to take into account if we really want to define a governance model that can correspond to the expectations of the public interest and of pluralism.
>> MODERATOR: Thank you so much. Now back to Cornelia. When we lost you, you were just speaking about the private sector's motivation to mitigate the risks of human rights violations via algorithms. Perhaps you would like to finish this statement. And I would like to follow up with another question on this.
I mean, regulation usually means national regulation. So, what you have to deal with is different conditions in specific markets for your products. I would like to ask you: what are you expecting for global decision-making, where could this global policymaking take place, and what would be your approach to it? Thank you.
>> CORNELIA KUTTERER: Yes, and I'm sure you must have lost me early in my intervention, because I was talking a little bit about Microsoft's own operations as well, and how, over the last couple of years, we have sought to operationalize our principles: a voluntary standard, if you like, that we have developed around accountability, fairness, transparency, safety, privacy and security.
And we have learned quite a bit from this: first of all, how you have to be specific about what you are trying to achieve, and then what types of sensitive uses might require additional safeguards. To make sure that when there is an impact on people's lives, when there are physical or psychological harms or risks to fundamental rights, you need to look at your process again and find ways to mitigate, either technical or contractual, or you might not move forward with the specific application.
Before I go to global versus national regulation: we have these standards in place, and we do hope that what we have developed internally is a good basis for when regulation becomes applicable to us, so that we have a good foundation for complying with new regulations such as the EU AI Act.
Maybe just one comment on what was said before on automated decision-making. That's terminology we all know well from data protection regulation, where it's a data subject (muffled audio)
>> MODERATOR: If you can hear us, perhaps switch off your camera and restart the sentence that you were just about to say.
Okay. I think we lost you again. I'm so sorry for that. Not coming back. Okay.
Perhaps let me go on with -- okay, Cornelia. There you are.
>> CORNELIA KUTTERER: Sorry. Terrible, this connection.
So, in this case we need to start thinking about what the safeguards are, both in the development of AI systems but, almost more importantly, in the deployment of AI systems. That is also where we are closer to potentially impacted citizens, in the broadest sense. So, that's important. And, as I said in my earlier comment, this is not only about how we control the AI, but about how we infuse, through the use of AI, a way of thinking that puts into question not only the result but our own current processes that we have in place.
I recently had a discussion in the context of judicial use of AI, where AI could be used to look at patterns in existing judgments. That is now prohibited, for example, in certain countries. But we need to have the openness to think this through, and we need, of course, to help the people who will deploy these systems to be able to make their own decisions, so that they don't have the feeling of being overruled.
To give you a concrete example: a doctor using an AI system that augments his capability to make decisions, for example through better detection of cancer patterns, needs to be sure that if he deviates from the prediction of the model, or from the prediction of the system, this does not work to his detriment, for example in how the doctor is insured.
So, there are a lot of things we need to think through in embedding the capabilities of AI systems in this human-computer interaction. It's not only control. It's much more. We need to really think this through.
And then a last point, on the question you asked me about national versus international regulation. The good news, and this is different from the GDPR development, is that the discussions around AI regulation are happening at the same time globally. There are also a lot of global discussions at the OECD level, at the World Economic Forum, at the OSCE level, at the G7 and G20, but also within the transatlantic Trade and Technology Council. So I find this positive.
Generally, from a company perspective, you want to minimize compliance costs by adopting the standard that ensures that, if you apply it, you are covered everywhere. This is like an internal race to the top, because that's how compliance works best: you don't want to develop your tools differently for different regions. So, that's eventually good news in this context.
>> MODERATOR: Thank you, Cornelia. Before I go to Osei. I would like to announce that we have, perhaps, after the next question to Osei Baah, the possibility for one or two questions from the audience. So, be prepared.
Osei, when we come to global policymaking, or to the global approach to mitigating these risks, the Global Digital Compact, which is to be drafted in the next months, is of course something where we could define some of the cornerstones. What do you think should be the role of the Global Digital Compact, and which criteria should be included in it, in order to mitigate these risks of human rights violations by algorithms?
>> OSEI BAAH: Thank you again. So, as the Global Digital Compact seeks to ensure an open, free and secure digital future for all, there should be some principles on algorithms included. I'm happy that my conclusions on this coincide with those that the other speakers before me have cited. For example, inclusiveness. There should be no discrimination in the use of algorithms. Algorithms should be beneficial to all regardless of gender and race, and this should be built in at the development stages: they should take input from diverse backgrounds, and have controls so that they are not bigoted or biased. They should be made to check the bigotry and discrimination that has existed in the past.
For example, if there is a preconceived belief that people of a certain race have bad credit scores, and this influences the training of algorithms to be biased against citizens of those areas applying for government or bank loans, then there's a problem.
Similarly, algorithms can be trained to ensure that more women are accepted into STEM programmes, and to ensure the safety of users and their information, not only from tech companies but also from governments. And I think this should be included in the discussions of the inputs that are going to go into the Global Digital Compact.
AI could put so much control in the hands of corporations and governments, and the discussion should include governments' use of AI on citizens and services. Because it is ironic that the same technology that is meant to improve quality of life can be used against users, denying them the benefits that these technologies have to offer, as a result of fear. For example, the government of Nigeria spent 10 billion naira to spy on journalists, and this is being (?) to courts on various charges.
Algorithms should also check and curtail disinformation. Disinformation, especially gendered disinformation, is a menace that the world is trying to combat now. Examples in multiple countries have proven that disinformation is a tactic employed by state and non-state actors, whether to advance geopolitical or financial interests; coordinated and targeted campaigns leverage (?) and hate in order to silence opposition voices and undermine democratic principles. Social media algorithms tend to push the most viewed content without checking whether it is true.
And this tends to amplify the impact of fake news, because sensational videos, for example, get a great deal more traffic than accurate and properly sourced ones. This is what Guillaume Chaslot, a former engineer at a Google-owned platform, said. And what can realistically be expected, unfortunately, is that data is a valuable and vulnerable resource, and its value will only increase exponentially over time. There will always be companies looking to buy some type of data from another tech company to develop some kind of algorithm. Sometimes people buy the personal data of, let's say, the client base of an already existing tech company just so that it can help feed the development of their new algorithm. And there is also the problem of hackers who steal people's personal information, such as demographics, Social Security numbers and bank cards, to sell on the dark net. And this is likely to persist in the future.
So, there should be sanctions in place to prevent the sale of private data, especially sensitive data, and both governments and the private sector should continuously review the development and use of these algorithms.
If we can develop algorithms properly, then the world could be something that resembles the technological utopia we hope to live in, where technology is far more advanced, safer and much easier to use.
The risks and harms of AI could be somewhat reduced, algorithmic technology would be safer and serve a wider portion of the global citizenry, and it would bring together governments, tech companies and civil society in achieving an open, free and secure digital future for all.
And when all of these sectors are involved, then it improves the checks and balances of all sectors involved in the creation and regulation of AI.
So, realistically, we shouldn't expect the risks and harms of algorithms to magically vanish in a short while. However, if we do develop them properly, then the world can be safer where algorithmic technologies and algorithms themselves are concerned. Thank you.
>> MODERATOR: Thank you, Osei, also for building the bridge back to the positive aspects of technological progress and of AI and algorithms. That's very useful. And, as you said, our shared digital future is something that all of us fight for.
I don't see any questions and comments here in the room. And we have also reached the end of our time.
I see something in the chat, but I can't really read it because it's too small. I want to conclude this session by thanking you all for your contributions. It was a useful discussion, and I think we shed some light on the question of how to regulate algorithms. Thank you very much. Bye.
>> CORNELIA KUTTERER: Thank you.
>> TERESA RIBEIRO: Thank you, bye.