The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> SABINA: Good evening, everyone, and welcome to our panel. It gives me great pleasure to welcome you to this panel, which will include a distinguished set of experts from the technical community, international bodies, and the private sector. I hope they'll join us momentarily.
Prior to the panel, my colleague Salma will present key highlights from a research project covering ten countries in the Middle East and North Africa. It explores how to navigate the fragmentation of the AI governance ecosystem in our region, but also looks into the AI ecosystem itself: the private sector companies, small and medium businesses, and start‑ups working in this field across the region. The umbrella theme of this panel is how to better operationalize inclusion in the AI governance ecosystem. Before I ask my colleagues to start, let me tell you quickly about us. We are an academic institution and work, among other areas of policy and research, on future government and digital governance. We have multiple publications and research projects, including the one whose highlights we are presenting here, as well as AI assessment and capacity building in partnership with the IEEE, global risk mapping on AI, and work on generative AI and the public workforce. So there is a lot of interest in the region in this field, and we hope this will inform decision‑making and capacity building.
This is the executive education work that we are doing with the IEEE, working on capacity building and building assurance related to AI ethics in the public sector, but also across society.
That work generates and produces a group of experts who are authorized to assess the ethical implications of artificial intelligence in their workplaces and with government bodies as well. Now I will hand over to my colleague Salma, who will take us through some of the findings of this, in my view, important research project, and then we'll ask the distinguished panelists to join us for the panel.
>> SALMA ALKHOUDI: Before I get to the slides, is this good? I am Salma. I was the head researcher on the research project that Salem Fadi just mentioned.
This was carried out over many months. We researched many companies. (Audio distorted.) Adoption is more than tripling globally; I think the actual figure is much higher now, and there's a dramatic surge in generative AI adoption through open-source and private models, which of course implicates the governance landscape. Several challenges contribute to the complexity of this landscape. First, we have a fundamental definitional challenge: how do we govern something that we can't concretely define? This is not just semantic; it's a practical problem, because the technology is evolving faster than our ability to pin AI down and delineate what falls under it. Second, we are grappling with a few key issues that cross borders and cultures, and this laundry list is just the tip of the iceberg of what falls under the domain of global AI governance: data policy, transparency, bias, and deeply social and cultural as well as technical challenges. I'm sure you've heard a plethora of these challenges across the panels and workshops today. Finally, we see a few core tensions that revolve around the global AI governance landscape. These include innovation versus regulation, which many view as a false dichotomy while many others hold that it is a true tension; economic transformation versus job displacement; transparency versus intellectual property; and inclusive development versus monopolization. All of these tensions we have heard time and time again through our expert interviews, through interviews with start‑up founders, and of course in our survey. Against that backdrop, there are four distinct governance approaches that we heard about. Risk-based approaches, like the EU's AI Act, focus on classifying and mitigating potential harms. Rules-based approaches, exemplified by generative AI measures that set out specific requirements. Principles-based approaches, like Canada's voluntary code.
Most countries in the region are also principles-based, offering flexible, directional guidelines as to where companies should head. Finally, outcomes-based approaches, as seen in Japan, focus on measurable results rather than prescriptive processes. Each one has its pros and cons, but the crucial question is: how well do these different approaches serve the global south's needs and contexts? Here are some insights from our expert interviews. Among the most powerful challenges facing the region: first, geopolitical concerns have taken attention away from technological progress, if not pushed many countries a few years back on issues like health, safety, and education. Again, the definitional issue came up in our expert interviews: if you can't define what AI is, you can't define anything that includes the term "AI", including AI governance, AI ethics, and responsible AI. Then there's the perennial problem of global cooperation: countries are in very different circumstances with very different priorities. The Arab world encompasses everything from emerging global AI leaders like the UAE and Saudi Arabia to countries currently emerging from decades of war, and countries still embroiled in war, so the playing field is very disparate and very large. What we heard time and time again from our experts, which is also quite interesting, is that a shared geographic location, or even a shared identity or language, like being Arab or speaking Arabic, is not enough to unify efforts. Some people believe it should be enough, so that's another question for our panelists, perhaps. The result is a lack of openness and sharing, which further complicates the region's position. When it comes to inclusive governance, there are sobering realities we all have to grapple with, as pessimistic as they may seem, and I am just here to lay out problems.
How can we pursue inclusive governance without a starting point? Technology may be a national priority, but what about things like the inability to read and write, or insufficient access to proper health care? It's a problem of national priorities. The issue of quick implementation versus governance is also a big one. There's a lot of push to get things done now, quickly, before things accelerate even faster; given the pace of AI, some think governance needs to come first. Others don't; they think we need to move as fast as possible, especially as a region that is being left behind in many instances. And when we asked whether these countries were invited to the table, the answer was yes and no: global fora are open to members, but members are not always taken seriously; contributions are siloed and weak, often with just one country from the Arab world in attendance; and as a result there aren't more invitations to lead AI governance conversations. So that's a little insight from the experts we've spoken to, but we also want to dive into our survey findings. These are preliminary; we're going to reveal the full survey results in our published report in early 2025, and here we only dove into the findings that correlate with the topic at hand: global AI governance, inclusion, and interoperability. First, the hierarchy of concerns: cybersecurity tops the list, with 258 companies expressing concern, and there is high worry about explainability and bias. The interesting thing when we triangulate this data is that there are some deeply felt cultural challenges here as well. When a company struggles, they're not just dealing with algorithmic complexity; they're wrestling with how to make AI systems comprehensible in terms of tradition and viewpoints. So it's not just about the technical problems.
We also wanted to dive into the particulars, so we asked what the negative impacts of regulations are, if any, and 22.6% of respondents cite cost as a big concern. Funding is the biggest concern for SMEs in general, so the increased cost of regulation is top of mind. The combined impact of slowing innovation and limiting AI applications also accounts for over 36% of responses. Then we asked about potential positive impacts. Despite all those concerns, nearly 30% of companies acknowledge that regulations are making AI more secure and trustworthy; add to that the roughly 17% who see increased consumer confidence. This is in line with what we heard in our interviews with company founders as well. They do believe regulations have a role to play, especially as they look to scale across markets and borders, but they're still hesitant about the scope of regulatory reach, because the definitions are so vague and so much is still on the horizon. This radar chart looks quite simple, but the clustering towards "supportive", "very supportive", and "neutral", rather than the extremes, along with our interview data, tells us something really important: our region isn't suffering from overregulation, it's suffering from regulatory uncertainty. There's no certainty about where things are headed, and companies have told us time and time again that they're trying to figure out a regulatory landscape that is still taking shape. Then perhaps one of the most interesting findings is interoperability. Nearly one‑third of companies face interoperability issues with regulations across the MENA region, and most of the companies that said no are too small to be scaling across borders. When 31% of companies say they face interoperability issues, they're highlighting the question of how we can unify a region with diverse regulatory approaches, different development priorities, and varying levels of digital infrastructure.
On the question of AI ethics standards, this is the last chart before we hand it off to our experts today. The high level of "partially implemented" across all categories is not just about these companies themselves being halfway there; it's about an entire ecosystem in transition, which is arguably how you can define the still emerging and developing ecosystems of the MENA region. What is particularly striking is that areas like recordkeeping and transparency show higher full-implementation rates than things like third-party evaluations, which suggests that these companies are better at internal governance than external validation. We still need to extract more insight from our respondents as to what this means, but it is also a critical gap when we think about building regional and global trust in our AI systems. With that, I will hand it back to Dr. Fadi and invite our panelists to the stage.
>> SALEM FADI: Let me ask the panelists to join us: Dr. Nibal Idlebi, and Gilles Fayad, who leads a lot of capacity building work.
Clearly, the presentation highlighted some of our findings, and based on the discussions we had in other panels earlier today, there are a lot of questions around inclusion in the region; proper inclusion in the digital age is something our region has not been able to achieve. I will start with the doctor. Our region is a sample of the world: we have some of the highest-ranking countries globally alongside some of the least developed. So in our region, what are the real goals of inclusion we should be aiming for, in the digital era as well as in the age of AI? What are we looking for, and how can we understand what inclusion can lead to?
>> NIBAL IDLEBI: Okay, good afternoon, everyone. Can you hear me? There are different facets to inclusion in the Arab region, for sure. AI is emerging in many countries, but Arab countries are at very different stages of development. We have countries that are heading ahead very well, like the UAE, Saudi Arabia, and Qatar, with national AI strategies, while other countries are lagging behind. When we speak about the region, we cannot speak about all countries together: some are leaders in technology development, like the GCC in general, and maybe Jordan, while others, like Somalia, Syria, Iraq, or Libya, are really lagging behind. For inclusion, I believe there are different levels. At least from the perspective of AI, we need to involve all stakeholders in the discussion, whether about AI strategy, AI frameworks, or AI governance: the private sector and government as well as NGOs and academia. Academia plays a very important role in AI. In some information society discussions we forgot about them, but in AI, research and development is very important, and therefore the inclusion of academia is very important. In the design and discussion of AI we need to include all stakeholders, but also all disciplines, because we know that AI matters for many sectors, like health care, education, agriculture, and transportation. Those entering the discussion should not be only technology people; it should be interdisciplinary. This is one side of inclusion.
The second side is to include all segments of society. Whenever we are developing any AI system, whether during design, deployment, or use, we have to include all people: people with disabilities, all races, all segments of society, women, youth, everyone. We have to include all segments of society in our algorithms, in our thinking, in our design, and in our strategy. This is very important, because the needs might differ from one segment to another, and we have to think globally about all societal groups. There is also inclusion in terms of data, because data is very important in this regard. We know that some Arab countries lack a lot of data; data is not well developed, and we don't have everything in digital format. From one side we need to have the data at all; and we need it clean, reliable, and timely. Then we need data that represents every region, subregion, and locality. This is another form of inclusion, so that our algorithms and AI systems address all the needs of society.
If we need to focus on agriculture, for example, we need data that represents it; we cannot cover everything. Whenever we think about AI, the data is very important, and we have to encourage the generation of data in digital format today, in order to have representation of needs at the different levels. I will stop here for the time being.
>> SALEM FADI: Thank you, Nibal. You talked about the data gap in the region and how it limits availability for development across the region, and this brings me to my question for Martin. Martin, you come from a global leader in AI development, Google, which is very active in the region. As a private sector leader in this domain, and based on your understanding of our regional context: what is the role of a private sector entity, a leader in AI, in helping the inclusion of the region, whether in data availability, data representativeness, or having a seat at the table in the global discussions around AI development, so that the region has a voice?
>> MARTIN ROESKE: Thank you for your great partnership in this research that we have done together. Some very interesting data points are coming out of it.
Now to your question: I think there are many ways in which the private sector, tech companies in particular, can play a key role in the region. Before I go there, a couple of things we should focus on when we talk about governance, because these are all aspects Google is very involved with, looking at them through a number of different lenses. One is equitable access: bridging all the different divides, as His Excellency mentioned in the opening remarks today; the data divide, the digital divide, and so on. A lot of that is about accessible and affordable AI tools and access to infrastructure. One‑third of the world is still not on the internet, so how do we help bridge that gap, and then make sure people have the right capacity and skills? One of the things we've been focused on as a private sector entity is creating skills programs for everyone, not just for technologists or developers but also for users of AI, whether those are users of generative AI or people exposed to it at school, at university, or in small and medium enterprises looking to adopt AI. There it's important, for example, to make this available for free, in language, in Arabic, and to scale it to as many people as possible. Just a few weeks ago we announced a new Google.org program to give grants to research universities on AI. The second point is about mitigating bias: how do we create AI systems that are fair and unbiased and that work with inclusive datasets? Because a lot of the AI forays to date have been made in other languages, content from this region is not traditionally a large part of the datasets used to train models, and very conscious efforts have to be made to ensure that these datasets are inclusive. On protecting privacy and security, that is obviously one of the key areas all governance efforts focus on, and there are a lot of techniques that we as private sector companies use: differential privacy techniques that try to anonymize data; preventing data from being widely shared if it doesn't have to be for a particular purpose; giving users the option to opt out of their data being collected; and letting website owners opt out of their information being used to train models, making it a user choice to keep it private and secure. And finally, promoting transparency and accountability, also one of the primary concerns your survey brought up. There's a lot of work happening across the board; Google participates in many global fora when it comes to privacy. We've done a lot of work recently on explainability techniques and, for example, on watermarking tools, which are a way to watermark content generated by AI so that such content can be easily identified, whether it comes out of an image generation model, a language model, or a video model. (Audio distorted.) I think those kinds of fora work well with the IGF.
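The differential privacy technique mentioned here in passing deserves a brief illustration for readers unfamiliar with it. A minimal sketch of the classic Laplace mechanism for a counting query might look like the following; this is an illustrative toy under stated assumptions, not any company's production implementation, and the function name and parameters are invented for the example:

```python
import math
import random

def dp_count(values, threshold, epsilon=1.0):
    """Return a differentially private count of values above a threshold.

    The mechanism adds Laplace noise with scale sensitivity/epsilon.
    A counting query has sensitivity 1: adding or removing one record
    changes the true count by at most 1, so the noise masks any single
    individual's contribution.
    """
    true_count = sum(1 for v in values if v > threshold)
    # Sample Laplace(0, 1/epsilon) noise via inverse-CDF sampling:
    # X = -b * sgn(u) * ln(1 - 2|u|), with u uniform on (-0.5, 0.5).
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

With a smaller epsilon (a stricter privacy budget), the noise scale 1/epsilon grows and the reported count becomes noisier; the exact count is never released directly, which is the sense in which such techniques "anonymize" aggregate data.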
>> SALEM FADI: Thank you. This brings us to Gilles. You come from the IEEE, which is a standards organisation, but one with a particular structure of doing things: a lot of horizontal working groups, across domains, across jurisdictions, across the world.
The same thing happens in the IGF ecosystem and in the multistakeholder model, which enables a lot of people to participate in creating something, or at least to be included in a discussion that could eventually represent them and what they want as an outcome. Now, the question: our region, and this is something we hear a lot from these organisations, does not participate enough, and this is true not just of this region but maybe of the global south in general, to use that term. How much of an issue is this? Is it a question of capacity, or of awareness, or are there other reasons we are not involved at mass scale, whether the researchers and academics, the experts, or the technical community?
>> GILLES FAYAD: Thank you for the opportunity and also for the question, by the way. IEEE, as you say, is a standards organisation.
But it's not a standards organisation in the top-down sense. It is a nonprofit, completely volunteer-based organisation: all the people who participate are volunteers, structured into working groups. You can think of it as a grassroots, bottom-up approach, in opposition to standards processes that come from governments and trickle down all the way to consensus at the engineering level. No, here it's the engineers deciding that they need to build the standards.
Wi‑Fi, for example, was built this way, because it addressed a need, and by doing it this way you are able to move fast. That's one aspect of it. The fact that it is grassroots is something that maybe we don't advertise enough, because it's open to everybody. Everybody can participate in the working groups.
If we have a problem of AI governance, I think the problem has two sides to it. There is the side described before, about how you import and localize technology in the global south regions; I don't like the term "global south", I think of it as everywhere outside Western Europe and America. We have to localize, and it is very important to be able to localize, but there is also the fact that we can contribute. If we don't contribute in these countries the same way others are contributing, then how can we expect to have our values and our cultural aspects reflected in the technologies that we use? So we have the opportunity to contribute to these standards as well, and to make sure they reflect us, because once they are standards they get adopted, and they very often get adopted through regulatory channels in many different regions. That also helps address the other point you brought up, which is how to address standards or regulations at the country level. Someone at the EU panel yesterday on misinformation and disinformation reflected on the fact that the EU is largely a consumer society of AI, pretty much like the global south; how did it manage to get its voice heard? By getting its needs as a consumer society reflected through EU regulations, and this is what GDPR was about. The Brussels effect is really about enabling a build-up of needs at the regional level, so that these needs can be taken into account by the technology developers and integrated into solutions. But that is a separate point. To go back to the original question, the interesting part is that IEEE offers other useful things besides standards.
We are collaborating on one of these, which is about capacity building, and capacity building is really about the ability to build capacity: it starts with people, then data, then compute.
Many people don't know what AI is, what the differences are, or why there is a need for trustworthy AI. There's a need for literacy in the first place, and that literacy is needed everywhere, not just in this region; it is needed by the city manager in North America as much as by the government service provider in a GCC country. That literacy then allows you to understand that you may have an issue with data representativeness, as Martin was reflecting upon. Many of these algorithms are built on data, and that data reflects certain societies and cultures, so if your data is not represented you might be, not necessarily are, but might be misrepresented at the algorithmic level, and the outcomes might not be beneficial for you. You might have bias, you might have transparency issues, you might have other issues associated with it. From that perspective, capacity building allows you to contribute as well.
So: participating in working groups, going through capacity building efforts such as the one we are developing, and, last but not least, having the ability to localize content. We came to the conclusion that you can develop a solution and get it adopted and spread in a specific region or country, but at the end of the day the people who are going to implement it might not all be English speakers, so you need to translate these solutions into local languages and adapt them to be culturally representative of the populations that you have.
And on the data side, you need to avoid data under-representation, or at least compensate for it with local data or with processes around the solutions, so that you can avoid the kinds of bias issues you can find in solutions.
>> SALEM FADI: Thank you. That covers the grassroots element: awareness, capacity, inclusion, access. I will come back to you, Dr. Nibal, and ask about the higher level of representativeness in the AI governance ecosystem. As a regional U.N. body, you deal with member states, and within them with regulators, ministers, and stakeholders at the top of the governance ecosystem, who are tasked to represent their countries in the global fora around AI governance, digital inclusion, and other areas. Based on your experience, and you highlighted this, some countries have structural restrictions, conflicts and so on, but some countries also do not have a seat at the table in some of these global fora even though they have something to offer. How can these voices, and this is not just about our region, it can also be an issue for the rest of the global south, be represented at the higher levels of the AI governance ecosystem, which usually decides on AI safety, AI measurement, AI standards, and AI ethics, with real implications for their countries, while they don't have a seat at the table at the top of the chain? Do you think there is a way for a mechanism to exist that represents these countries of the global south, especially in AI, given that these discussions are currently more often an elite club?
>> NIBAL IDLEBI: Yeah, in fact it is quite complex. First of all, I would like to return to what Gilles was saying and mention that there is a technology gap between the south and the north, and behind the scenes, everything is related to this technology gap. There is a big gap nowadays between the developed countries of the north and the countries of the south, and from it we can derive a lot of issues. It is sustained over time and sometimes becoming bigger; call it the technology gap or the digital gap, because I can also think about big data. Unfortunately the gap is widening; we are not easily bridging it even in digital technology. But returning to your question, I believe we have to work at different levels. From one side we have to work at the decision‑making level, because it is the decision makers who decide, in one way or another, on participation in the global fora; it is the ministries or the regulatory authorities who are visible vis‑à‑vis the global fora and forums like the IGF, WSIS, or the digital cooperation compact, for example. These decision makers should be aware of the importance of their engagement and their participation. Here maybe we can tackle this issue starting from the countries which are more advanced, like KSA and the UAE in the Arab region, who can afford it and can push the discussion. But beyond the decision makers, we also have to build the capacity of the people who go and discuss. Here too we sometimes see a gap in capacity, because they cannot argue their case strongly enough.
On these matters, because of the digital or technology gap, we have to admit that in the south we are more users of technology than developers of it. AI technology is most of the time developed in the U.S. or Europe; you cannot find many global solutions developed here in the region, unless they are very local solutions. Facing those big solutions and big companies, there needs to be capacity building for those who will be arguing: the technical people, or academic people if there is the need. I believe in these two areas we have to be more proactive, and maybe build some force at the regional level to have representation from the different regions. Here we can collaborate with other regions as well: Africa, for example, with the Arab countries; some Asian countries are quite developed today. This bridge-building between regions matters, because we have similar issues, like language, the gap in technology, and so on. Both decision makers and practitioners should be really aware of the issue. And if we want to go even deeper, you mentioned the consumer pressure that generated the GDPR; it would be fantastic if we could also have this knowledge spread among users. Here maybe it will be the NGOs, and I think the IGF is a very good forum for building the capacity of NGOs and users from the user perspective.
So I believe we really need to build capacity and to convince decision makers at that high level of the value of participation. I think some of them are aware today; I have seen many, many times representatives of KSA and the UAE participating in the international forums. But there is also this building of networks between the regions of the south, because we have the same issues. We have to push it more than we are doing today.
>> SALEM FADI: From a school of government perspective, to add to your point: we do deal with people in senior, mid‑career, and high‑level positions across the region in terms of capacity building and leadership development, so there is definitely something that can be done in terms of capacity building at the highest levels in many countries around the region. Sometimes they need to be made aware; sometimes they have to develop the capacity; and then there are issues around language, access, and financial matters that do not allow them to access these fora. All of these sit on the leadership capacity building scale. So thank you for that.
>> NIBAL IDLEBI: But let me add something about the private sector.
>> SALEM FADI: With the data that Salma presented, we noticed something about private sector companies. The survey covers hundreds of small and medium enterprises and start‑ups across ten countries in the region working in this domain, and in the data Salma showed, they identified regulatory uncertainty as a question mark. Even in the interviews, some of these companies felt less willing to participate and share their issues with us as researchers; there is this culture of "why do I need to share?" I will leave it to the anthropologists such as Salma to understand why this is happening. But from your point of view as a leader: with this uncertainty highlighted in discussions as a reason for holding back, how do you balance the trade‑off? In our region, some countries have a direction on AI regulation, but others do not have anything, and this uncertainty is causing many of our companies, but also individuals who are interested in being included, to hold back. Do you have a view on this trade‑off happening in the private sector, at least among the small and medium enterprises you might be involved with?
>> MARTIN ROESKE: Thank you. You made a great point: there is a level of uncertainty that is holding the private sector back a bit. But I feel this region is actually getting quite a lot of things right when it comes to the governance of AI. A lot of the focus over the past couple of years has been on ethics and principles. First, countries adopted national AI strategies, and pretty much every country in the region now has one, at different levels of implementation, in many cases with international input and support from the private sector. They then developed their ethics, principles and guidelines. What they haven't done yet is come out with hard regulation and laws similar to the EU AI Act or otherwise, and in our mind that's not necessarily a bad thing. Right? There has been an intentional, wait-and-see attitude: first let the technology get to a point where you can see the actual use cases in practice. A lot of countries in the region, Saudi Arabia and the UAE being good examples, have implemented regulatory sandboxes where you can try new technologies in a safe environment and control the implementation. And we have seen some great investment in things like Arabic LLMs; you have the Falcon model in the UAE, for example, all of which are available for researchers around the world to work on. So we're starting to see greater datasets from the region, and homegrown technology as well, enabled by a slightly hands-off attitude to regulation. Now, Google has published a lot about what kind of regulation we think makes sense. We talk a lot about being both bold and responsible: how do you get that balance between protecting users from harm while at the same time keeping innovation open and flourishing? And I think most countries in the Gulf will try to make themselves the hub, not just regionally but globally: how will we bring the enterprises and the AI start-ups to see this as a place from which they will flourish around the world?
They are investing in energy, alternative energy, green energy, building data centers, et cetera, and I think a lot of countries build their strategies around attracting talent, attracting companies, attracting business. So that's on the positive side. At the same time, a lot of regulators ask me, you know, what should we do?
What's the right way to go about implementing AI regulation? Should we even focus on AI regulation as such, or should we first focus on filling the gaps in our existing regulation? And I think that's a very good point to take home: whatever was illegal without AI probably should be illegal with AI. Right? It's about making the regulation that already exists adapt in such a way that AI is included in the thinking. It doesn't necessarily mean that one has to regulate the inputs that go into developing models, the scientific breakthroughs; it's more about regulating against harms on the output side. And I think we haven't seen enough outputs in everyday usage, beyond the sandboxes and some of the use cases I mentioned, to know quite yet where the regulation needs to focus. So apart from those principles I mentioned earlier and the goals of broader global governance, I think the region-specific part is still being worked through. We talked about language quite a bit earlier, and Arabic in particular. I just want to give some interesting data points. One: you know there's a product called Google Translate. At the moment it exists in about 260 languages. We started this project about 20 years ago. Of those 260 languages, 110 were developed in the last six months thanks to AI. Twenty years of development, and in six months it's gone up 110 languages. So this is about creating an inclusive way of accessing the technology. Another interesting thing I just learnt a couple of weeks ago is that Gemini, which is our generative AI tool, has more daily active users in the MENA region than in the U.S. Which is crazy to think of, but it's a testament to the appetite that exists in the region for using these tools, and the fact that people here are generally open to technology and more optimistic about using it means there's an almost instant embracing.
So I tend to be a bit more optimistic about where the region can go with this, and I think they are getting a lot of things right. Why is the private sector still hesitant? I think there are other underlying factors: the funding streams for start-ups, investments, how easy is it to get finance? Then there are things like privacy. All the countries in the region have some form of data privacy law, but the implementing regulations are still missing, so you don't know how to comply; there's a lot of ambiguity around it. Focusing on those gaps, the implementing mechanisms and policies for AI, should maybe be the first priority right now.
>> SALEM FADI: Great, thank you, this is very insightful, and you highlighted the standards aspect. Gilles, Martin was just telling us how this region has more Gemini users than the U.S., and I think that has implications for our region. At the IEEE, you have developed both specifications and capacity building for AI ethics assurance, which we are currently working together to adapt to the region. But given this massive explosion of AI use, and at the same time the lack of ethical standards, regulations or systems in place in our region to govern that use, do you feel that this can lead to misdeployment, creating victims in society?
And if it is the case, is it an argument for more inclusion? More regulation? Or other things? I know this is a very complex question but I trust that you can highlight all of these.
>> GILLES FAYAD: Thank you Fadi. In a nutshell, it is important to realize that without the contribution of the private sector it would be very hard to achieve literacy and capacity building in the region. When I run courses in Africa, I run them on Colab, I run them on Google Meet, on tools made available to me by the private sector. It is very important to acknowledge the enabling nature of the private sector in this. From a regulatory standpoint, what is interesting to see is that many countries, even in Europe now, are starting to look at it as: regulation is good in terms of protection, but it should not come at the expense of innovation. There is a kind of turnaround, in a sense. People are coming to realize that you should not go too far in one direction or too far in the other, and Europe especially sits in the middle of both. And so I think the region here has the option.
And it has grabbed the option to leapfrog some of these issues and grow into an environment, in some countries, I'm not talking about the whole region, where they can be even ahead of some European countries in terms of AI deployment. So, again, it's not an us-versus-them kind of thing, and it's very important to acknowledge the fact that you cannot do that work in AI without the private sector. This being said, AI at the end of the day is a tool. It is not AI itself that is good or bad or nefarious; it is the person behind it, and the use it is being put to, that decides whether it turns out well or not. The problem that you face, I think, is that you are trying to use AI to bridge a gap that you don't know how to bridge otherwise. So suppose that you're in a situation where you don't have enough resources to fulfill something, and you say generative AI is going to do it. We had the same issue ten years ago with chatbots: chatbots will fix it, but they did not. Generative AI might be able to fulfill some of these roles, but the condition is that they need to be well defined. Without that you don't have trust, and if you don't have trust, you don't have adoption of the AI. At the end of the day, AI presents an anthropomorphic user interface.
The way it interfaces with humans is by behaving like a human. It is basically assuming, whether autonomously or through augmentation, more and more roles and decisions that were left to humans before. So what we expect from it is trust, the same way we expect trust from humans in the way they behave; from that perspective, trust in the solution is very important. So how do you build that trust? It's very important to make sure that you look at the use cases and that you take into consideration, from the inclusivity perspective, all of the stakeholders that need to be involved. This is a role played at the individual level by participating in standards, by private entities being very inclusive of all stakeholders in the way they work, or by governments pushing for the empowerment of different civil society groups to achieve it. But AI is a magnifying glass at the end of the day, so everything it does will get magnified. If you don't prepare it well, it will get magnified in the wrong way and you will see more negative aspects than positive ones. So it's important, from the onset, to look into it, and in order to look into it, as we are seeing now, you need the ability to understand what it is about. If you don't know what AI is, then you are just relying on what the provider tells you it is, and at that point you have no say in its implementation.
>> SALEM FADI: Great, thank you. I know we are coming to the questions, and we have questions coming up. If you're eager to give us your question, please go ahead. Do you have a mic? Ah, here it is.
>> AUDIENCE: I'm sorry for being so anxious, but I'm a panelist in another workshop and I'm late. But this workshop is very important, so I have two quick questions for the panelists. The first: it was said that without data we don't have local solutions; we need local data. So my first question is, what do you think is the best approach to encourage local data? Should it be through regulation, or what other incentives could be used? Is there some best practice already that can be shared? My second question is mainly for Dr. Roeske: one of the big gaps is, once a developing country has the data, where to run it? This requires huge computing power. We don't have enough data centers; imagine the computing power that we need. That's a gap that is growing. So I'm asking, maybe it's already happening, or maybe in the future: would it be possible to have a business model in which the big companies that already have the hardware offer, as a service, the running of countries' local data for training models or other things, for developing countries that do not have the infrastructure to run the data on their own machines? Could that be a possible way to buy some time, until those countries have their own hardware?
>> SALEM FADI: Thank you. Two questions. Who would like to start? So local data?
>> NIBAL IDLEBI: I believe there are some initiatives and practices, in one way or another, to encourage local data. There are some initiatives, I believe even Google did one at one point, to encourage teachers and students to develop and contribute their data or whatever research they are doing, and these can serve as examples to help users, teachers and so on.
There could also be initiatives such as awards, in one way or another; awards can be very effective for capturing and collecting data, and they might be a solution to capture some data. Then there are practices that encourage citizens to provide their own data through specific initiatives, and I agree with you: there should be some incentives. Of course, locally, we can use the data collected through eGovernment, or digital government. We need to encourage localities and governments to open their data so it can be used; this open data, which Fadi mentioned, is very important, but it needs some effort to clean the data and put it in the proper form. There are also initiatives that could accelerate the generation of data through the digitization of government, because governments hold a lot of data. If you don't have digital government in the country, then that's another question. But through these initiatives, from the government, local government and institutions, you can also encourage the generation of new data in a specific field. It could happen. And there are some...
>> MARTIN ROESKE: Great question as well. I don't know if you've heard of Data Commons, but Google has been quite involved in that initiative for a couple of years now, and the idea is to take whatever publicly available data there is,
clean it, structure it, and then provide insight to anyone who wants to query it, so it doesn't become a walled garden of information that is only useful to some people, but rather a repository of data that you can base research questions on, whether that's environmental data, climate data, health data, et cetera. Governments can make a lot more data available than they currently do. I think there's a lot of hesitancy around what is sensitive.
And what is not. When we worked on data protection laws, for example, providing best practice and consultations, the default position was: oh, it's all sensitive, it's all of national security interest, et cetera. What people don't realize is that so much of that data could easily be anonymised or made impersonal, so that you're not giving away secrets by sharing the data in a more meaningful way. So I think there's a lot of work that can be done just on sharing data between departments within government, but also with the private sector and others. On the other point, what can Google and other tech companies do to include the global south and emerging markets more in infrastructure development? First of all, the cost of compute and the capacity needed to run models is going down all the time. A lot of work is happening at Google and other companies to reduce the amount of compute, storage and everything else needed to produce a well-functioning AI system; Gemini 2.0 uses 90% less compute than Gemini 1.5, which is a huge scaling down. The other way is to bring a lot of the compute closer to the device and have the algorithms run at the edge or on the user's device itself, so you don't need to run it all through global infrastructure. But we do recognize that a lot of global infrastructure will still be needed, and so we and other tech companies invest a lot in subsea cabling and satellite systems. My colleague here just gave a talk on the interplanetary internet being developed. How do you create infrastructure that you can easily bring to markets that don't have it today, at a low cost? A lot of development is going into that at the moment. Of course, capacity building and skilling are super important as well, and making sure that the universities that exist, and there are some good institutions here, are connected to the research that's happening in other parts of the world.
Including them. So that would be hopefully ‑‑
>> GILLES FAYAD: Can I add to this? I will be brief and quick. I think there is no one answer for everything. On data: LLMs today have scrubbed the internet completely, and are now generating synthetic data and growing out of synthetic data. I don't know about Google Translate, maybe that's part of the uptake, but there's a lot there on how to generate data when you don't have enough. This can be interesting for the global south: synthetic data can be very helpful to make sure that there is no underrepresentation at the data level in the global south. That's one. The second thing is, it depends on what AI we are talking about. We are always talking about LLMs, but if you go down to the neural nets that are just the layers below in terms of sophistication, the building blocks, you can do a lot with tools that are available from a private entity, like Colab, for example, available for free, and you can even do a lot with less computationally hungry algorithms that give you a lower performance level that is still enough in many global south countries. If I can achieve 70% accuracy and don't need a data center, I can run it on my local PC; if I aim for 90% accuracy, I go out to a data center. But in order to know that I have that choice, I need the literacy for it in the first place.
>> NIBAL IDLEBI: Data and satellite information are useful in some cases.
>> SALEM FADI: Thank you, and we look forward to your panel afterwards. All right. As we have started the questions, if there are any other questions from the floor I will come back to you, but I would like to take one question from online; we clearly have a vibrant community out there. Let me read the question to you: what common cultural aspects deter the participative appetite of the global south? For instance, board members, public or private, are responsible for overseeing AI implementation, use cases, assessing data, internal governance and so on. Who assesses board members and certifies them as AI-worthy? How does culture affect such aspects? This is a very good question. I don't know if anybody would like to take it; all of you have boards, one way or another. Is there anything to be learned about how these cultural aspects deter participation in such boards, in our region or in the global south in general? Any insights or thoughts?
>> MARTIN ROESKE: Just one thought: one good practice I have seen adopted in the region is implementing chief AI officers or AI offices across different government departments, and empowering people to actually learn about the technology and then lean into those conversations when it comes to multistakeholder dialogue. So, yes, there is still some capacity building to be done, but appointing people, giving them a mandate and putting structures in place that actually allow this dialogue to happen is a very good first step.
>> GILLES FAYAD: Once these organisations or structures are put in place, how do you make sure that the people in charge are actually effective and have the capacity for it? This is where you need some mechanism through which they can be certified or authorized, basically recognized for their capacity.
It's like continuing education, which we hear about a lot in the MENA region: you make sure that people are always keeping up with the technology that they are using and on which they need to make important decisions, and there are courses and structures on offer that can provide these trainings.
>> SALEM FADI: Thank you, and thank you for the question. Now we move on to another question from the floor. Can you please ‑‑ let's get you a mic. Yeah, because everybody needs to hear you.
>> AUDIENCE: Hi, we're a youth-led nonprofit focusing on responsible tech. I really like the aspect that Dr. Nibal brought up about the global south being not as involved in AI development, with most of it done by the global north or western countries, because I 100,000% agree with this. Most of the labeling is actually happening in the global south, with click workers doing the work and being exploited.
How do we make sure that click workers specifically, and the global south generally, not only feel included but can actually benefit from being part of that development stage of AI?
>> SALEM FADI: Who are you targeting your question to? I think it's coming back to Martin. But, yeah, this is something that is common across AI; in a way it works to exclude rather than include, because that is what happens. Are there any practices that Google or technology companies are applying to ensure that this is properly governed, that we might learn from?
>> MARTIN ROESKE: I think, beyond the skills programs and the content work that you did mention, a lot of it is about encouraging local businesses to adopt some of these technologies, and the start-up ecosystem to take those technologies and build things that are regionally relevant from the ground up. One of the things we're focused on a lot is working with the start-up ecosystem in particular. Just last year we ran a program called the Women AI Founders program, because we found that there's a huge gap in women founders in general. I think in the MENA region only 3% of start-ups are run by women, and the funding gap is even worse, with 1% of funding going to women. So we realized there's a huge amount of talent in the region that is not tapped into properly.
And that help and support is required, so we run these accelerator programs for different groups of start-ups. We started generically, looking at AI start-ups, and we're now going into more thematic approaches, whether around health, education or fintech. We're even doing accelerators in gaming. So there are some very interesting sectors here, particularly in economies that are trying to diversify away from fossil fuels. There's a lot of opportunity. It's just about making sure that the ecosystems are there, that there are platforms all this talent can tap into and work with, and that there are pathways to success, so that people don't get stuck. They graduate and have a degree and have nowhere to go; they need jobs to go with it.
>> SALEM FADI: I want Gilles to comment on this. There's procurement, you name it. Is this already embedded, and is it something that you, as an assessor of AI ethics or of the application of AI, can look into or want to look into?
>> GILLES FAYAD: Certainly. From an ethical perspective, part of the process of evaluating a solution is taking into consideration, inclusively, all of the stakeholders, including the labourers. So a labourer in Kenya being mentally impacted by the work, for example, or being underpaid for it, is certainly something that would be caught at the identification stage, what we call ethics profiling at the use case level. This is part of, for example, the IEEE CertifAIEd assessment framework, which allows you to assess a solution. So irrespective of whether or not you have any governance in place, if you have a solution today and you want to know whether it is ethical or not, you can go through a very detailed and formal process that allows you to do this. I'd also like to tackle the question a bit differently: for every challenge there is an opportunity. As it stands, AI costs a lot of money.
It costs a lot to so-called western countries, which then go for cheaper labour in developing countries. But at the same time this is an opportunity, as Martin was saying, to grow capacity building in these developing countries, and to grow the capacity for local employment and local expertise. Once you have that local expertise, you can afford more AI solutions, because you can afford local salaries instead of having to pay for external ones on top. And, closing the loop on the programs: the programs allow you to become an authorized assessor, and once you are an authorized assessor, you can work anywhere in the world and compete with others. That opens up opportunities that don't force you into a niche market of labeling or specific tasks; there are opportunities there that go beyond the current setup.
>> SALEM FADI: Thank you. We still have around ten minutes to go. Are there any other questions? You have a question for the panelists?
>> NIBAL IDLEBI: If I may, if you allow it.
>> SALEM FADI: That will mean they have to ask you a question.
>> NIBAL IDLEBI: That's no problem. We know UNESCO has published ethical principles for AI.
I want to know from you at the IEEE: to what extent are you applying these international UNESCO ethics in AI?
>> MARTIN ROESKE: I will start, and Gilles, I'll let you weigh in as well. We published our AI Principles back in 2018, I think.
Quite a while ago. They define what we will and won't do with AI, and I think a lot of those principles have since been incorporated in some of the global governance standards as well. It's important not just to keep checking and assessing, but also to build these principles into the product from day one. Right? So when DeepMind, or one of our other units that works a lot on AI, develops a product, it does so with those principles in mind.
This predates generative AI by many, many years. Our CEO declared Google to be an AI-first company in 2017, I think, and AI is now in all the products: it's in Search, YouTube, Maps. So there's an established practice of taking these principles and building them into products, and there are working groups and product teams that check this on a very regular basis. So, yes, I would say these principles are very much part of our everyday life, and there are whole groups within the company dedicated to working on them.
>> SALEM FADI: Would you like to comment?
>> GILLES FAYAD: Maybe I will answer quickly and then you can grab the mic. I have to say, since you are opening the door to it: we have a series of standards. We built the Ethically Aligned Design framework, out of which a lot of regulations and standards came, including for software development, similar to what Google did, for assessment, like CertifAIEd, for procurement, and even for governance. All of this is the work of grassroots community effort. The EAD principles were used in the UNESCO principles. We have been applying them from the start and we are promoting their use worldwide. Sorry.
>> SALEM FADI: You have a question? Yep. There's a mic over there. Yes.
>> AUDIENCE: Thank you. I am from Germany and I work at an international corporation. All three of you gave examples of regulation and governing standards for AI, but to bring it back to the title of the session: how is the global south related to all of these?
>> SALEM FADI: I guess this will be a closing question, because we are almost out of time, but in a way it's an important question. How can we learn from each of you; do you have some examples of how inclusion happens within your organisation? Maybe starting with you, Gilles: I know the IEEE has massive working groups, volunteers, et cetera, but tell us more about the examples of inclusion that exist.
>> GILLES FAYAD: Sure. IEEE has chapters in every country of the world, so there is representation from every country, and every country is encouraged to work with its chapter and basically be represented through all the activities happening there. So the infrastructure is available worldwide. Beyond this, to give an example, there is the work we are doing with the school of government here in the region, which goes deep.
We are also doing this work in Malaysia, in Korea, in Brazil, in South America, so it is not work specific to the MENA region. It is about AI literacy and enablement, with tools and frameworks for assessment, governance and software development, in terms of AI, ethical AI and responsible AI.
>> MARTIN ROESKE: We now have AI engineering centers in a whole range of locations outside the U.S. and outside Europe, in the global south; we opened an AI center in Ghana, for example, some years ago. What we task these centers with is to look at specific AI use cases, issues and challenges that exist in the global south and the developing world more broadly, and a lot of very interesting solutions have actually been developed as a result. I'll give you one example: flood forecasting. Riverine floods have disproportionately impacted the global south, Bangladesh being a good example, and a lot of the work around being able to forecast more accurately and more quickly, giving people a week's warning, is done at the engineering centers there. Beyond that, I think bringing people together at the local level, and having those policy discussions not just in the global fora but also in settings that policymakers who cannot afford to travel can reach, is very important. For example, we run the MENA regulator academy once a year, where people from all around the region come together, in our case in London, to discuss all the policy issues they're dealing with. We have found this to be a very good recipe for regional dialogue, where participants are not just recipients but also direct those discussions. And Fadi, you've been there in person.
>> SALEM FADI: I have been there. Is there a mechanism internally within Google to capture these regional, grassroots if you like, findings, discussions, needs or concerns, and feed them into your ecosystem of development or into the regulatory frameworks that you influence in the global fora? Is there a feedback loop in the other direction?
>> MARTIN ROESKE: We collect feedback at all levels, right? Users anywhere in the world can provide feedback on the product directly; there's always a feedback button. Apart from that, the predecessor to Gemini at Google was called Bard. When that first rolled out in the region, we involved trusted testers from the region to make sure that it was culturally and linguistically appropriate and giving responses that made sense in a local context. So even before a product reaches the market, we want to make sure that inputs are taken into account during the development stages.
>> SALEM FADI: We're out of time. So last minute.
>> NIBAL IDLEBI: I believe we are doing our best. We are ourselves working in the south; however, as part of the U.N. system we are trying to take the voice of the region to the global fora in general. We have an IGF for the Arab region, for example, and through this IGF we have involved all kinds of stakeholders, private sector included, and we are trying to be the bridge between the region and the global level. However, in some meetings it is the government that should be the voice, the government itself, not the convener. But we are trying to hold the discussion at the regional level to the maximum extent possible, involving the maximum number of stakeholders, and we are doing that in a good way. In our meetings all stakeholders are nowadays involved, although we are an intergovernmental organisation. Then, sometimes, the decision-making is for governments only. So in this area we are doing our best, but sometimes the responsibility lies with our governments.
>> SALEM FADI: It is impressive that countries that are under conflict are represented, and that their experts are represented; getting out of these conflict zones and coming to the meeting, or even joining virtually, is very critical, and I think that takes a lot of work. Finally, as we are closing: thank you very much, everyone, for contributing to this discussion, or the start of a discussion. We are in a digital era and the AI age, but many of our countries in the global south are not yet there: they are either just markets, or just labor, or neither. So there is this challenge of those who are willing and able to be included versus those who are unwilling or unable, including in developing capacity building, and that is something that requires more research. We look forward to working with our key stakeholders and partners to develop such research for the global south, and specifically for the Arab region, where we are based. I would like to thank Salma and Sara and Zeina and the teams that worked on making this research possible; we look forward to publishing it. And finally, thank you Google and Google.org for supporting this research and making sure it involves a lot of stakeholders and represents the voice of people working in AI in this region and in these discussions globally. Thank you once again, and we look forward to continuing this discussion.
>> NIBAL IDLEBI: Thank you, Fadi, thank you very much.