IGF 2023 – Day 2 – WS #86 AI-driven Cyber Defense: Empowering Developing Nations – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> BABU RAM ARYAL: Good evening.  Tech team, it's okay? Okay.  Welcome to this workshop number 86 at this hall. It's a pleasure to be here discussing about artificial intelligence and cyber defense, especially for developing country perspective. This is Babu Ram Aryal, and I am a lawyer, and I've been engaged in various law and technology issues from Nepal. And I would like to introduce very briefly my panelists this evening. 

Mr. Sarim is from Meta, and he leads Meta's Southeast Asia Policy Team and is significantly engaged in AI policy and technology issues. And he will be representing this panel from business perspective.

My colleague, Waqas Hassan, leads international affairs at the Pakistan Telecommunication Authority, and he is engaged on the regulatory side, and he will be sharing the regulatory perspective, of course from a Pakistan viewpoint.

And my colleague, Michael, is from Zambia, and he is a cyber analyst and an investigator in cybercrime, and he will be representing the law enforcement agency perspective.

And Dr. Tatiana Tropina is Assistant Professor at Leiden University, and she will be representing the policy perspective, especially the European perspective.

So, artificial intelligence has given a very significant opportunity to all of us. It has now become a big word, although it is not a new one, but recently it has become a very popular tool and technology. And lots of threats have also been posed by this technology, by artificial intelligence. At this panel, we'll be discussing how artificial intelligence could be beneficial, especially from a cybersecurity or defense perspective, and also the framework, on the defense side, for the potential risks of artificial intelligence in cybersecurity and cybercrime, and the mitigation of these kinds of issues.

I'll go directly to Michael, who is directly experiencing various risks and threats and handling cybercrime cases in Zambia. Michael, please share your experience and your perspective. You have also been very engaged in the IGF; I know you have been a MAG member and engaged on the African continent as well. The floor is yours, Michael. 

>> MICHAEL ILISHEBO: Good afternoon and good morning and good evening. I know the time zone for Japan is really difficult for most of us who are not from this region. So, of course in Africa, it's morning. In South America, it's probably in the evenings. So, all protocols observed.

So, basically, I am with law enforcement working for the Zambia Police Service and the Cybercrime Unit. In terms of the current crime landscape, we've seen an increase in crimes that are technology‑enabled. We've seen crimes where you wouldn't expect such a thing to happen, but at the end of it all, we've come to discover that most of these crimes being committed are enabled by AI.

I'll give an example. If we take a person who's never been to college or never done any computing course, and they are able to programme computer malware or a computer programme and use it for their criminal intent, you ask how they were able to execute such a thing. We've come to understand that this has been enabled by AI, especially with the coming of ChatGPT and other AI‑based tools online, which basically are free. With time on their hands, they are able to come up with something that they can execute in their criminal activities. So, this, itself, has posed a serious challenge for law enforcers, especially on the African continent, and mostly in developing countries.

Beyond that, of course, we handle cases. We handle matters where it does become difficult to distinguish what is human and what is artificial intelligence generated, whether it's an image or whether it is a video. So, as a result, when such cases go to court or when we arrest such perpetrators, there is a slight ‑‑ like, it's a gray area on our part, because the AI technologies are able to do much, much more, and much, much faster, than a human can comprehend. So, as a result, from the law enforcement perspective, I think AI has caused some challenges.

>> BABU RAM ARYAL: What kind of challenges have you experienced as a law enforcement agency significantly?

>> MICHAEL ILISHEBO: So, basically, it comes to the use of digital forensic tools. Like, I'll give an example. A video can be generated that would appear to be genuine, and everyone else would believe it. And yet, it is not. You can have cases where, which have to do with freedom of expression, where somebody's voice has been copied. And if you really listen to it, you believe that, indeed, this is a person who has been issuing this statement, when in fact not. So, even emails. You can receive an email that genuinely seems to come from a genuine source, and yet, probably it's been AI written, and everything points out to an individual or to an organization. At the end of the day, as you receive it, you have trust in it. So, basically, there are many, many, many areas. Each and every day, we are learning some new challenges and new opportunities for us to catch up with the use of AI in our policing and day‑to‑day activities as we also try to distinguish AI activities and human interaction activities.

>> BABU RAM ARYAL: Thank you, Michael. I will come to Tatiana. Tatiana is a researcher and significantly engaged in cybersecurity policy development. As a researcher, how do you see the development of AI, specifically in cybersecurity issues? And as you represent the European stakeholder perspective on our panel, what is the European position on these kinds of issues from a policy perspective, policy frameworks? What kind of work is being done by European countries, Tatiana?

>> TATIANA TROPINA: Thank you very much. And I do believe that in a way, the threat and the opportunity that artificial intelligence brings for cybersecurity, or security in general ‑‑ like, let's say if we put as protection from harm ‑‑ might be almost the same everywhere, but the European Union indeed is trying to sort of deal with them and foresee them in a manner that would address the risks and harms. And I know that the big discussion in the policy community circles and also in academic circles is not the question anymore whether we need to regulate AI for the purpose of security and cybersecurity or whether we do not; the question is how do we do this? How do we do ‑‑ how do we protect people and those systems, like kind of from harm, while not stifling innovation? And I do believe that right now there are two operations that are discussed, or not two, but mostly, you know, we are targeting two things ‑‑ the risk‑based regulation, so when the new AI systems are going to be developed, the risk is going to be assessed, and then based on risk, regulation will either be there or not.  And outcome‑based regulation. You want to create some framework of what you want to achieve and then give industry some ability to achieve it by their own means, as long as you protect from harm. 

But I do believe, and I would like to second what the previous speaker said. From the law enforcement perspective, from crime perspective, the challenges are so many that sometimes we are looking at them and we are getting sort of, how do I say it? Not our judgment is clouded, but we have to do two things. We have to discuss the current challenges while foreseeing the future challenges, right? So, I do believe that right now we are talking a lot about risks from large language models, generation of spam phishing campaigns, generation of malware, and this is something that right now is already happening and is hard to regulate. But if we are looking to the future, we have to address a few things in terms of cybersecurity and risks.  Sorry. Yeah.

Well, first of all, the AI bias, the accountability and transparency of algorithms. We have to address the issues of deep fakes and here it goes beyond cybersecurity; it goes to information operations, into the field of national security.  So, this is just my baseline, and I'm happy to go into further discussions on this. 

>> BABU RAM ARYAL: Thank you, Tatiana. Now, for the initial remarks, I will come to Sarim. From the industry side, Meta is a very significant player and its platforms are very popular, and there have also been many complaints about risks on Meta's platforms, and not only Meta's. You are just here, that's why I mention it. But in many countries there are complaints that these platforms are not contributing, that they are just doing business while the technologies bring issues created by people, by the bad people.

So, there are a few things: the business perspective, the technology perspective, as well as the social perspective. So, as a technology industry player, how do you see the risks and opportunities of artificial intelligence, specific to the topic that we have been discussing? And what could be the response from industry on addressing these kinds of issues? Sarim.

>> SARIM AZIZ: Thank you, Babu, for the opportunity. I think this is a very timely topic. There's been a lot of debate around sort of opportunities with AI and excitement around it, but also challenges and risks, as our speakers have highlighted. 

I think I just want to reframe this discussion from a different perspective. You know, from our perspective, we do see, you know ‑‑ you have to actually understand the threat actors we're kind of dealing with. They are quite ‑‑ can sometimes be using quite simple methods to evade detection but sometimes can use very sophisticated methods, AI being one of them.

You know, we have a cybersecurity team at Meta that's been trying to stay ahead of the curve of these threat actors. And I want to point to sort of a tool, which is our Adversarial Threat Report, which we produce quarterly. That's just a great information tool out there, for policymakers as well, to understand the trends of what's going on. This is where we report in‑depth analysis of influence operations that we see around the world, especially around coordinated, inauthentic behavior, right? If you think about the issues we're discussing around cybersecurity, a lot of it has to do with inauthentic behavior, someone who's trying to appear authentic, whether it's a phishing email or a, you know, message you might receive, hacking attempts and other things. So, that threat report is a great tool, and that's something we do on a quarterly basis. We've been doing that for a long time.

We also did a state of influence ops report between 2017 and '20 that shows the trends of how sophisticated these actors are. But from our perspective, I think we've seen three things with AI from a risk perspective that honestly does not concern us as much. I'll explain why. Because, one is, yes, like as Michael mentioned, you know, the most typical use case is AI‑generated photos and you're trying to appear like your real profile, right? But frankly, if you think about it, that was happening even before AI. In fact, most of the actions that we were taking on accounts that were fake previously all had profile photos. It's not like they didn't have a photo. Whether that photo is generated by AI or a real person, it shouldn't matter because it's actually about the behavior. And I think that's my main point, is that I think the challenge with gen AI is that we get a little bit stuck on the content, and we need to change the conversation about how do we detect bad behavior, right? And so, that's one.

Second thing we notice is because of gen AI being the hype cycle, the fact that almost every session here at IGF is about AI, it becomes an easy target for phishing and scams, because all you need to do is say, "Hey, click on this to access ChatGPT for free," and people are ‑‑ because they've heard of AI, they think it's cool, they're more willing to get duped into those kinds of sort of hype cycles, which is common with things like AI and other things.

The third is, like, we ‑‑ as I think Michael also alluded to this, and Tatiana as well ‑‑ it does make it a little bit easier for, especially I would say non‑English speakers who want to scam others to use gen AI, whether you want to make ransomware or malware to make it easier because now you've got a tool to help you fix your language and make it look all pretty. So, it's like, okay, you've got a very nice auto complete spell‑checker that can make sure your things are well written. So, those are sort of the three high‑level threats.

But honestly, what I would say is we haven't seen a major difference in law enforcement. And I'll give you an example. In quarter one of this year ‑‑ and we also have a transparency report where we report on, we measure ourselves on how good is our AI, and I think that's the point I'm trying to get to, is that we are more excited about the opportunities AI brings in cybersecurity, in helping cyber defenders and helping people keep safe, versus the risks. And this is one example. 

99.7% of the fake accounts that we removed in quarter one of this year on Facebook were removed by AI. And if I give you that number, it's staggering: 676 million accounts were removed in just one quarter by AI alone. Right. That's the scale, when you talk about detection at scale. And it has nothing to do with content. I just want to bring it back to that. What we detected was inauthentic behavior, fake behavior. It shouldn't matter whether your profile photo or your text was from ChatGPT. Because once you get into content, you're getting into the weeds of what the intent is, and you don't know the intent, right, whether it's real or ‑‑ and in fact, I'll also point to the fact that some of the worst videos ‑‑ you talked about fake videos ‑‑ are actually not the gen AI ones. If you look at the ones that went the most viral, they are real videos. And it's the simplest manipulations that have fooled people. So, I'm pointing to the U.S. Speaker of the House, Nancy Pelosi, her video that went viral. All that they did is slow it down, and they didn't use any AI for that. And that had the most negative, like the highest, impact, because people believed that there was a problem, right, with the individual, which clearly wasn't the case. It was an edited video. So, I guess what I'm trying to say is that the bad actors find a way to use these tools, and they will find any tool that's out there, but I think we really have to focus on the behavior and detection piece. And I can get into that more. That's it for now. 
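
To illustrate the behaviour‑based detection Sarim describes ‑‑ acting on how an account behaves rather than on what its photo or text looks like ‑‑ here is a minimal sketch in Python. The signals, thresholds, and field names are illustrative assumptions for this transcript, not Meta's actual system, which relies on trained models at far larger scale.

```python
# Toy behaviour-based triage: flag accounts whose activity looks automated or
# coordinated, without ever inspecting the profile photo or posted content.
# All thresholds and field names are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class AccountActivity:
    account_id: str
    device_id: str
    messages_last_minute: int
    accounts_on_same_device: int
    friend_requests_last_hour: int

def is_suspicious(a: AccountActivity) -> bool:
    return (
        a.messages_last_minute > 100          # mass messaging
        or a.accounts_on_same_device > 5      # many accounts created on one device
        or a.friend_requests_last_hour > 200  # request flooding
    )

def triage(batch: List[AccountActivity]) -> List[str]:
    """Return account ids to queue for automated action or human review."""
    return [a.account_id for a in batch if is_suspicious(a)]

if __name__ == "__main__":
    sample = [
        AccountActivity("u1", "dev-9", 3, 1, 2),      # ordinary usage
        AccountActivity("u2", "dev-9", 250, 8, 500),  # bot-like behaviour
    ]
    print(triage(sample))  # -> ['u2']
```

The point of the sketch is that the decision never asks whether a photo was AI‑generated; only the behaviour matters.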

>> BABU RAM ARYAL: Thanks. It is very encouraging that 99% of fake accounts are removed by AI. And is there any intervention from AI on the negative side of the platform?

>> SARIM AZIZ: Like I said, I mentioned the three areas. Obviously, when you get into large language models, you know, I also want to make the point that we believe the solution here ‑‑ I'm getting into solutions a bit early ‑‑ but is that more people in the cybersecurity space, people who ‑‑ you know, we talk about amplifying the good. We need to use it for good and use it for keeping people safe. And we can do that through open innovation and open approach and collaboration, right? So, of course, the risks are there, but if you keep something closed and you only give it access to a few companies or a few individuals, then bad actors will find a way to get it anyway, and they will use it for bad purposes. But if you make sure it's accessible and open for cybersecurity experts, for the community, then I think you can use open innovation to really make sure the cyber defenders are using the technology, improving it. And this 99.7% is an example of that. I mean, we open source a lot of our AI technology, actually, for communities and developers and other platforms to use as well.

>> BABU RAM ARYAL: Thanks. I'll come back to you in the next round of Q&A. You are in a very hard spot. I know regulatory agencies are facing lots of challenges from technology, and now telecom regulators have very big roles in mitigating the risks of AI in telecommunications and, of course, the Internet. So, from your perspective, what do you see as the major issue as a regulator or as a government when artificial intelligence is challenging the platforms in a way that makes people feel at risk? And of course, from your Pakistani perspective as well. And how are you building this kind of digital citizenship in your country? Can you shed some light on this?

>> WAQAS HASSAN: Yeah, thanks, Babu. Actually, thanks for setting up the context for my initial remarks here, because you already said that, you know, I'm in a hot seat. I'm even in the middle of, you know, the Meta platform, the police, and the researcher in this seating. 

With regulators, it's a bit of a tricky job, because at one hand, we are connected with the industry; on the other hand, we are directly connected with the consumers as well.  This is more like a job where you have to do the balancing act whenever you're taking any decisions or any moving forward on anything.

With cybersecurity itself being a major challenge for developing countries for so long, this new mix of AI has actually made things more challenging. You see, the technology has usually, primarily and inherently been developed in the West. And that technology being developed in the West means that developing countries start at a disadvantage as well, because they're already lacking on the technology transfer part. What happens is, because of the Internet and because of how we are connected these days, it is much easier to get any information, which could be positive or negative. And usually, the cybersecurity threats, or the elements that are engaged in such kinds of cybercrimes, are usually a step ahead when it comes to defenses. Defense will always be reactive. And for developing countries, we have always been in a reactive mode. 

Meta has just mentioned that, you know, their AI model or their AI project has been able to bring down the fake accounts on Facebook within one quarter by 99.7%. That means that they do have such an advanced or such a tech‑savvy technology available to them, or resources available to them, that they are able to achieve this huge and absolutely tremendous milestone, by the way. But can you imagine something like this or some solution like this in the hands of a developing country with that kind of investment, to deploy something like this which can serve as a dome or a cybersecurity net around your country? That's not going to happen any time soon. 

So, what does it come down to, then, for us as regulators? It comes down to, number one, removing that inherent fear of AI, which we have in the developing countries. Although it is absolutely tremendous to see how AI has been bringing in positive things, but that inherent fear of any new technology is still there. This is more related to behavior, which Sarim was mentioning. And I think it also boils down to one more point which is intention.  I think intention is what leads towards anything, whether it is on cyberspace or off the cyberspace. 

I think what developing countries need to tackle this new form of cybersecurity, as I would call it, with the mix of AI, is to have more capacity ‑‑ more institutional capacity, more human capacity ‑‑ and to have a national collaborative approach which is driven by something like a common agenda of how to actually go about it. We are so disjointed, even in our national efforts for a secure cyberspace, that doing something on a regional level seems like a far sight to me right now.

So, just to sum it up, for example, in Pakistan, we have a national cybersecurity policy as well. We do have a National Centre for Cybersecurity. They have issued regulations on critical telecom infrastructure protection. We do certain intelligence sharing as well. There is a national telecom service as well. There are so many things we are doing, but if I see the trend, that trend is more like last three‑four years, maybe, where things have actually started to come out.

But imagine if these things were happening ten years back. We would have been much more prepared to tackle AI now into our cybersecurity postures. So, from a governance or a cybersecurity or from a regulatory perspective, it is more about how we tackle these new challenges with a more collaborative approach in looking at, you know, more developed countries for kind of technology transfer and to build institutional capacity to address these challenges. Thank you.

>> BABU RAM ARYAL: Thank you. Excellent. I was going to come to capacity, and you just mentioned the capacity‑building of people.

Tatiana, I would like to come to you. How much investment on policy frameworks and capacity building is coming in framing law and ethical issues in artificial intelligence and whether industries are contributing to manage these things, and also from government side? So, what is the level of capacity on policy research, on framing artificial ‑‑ I mean, framing the way out for these artificial intelligence and legal issues?

>> TATIANA TROPINA: It's working, right? Thank you very much for the question.  I must admit, I've heard the word investment. I'm not an economist, so I'm going to talk about people, hours, efforts, and whatever.

So, first of all, when it comes to security, defense, or regulation, I think we need to understand that to address anything and to create future frameworks, we need to understand the threat first, right? So, we need to invest in understanding threats.  And here, it is not only ‑‑ and I think I mentioned this before ‑‑ it's not only about harms as we see it, harm from crime, harm from deep fakes, it's also harm that is caused by bias, by ethical issue, because the artificial intelligence model is only as good as ‑‑ it brings as much good as the model itself, the information you feed it, the final outcome. And we know already ‑‑ and I think that this is incredibly important for developing countries to remember that AI can be biased, and technologies created in the West can be double biased once technology transfer and adoption happens somewhere else.

For example, when I've heard about Meta removing accounts based on behavioral patterns, I really would like to know how these models are trained, be it content, be it language, be it behavioral pattern. Does it take into account cultural differences between language, countries, continents, and whatever? And here, here, I do believe that what we talk about in terms of cooperation between industry, researchers, and governments and law enforcement is crucial.

Just a few examples. Scrutiny, external scrutiny of algorithms ‑‑ and I believe the three of you will agree with me ‑‑ it is incredibly important, once the algorithm is created and trained, to open it for scrutiny from civil society, from research organizations, because you need somebody from the outside to see if it's ethical. You know, to me, testing whether an algorithm is ethical just by deploying it is the same as testing medicine or cosmetics on animals. We don't do this anymore. So, it's not only building capacity itself, it's adopting a completely new mind‑set about how we are going to do this.

And in terms of investment in the creation of future‑proof frameworks, you really need to see the whole picture and then see, okay, what kind of threats I'm addressing today and what kind of threats I might foresee tomorrow. And this is why I was talking about how it is hard to think about future‑proof frameworks, because indeed, defense will always be a bit behind. But if you forget about the technology itself ‑‑ technology can change tomorrow ‑‑ you can think about how you frame harms. What do you want to achieve in your innovation? And then say, okay, Meta, I want to achieve this level of safety. If you see this risk, please provide this safety. And leave it to Meta, and make Meta open this also for external research, and this cooperation might bring you to the point where it would be more ethical, where it would be more for good in terms of defense.

And I also want to say the fear of AI exists everywhere, I believe, and this is why every second session here is about AI, just because we are so scared. But I also do believe that we cannot stop what is going on. We really have to invest. And here I'm talking again not about money, but about people.

And also, if I may, if I have not spoken for too long yet, I think that there are so many issues here that we have to untangle, and again, look at harms and look at the algorithm itself. For example, the use of algorithms in the creation of spam and phishing campaigns or malware. We know how to address it. We need to work on prompt engineering, because it creates malware only as good as the prompt you give it. And if a year ago you could say to ChatGPT, just create me a piece of malware or ransomware, and it would do it, now you cannot do this. You need to split it into many, many prompts. So, we have to make this untenable for criminals. We have to make sure that every tiny prompt, every tiny step that they can execute in the creation of this malware by an algorithm will be stopped. And yes, it is work, but this is work we can do. And so with any other harm. Sorry for speaking for too long. Thank you.
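
To make the point about intercepting each step of a malicious prompt chain more concrete, here is a minimal sketch in Python of a screening layer placed in front of a language model. The blocked patterns, the refusal text, and the call_model placeholder are illustrative assumptions; production systems use trained safety classifiers rather than keyword lists.

```python
# Sketch of a prompt-screening layer in front of a language model.
# Patterns, refusal text, and call_model() are illustrative assumptions only.
import re
from typing import Optional

BLOCKED_PATTERNS = [
    r"\bransomware\b",
    r"\bkeylogger\b",
    r"\bdisable (antivirus|edr)\b",
    r"\bencrypt .* files? .* demand\b",
]

def screen_prompt(prompt: str) -> Optional[str]:
    """Return a refusal message if the prompt matches a blocked pattern,
    otherwise None, meaning the prompt may be forwarded to the model."""
    lowered = prompt.lower()
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, lowered):
            return "This request appears to ask for malicious code and was blocked."
    return None

def call_model(prompt: str) -> str:
    # Placeholder for whatever LLM backend is in use.
    return f"[model response to: {prompt!r}]"

def answer(prompt: str) -> str:
    refusal = screen_prompt(prompt)
    return refusal if refusal else call_model(prompt)

if __name__ == "__main__":
    print(answer("Write me ransomware that encrypts all files and demands payment"))
    print(answer("Explain how TLS certificate pinning works"))
```

Checks like this, applied to every step of a split‑up prompt chain, are one way to make each "tiny prompt" more expensive for an attacker, which is the direction described above.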

>> BABU RAM ARYAL: It's absolutely fine. Thank you very much for bringing more issues to the table. Sarim, that was a very interesting response from Tatiana ‑‑ studying what harm is and how we understand it. And previously, Waqas mentioned the fear of AI. So, is there any fear of these things from technology platforms like yours? How are you handling these kinds of fears and risks technologically? I don't know whether you are able to respond from a technological side, but still, from your platform perspective.

>> SARIM AZIZ: Yeah, I think any new tech can seem scary, but I think we need to move beyond that. And like Tatiana and others mentioned, the existential risk becomes a distraction in the conversation. I think there are near, short‑term risks that need to be managed. And there are approaches. I think there are some really good principles and frameworks out there, like the OECD frameworks about fairness, about transparency, accountability. The White House commitments as well. There are policies for countries to look at, and they certainly need to be localized to every region, but there are plenty of good examples. The G7 Hiroshima Process. I think industry generally is supportive of making sure that we make AI responsibly, and for good. But to me, I think the bigger question ‑‑ the harms are sort of clear. The idea now is: how do we get this technology into the hands of more people who are working in the cybersecurity space? Because if you think about the cybersecurity space, 20 years ago it was also quite closed. But now you have a lot more collaboration and open innovation happening. It took 20 years for us to realize that keeping cybersecurity closed to a few does not help, because the bad actors get the stuff anyway, and then you're defenseless against them. So, I think the same thing has to happen with AI. It's going to be tough, but I think governments and policymakers need to incentivize open innovation. When you have a model that's closed, you don't know how it was trained, you don't know how it was built, you don't have a responsibility ‑‑ like, it makes it difficult for the community to figure out what the risks are.

One of the things we did, for example, is we submitted our model. Our model is open source. It was launched just in July of this year, and already in one month it was downloaded by 30,000 people. Now, of course, we did red‑teaming on it, we tested it. But no amount of testing is going to be perfect. And the only way to get it tested properly is to get it out there so the open source community and responsible players have access to it. They know what they're doing. And that's the beauty of AI. I think that's the game changer. Waqas mentioned there is a capacity issue. Yes, there is a capacity issue. We have a capacity issue as Meta. You can't keep up with the bad actors by manual review alone. You can have huge numbers of people looking at what's on the platform and removing content, and it will never be enough, right? AI helps us get better. You still need human review, you still need experts who know what they're doing, but it helps them be more efficient and effective. In the same way, an open innovation model can help developing countries catch up on cybersecurity, because now you don't need thousands and thousands of cybersecurity experts. You just need a few who have access to the technology, and that's what open innovation and open sourcing does, which is what we've done with our model.

We even submitted our model to Defcon, which is a cybersecurity conference in Las Vegas. And we said, "Break this thing. Find the vulnerabilities. What are we not doing? Where are the risks?" And we're waiting for their report. But that's how you make it better, right? Of course, we did our best to make sure that it takes care of the CBRN risks of, you know, chemical, biological, radiological, nuclear risks, but there are other risks we may not have seen. This is where putting it on open source, giving access to more researchers. Doesn't matter whether you're in Zambia or Pakistan, any other country, you have access to the same technology that Meta has built. That's how we get to an open innovation approach.

There are many other language models. I am not going to name them, but they are not open, and Meta's is. So, I think that's where we need to get policymakers to incentivize open hackathons on these kinds of things ‑‑ break this thing, and create sandboxes to safely test this on, because a lot of the testing you can do is only based on information that is publicly available. Governments could make information available to hackers and say, use this language model and try to do this, in a safe environment, obviously, ethically, without violating anybody's privacy and things like that. So, I think that's where we need to focus the policy discussion.

>> BABU RAM ARYAL: Thank you, Sarim. I think one interesting issue is that we are discussing from the developing country perspective, right? This is our basic objective. And there are opportunities for all of the countries. Access is always there, as you, Sarim, mentioned. But there are big gaps between developing countries and developed countries in the capacity we have been talking about. And especially if I see it from Nepal's perspective, we have very limited resources, technology, as well as human resources, and that is a big challenge, a big challenge on the defense side.

So, Michael, what is your personal experience leading from the front? And what is the capacity of your team, and where do you see the gap between developing countries and developed countries on capacity of addressing these issues?

>> MICHAEL ILISHEBO: So, basically, my experience is probably shared by all developing countries. We are consumers of services and products from developed countries. We haven't yet reached the stage where we can have our own homegrown solution to some of these AI language models, where we can maybe localize them or train them on our own data sets. Whatever we are using or whatever has been used out there is a product of the western world.

So, basically, one of the major challenges that we've encountered through experience is that the public availability of these language models, in itself, has proved to be a challenge in the sense that anyone out there can have access to the tools. It simply means that they can manipulate it to an extent for their criminal purposes. As reported by Meta, in the first quarter of their use of their language model that they're using, they got close to a billion fake accounts. Am I correct? Close.

>> SARIM AZIZ: (Off microphone).

>> MICHAEL ILISHEBO: Whatever it was ‑‑ it could be images, it could be anything that is not meeting the standards of Meta. So, if you look at those numbers, those numbers are staggering. Now imagine if some of the information that Meta has brought down because of ethical and probably safety and other concerns were instead directed at a third world country that has no capacity at all to filter that which is correct from that which is not correct. It is becoming a challenge.

As much as the crime is increasing, also with the borderless nature of the Internet, the AI models have really become something where you have to weigh the good and the bad. Of course, the good outweighs the bad. But again, when the bad comes in, the damage it causes within a short period of time, like, overshadows the good. So, at the end of it, there are many, many challenges that we face through experience ‑‑ if only we could be at the same level as developed countries in terms of the tools they are using to filter anything that will probably sway public opinion, in terms of misinformation, in terms of hate speech, in terms of any other act that we may deem not appropriate for society, or any other act that is probably a tool for criminal purposes.

>> BABU RAM ARYAL: Thanks, Michael. Waqas, would you like to intervene on this?

>> WAQAS HASSAN: I think, as already mentioned, the pace at which the threats are evolving is not equal to the pace at which our defense mechanisms are improving. And why is this happening? It is because our forensics are not fast enough ‑‑ forensics is not as fast as the crimes that are happening. Like Michael has already mentioned, it is a good thing that these tools or these models are open source, but at the same time, these models are equally available to people who want to misuse them as well.

Now, when the capacity of people who want to misuse it outweighs the capacity of people who have to defend against it, you find incidents and situations where we eventually say, "Oh, look, AI is bad for us or bad for society" and all. But when we are better prepared, we are proactive. Like Facebook ‑‑ what Facebook did is sort of a proactive thing. Rather than letting those accounts do something which would eventually become a big fiasco, they took them down before something could happen. That is something where developing countries are usually lagging behind: doing cybersecurity or having their cyber defense in a proactive mode, rather than being in a reactive mode. I'm not saying they're not prepared, and I'm not saying that there is no proactive approach there. There is. But that proactive approach is hugely dependent on what kind of tools and technologies and knowledge and resources and investment are available to the developing countries, rather than just saying that, you know, okay, fine, we are taking a proactive approach and we're doing these things. Michael is at the forefront of this, so he would know that the kinds of threats that are emerging now are much more sophisticated than they ever were before. 

Are we as sophisticated and prepared as we were before? I leave that question on the table. Thank you.

>> SARIM AZIZ: Can I add perspective?

>> BABU RAM ARYAL: Sure, Sarim.

>> SARIM AZIZ: Coming back to my introduction, I don't think the risk vectors have changed. Sorry, you want to add something? Yeah.

>> TATIANA TROPINA: (Off microphone).

>> SARIM AZIZ: Okay. I think, yes, you might ‑‑ as I said, the bad actors who might want to cause harm are using the same vectors they were before gen AI. It's like phishing, right? Phishing is a good example. You don't solve for phishing ‑‑ okay, fine, they can have a much better‑written email that seems real, and logos that look real, and whatever, right? But that's not how you solve phishing. You solve phishing by making authentication credentials one‑time use, because any one of us ‑‑ the most educated person in this room ‑‑ can be phished, right? I mean, if you're writing in a rush, you don't have time to check the email address; you look at something, it looks real, you click on it. We've all done it, right? I'm going to raise my hand. So, those threat vectors, in terms of what you're talking about, haven't changed. Same with the fake accounts. Our fake account detection doesn't care how real your photo is or isn't. It's based on behavior, and that behavior ‑‑ yes, of course, we have 3.4 billion users, and we have to be careful ‑‑ this is the spamming we're seeing: people creating multiple accounts on the same device or sending 100 messages a minute and spamming people. So it's really bad behavior. It doesn't matter what it is, it's wrong, no matter what country or culture you're from. So, that's the kind of stuff that is universal, right? Same with phishing, it's quite universal. 
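
As a concrete illustration of the one‑time‑use credentials mentioned here, this is a small sketch of time‑based one‑time passwords (TOTP, RFC 6238) using only the Python standard library. It is a teaching sketch of the general mechanism, not a description of Meta's login flow, and the demo secret is a placeholder.

```python
# Sketch: time-based one-time passwords (TOTP, RFC 6238). A captured code is
# useless after roughly one 30-second window, which blunts classic phishing.
import base64, hashlib, hmac, struct, time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval            # current time step
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(shared_secret_b32: str, submitted_code: str) -> bool:
    # A real verifier would also check adjacent time windows and rate-limit attempts.
    return hmac.compare_digest(totp(shared_secret_b32), submitted_code)

if __name__ == "__main__":
    secret = "JBSWY3DPEHPK3PXP"  # demo secret only; never hard-code real secrets
    code = totp(secret)
    print("current code:", code, "verified:", verify(secret, code))
```

Even if a phishing page captures the six‑digit code, it expires within the interval, which is why the quality of the phishing email matters far less than the credential design.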

So, yes, certain risks are the same, like NCII ‑‑ non‑consensual intimate imagery ‑‑ which was there before gen AI. You can use Photoshop for that. You don't need gen AI. And unfortunately, that's the biggest harm we see. That's the biggest risk ‑‑ we talked about risk. And that's a separate topic where I'm talking on a panel on child safety as well. We have an initiative called stopncii.org, and this is where AI helps. If you are a victim of NCII, or you know a victim of NCII whose pictures have been compromised and someone is blackmailing them and things like that, you can go to stopncii.org, and you can submit that video or image, and we use AI to block it across all platforms, all services. This is the power of AI, right? Even if it's slightly changed, because we take that hash and we match it. So, this is the power of AI. I think it helps us prevent, actually, a lot of harm; whereas without AI ‑‑ you can easily do the same thing without it, you know. Gen AI might make it a little bit easier or maybe higher quality, but the quality of the impersonation or of the attempt doesn't really change the risk vector. 
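
The hash matching mentioned here can be illustrated with a simple perceptual hash. StopNCII itself relies on industry hash formats such as PDQ, and the image never has to leave the victim's device; the average‑hash below is only a simplified stand‑in showing how a lightly edited copy can still be matched, and the file paths are placeholders.

```python
# Simplified perceptual ("average") hash with Hamming-distance matching.
# Requires Pillow: pip install pillow
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to an 8x8 greyscale image, then set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    return bin(h1 ^ h2).count("1")

def is_match(h1: int, h2: int, threshold: int = 10) -> bool:
    """A small Hamming distance means very likely the same image, lightly modified."""
    return hamming_distance(h1, h2) <= threshold

if __name__ == "__main__":
    reported = average_hash("reported_image.jpg")  # hash submitted by the victim
    uploaded = average_hash("new_upload.jpg")      # hash of a new upload being checked
    print("block upload:", is_match(reported, uploaded))
```

Because only hashes are shared and compared, platforms can block re‑uploads across services without ever exchanging the underlying image.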

>> BABU RAM ARYAL: Tatiana.

>> TATIANA TROPINA: What I wanted to say is largely in line with what you say. I wrote down one line while I was listening in. Misuse will always happen. We have to understand that we should stop fixating on the technology itself. Any technology will be misused. If you want to create bulletproof technology, you should not create any technology at all, because there will always be people who misuse it, who will find a way to misuse it. Crime follows opportunity. That's it. Any technology will be misused.

And also, about phishing, for example. The human is always the weakest link. You're not fooling the system; you are only fooling the humans. And in the same way, we have to talk about harms. And here I go back to one of my earlier remarks. We have to focus on harms, not on technology per se. We have to see where the weakest link is, what exactly can be abused in terms of harms, where harm is caused. And in this way, I strongly believe that AI can bring so much good. And thank you for reminding me about the project on non‑consensual image sharing. Of course, AI can do it; you can have hashes or databases. But again, if we look layer after layer, we can ask ourselves how this can be misused as well, and how this can be addressed, and so on and so forth. We just should always ask questions.

And also, I would like to remind again and again, it's not only about technology. Let's always remember that. It is humans who are making mistakes and humans who are abusing this technology. And this is where we also have to build capacity, not only in technological development, not only in regulatory capacity, but after all, the whole chain of risk, you know, focuses, focuses at the end on humans, on humans developing technology, on humans developing regulation, on humans being targeted, on humans making mistakes. And this is where we have to look at as well. 

>> BABU RAM ARYAL: Thanks, Tatiana. Now I would like to open the floor. If you have any questions ‑‑ and I'm going to go to my colleague who is moderating online: if there is any question from those joining this discussion online, you can also put your question to the panel. And I would also like to request participants here to speak and share your questions with the panel. Yes, please introduce yourself briefly, for the record.

>> AUDIENCE: Hello, everyone. I'm (?) from Nepal. It's been an interesting discussion. Thank you so much, panel. I wanted to explore a little bit something we have probably missed from today's discussion, which is the capacity of individual countries to negotiate with big tech players, right? If you look at the present scenario, so many resources are being collected from the so‑called third world, the Global South, by the developed economies, and of course they are boosting their economies by deploying these sorts of technologies, and we have nothing. And that is one of the main reasons we are not empowered, we are not capable of tackling these sorts of challenges. 

And of course, another thing is that the technology is so much concentrated in the Global North. And I'm not sure that they deal equally and inclusively with the large population living in the Global South; economics comes first. So, what is happening today will continue, and will continue in the AI time, the AI‑dominated time. That is my observation. And I would like to ask the panel what theirs is.

>> BABU RAM ARYAL: Any specific resource person you would like to ask?

>> AUDIENCE: Anyone can. Thank you.

>> BABU RAM ARYAL: Thank you. 

>> SARIM AZIZ: As I said before ‑‑ first of all, I agree with you, there's a way of making technology more inclusive, and that has to be by design. And that's why I think principles when it comes to the frameworks that are out there on AI, being led by Japan and OECD and the White House, it is about inclusivity, fairness, making sure, like, there's no bias in there. But those are all policy frameworks.

I think from a tech perspective, open innovation is the answer, and AI can be the game changer where, as I explained, it is out there. There is no reason why the same technology that we've open sourced, that the western countries have, cannot also be accessed by researchers and academics and developers in Nepal and other countries in Africa. And this is an opportunity to get ahead. AI is the game changer because it's about scale. It's about doing things at scale, especially when you think about systems and protecting systems and the threats you're talking about. It's not a problem where you throw people at it and it will get solved. Of course, you need to do capacity‑building and you need experts, but it helps them be more efficient, more effective. So, I'd love to see what the communities do with it ‑‑ it's only a few months old, our model. It's called Llama 2. You can go look at it. There's a research paper along with it that explains how the model was built, because we've given it an open source license under an Acceptable Use Policy. And so, yeah. 

And there are derivatives already out of it, so you can't even use the language argument anymore, because the Japanese took that model and they already made it into ‑‑ they've called it, I think, Eliza? A Japanese university in Tokyo has made a Japanese version of that model. So, we're excited to see what the community can do. And I think that's the way we can continue to innovate and make sure that nobody gets left behind. 

>> AUDIENCE: I do not completely agree with you, because you can already see that, for example, ChatGPT has the premium and the free versions, and the majority of users are, of course, from the developed economies, and it's quite difficult to buy in. And such resources do not always tend to be easily available. And if you are not habituated and not well equipped with the resources, how can you be capable of tackling the coming challenges in the future?

>> SARIM AZIZ: I don't work for ChatGPT or OpenAI, so I can't speak for them. But our model is open source. It's already public. And it's the same ‑‑ and anyone can basically write another ChatGPT competitor using that.

>> BABU RAM ARYAL: Thank you. Tatiana, he raised one interesting divide between Global North and Global South. Do you see ‑‑

>> AUDIENCE: Thank you very much. This is an interesting debate ‑‑

>> BABU RAM ARYAL: Please introduce.

>> AUDIENCE: I am Dr. Mohammed Shabir from Pakistan, representing here Civil Society, the Dynamic Coalition on Accessibility and Disability. 

So, on the debate going on here, as a student of international relations I would agree that we don't live in an equal world. The terminologies of inclusivity and accessibility all seem very fine on paper, but in reality, what is happening is that, unfortunately, we live in a real world and not in an ideal world where everyone would be equal to one another.

Waqas has a valid point and I would like to ask that question from Waqas and then I would seek the response from Meta. You talked about the transfer of technology. What sort of technology are you talking about here?

And my question to Meta and the Global North is: how far are they ready to share their technology with the Global South when it comes to diversity and inclusivity, not to talk of the earlier point my friend raised about the price and the open and free previous versions of software out there in the market? Those will remain there, but what sort of technology are we talking about here in terms of transferring? Of course, AI is a tool, like any other tool, but I can see that when it was human against human, it would be like a sharp knife that could be used against another person ‑‑ that would be a human using a tool against a human. But this time, AI as a tool is not just a computer being used ‑‑ it would be a computer against the human who is targeted. So, the threat, which my friend from Meta is talking about, is a real one, and it cannot simply be equated with something like phishing. I think this is something that we need to discuss. The response measures have to be as sharp, as quick, and as fast as the technology that we are developing here. But I would want to seek the response on my earlier point from Waqas and then from Meta. Thank you.

>> WAQAS HASSAN: Thank you. When we say technology, primarily, of course, one of the examples is how Meta has just open sourced their AI model, which is something that any nation can use to develop their own models. What we're talking about is a standardization of these technologies, in my view. Once something gets standardized, it is available to everybody. That's how telecom infrastructure works, you know, across the world. If there is a standardized technology, of course, it is easier for developing countries or developed countries, anybody, any interested party, to take advantage of it.

Threat intelligence. What kind of threats are out there? What kind of issues are they dealing with? What kind of information sharing could be there? What kind of new crimes are being introduced? How AI is being misused, and then how that situation is being tackled by the West? Technology itself is just a word. It is more about what are you sharing? Are you sharing information? Are you sharing the tools? Are you sharing experiences? Are you even sharing human resources?

You mentioned that now it is human versus AI, but can we ‑‑ well, how about AI versus AI, you know? Can we develop such tools or AIs that can preempt and work ‑‑ like I'm going back into the cyber warfare movies and all that, which used to predict that in the future bots would be fighting against each other, but we're not there yet. But if we are investing in AI for defense mechanisms to improve the cybersecurity posture, like Meta has just done, that investment muscle is currently not that much available to the developing countries, so we have to look towards the West. And what they are developing is something that we need, and we're going to need for the foreseeable future in terms of the tools, in terms of the information, in terms of the experience sharing, and in terms of the threat intelligence that they have. Thank you. And I'll leave it to Sarim to respond to the other part. 

>> SARIM AZIZ: Thank you, Waqas. I think it's a good question. Maybe I didn't set the context. Llama 2 is a large language model, similar to OpenAI's ChatGPT, except the difference is that it's free for commercial use and it's open source, so the technology is available for any researcher, anyone, to deploy their own model within their own environment. If you've got the computational power, you can deploy it in your own cloud or on your own computer, or you could deploy it on Microsoft Azure or AWS or any other. So, it's basically a large language model that helps you perform those automated tasks, but it's out there as open source, meaning we invite the community to use it. It's free. We don't charge. There's no, like, paid version of it. Obviously, you have to agree to the conditions and agree to the Responsible Use Guide, but beyond that, yeah, that's what we've launched just this year, and we're excited to see how the community around the world uses it for different use cases. And there are use cases we didn't even realize. That's the beauty of open sourcing ‑‑ we won't know how it will get used by different governments, by institutions. Of course, we only make it better and safer through red‑teaming, through testing, you know, all that. But the more cybersecurity experts tell us the vulnerabilities and use it, that's how we'll improve it.
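
As a rough sketch of what "deploy it in your own environment" can look like, here is how an open‑weight model such as Llama 2 can be loaded with the Hugging Face transformers library. This assumes you have accepted Meta's license for the gated weights on the model hub, installed transformers, accelerate, and torch, and have sufficient GPU memory; the prompt is just an example.

```python
# Sketch: run an open-weight Llama 2 chat model locally via Hugging Face transformers.
# Requires prior license acceptance for the gated weights and substantial hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # halve memory use on GPU
    device_map="auto",          # spread layers across available devices (needs accelerate)
)

prompt = "List three practical steps a small national CERT can take against phishing."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The same weights can equally be served from a private cloud tenancy or an on‑premises server, which is the deployment flexibility being described here.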

>> BABU RAM ARYAL: Thanks, Sarim. Tatiana, observing these two questions, I wanted to ask you: the debate of Global South and Global North capacity, and its impact on artificial intelligence and cyber defenses?

>> TATIANA TROPINA: I must admit here that I cannot speak for Global South, which is global majority, right? It is hard for me to assess capacity there, but I can certainly tell you that even in the Global North, if we call it ‑‑ if we called it global minority, Global North ‑‑ the artificial intelligence in cyber ‑‑ so, capacity in cyber defense. On the one hand, of course, if we are talking about expertise, we might talk about some high‑quality specialists and better testing and whatever. But believe me, the threat, the threat is still there. And there is lack of understanding what kind of threat is there, in terms of national security, in terms of cyber operations.  Because so much is connected in the Global North, because people follow things on the Internet so much. The question, for example, deep fakes and elections. And I love the story about Nancy Pelosi's video, because you don't have to change anything. You just have to slow down or speed up and whatever. 

So, the question here, again, boils down to capacity to assess the threats before you have capacity to tackle them. And I do believe that right now, in the so‑called Global North, we have this problem as well, capacity to understand the threat. Are we just saying, "Oh, my God, it's happening!"? Or are we disentangling it, looking at what is actually happening and then assessing it? And I do believe that, indeed, there is a gap when we talk about developing countries and developed countries, in technological expertise, in what you can adopt, in how you can address it. But in terms of understanding the threat, we still lack capacity in the Global North as well. We still lack understanding of the threat itself. And there is a lot of fearmongering going on as well. And I do believe that in this sense, we have to share this knowledge; we have to share this capacity. Because, yeah, the threat can vary from region to region, but at the same time, the harm will be to people, be it elections, be it cyber threats, be it national security threats. And here I do believe that there is such a huge potential for cooperation between what you call Global North and Global South. And by the way, I do think that we have to come up with better terms.

>> BABU RAM ARYAL: Tatiana, I will come on cooperation. I will go to the question. Introduce yourself.

>> AUDIENCE: Thank you for giving me the floor. My name is Ami Majaro, and I'm coming from the Africa IGF as a MAG member. Really interesting session, really. I think when we talk about AI, most of the time it's us from the Global South or developing countries who have the most questions to ask, because we have the bigger concerns. We are still tagging along. When it comes to AI, we are concerned about how inclusive it is and how accessible it is. For example, coming from an African context, we are still struggling with infrastructure. We talk about electricity, access to electricity. It is a problem. And you need to be online, you need to be connected, to be able to utilize most of these facilities that come with AI. But we are already having those challenges, so it's difficult for us to actually either follow the trend or keep up with the trend. So, it always brings to mind as well that we have so many people that really have no access to the Internet, that don't even know what digital is. And we talk about inclusion. How do we bring those people along? And how can they keep up with the whole idea? There is always a concern: what are the risks, what are the challenges, how do we move away from the status quo, how do we follow suit, and what are the risks for us? And usually, what are the benefits we get? But then it comes back to understanding ‑‑ electricity, how digitally literate people are to understand the risks and the benefits that might come from it, and how we practically, I would say, keep up with the Global North that is far ahead of where we are coming from.

There is always the issue of people trusting AI, you know. Where I'm coming from, people will ask, "Is AI here to take our jobs? How much can we be dependent on AI?" And how do we balance how creative we are? Because when you are a consumer of AI, you are consuming. So, does that really limit you to being creative, or to just being the consumer, just, you know, receiving and receiving and receiving? It limits how we can balance the creativity of the human being. So, it's a bit off balance, but it's good to bring this to the table to ensure that as we are moving, even if there may be people left behind, we see how to draw them along. And this is something that I just wanted to bring up there. Thank you. 

>> BABU RAM ARYAL: Thank you very much. Anything you would like to respond to? I also have one important side, on cooperation, just as we started with the Global North and Global South. We are talking from a developing country perspective about how we can build up cooperation and address this at the national, regional, and global level. So, what could be the possible framework for addressing this? Tatiana, go ahead. 

>> TATIANA TROPINA: Sorry. I think that we already mentioned the principles. And they are basically, okay, they're not that global, but I do believe that ‑‑ I absolutely love the previous intervention. I'm sorry, I didn't catch the name. But I do think that there are so many...

So, if we look about principles of AI, like for example, fairness, transparency, accountability, and so on and so forth, I think that we really need to redefine what fairness means. We really need to redefine what fairness means, because I think that right now when we are talking about fairness, we do talk about applicability of fairness to what you call Global North. And I think that if we look at fairness much broader, it will include the use of technologies and the impact of these technologies to any part of the world, to any part of society.

It is hard for me to think about cooperation on the global level. Like, you know, we all get together and happily develop something. I'm not sure this can happen, really, unless the threat is imminent. But, yeah. So, I do believe that we have to ‑‑ when we think about global cooperation, when we think about global capacity‑building, we should not start from threats; we should start from building a better future; we should start from benefits. And I think that fairness would be the best way to start. How do we make technology fair? How do we make every community benefitting from this technology?

I know that you probably want me to talk about more practical steps. I will be honest here: I don't have an answer to this question. Because unless we frame the place where we start from ‑‑ which should include fairness for every country, every region, and every user ‑‑ instead of threats, instead of, oh, my God, we are all going to die tomorrow from AI, or we are going to be insecure tomorrow, we should start with the benefit: how AI can benefit everybody, every population, every community, everyone. And if we start from the premise of good and define it, and somehow frame it ‑‑ and it's already framed in a way, but, you know, widen this frame ‑‑ I think starting there would be a much better place.

And in terms of practical steps, I do believe that the baby steps already taken by civil society and by industry ‑‑ where certain players are moving away from the concept of move fast and break things towards the concept of let's be more fair, more transparent, more inclusive ‑‑ are already a good start. I do not know if regulation, the attempt to regulate, would bring us there. I do not think so, actually. I think that attempts to regulate should go hand in hand with what we do as civil society, as the technical community, as companies cooperating with each other. But to me, honestly, the first step would be to redefine the concept of fairness.

>> WAQAS HASSAN: I'd like to add to what Tatiana said about global cooperation. I would like to take this from the reverse angle, which is starting from the national level. Information‑sharing, intelligence‑sharing, developing tools and mechanisms against ‑‑ or using AI for ‑‑ cyber defense: the starting point is, of course, your national‑level policy, your national‑level initiatives, or whichever body you have in your country. In Pakistan, for example, we do have such bodies.

Now, at the APAC level as well, there are bodies. For example, there is an Asia‑Pacific body that runs cyber drills, and the ITU also organizes these for countries to participate in. So, there is some form of collaboration happening. How effective it is, I can't say for sure, because this particular mix of AI into cybersecurity and the cyber world is something which I haven't seen on any agenda so far. But the starting point is, again, a discussion forum like the one we are sitting in right now, like the IGF, for a national cybersecurity dialogue to start, which can then, you know, sort of metamorphose into a regional dialogue, which gives way to a global dialogue.

Whether it's human, whether it's AI, whatever it is, the starting point of every solution is a dialogue, in my opinion. So, I think this is where collaboration comes in. This is where information‑sharing comes in, especially for the developing countries. If we don't have the tools or technologies, at least what we have is each other to share information with. So, I think that should be the starting point. Thank you.

>> BABU RAM ARYAL: Thanks, Waqas. Michael, on cooperation? How can we build cooperation on cyber defense, and what kind of strategies can we take on that?

>> MICHAEL ILISHEBO: So, basically, we've discussed a lot of issues. Most of them have to do with fairness, accountability, and the ethical use of AI. There are many challenges that we face as law enforcers. But all in all, this discussion will definitely come up in a broader way in the future, when the law enforcers themselves start deploying AI to detect, prevent, and solve crime. Now, that will affect all of us, because at the end of it all, we are looking at AI being used by criminals to target individuals, to get money, or to spread fake news. But now, imagine you are about to commit a crime and AI detects that you are about to commit it. There is a concept of precrime. So, that will affect each and every one of us. A simple pattern of behavior will be used to predict what crime you might want to commit, or will commit, in the future. So, that will bring up issues of human rights, issues of ethical use, a lot of issues, because at the end of it all, it will affect each and every one of us.

Today we are discussing all the challenges that AI has brought to the defense system, but in the future ‑‑ not even the distant future, probably just a few years' time ‑‑ it will be something that all of us will have to face in terms of being judged, being assessed, being profiled by AI. So, as much as we may discuss other challenges, let us also focus on the future, when AI starts policing us.

>> BABU RAM ARYAL: Thanks, Michael. One question from you. Yeah. Question here? Yeah, please. There's a mic there. Please introduce yourself.

>> AUDIENCE: Thank you. Thank you for the insightful reflection. This is Santos from Digital Rights Nepal. On the question of collaboration, I think Tatiana said that we have to define the concept first, but I think we also have to define the concept of cyber defense. If we are moving from cybersecurity to cyber defense, we have to have a kind of open discussion, because defense is the job of government. And normally, in national security and defense, the government is the dominant actor, and they do not want to have other actors at the table, citing security. It has happened with lots of other issues, be it freedom of expression, be it other civil rights. So, national security is kind of their domain, the government's domain, and yet we are talking about promoting cyber defense ‑‑ not cybersecurity ‑‑ in developing countries.

So, within the developing countries, whom are we empowering? Are we empowering the government, the civil society, or the tech companies? Which stakeholder are we talking about? So, I think we have to deconstruct the whole concept of cyber defense, and at the same time, we have to deconstruct the idea of developing countries.

Within the developing countries, we also talked about AI regulation, and in the discussion of cyber defense, is civil society now at the table to discuss these issues? I'll give you one example. Nepal recently adopted its National Cybersecurity Policy. And one of the provisions in the Cybersecurity Policy is that ICT‑related technology or consultation would be procured through a different system than the existing public procurement process, and that process will be defined by the government. So, now they have a new shield, a new layer upon layer, where the public and civil society will not get to discuss what kind of technology the government is importing into the country or what kind of consultation it is having on cybersecurity issues.

So, while talking about these issues, I think we also have to discuss, in practice, the capacity of the government to implement it. Whether the kind of defense or capacity we are talking about is available within the national context, whether other governments are supporting them, or whether there is geopolitics at play, because it has happened in many situations. Cyber defense is part of geopolitics as well. So, we have to consider that dimension too.

So, in my opinion ‑‑ as you said earlier, technologies are different but the values are the same ‑‑ we have to focus on the values. And I think the Human Rights Charter or the Internet rights and principles are the basic values that we have to uphold. Somebody already spoke about the difference between values on paper and values in the practical world. At least to start, I think we have to start with the values that we have already agreed on ‑‑ what we have agreed on paper ‑‑ and then we have to make them practical in real life.

>> BABU RAM ARYAL: Thank you. We have just eight minutes left. Can you please briefly share your thoughts?

>> AUDIENCE: Hi. Thank you. My name is Yasmiyn from the UN Institute for Disarmament Research. So, I just have a quick question. I've been following the issue of AI and cybersecurity for a few years now, and I see that while both fields are so inherently, deeply interconnected, the fact is that at the multilateral level, other than processes like the IGF ‑‑ and even so, it's only been recent ‑‑ most of the deliberations are done in silos. You have processes for cyber. You have processes on AI. But they don't really interact with each other. At the same time, I see that there is increased awareness of the need to come up with governance solutions that are multidisciplinary and touch upon tech altogether. And one of the approaches that has been proposed is responsible behavior. As states are trying to develop their national security strategies along the lines of responsible behavior in using these technologies, I was wondering, for all the panelists, based on your respective areas of work, whether in the public or private sector: what sort of best practices have worked, in your experience, to govern these technologies in the security and defense sector, that you would recommend or share with the audience here? Thank you.

>> BABU RAM ARYAL: Thank you very much for this question, but we have very little time, just six minutes left. A very quick intervention from Michael, and then a takeaway from all the panelists.

>> MICHAEL ILISHEBO: Yeah. So, basically, to touch a bit on what she's asked: in integrating AI into the defense system, of course she's mentioned issues of national cybersecurity strategies. There is also a need for regulatory frameworks, and there is a need for capacity‑building, collaboration, data governance, incident response, and ethical guidelines, of course with international cooperation. So, as she's put it, we are discussing important issues in silos. Cybersecurity is discussed as a stand‑alone topic without due consideration for AI, the same way AI is discussed in isolation without due consideration for cybersecurity and its impact. So, there should be a point at which we must discuss the two as a single subject, based on the impact and the problems we are trying to solve.

>> BABU RAM ARYAL: Thanks, Michael. Tatiana, closing remarks?

>> TATIANA TROPINA: Yeah. I would like to address this question, because to me, it's a very interesting one as somebody who deals with law and policy and UN processes. Well, first of all, I think that this is not the first time that two interrelated processes have been artificially separated in the UN. For example, look at the cybersecurity and cybercrime processes. They are also separated. Then we have cybersecurity and AI, and so on and so forth.

As to best practices, I will be honest here as well: I do not think that there are best practices yet. We are still building our capacity to address the issues. I would say there are quite a few things I am looking at that could become best practices. First of all, when we are talking about guiding principles, I believe that they are nice and good whenever they appear, but they do not really tell you how to achieve transparency, how to achieve fairness, how to achieve accountability. So, I'm currently looking at the Council of Europe proposal for a global treaty on AI, and although it's quite general as a framework, I think it might be a game changer from the human rights perspective, which will play into the fairness perspective, in terms of agreed values. But I'm also looking at the EU AI Act, because this is where we might get to a point where, at the regulatory level, we prohibit profiling and some other uses of AI. And this might be a game changer, and they might become the best practice. And this is what I would be looking at: not the UN, but the EU level. Thank you.

>> BABU RAM ARYAL: Thank you. Sarim?

>> SARIM AZIZ: Thanks, Babu. Yeah. I think certainly you're right, it's still early days, right? I mean, Meta is a member of the Partnership on AI with other industry players, and there is, I think, multistakeholder collaboration. I know it's been mentioned in every session. That is the solution. And I think there are good examples, in terms of North Stars to look at, in other areas. So, for example, you take child safety or you take terrorism. AI is already doing some pretty advanced defensive work on both fronts, right? On child safety, the National Center for Missing and Exploited Children has a CyberTipline through which they inform law enforcement in different countries based on CSAM reports. Industry works with them, and they enable law enforcement around the world on that issue of child safety and child exploitation. So, that's a good example of where we can get to on cybersecurity.

Same with terrorism. The GIFCT is a very important forum that industry is a part of, where we ensure that platforms are not misused. So, I think back to the harms: we have to look at what the harm is that we're trying to address and whether we have the right people focused on it. But I think on the AI front, we're in the beginning stages; we need to have technical standards built, like we do in other areas ‑‑ things like watermarking. What does that look like for audiovisual content? And that can be fixed on the production side, right? If everybody reaches this consensus, not just in industry, but across countries, including developing countries. But I do think the short‑term opportunity for developing countries is to take advantage of incentivizing ‑‑ like we have a bug bounty programme, for example ‑‑ incentivizing and giving data to local researchers and developers to help figure out vulnerabilities and to train systems for local purposes. That is sort of the immediate opportunity, because these models are open source now and available.

>> BABU RAM ARYAL: Thanks, Sarim. Waqas, you have just one minute left.

>> WAQAS HASSAN: Okay, one minute. I think we look to government to do most of the things, almost everything. But this weight of responsibility to be more cyber‑ready has to be distributed not just across government, but also among the users, the platforms, academia, everybody. I'm circling back to the multistakeholder model we have and the collaborative approach that we always follow. I think if we in the developing countries do not have the capacity or the technology to handle these challenges, at least what we do have, so far, is a responsibility that all of us can share, to make sure that we are at least somewhat ready to address the challenges being posed by AI and cybersecurity. Thank you.

>> BABU RAM ARYAL: Thank you, Waqas. We completed this discussion exactly on time. I would like to thank all of you. A couple of things were discussed very significantly: one is AI harms, and the other is capacity. And of course, these are two major things. Without taking more time out of the session, I would like to thank our speakers, our online moderator, our audience on the online platform, and of course, all of you here. A very late‑evening session in Kyoto. IGF, thank you very much. I conclude this session here now. Thank you very much.