The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
>> PAVLINA PAVLOVA: Great. It's 11:50 here. It's more than that where you are. But hello and welcome to all of you who are connecting to our panel online, from whatever country you are in right now, or attending the session on site in Addis Ababa in Ethiopia, where the Internet Governance Forum is taking place. I am extremely thrilled to present this session. My name is Pavlina Pavlova, and I will be the online moderator for this session today. Our event takes place in a hybrid format, and I will now give the floor to Cherie Lagakali, our on‑site moderator, to provide you with some technical instructions about participation in the session.
>> CHERIE LAGAKALI: Thank you. My name is Cherie Lagakali. I am from Fiji. I am from the global forum on cyber expertise. I'm also an MAG member and assisting Pavlina to make sure that there is a good exchange between the on‑site and the virtual participants. For those of us in the room, if you have a question, please raise your hand. When you get selected, please say your name before your intervention. Thank you. Over to you, Pavlina.
>> PAVLINA PAVLOVA: Thank you very much for the introduction, Cherie, and thank you very much for agreeing to be our on‑site moderator. This session, prepared by the CyberPeace Institute, explores how we can build a framework for measuring the harms stemming from cyberattacks. Please send your questions and comments into the chat or use the raise‑hand function, as Cherie pointed out. For the questions, please use the Zoom platform and the chat space you have at the bottom, and we'll address your comments and questions in the Q&A.
And we have some wonderful experts as panelists today, and I am very happy to introduce them. We have Emma Raffray with us, Chief Research and Analysis Officer from the CyberPeace Institute in Geneva. We have Roxana Radu from the University of Oxford. Hello, Roxana. And we have Peter Stephens from the OECD. Hello, Peter. I will now give the floor to our first speaker, Emma, who will introduce the work of the CyberPeace Institute and the harm methodology before we dive further into the questions.
>> EMMA RAFFRAY: Thank you very much, Pavlina, and thanks, everybody, for joining us today. Really a pleasure to be here and to walk you through a little bit of the work that we've been doing recently at the Institute related to the measurements and assessments of the impact and harm from cyberattacks.
So just a very quick background on the CyberPeace Institute for those who don't know us, we are a nongovernmental organization based in Geneva and really our primary mission is to actually reduce the harm from cyberattacks on people's lives. And we do this through three different strands of activities where first of all we assist vulnerable communities in terms of improving their cybersecurity posture, for example. We analyze the threats from cyberattacks on those communities, and then we look to advance the norms and responsible behavior in cyberspace.
And really at the core of our mission in terms of understanding harm, we recognized quite early on that there is a significant lack of data available related to cyberattacks: where they're happening, why they're happening, how they're happening, but more notably, what is the harm that is actually generated as a result of these attacks? And in order to address this, we started to document and track cyberattacks and incidents against different communities. One of these is the health care sector, and another is communities in conflict zones, where we've done a lot of work related to cyberattacks in the context of the war in Ukraine. And we are also looking at threats to NGOs and nonprofits operating notably in the humanitarian space.
And basically as we started to track and document these attacks and the impact and harm that they caused, we realized that a lot of efforts related to the documenting of harm and impact were actually related to direct impact to targeted systems or organizations. So we are talking about things like time to restore, financial costs, and to some extent the number of records that were breached, for example, in a particular attack.
But we realized that this very narrow assessment of the impact of cyberattacks actually misses a fundamental element, which is: what is the harm that attacks are causing to people, communities, and society? So perhaps to get the conversation started today, I wanted to walk you through a case that really stood out for us in terms of the varying components of impact and harm that were notable.
So there was a cyberattack that took place against a psychotherapy center in Finland in which 25 of their centers were impacted. The initial breaches happened in 2018 and 2019, but it was only in September 2020 that the organization was subjected to a ransom request. The organization refused to pay the ransom. And as a result, the attackers began posting batches of patient records on underground forums and requesting that patients pay 500 Euros to have their information taken offline. Approximately 36,000 patient records, including those of juveniles, were stolen. They contained highly sensitive personal data, including names, contact details, Social Security Numbers, and records of therapy sessions of some of the most vulnerable in society, as well as the health care professionals who treated them.
Around 30,000 people are believed to have received a ransom demand themselves, and over 25,000 people actually reported the incident to police. A 10‑gigabyte data file containing private notes between at least 2,000 patients and their therapists appeared on the dark web. And to wrap this all up, following this specific incident, or series of incidents, the organization had to file for bankruptcy and ceased to operate in March 2021, leaving all of its patients and customers having to look for new services for their psychotherapy treatment.
So I wanted to leave you with that as an initial scene setter for why we went into the development of the harm methodology, because we noted that there's a significant amount of information out there relating to qualitative aspects of harm associated with cyberattacks. And we started to question how we can bring that qualitative data together into potential quantitative indicators of harm, and eventually look to measure and assess the harm in a way that can be converted into a sort of mathematical formula, to be able to compare, for example, the severity and degree of harm that different cyberattacks are causing to people. Thanks, Pavlina. Hand back to you.
>> PAVLINA PAVLOVA: Thank you very much, Emma. That was an extremely impressive introduction to the topic. Thank you for speaking especially about the impacts on vulnerable people; that's also why the CyberPeace Institute is paying so much attention to the topics you described. And thank you for pointing out the revictimization and the far‑reaching and long‑term effects cyberattacks can have, which are not only online but, importantly, offline.
To follow up on the introduction, I would like to go to the trend landscape that we are seeing. And you started pointing out, especially with the great case study in Finland, but I would like to hear also from other panelists, what is the trend landscape, and what are the impacts we see? Roxana, do you want to take it? Ah, Peter.
>> PETER STEPHENS: Thank you, Pavlina, and thank you, Emma, for the great presentation. I wasn't familiar with that particular example you mentioned. As you say, the interconnected relationship between the products that we own, the infrastructure that we rely on, and connected technologies is completely fused, so harm can manifest in a myriad of ways. We know that, as we heard, there's the possibility of ransomware, but also the possibility for this harm to be physical. We talk about Internet‑connected devices where, in some cases, a heating element has been used to start fires. We know there are questions about whether or not there have been fatalities associated with cyberattacks, particularly with reference to infrastructure within Germany, with ambulances. So I think it's incredibly broad, and, of course, it is very scary. We have to recognize that most people who use these products currently trust them; they don't know what to look for in terms of cybersecurity. Also, if we think about the end in mind, policymakers are trying to think about how they can prioritize the interventions that will do the most to reduce cyber risk and increase cyber resilience.
So it's incredibly complicated, and I'm really proud of and impressed with the work that's going on at the CyberPeace Institute to help address this problem. I do think there is a broader question about how we can move towards an empirical assessment of cyber resilience and of risk, because it is a very challenging question to answer. My personal reflection, from working in the UK government as well, where we led a piece of legislation in response to attacks, is that we had to think about how to assess the potential impact of our legislation and our policy interventions as having a cost associated with them. Every piece of legislation will at some point need to go through the impact assessment process.
So that's something I'd love to dig into a bit more as the panel proceeds, but those are just my starting remarks. Thank you very much, Emma, for your opening remarks, and I'm very interested in hearing from others on the panel as well.
>> ROXANA RADU: Thank you, Peter. Just to follow up on what you were saying, I think ransomware is behind most of the disruptions we're seeing online right now, but obviously it's not the only harm‑inducing activity. Alongside data breaches, we've seen ransomware at the top of the agenda. In the news just earlier this week, the special COBRA meeting here in the UK, for example, had ransomware on the agenda more often than any other cyber activity, or even criminal activity of any sort. So it has become really, really important to fight this type of activity online, but that also means we have to have the tools to properly understand it and understand its impact.
So ransomware has been bringing down countries, such as Costa Rica, but also individuals and communities. And I think Emma's example speaks very clearly to that, to how you can bring down entire categories of victims and sectors: you can go from patients to medical professionals to individuals themselves. So we're no longer talking about small‑scale operations but about services and infrastructure designed to induce harm, and that's where the problem lies. We live in a space in which there is very little accountability for these large‑scale activities and cybercriminal networks that have the power to bring down individuals and countries.
I think it's really key to develop a methodology that allows us to better understand this and have a refined analysis of the effects at different levels. There is obviously a direct effect on a particular individual who is called up by a cybercriminal to pay a ransom, but there's also an effect on the level of trust that is distributed across the community. There's also an effect on just how much you might be incentivized to go back to your mental health professional if this tends to happen every other day. So it's really key to understand all of these impacts and to be able to have a policy conversation around what needs to be done, sooner rather than later.
>> PAVLINA PAVLOVA: Yeah. Thank you very much. Exactly, what we are hearing about is a lot of trust, and it's trust on all sides. It's trust in the products, it's trust in the government, it's trust in those who are offering those services. And the criminal groups behind ransomware have been wielding a lot of power, which can impact countries. So it's not just human security we are talking about; it's also international security and national security we are discussing. Emma wanted to come in with some remarks.
>> EMMA RAFFRAY: Yeah. Really it's just to jump in on a couple of things. Peter mentioned a really interesting point about looking at the impact of potential legislation or some of the policy decisions that are being put into place. And it's interesting, actually, that in the case of the Finland attack, a couple of things happened as a result. First of all, there's the ongoing investigation under the criminal law aspect. But the organization was also found to have infringed GDPR and received reprimands, so there is also the application of regulatory and legal means. And there were also some decisions made very soon after the attack took place to look at how we could supply, for example, patients or members of the public with new Social Security Numbers in the aftermath of a breach like this, so that they could potentially avoid some of the issues related to identity theft. These could be harms that are several degrees away from the immediate harm you could measure directly from the cyberattack but could lead to revictimization issues. That's just something I wanted to mention.
And then also on the threat landscape side, ransomware and data breaches are the big ones we're looking into. But in the context of Ukraine, we've also been looking into some concerning trends related to distributed denial of service attacks. Although these are more temporary in nature, taking down infrastructure and predominantly websites, they are having an impact on civilians who are, for instance, interrupted in their ability to purchase transportation tickets when they might need to take a train or a bus. So these sorts of attacks are also important to look into.
But also cyberattacks that are leading to interference in the communication of factual information, notably attacks on the media, for example. We've noted a significant number of attacks where cyberattacks or intrusions into systems are being used to spread disinformation and propaganda to the public. And this is particularly concerning because we are looking here at psychological harms, which are so difficult to quantify but which need to be examined for how far they are actually impacting a population's ability to take the right decisions in very, very difficult circumstances.
And the other thing I wanted to raise in terms of trends and threats is what we are seeing in terms of the use, or the misuse, of spyware technologies, and how the targeted surveillance of individuals, sometimes individuals who work for specific types of organizations, and the surveillance of their calls, their messages, their audio, is actually being used as an instrument of persecution. Again, these are things that are going to be very, very difficult to quantify in terms of harm, but they are interesting to group into this bucket of the threat landscape.
>> PETER STEPHENS: Thank you, Emma. First of all, I very much agree with your focus on DDoS because, particularly in light of the COVID pandemic, more and more people and organizations are reliant on their connected products, and we've seen an explosion of these devices being used around the world. I think I saw a statistic saying that the number of devices in the average U.S. household has gone from 13 to 16 between 2020 and 2022. So that alone is an increase in the threat landscape that exists there. So I would always look at Internet‑connected devices and DDoS attacks.
I also think there's a challenge about the number of people, businesses, and institutions who aren't necessarily targeted but are just caught in the crossfire because, through no fault of their own, some component across their myriad of products and supply chains is in some way vulnerable or has a default password. I've seen that in Japan, they are running a project which is assessing the number of Internet‑connected devices which have a universal password. If we look back to 2016, the Mirai attack compromised 100,000 devices using 61 password/username combinations. Now in Japan we have over 200,000 devices with just one. So that shows the expansion of this threat landscape and just how much potential damage there is. And it's hard to see how we can assess that.
And to Emma's point, a lot of what we're talking about is the challenge of quantifying. Of course, we can hear horrible stories, as with the psychotherapy facility in Finland that was compromised or other health care providers, and we can hear about cameras in sensitive places being compromised and used in ways the owners never intended, but it's very hard to break that down into a cost. This brings me back to my point that governments will always look at impact assessments when designing legislation and ask how we can make this quantifiable. Yes, there has to be some degree of acceptance that it won't be a perfect science, but also a recognition that there is a financial cost that comes with that. Because there's always the challenge, and we've experienced this as well, of consumers and organizations in some cases responding only to the big event that takes place. Cybersecurity issues do gain a lot more traction when there is a high‑profile media story which focuses hearts and minds.
I think the challenge we have from that is: are we being empirical, are we looking at the most impactful interventions, and are we focusing on the biggest issues that matter? Because it is an incredibly difficult thing to manage. But it's great to be part of this conversation.
>> PAVLINA PAVLOVA: Thank you. So what we are seeing is an explosion in the number, the effect, and the impact that cyberattacks are having: different kinds, from ransomware, DDoS, propaganda, and spyware, and from instant victimization, such as the unavailability of some services, to other prolonged harms and other kinds of harm, such as psychological harm, which must be very difficult to measure. So there is an obvious challenge in qualifying those harms and, importantly, in quantifying them. And it's been mentioned several times that this is important to overcome if we want to make harm to people part of the assessment for policy‑making and decision‑making.
So where would you say we are right now? What is the state of play with measuring harm? How should we approach the issue, and what can be taken as positive examples that we already see in other fields? How can we start, and how can we actually proceed from where we are now with measuring harm? Are there any possible case studies we can use? Who wants to take this? Emma, can you go?
>> EMMA RAFFRAY: Yeah. I mentioned this earlier, that we came across some big hurdles when we started to document cyberattacks generally, and then the harm and impact from cyberattacks, because there's not a lot of research out there. But there has been some good work already done. There is an academic paper called "A taxonomy of cyber‑harms: Defining the impacts of cyberattacks and understanding how they propagate," published back in October 2018, which looked at a more high‑level taxonomy of harms, grouping together psychological, physical, geographical, and other types of harm. So there is a little bit of research out there.
And we've been having some conversations recently with a number of actors who are actually looking at this particular problem. But where are we in terms of the Institute? We are at the stage where we've got our very first draft of what we would consider a framework for measuring and assessing harm from cyberattacks. And when I talk about this framework, we've reached the point where we've already identified a number of indicators that we would be looking to include within it. These indicators range from quantitative indicators, so think of the number of records breached or the number of hours of downtime; these are numerical values that would feed into the indicators. Then we've got more qualitative indicators that could be converted to numerical values, so certain indicators where you would have a yes or no option. For example, did the website go down as a result of the cyberattack? This would be a yes or no field that could be converted to a numerical 0 or 1 value.
And then we've got qualitative indicators, and this is really where we are trying to document and track things such as psychological harm and issues related to trust, as was mentioned a couple of times today. At this stage we've got those indicators, and we have defined what would be a mathematical formula to identify an initial score, in essence, for an individual attack. But what we're missing right now is real contributions from the community that could help to, first of all, refine this framework, because we know that it's not comprehensive enough at the moment.
We are also looking to test the framework as it stands today against different cyberattacks, to see if the results it generates actually make sense. And then we are also looking for some peer‑review opportunities, to see how others can contribute to this. So I would say it's a starting point, and it's something we are looking to build on specifically in 2023, through some focus groups and workshops, to be able to look at how far we can take this particular methodology and whether it's useful for those within the community.
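[Editor's note: the indicator scheme Emma describes, numeric values, yes/no fields converted to 0/1, and ordinal qualitative ratings, all combined through a formula into a single score per attack, could be sketched in code as follows. This is a hypothetical illustration only: the indicator names, normalization caps, and weights are invented for the example and are not the Institute's actual framework or formula.]

```python
def normalize(value, cap):
    """Clamp a raw quantitative value into the 0..1 range against a cap."""
    return min(value, cap) / cap

def harm_score(indicators, weights):
    """Weighted sum of normalized indicator values, scaled to 0..100."""
    total = sum(weights[name] * value for name, value in indicators.items())
    return 100 * total / sum(weights.values())

# Illustrative attack, loosely modeled on the Finland case: 36,000 records
# breached, some downtime, website taken down, psychological harm rated
# "severe" on a 0..3 ordinal scale. Caps and weights are invented.
indicators = {
    "records_breached": normalize(36_000, cap=100_000),  # quantitative
    "downtime_hours":   normalize(72, cap=720),          # quantitative
    "website_down":     1,                               # yes/no -> 0/1
    "psych_harm":       3 / 3,                           # ordinal 0..3 -> 0..1
}
weights = {
    "records_breached": 2.0,
    "downtime_hours":   1.0,
    "website_down":     0.5,
    "psych_harm":       3.0,
}

print(round(harm_score(indicators, weights), 1))  # → 66.5
```

Because every indicator is mapped onto a common 0..1 scale before weighting, scores from different attacks become comparable, which is what enables the severity comparisons Emma mentions.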
>> PAVLINA PAVLOVA: So Emma mentioned an important thing: why we are here and why we convened this workshop. We want to hear your contributions. What do you think? Your questions make us think about new possible ways to approach this, new possible factors, or maybe solutions. So please feel free to brainstorm in the comments and share your questions. Drop them in the chat or raise your hand later when we have the Q&A, after the panelists give their main remarks.
As part of this question, feel free to jump in with the other panelists. But I would also like to proceed to the methodological framework as you described it: why is it important, what would it change in policy‑making, and how would it improve it? Peter, you already mentioned some of this. Can you elaborate on your points as you see them from your previous experience and expertise?
>> PETER STEPHENS: Sure. So the point I was making was about the relationship within cybersecurity: policymakers, of course, operate in a world where there is a seemingly perpetual series of events which raise the profile of the issue, and there's also an ongoing trend, which is how we can increase cyber resilience and reduce cyber risk. I think of one of those as discrete events and the other as an ongoing process. My point was that government will introduce legislation or policy interventions in response to market failure, and those market failures take place when we think there's a disconnect between what is currently on the market and what people expect in terms of the security of the products available to them.
So as part of that process, you need to create a robust assessment to show the impact on things like technical barriers to trade, or the impact of your legislation and what it will mean for others and for business if we introduce formal legislation designed to increase cyber resilience. Of course, doing so will always be scrutinized by parliamentarians, so it's really important that that is clear. And I think that needs to be a snapshot of progress as things are right now, and then of how that can move forward.
Now, a process we went through with the impact assessment in the UK government was having to make active assessments about how we could quantify some of the many ranges of harms discussed earlier into something which could then be put into a formula to suggest: well, actually, is legislation a proportionate response in light of that? I think we can all agree there is a problem that we do not have sufficient quantifiable evidence right now to enable policymakers to make that decision immediately. They have to, in some cases, use pieces of research or, in other cases, assessments and assumptions as well.
I would also say that there is a lot of great work already taking place. There's the CyberPeace Institute and their work, and there are ongoing assessments, such as the security foundation, which every year publishes reports showing the adoption of security practices among manufacturers. That's an important assessment. It's not so much focused on harm as on security principles within products, so that, I think, is also an important distinction.
One more point from me, just thinking about the work the OECD is doing right now: a lot of it is focused on how we can empower policymakers to know what good security looks like, and how we can share a series of recommendations to help them understand what good cyber resilience looks like and how they can make their economies more resilient. Of course, the next step from that would be how we can quantify progress against those recommendations. That's the intention for the next few years, and it's something we are working towards. So I think there's a lot there, but I hope that's helpful, and I'm happy to pass back to Pavlina.
>> PAVLINA PAVLOVA: Thank you very much. Thank you very much for pointing to the effect this has on legislation and to the many actors who will be using it. And you mentioned something very important, especially as we discuss a methodology and framework which are quite complex: making them understandable for the policymakers who will be making these decisions, who need to decide whether to use them and how to use them.
And before we go to Roxana for her remarks, Emma had comments on what has been said.
>> EMMA RAFFRAY: Yeah. I just wanted to add, and I'm by far not the policy expert in the room, so I'm going to touch on something slightly different but potentially connected. When we look beyond the legislation related, for example, to security vendors and businesses, I wonder how far actually being able to put a measure on the impact and harm from cyberattacks could also help with the triaging process. If we think about how resources are allocated by governments to certain key issues within society, I wonder how far some of that is actually based on the security and safety of civilians or of citizens. So in essence, if we identify that cyberattacks actually create a security and safety threat to civilians or to citizens, how far could resources be redirected, or be considered for increase, for the prevention, detection, investigation, and prosecution of cyberattacks, which we know are heavily underinvested in at the moment?
And then the other thing I'm wondering, and again I'm not sure if this is specifically related to policy, but it might be, is how far we owe some degree of remediation and redress to the victims who have been subjected to harm from cyberattacks, and how far that needs to be a consideration. In order to do that, we need to be able to tangibly say what harm means. I'm thinking very specifically of lawsuits taking place in the United States at the moment, for example, against health care organizations whose patient data has been breached. There's actually a threshold of damage that has to be proven by the victim, related to how much harm or damage has actually been caused to them.
And my question is how do they know what that is? Because as far as I know, there isn't something out there that actually quantifies what that damage is. So, yeah, I just wonder in terms of remediation and redress, I think that could be an interesting angle as well. Thank you.
>> PAVLINA PAVLOVA: Thank you. And you mentioned a very important point: getting justice for the victims and those who are affected, and how this could help the efforts of increasing trust and helping those affected. And Roxana, please go ahead.
>> ROXANA RADU: Yeah. I would just reiterate that unless we link this to the accountability discussions, we're not really getting very far. It's absolutely crucial to raise the bar for everyone, but we need to do that in cybersecurity by having multiple stakeholders involved and having a clear accountability process in relation to it. There is this general saying that if you can't measure it, you can't manage it, and it's true to a certain extent. It doesn't necessarily hold true across the board, but you need to be able to measure in relation to a solution that you want to find. So I think it's key to make the problem visible, but also to see how the evidence you are collecting feeds back into a broader public policy process and how it actually fosters accountability.
Because there are several ways to go about this. One is to just say: look, we've made the issue visible, and now it's up to the other stakeholders to take it forward. The other way is to say: we work with the collective interest in mind, and we are going to bring all the right stakeholders around the table and actually be part of that decision‑making process and change something. The effects of cyberattacks tend to be quite localized, so it is sometimes really, really difficult to bring this to a global agenda. Everybody's going to experience a little bit of harm in their local jurisdiction, but putting that on the global agenda requires global support.
So measuring harm can be this first step towards a proper understanding, but it has to be part of a bigger process, and it has to have multiple parties involved, whether that's the government changing legislative frameworks, as we just discussed, the private sector introducing new safeguards and raising the bar for everybody, or Civil Society assisting victims of cyberattacks. Without accountability, we continue to operate in this space with impunity, where there is only more data to be collected and more to be said about the damage, but very little to go after the cybercriminals. So we need to make sure that this is not operating in a silo and that it does change the incentive structure for cybercriminals at the end of the day, and maybe changes that beyond borders. Otherwise, they simply reprofile their activities and move to another jurisdiction. If we collectively act at a global level, then their incentive to continue doing so much harm is reduced; but if we only tackle it in certain jurisdictions, then they're only going to move to another place or target others.
And because we generally tend to have these individual-oriented frameworks where, as Emma was explaining, the burden is mostly on the victim to prove the harm, we are missing a collective approach. We don't have anything for communities. What do you do about NGOs in different countries that are targeted and brought down by cyberattacks? What do you do about entire groups of people in a country who are no longer safe from cyber harm? These kinds of harms are not yet properly tackled in our legislative frameworks. We simply don't have anything that actually protects communities as such. Maybe it's time to think about introducing new safeguards and new protections. I'll stop here for the time being. Thank you.
>> PAVLINA PAVLOVA: Thank you very much, Roxana, for bringing accountability into the discussion. It is at the core of the Institute. I will give the floor to Peter before we proceed to more questions about breaking silos. But before that, please pose your questions in the chat so we know whether we should open the floor for the Q&A or just proceed with questions from the moderators. Thank you.
>> PETER STEPHENS: Thank you, Pavlina. I hope this links to breaking silos, but just to reiterate, I agree with Emma's point about the importance of this being used for triage processes, and also the really interesting point that came through about insurance and liability, because I think this has the potential to move markets if there's a business interest. There is an existing precedent set in a number of sectors to say that if you encounter harm in a place where you didn't expect it, there are compensation thresholds that exist. They have been established. They have been set. And that's potentially a helpful starting point.
Now, I think that cyber insurance is a very interesting area, and I think insurers have a really interesting role in this space as well for thinking about redress, following the conversation that was previously had. We also talked a little bit about the specific challenges with regards to law enforcement: the fundamentally global nature of cybersecurity and the fact that the perpetrators of an attack can easily be based in one jurisdiction while the victim is in another. And this brings a really complicated question about norms, about how someone can be prosecuted. So that is a very challenging area. There is growing momentum behind getting some accepted norms around that, but I also think there is a really interesting possibility in the future around cyber insurance as well and liability questions. Yeah. Thank you very much.
>> PAVLINA PAVLOVA: Thank you, Peter. We have many stakeholders in this field and many communities which, as we said, have stayed in silos for most of the discussion on this topic so far. Adding to this, we have cyber insurance, we have law enforcement, we have the international community. Roxana also mentioned the challenge of representing different communities in different countries who may have shared experiences but in very different contexts. How can we build coalitions, and who should be building coalitions on this topic? I like the point about, for example, cyber insurers. Is this a group which has a vested interest in moving the discussion forward? Is it the diplomatic community? Is it the Civil Society sector? Who can lead the discussion? Who should lead the discussion? Who has more potential for resources? And if it's a multistakeholder problem, which it most probably is, how do we break the barriers between different stakeholders and build coalitions between them? What are some forces that can bring the communities and different kinds of actors together on this topic? Peter, go ahead.
>> PETER STEPHENS: Thank you, Pavlina. Of course, this is a very multistakeholder challenge. We all need to be working in partnership. I think we all have a collective ambition, which is that we are protecting users and organizations from a range of harms, and the assessment of what those harms can be and how we quantify them is an important part of that puzzle.
There's also a question of how we can embed trust within the Digital Economy. The Digital Economy is developing very quickly and has a multitude of benefits which we should be allowing citizens, businesses, and organizations to embrace. I can see there's a question from Allison about the importance of schools and the importance of awareness. I think there's a really interesting question there, because as the world changes so quickly, young people need to be equipped with skills to enable them to flourish in that world. Part of that is to enable them to have basic cyber hygiene principles, you know, to recognize the importance of things like two-factor authentication. There's also a question of how we can prepare young people for a future world where there is a huge range of career opportunities for them in cybersecurity. Because, of course, there is a huge potential for work in that space, and it's something more young people need to be equipping themselves with skills for.
I think, however, the point I would make is that we shouldn't be expecting citizens to become experts in cybersecurity. There has long been an expectation that we should be raising awareness, making sure that young people, all people, are educated about how to protect themselves from ransomware attacks, how to make sure they have three random words for their passwords.
And we know, and research continues to show, that people's Internet hygiene is often not great. We should be empowering young people, and we should definitely be including that within our future plans, but I don't think we should be expecting them to become the experts.
>> EMMA RAFFRAY: Yeah. Thanks, Peter. Really interesting. Maybe I'll touch on education first, just to follow the flow, and then I'll come back to Pavlina's previous question on leading and co-leading among multistakeholders. On schools and education, it's extremely important. At the CyberPeace Institute this year we actually brought young students in, across different age ranges, some of them around 12 years old and some of them at university level, to talk them through careers in cybersecurity, in open days of discussions. And it's really eye-opening for them. It's actually quite surprising how little people know about what is happening in cyberspace, especially given that cyberspace isn't really a separate thing when we think about it. It's the real world today. Everybody's lives are on their mobile phones and on their laptops. So in terms of education and awareness raising, it's really important. The earlier we can get into increasing the understanding of some of the simple steps that can be taken to protect your devices, the better.
In terms of the multistakeholder community and who should lead, it's a good question, Pavlina. My answer is: why not have co-leads? There is a really good example of where this was done this year, or last year actually: the multistakeholder compendium related to the health care sector, where Microsoft, the Czech government, and the CyberPeace Institute as the Civil Society representative came together to look at the harms from cyberattacks specifically on the health care sector. Through a series of workshops, entities from across the stakeholder community brainstormed what this actually meant in relation to the threats the health care sector was facing and what potential recommendations could come out of that. I would like to test that approach again and again on different thematics within the cybersecurity world, with different sectors, because the sectors are facing very different issues. One of the things we're looking at with the development of the methodology for cyberattacks and the harm they cause is whether we should be splitting the methodology based on sectors or based on the types of targets, because the indicators that you're going to build are going to look very, very different if you're looking at a hospital, which has very different operational activities, than if you have an energy company, for example.
So I think that engaging the multistakeholder community, but notably the experts who work in the field, bringing in the health care professionals so they can give their interpretation of what impact and harm meant for their organization and for their ability to deliver their services, is going to be instrumental here.
>> PAVLINA PAVLOVA: Thank you very much. Go ahead, Roxana.
>> ROXANA RADU: Yeah. I'll just add to this that there's quite a bit of stigma in this space when it comes to how we treat the victims of cyberattacks. So there's still a taboo to be overcome before we can actually think very clearly about the responsibilities of different stakeholders. So in a way we all have some work to do when it comes to just accepting that any one of us will sooner rather than later fall victim to a cyberattack. It's just that widespread nowadays. And the more we are able to talk about it publicly and have the right remedies in place, have the right frameworks in place to bring this to the fore, the more it's a conversation starter rather than a topic we keep on the side.
And once we've overcome this on a societal level, and it has to do with education as well, as Allison was pointing out in the chat, of course not all the responsibility can be placed on schools when there are so many different levels of problems across the board. It's really difficult to say schools can do everything in terms of education. There's some work to do on the school side, whether it's parents or student associations, but it's clearly more than that. So we have to think about the responsibilities of all the actors in the system. It goes from liability for the manufacturer all the way to what governments might do differently because they might want to increase collective trust.
So to me the responsibility in this space has to be distributed, and it has to be up for conversation. It simply can't be that each sector defines its own role, like the cyber insurance companies tend to do: they decide whether they want to cover certain damages or not. Governments decide whether they want to change certain parts of the legislation or not. I think it's more about these collective pressures and understanding that we all need to do a little bit more to change the situation.
>> PAVLINA PAVLOVA: Thank you. Please feel free to raise your hand if you want to ask your questions yourselves instead of posting them in the chat, and we will be sure to give you the floor. I have two questions which arise from our discussions. One is this: we mentioned that across different sectors the framework for measuring harm will look very different. Will it still be comparable? Is the framework something you can still compare across the board, even though the harm is very different, so you can see different vectors of harm rather than having, again, sector-specific things? And the second question: we mentioned so many stakeholders, and we mentioned that this is a collective issue and a collective responsibility, and that the leadership should be multistakeholder, with more partners coming together. But where we are right now, as Emma described with the work of the CyberPeace Institute, who is it from the community, from the experts, that you would especially like to hear from and have reach out, for example, to the CyberPeace Institute or to the wider community to extend this discussion? We met here today also to discuss how the community can come together, and you touched upon it so well. This is an opportunity for you: if there is anyone you could call on as part of this panel, who would that be? Whose contribution would be most valuable at this point?
>> EMMA RAFFRAY: Thanks, Pavlina. So, two tricky questions. On the first: if we were to develop the methodology looking at sector-specific aspects, there are actually parts of the framework, and we can't go into the details today for reasons of timing, that would look at generic indicators of harm applicable across any sector. So the idea would be that you could have subscores based on generic indicators that are comparable no matter what sector you're in and no matter what type of attack you've suffered. But then there might be specific indicators that are relevant to your sector or to the type of attack, for example. These scores then come together to create a single score, but in essence the subscores might be very relevant in terms of comparing an attack happening in one sector against an attack in another.
But something that we've been very keen to stay away from, for now, is the notion of an index that allows the harm and impact from cyberattacks to be compared across geographies or across sectors. That's really not the purpose of the methodology that we are developing. What we are really looking to develop is an understanding of harm per incident, looking at how incidents are comparable to each other, rather than an index that would compare countries' performance in terms of the harm allocated to them. There are a number of different reasons for that, and I'm happy to jump on another discussion about this another time. But certainly, yes, I think there would be some opportunities to compare different attacks from different sectors based on how we build this scoring mechanism.
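[Editor's note: the layered scoring scheme Emma describes, generic indicators comparable across sectors plus sector-specific ones rolling up into subscores and a single per-incident score, could be sketched as follows. All indicator names, scales, and weights here are illustrative assumptions, not the CyberPeace Institute's actual methodology.]

```python
from dataclasses import dataclass, field

@dataclass
class HarmAssessment:
    # Generic indicators (0-10), comparable across any sector or attack type
    generic: dict = field(default_factory=dict)
    # Sector- or attack-specific indicators (0-10)
    specific: dict = field(default_factory=dict)

    def subscore(self, indicators: dict) -> float:
        """Mean of a group of indicator values; 0.0 if none are set."""
        return sum(indicators.values()) / len(indicators) if indicators else 0.0

    def overall(self, generic_weight: float = 0.6) -> float:
        """Single per-incident score: weighted blend of the two subscores."""
        return (generic_weight * self.subscore(self.generic)
                + (1 - generic_weight) * self.subscore(self.specific))

# A hypothetical hospital ransomware incident
incident = HarmAssessment(
    generic={"data_loss": 7, "service_disruption": 9, "psychological": 6},
    specific={"delayed_treatments": 8, "diverted_ambulances": 5},
)
print(round(incident.subscore(incident.generic), 2))  # prints 7.33, comparable across sectors
print(round(incident.overall(), 2))                   # prints 7.0, per-incident only
```

Note that, as Emma stresses, the score is meant to characterize a single incident; only the generic subscore would support comparing an incident in one sector against one in another, and nothing here aggregates into a country-level index.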
In terms of the community we are looking to reach out to, it could be very, very varied. We are interested in speaking to economists and mathematicians, those who have worked with mathematical models. This is going to be very important for ensuring that the model we are going to be building actually stands the test of scrutiny in relation to its mathematical components. There are the cybersecurity experts who need to come in here, who are going to have the knowledge and expertise related to cyberattacks and to what we are going to be able to acquire related to the specific cyber incidents and cyberattacks that we are seeing.
And then you've got, of course, the policymakers and those who are going to be telling us what value is generated from the methodology, what they need this specific methodology to do in order for them to be able to use it in their day-to-day activities. One of the things that we keep hearing is that governments might want this to be converted into a financial cost. That's not what we want to do at the Institute, because we want to make this human centric. We don't want to make this a financial measurement of harm. So it would be interesting to get these different actors in a room to understand why there is a certain need to convert this into a financial metric when we would be more inclined to keep this a human-centric metric of harm. Thank you.
>> PAVLINA PAVLOVA: Thank you very much, Emma. So you've heard the call: economists and mathematicians, policymakers and cybersecurity experts, please get in touch with us. Thank you, Emma, for pointing out the human-centric approach, which is at the core of the Institute and the reason why we are developing this methodology. And we have both Peter and Roxana waiting to take the floor. So please go ahead, Roxana.
>> ROXANA RADU: I'll just be very quick on your first question. I think there's a lot of value in understanding results and findings in relation to harm and impact across the board, looking at different sectors and potentially also different geographies, but obviously there are limits to comparison: each case will have its own specifics. So it is mostly about changing the focus from having simply the analysis to having the analysis in relation to potential actions that could be taken to avoid the incident being repeated in the future. From my point of view, it would be really important to link this back to the accountability framework. As soon as we have the detailed analysis of an incident, as soon as we have a proper assessment of the harm and the multiple levels of harm brought about by this specific incident, this should feed into a process, and then there's collective work to make sure it's not repeated in the future. Whether that's simply about raising the awareness of all the stakeholders that one type of attack could have all these multiple effects, or about how we go about prosecuting the criminals behind it, it really should be part of that bigger conversation.
And on your second point, Pavlina, about who should be called on to bring additional expertise to the table, I think academia has a role to play here, whether that's eliminating theoretical possibilities that shouldn't waste anybody's time, so really establishing what has proven not to work in the past, all the way to what might actually work, with examples and insights from other fields as well.
>> PETER STEPHENS: Thank you. And just to go over some of the points that have already been made. To your first question, Pavlina, I think it's come across in this session that this is a multistakeholder endeavor. We need perspectives from governments, academia, the cybersecurity research community, industry, the third sector, and many other organizations, supranational organizations as well, and they all play a very valuable role in helping to make sure we share the same language. There are always challenges when you have, in particular, security research communities, previously known as hackers, who sometimes have to translate what they are saying for policymakers. I think that's a really important partnership, and with academia as well there's a need to bridge a number of different languages so they can be coalesced around the same ambition, which is fundamentally to protect people from harm.
To Emma's point, I think she asked the question about what the challenges are. In a practical sense, legislation needs to clear a fundamentally financial hurdle before it can be approved. That's just the reality of the world: when government has to make a decision on whether to apply incentives on industry, recognizing there will be a knock-on consequence of that, there has to be evidence to show a financial benefit to account for it.
So I think the challenge here is recognizing the realities of the world we live in, the way legislation is developed, the way government operations take place, and thinking about how we can, again, translate the existing work into something usable for those purposes. This comes back to the question that was raised about how we can translate those issues around the various harms into something that is usable for those kinds of processes. That's a real challenge that I remember very vividly going through: knowing full well that there was a range of harms in front of us, but because it was very difficult to quantify them, it was very difficult to input them into the model in the form you need.
So I think that's a really important point about how we can all work together on that. Again, the theme coming through for me is making sure we are aware of when we're speaking slightly different languages, and how we can nuance it so that we are working with the end in mind, which is fundamentally progress, for government operations and for norms, making sure that progress is being made.
>> PAVLINA PAVLOVA: Thank you very much, Peter. Those were very nice closing remarks. But before officially closing the session, as we have reached our time, I would like to give a last opportunity to our participants who may want to raise their hand or make any comment. So here are some ten seconds to express your interest in speaking at this session. And I will also reach out to the panelists: if you have any remarks which haven't been said, anything to wrap up the session, feel free to raise your hand, and I will make sure to give you the floor. As I see no hands raised, I will just thank everyone for coming to our session. We greatly appreciate it. We understand this is a complex topic, but it's a very important conversation to have. As we said, this is a shared responsibility; that was highlighted all through our workshop today. And it's so important to develop the harm methodology, because we need to build trust and protect and expand online spaces so they can serve their day-to-day purpose, so they can serve the people and increase their security, their wellbeing, and the prosperity of our societies. And as we mentioned the importance of raising awareness about this topic, I believe we had a very effective awareness-raising exercise with this session. Thank you very much, everyone who attended, and see you at some other opportunities. Thank you.
>> Thank you.
>> Thank you. Bye.
>> PETER STEPHENS: Thank you.