The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> The devices are used for medical purposes, but because of this new technology, we should understand how it will affect not only us as human beings and our evolution, but also the cybersecurity issues and the emerging technology issues. So during this session we will try to understand how this technology will affect us, and we are joined by wonderful speakers here. I will introduce them all shortly. They will try to answer your questions.
Here is how the discussion will go: we will be divided into blocks. The first will be the medical block, followed by a little Q&A session; then we have the AI and emerging technology block and another little Q&A session; and the last one is the cybersecurity block.
But it depends on what you want; if you want to ask questions after the session, it's possible. They are going to ask us to finish early, so we will finish at about 10:15, thank you very much. Today we are joined by Lev Pestrenin, deputy department head and researcher; Igor, for the applied development of artificial intelligence, online; and Gabriella Marcelja, CEO of Impact Ventures.
James Amattey is also with us online.
And also we have Ana Carolina Dias who will help me and answer the questions coming from the chat online.
We will start with the medical block, understanding how these devices are used for medical purposes. So, Lev, please start. The floor is yours.
>> LEV PESTRENIN: Thank you very much, good morning, everyone, thanks for coming. I'm a researcher at the Moscow Center for Diagnostics and Telemedicine. I'm sure that artificial intelligence and the internet have something in common.
Both are new technologies, and we want to benefit from them while also avoiding their risks.
So today, using the example of the implementation of artificial intelligence in healthcare, I would like to show how it is possible to get real benefit and avoid risks at the same time.
One moment.
Something is wrong with my presentation.
Yes, thank you very much.
So, in a few words, what does radiology in Moscow look like?
As usual, patients undergo diagnostics, and images from all hospitals in Moscow go to the data center, which is located in our Center for Diagnostics and Telemedicine. After that, a radiologist describes these images and makes a report, and within one, two, or sometimes a few more hours, the doctor and the patient already have the results of the examination.
Next slide, please.
Okay.
So this is possible due to the centralized healthcare system.
It took several years to centralize the data and all the radiological descriptions, and after that it became possible to start the Moscow experiment on the implementation of artificial intelligence in healthcare.
We started this experiment in 2020, and we had a lot of difficulties, a lot of situations at the beginning, because it was a new technology and there were no answers about how we could use it while avoiding the different risks of artificial intelligence.
But, generally, we managed to overcome these risks of artificial intelligence in healthcare.
So today I would like to tell you about the three main components which were the key to this success.
The first component is data, datasets. High-quality datasets are very important for artificial intelligence training and testing.
What is a dataset? It is a set of radiological studies and reports. Using these datasets, it is possible to train artificial intelligence to find some kind of pathology, like pneumonia or cancer, for example.
And it is possible to test artificial intelligence and --
We work with data following the main principles: organized storage and organized collection of data.
The second component is the artificial intelligence systems themselves.
Scientists, researchers, and developers all over the world do research on artificial intelligence and its power.
Some said it was possible that artificial intelligence would take over and doctors would be left without work.
But now we see that it is not so.
Artificial intelligence is a great (?)
It can take measurements and it helps the doctor to find pathology, but for now artificial intelligence cannot replace doctors at all.
That's why we monitor how artificial intelligence works. We developed and successfully use a life cycle for checking the quality of artificial intelligence.
We do a review every month and make an assessment of the artificial intelligence to be sure that medical care is of high quality. This life cycle also helps us to improve the artificial intelligence.
Because we can find the mistakes of artificial intelligence, we can change these AI services and improve their quality.
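For illustration only, here is a minimal Python sketch of the kind of agreement check such a monthly review might compute, comparing an AI service's findings against radiologists' reports treated as ground truth; the study IDs, labels and choice of metrics below are invented assumptions, not the centre's actual pipeline.

```python
# Minimal sketch of a monthly AI quality review: compare an AI service's
# findings against radiologists' reports treated as ground truth.
# The study IDs and labels below are invented for illustration only.

def monthly_review(ai_findings: dict, radiologist_reports: dict) -> dict:
    """Compute simple agreement metrics (sensitivity, specificity)."""
    tp = fp = tn = fn = 0
    for study_id, truth in radiologist_reports.items():
        predicted = ai_findings.get(study_id, False)
        if truth and predicted:
            tp += 1
        elif truth and not predicted:
            fn += 1
        elif not truth and predicted:
            fp += 1
        else:
            tn += 1
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else None,
        "specificity": tn / (tn + fp) if (tn + fp) else None,
        "reviewed_studies": tp + fp + tn + fn,
    }

if __name__ == "__main__":
    # True = pathology reported (e.g. suspected pneumonia), False = no finding.
    ai = {"study-001": True, "study-002": False, "study-003": True}
    doctors = {"study-001": True, "study-002": True, "study-003": False}
    print(monthly_review(ai, doctors))
```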
And the third component is ethics: the basic principles of medical ethics, which have been used for many centuries, and we still follow them.
One of the main principles is patient privacy, together with patient safety and patient confidentiality. Artificial intelligence can improve the quality of healthcare, but without following ethics I think it is impossible to use it, because it could bring more harm to patients than benefits.
So, through these components, high-quality datasets, the monitoring of artificial intelligence, and the basic ethical principles, we can provide radiological studies for patients in a very simple way.
It works as usual: the examination is performed, the doctor describes the images and writes a report, and after that patients and physicians get the results of the study.
And here, on Step 3, you see that now doctors, radiologists, get two images for every patient.
The first is the native image and the second is the image processed by artificial intelligence. That is how it is possible to use artificial intelligence and benefit from it.
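As an illustration of that second, AI-processed image, here is a minimal sketch using NumPy, assuming a grayscale image and a per-pixel probability map standing in for real radiological data; it simply tints the regions the model flags, which is one plausible way such an overlay could be produced, not the centre's actual method.

```python
# Illustrative sketch only: produce a second, AI-annotated copy of a study
# by blending a model's probability map over the native grayscale image.
# The random image and probability map stand in for real DICOM data.
import numpy as np

def overlay(native: np.ndarray, prob_map: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Return an RGB image: native grayscale with suspicious regions tinted red."""
    rgb = np.stack([native] * 3, axis=-1).astype(float)   # grayscale -> RGB
    red = np.zeros_like(rgb)
    red[..., 0] = 255.0                                   # pure red layer
    weight = (alpha * prob_map)[..., None]                 # per-pixel blend weight
    return np.clip((1 - weight) * rgb + weight * red, 0, 255).astype(np.uint8)

native_image = np.random.randint(0, 256, (512, 512), dtype=np.uint8)   # stand-in X-ray
probabilities = np.random.rand(512, 512)                               # stand-in AI output
annotated = overlay(native_image, probabilities)
print(native_image.shape, annotated.shape)   # the doctor would see both versions
```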
And what are the key achievements, results of implementation of artificial intelligence?
Here you see some -- generally speaking there are three main achievements.
The first is improved quality of healthcare.
The second is easier access to health services, and the third is enhanced safety of patients.
And I would like to show you one more slide about artificial intelligence, how it works.
You see we have more than 50 A.I. solutions in Moscow.
They can detect different types of pathology on different types of studies, like X-ray, computed tomography, or MRI, for example.
To conclude my presentation, I would like to say that artificial intelligence is our possible future, which could help us to live longer and be healthier during our lives.
So I would like to invite you to visit our center to learn more about artificial intelligence in healthcare, and it's very easy to visit. You can organise a visit to the center.
Thank you very much.
>> MODERATOR USTINOVA: This is a very interesting presentation. I guess people learned a lot from it, and maybe a question before we move to the next speaker. In your opinion, in your experience, how will the introduction of these devices, or their integration into the human body, affect human evolution? Like, for example, will we become part robots? That's a futuristic question.
>> LEV PESTRENIN: Yes, and thank you for the interesting question.
I think you are right. Maybe in the future we will look like robots, a little bit. Right now we don't know these technologies. There is a very interesting thought that over a short period of time we tend to overestimate technologies, and over a longer period of time we tend to underestimate them.
In 20-30 years maybe we will have bionic devices or something like that. Yes, we would do our work faster, I think. We have to be, we can get a lot of (?)
I'm sure the use of all these devices should be controlled, first of all, by doctors.
Because these devices should never be harmful to people.
>> MODERATOR USTINOVA: Thank you very much for the answer. We will move to the next speaker and try to understand how monitoring devices are actually making people's lives easier, especially in remote areas where a hospital is not an option for some people.
Igor, please, the floor is yours.
>> Thank you. Good morning, ladies and gentlemen. I will present some thoughts on the state of the home healthcare sector, the limitations of diagnostics, and (?) devices.
The COVID-19 pandemic has become (?) for the development and (?) of devices (?) in clinical institutions; sometimes, for patients, these kinds of devices are the solution.
This development of personalized medicine implements solutions, including artificial intelligence, to improve the level of diagnostics among the adult population.
Currently the (?) this is the website (?)
Website with dot ii.
(?) websites tasks go through system for the medical decisions, electronic, legal, pharmacologists, these websites, (?)
For supporting medical decisions. Dot ii.
The tracking of functional indicators began in the 2000s, but its widespread implementation in high-tech systems began much later.
An example of such implementation in the Russian Federation could be the Medicare and (?) systems.
The development and implementation of health education (?) will provide an undeniable advantage for the interested participants, since the use of these services will allow reducing the level of mortality and disability in the population due to early (?) of development (?) and risks in (?)
It is a process of continuous monitoring and analysis of personal health indicators, to ensure that doctors (?) in case of critical deviations in the patient's monitored indicators.
It helps to decrease the number of patients, to increase the (?) of medical care for the population without the need to visit a medical institution, and to reduce the costs of medical institutions by reducing the need for hospitalization.
To improve medical knowledge by physician care and licensed -- professional medical data.
That completes my report. Thank you for your attention. I can answer questions.
>> MODERATOR USTINOVA: Thank you for the presentation. I guess you are right in saying that sometimes, when medical services are not available, it is better to have devices that can track your medical condition, so that you can send the data to a healthcare institution to understand what kind of illness you have.
And that brings up another risk, which we are going to talk about: cybersecurity risk. We know we can talk about data security, but what about human security? We need to understand that if something is integrated into your body, that means you are a computer yourself and you can be hacked like a computer. So the main question is how to deal with that, if you can say something about it. How can we manage the cybersecurity risks, the cyber-human risks?
>> Let me start with just an introduction to the classification of the different types of (?) I would classify them from the perspective of whether they can influence you, as a human, or your body, or not. The first group of devices that also need to be protected are the so-called wearable devices. Their main feature is to transmit data to a central storage, to a central data warehouse. The main risks arise from these types of devices and their data-transfer link, and all the risks we know about data leaks are applicable to this type of device.
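To make the data-transfer risk concrete, here is a minimal sketch of one common mitigation, encrypting and authenticating each reading before it leaves the device; it uses the third-party Python "cryptography" package, and the device ID and field names are invented for the example, not taken from any real product.

```python
# A minimal sketch of protecting the device-to-storage link: encrypt and
# authenticate each wearable reading before it leaves the device.
# Uses the third-party "cryptography" package; field names are invented.
import json
import time
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice provisioned securely, not generated ad hoc
cipher = Fernet(key)

reading = {"device_id": "demo-wearable-01", "heart_rate": 72, "ts": time.time()}
token = cipher.encrypt(json.dumps(reading).encode())   # what actually travels over the link

# The central storage side decrypts and verifies integrity in one step;
# a tampered token raises cryptography.fernet.InvalidToken.
restored = json.loads(cipher.decrypt(token).decode())
print(restored["heart_rate"])
```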
Another group is more interesting from a cybersecurity perspective: devices that can interact with a central processor and can exert a controllable influence on your body.
For example, we do not expect that a cochlear implant could control the tone and send a signal to your ear, even if you are deaf. But it can be controlled, and it could change your behavior.
The next example, a device that Igor mentioned, is the insulin pump.
When you can change the dosage remotely, then you can influence the body and put it in danger.
All of you remember the recent example from September, the terrorist attack with the smartphones and pagers in Lebanon and Syria. That's an example of how a transmitted signal can damage large groups of people.
If we are talking about widely spread technologies for people with different types of disease, they could be interesting and very important targets for attacks, for criminals, different types of criminals. And you know the example of when these types of cyber risk were addressed: one of the top officials in the U.S. who got a pacemaker as an implant asked to turn off its remote management. When he got this chair, a very high-level chair, he asked to turn off the remote management due to the risk of being attacked and hacked by criminals.
From this perspective, we should talk about two types of influence: data leakage, and total control that makes the impossible happen, an unacceptable event. We call it an unacceptable event, and we have to prevent it using different types of organizational, technical, and instrumental features and practices.
Let's talk about what we can face if we do not think about the cybersecurity risks in the Internet of Bodies.
The first is a credibility gap among the end users of these technologies.
And, as Igor mentioned regarding providing healthcare for remote regions, for people who otherwise cannot get healthcare quickly and close to where they live, this becomes a vital problem for them, and it raises a lot of questions.
We have a choice: on the one hand, to address these risks and find ways to work with them.
On the other hand, to ignore them and face the problem of lacking healthcare and the impossibility of helping people to live their best lives.
And as Alina mentioned in her introductory speech, the Internet of Bodies is kind of a part of the Internet of Things topic.
And that gives us a good starting point.
We already know a lot about what to do with the Internet of Things from different points of view, in terms of what every actor has to do to manage cyber risks.
As we are here on a United Nations floor and we share the multistakeholder framework, I have some inputs for every participant, every group of participants, from this perspective.
What can the producers and drivers of the Internet of Bodies do on this topic?
The first one is to think of cybersecurity as a vital question, as a very important point.
And just as we think about the environmental, social, and governance points that are important for every company, cybersecurity is crucial for the company as well.
If we place it in the same row, we can talk about ESGC, the concept of an ESGC framework as good practice for managing companies.
Another point is to test the devices they provide to the end users, to the consumers, using different practices and inviting the best analysts worldwide.
For that we have some well-known platforms that can help to test in circumstances quite close to real attacks, quite close to real life and to the techniques that criminals can use.
It helps to improve the systems.
It helps to make them more sustainable and usable for end users.
Another group is the cybersecurity providers.
They have their own responsibility for improving their skills and their expertise in testing the Internet of Bodies, with a focus on its specifics: it is not just something implemented somewhere far away, but something implemented in an individual's body, and it has an influence and an impact on the person.
And the second responsibility is to provide specific solutions for protection. It could be software or hardware items; it could be processes and organizational features. All the instruments that we have should be included in the consulting services offered by the experts in cybersecurity.
And the third pillar is governments and authorities, and I think they have to do what they are used to doing: establishing the rules for how to use these devices, how to let them enter the markets, and how to make them affordable for different groups of people without any dependencies.
To conclude my speech, I want to make one point: if we do not take the cyber risk into account, or if we underestimate it in this exact topic, the Internet of Bodies, we will have to pay a very high price, and that is human life.
Thank you.
>> MODERATOR USTINOVA: You covered a lot of topics that I actually -- are kind of at risk today.
You mentioned many cases where devices were hacked and very bad things happened, where human lives were lost. But what about human sanity, you know?
For example, there was an episode of "Black Mirror" where a person was wearing lenses; his memories were not changed, but he got caught up in replaying them.
Do we need to regulate the usage of IoB devices by humans themselves, so that they don't harm themselves with them? Or is it just a matter of free will, and we should leave the fate of humans in their own hands? Just like that?
>> Thank you for a very important question. I believe that individuals, on the one hand, are rather smart, and they can understand what impact and influence different devices have on their lives on the positive side of usage.
But they underestimate the risks. From my perspective, there must be a common discussion, an open discussion of these topics, started by the experts.
And there must be a balance of views between the enthusiasts who want to try things and those who understand the risks, in order to discover the steps that we do not know at the moment.
Because some risks arise and become clear only after a period of usage, after entering different situations. And in this sense, so-called ethical hackers could play a very important role, because they can test in unpredictable ways, from unpredictable points of view, to identify the risks, to find ways to fix them, and to provide secure solutions as a result.
Thank you.
>> MODERATOR USTINOVA: Thank you very much.
Yes, I guess you're right, and now we also need to cover another risk that is probably not always properly recognised when we speak about using these devices. When we use devices like a phone or a tablet, we do not really see the difference between different brands of phone, maybe only in the usage. But if we put a device into the human body, it will be seen.
The main question is: won't some IoB devices cause a segregation of people? For example, one person will have a very costly, very high-priced device within their body, and their life carries that price, while the person with the cheaper device doesn't. So how can we avoid this? Gabriella, can you share your point of view with us?
>> GABRIELLA MARCELJA: Yes, thank you. Thank you very much for inviting me to this panel.
So in terms of segregation, you actually are hitting the jackpot. There will be the enhanced people and unaugmented people, people living life like we do now. When we talk about body augmentation powered by A.I. and different body technologies, such as again, neural implants, prosthetics and so on, these technologies could widen the existing gap in the socioeconomic classes.
This is something that, you know, we need to understand because we will definitely have advantages in intelligence, in strength, in health. Maybe even life span.
So this is for sure raising a few ethical questions that are being put forward. Perhaps we can think about how to answer some of the topics related to access to augmentation. So should body enhancement be treated as a basic right or a luxury?
Or topics about authenticity? Will humanity actually lose the line between natural and synthetic existence?
You do also have topics related to decision autonomy, as was mentioned also by the colleague here before.
So who actually decides what is acceptable body augmentation, right? Is it the government? Corporations? Individuals? Doctors? All of these are topics that need to be discussed. We need to think of patient-centered healthcare going forward, right?
So like who is the ultimate decision maker? Of course it's us individuals, but the doctors are the ones with the knowledge.
But, on the other hand, they have the knowledge of the body, while the technology most probably belongs to a privately owned entity which knows the technology.
So they will sell the ideas to the doctors.
So it's a very complex, I would say, setting. And the ecosystem will be regulated by the government.
This is like something that we need to, in general, think when it comes to this future socioeconomic situation that is going to happen. There is no way out of it. Whenever you keep on getting new technologies, and you try to do something new, for sure some impact one way or another is going to happen.
Here we can mention the A.I.-powered worker. We aren't talking about robots, but enhanced humans that could outperform normal workers. Like us right now. Eventually you won't need a lot of vitamins, or vitamin D, every day.
So perhaps we will eventually be a little bit faster, smarter in that sense.
Then we have access to health-augmenting implants. So this could, of course, raise the question of wealth. It will create divides: the augmented elites, if you will, and the underprivileged bio-traditionalists, let's put it that way. Of course you could always pick, but if you want to kind of power up, perhaps some people will choose, like, a new way of existence.
And we don't know how this is going to be perceived. Is it going to be perceived as something, you know, cool? Or is it going to be perceived as, oh, you are sick? It's this type of thinking we can discuss in general; I can't say I've been on many panels discussing this. Global equality will also be a discussion: some nations will adopt the technology faster, we will see marginalized groups of countries, and we definitely need to eventually ensure equal access, if this is in the interest of the patient.
On this point, if I may, I would like to continue also on the harm that these technologies can do. We do need to understand that IoB devices like smart lenses are capable of recording everything you see.
So this could revolutionize healthcare, and law enforcement as well, but it also poses major threats, because we have these surveillance and privacy risks on one side.
These types of lenses could easily become covert surveillance tools, and without regulation, in that sense, they could for sure record people without their consent and, of course, enable governments and companies to track citizens' every move.
We already have technology that is doing all of this, and now, if we put it inside our bodies, this is of course just going to amplify it even more.
It could create deepfakes, fake realities, by manipulating the video recorded through these lenses, eventually. Any type of exploitation you can think of, starting from corporate manipulation; the corporate world is focused on profit, right?
So we need to make a profit in order to be sustainable, because otherwise we need to lay people off, and that's not a good way of doing business. So recording users' environments could, of course, deliver ultra-targeted ads. You have Meta, or Facebook, Instagram and all the similar platforms, which of course are using the data that we see right now as their actual capital.
But this is also an open door to blackmail, and to social control by hackers as well, who can access footage and can of course exploit these private moments of individuals.
So in this context, I would say, we definitely need to think about the moral implications and about some comprehensive governance strategies, which must ensure ethical approval of devices, strict licensing, I would say, penalties for misuse, and rules for the algorithms that regulate how data is stored, shared and accessed, all of this. Right now, A.I. still has big problems to solve, so we are relying on the intelligence of the experts and the analysts working on that to actually, you know, work on the black box and all the issues that A.I. has right now, and the biases it creates.
So this is, in any case, something that raises cybersecurity and A.I.-driven privacy violations and risks, and ecosystems that need to be monitored and talked about before we become kind of guinea pigs without a clear understanding of where this could go and what we could do.
We as individuals and patients, or doctors, will for sure be the ultimate decision makers. We will for sure sign papers saying that we understand the risks. There is no other way out of this.
But at the same time, the supply chain, from the idea to the development and then to the implementation in the body, definitely needs to be monitored; and for sure also the manufacturing, and the understanding of how you can fix things if something breaks. So where do you go? My implant is not working, I'm glitching, where do I go? Will we have some centers, like phone repair centers, where you go and get yourself repaired? So this is a little bit of the supply chain that would eventually need to be thought through, with all the ethical implications at hand.
>> MODERATOR USTINOVA: Thank you. You actually covered a lot of the things we are trying to understand with these new emerging technologies.
One thing you mentioned: yes, we have phones and we use them every day of our lives, and if we put them inside our bodies it will definitely change things. So I will go to James and ask him to start with that question.
What do you think, James: how can a person be offline if a device put inside them is constantly online? And please share how the Internet of Bodies will develop in the future.
>> JAMES AMATTEY: Yes, thank you very much. My name is James Amattey from Ghana. I do hope I'm clear.
The Internet of Bodies is very -- it's not new, but it's one of the imagined complements of the Internet of Things, which embodies the integration of chips into regular devices to be able to track, collect and analyze data.
Now in this new realm, for example, when we look at health, we are looking at the three phases of the field, that is preventive, curative and then protective, right?
So, for example, when you look at a disease like asthma, I may want to know what the patient's triggers are, when the triggers happen, and how often they happen, and sometimes that kind of observation cannot be done in the hospital. Now, there are certain autoimmune diseases that do not have any known, should I say, any known cure. So the use of IoB can help doctors, hospitals and health facilities to be able to determine what causes it, what the triggers are, and what could be done to prevent it.
Now, unfortunately, there's life after the hospital, and, you know, the person has a life beyond the condition they are facing. So it is very difficult for you to constantly track them and constantly keep them online, because of certain other risks. Other members of the panel have spoken about cybersecurity risks and data risks, but I would look more at social risks, right?
For example, we are looking at the problem of dependence, right? How can we make sure the patient doesn't become over-reliant or dependent on the device, unless it's something like an artificial limb that helps the patient to walk?
And if it's a pill: there is a new invention of pills with camera sensors that stay in the body and actually record internal body activity.
And we want to ask ourselves: how long is this allowed to stay in the body? How much influence does it have on things like genome editing, DNA configuration and DNA programming, and how much can that effect alter the unique character of the person?
So, what kind of data are these things sending?
We have something called CRISPR-Cas9, a form of genome editing based on a bacterial immune system, used to modify the DNA of liver cells.
So that means that currently there is data, there is research, and there are certain elements of IoB that can modify the components of your DNA. And that modification then leads to modification of character, modification of behavior, modification of influences. And that could have very long-term effects.
Now, all of those issues come down to an issue of manipulation and social engineering.
There's something we call (?)ization syndrome.
Currently, let's say you and your friend are having a conversation on WhatsApp or any social media platform; when you jump onto another platform, you might just see an ad for the watch you were talking about with your friend. Now how about this happening inside your body, with the device understanding what you are feeling at that moment?
That could have very adverse effects.
For example, when we talk about suicide and mental health, we want to know how often a device can manipulate and influence suicidal thoughts, and how often it can manipulate a person into taking on suicidal thoughts. The real-time modification of DNA can lead to real-time manipulation, whereby the person is online, or should I say the person is offline but there is an internal online modification.
Now, we do have what we call data breaches, when data that is supposed to be stored in a particular place is accessed by an unauthorized third party, either through hacking, brute force or the use of spyware.
If it's the (?) system, there are preventive measures that could be implemented. But if it's in the body and the body is now being misconfigured, how do we reconfigure that?
Also, when we talk about devices and software, we have to look at things like updates.
Now, updates are a very thin line when it comes to human-computer interaction.
If your phone updates on its own, the update has the ability to influence the behavior of applications, the data that you use, and also the permissions these applications have, right?
If we are looking at the Internet of Things and the Internet of Bodies, we have to design update mechanisms and frameworks that do not have adverse effects on the body.
We have to look at compatibility. We have to look at biology. We have to look at the device version.
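As a purely hypothetical sketch of what such an update gate could look like, the following Python checks the hardware model, current version, clinical sign-off and battery level before allowing a firmware update; all field names and thresholds here are illustrative assumptions, not any real standard or product.

```python
# Hypothetical sketch of an update gate for an implanted device: the update
# is applied only if model, version and clinical sign-off all check out.
# All field names and rules here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FirmwareUpdate:
    target_model: str
    min_current_version: tuple   # oldest version allowed to upgrade from
    new_version: tuple
    clinically_approved: bool    # e.g. regulator / physician sign-off

@dataclass
class ImplantedDevice:
    model: str
    version: tuple
    battery_percent: int

def can_apply(device: ImplantedDevice, update: FirmwareUpdate) -> tuple:
    """Return (allowed, reason) for applying the update to this device."""
    if device.model != update.target_model:
        return False, "wrong hardware model"
    if device.version < update.min_current_version:
        return False, "device too old for a direct upgrade"
    if not update.clinically_approved:
        return False, "no clinical approval"
    if device.battery_percent < 50:
        return False, "battery too low to update safely"
    return True, "ok to apply"

device = ImplantedDevice(model="pump-x", version=(1, 2, 0), battery_percent=80)
update = FirmwareUpdate(target_model="pump-x", min_current_version=(1, 0, 0),
                        new_version=(1, 3, 0), clinically_approved=True)
print(can_apply(device, update))
```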
And, you know, those things are currently regulated, though not in the health field; it might be different there, I'm not much of a doctor.
But from the national regulatory framework, I am yet to see a comprehensive study and a comprehensive law for devices, their updates, and the changes these updates would bring to the body.
Now, all of this data collection and all this real-time tracking of a person who is online, even though the person exists offline, can lead to things like predictive profiling.
So currently, in analytics, you build a profile definition while you are looking at the psychographic nature of the person: what a person thinks, what a person feels, what influences their buying decisions.
Now, with IoB and DNA programming, companies, and people with malicious intentions, have the ability to use the data that our bodies provide to these devices to profile us and to use things like predictive analysis to determine what someone is more likely to buy.
And even to tweak the devices in your body. So, for example, we have the new wearable devices, especially for the eye.
These have the ability to manipulate the pupil and the iris and send different signals to the brain. These things lead to attacks on cognitive security.
Once you bring the brain into the picture, then you now have an attack on the cognitive behavior of the person.
Attackers are now going to exploit cognitive biases, and you now have to deal with brain-computer interface vulnerabilities.
So, for example, we have devices that have been implanted into the brain to allow differently-abled people to interact with computers, to be able to move things.
Now, we have to create a balance between re-enabling and integrating that person, and that person being independent of the device, having the personal intuition, or the personal drive, to turn it on and off, right?
I think these are some of the things we can look at, in terms of how a person can turn off some of these devices that are in them.
How much control do they have over the devices? Is it a matter of manufacturer versus patient, and who wins?
If there's an issue between the manufacturer and the government, who is going to win the ability to determine how, when and where this data is used?
Thank you very much.
>> MODERATOR USTINOVA: You covered a lot of issues. Before we move to the Q&A session, I have a question for every speaker. Yesterday (?) said an interesting thing about the future of humanity: that eventually, to survive, humanity needs to have a biosynthesis with AI. Maybe you can comment: do you agree with this point of view? Do you think eventually we will become part AI, part human, et cetera? Just a short answer.
>> I believe humans are smart enough to apply A.I. to the fields where they can get a positive impact rather than a negative one.
And we have enough power at different levels to fix the negative.
From time to time, I'm sure, we will have some examples of misuse of this technology.
But at the same time, in parallel, we will have examples of how to fix it.
>> MODERATOR USTINOVA: Does anyone want to add something, to give his or her opinion? Gabriella?
>> GABRIELLA MARCELJA: I will be just very quick.
I think this is a philosophical question. Where do we want to go as humans?
And perhaps at the level of Kissinger, of the global strategists, they are already trying to understand what comes next, right?
Perhaps this is a way to go.
I think it's an inevitable evolution of the technology.
Because once we reach a certain limit of development, whether it's a country, a company, people need to think, need to sit, think, and then decide where do we go now?
It's just a matter of understanding the possible futures and if we like those options or not.
At this moment, if everyone is feeling excited about this, I think humanity wants to try it out.
Not knowing what will happen. It's more the curiosity inside of us that wants to keep moving: oh, let me see what will happen, even though we probably won't be happy with the result. But it's just human nature to keep pushing into the unknown even when we already have everything.
I think we definitely need to fix the basic problems of the world.
But since the technology has moved so far forward, and of course there is still quantum computing and the cybersecurity issues that come with it, we are still far from it, far from getting all the AI chips running, and there are so, so many things for humans to do.
But I think this is an additional curiosity that just people want to see happening.
And we, I guess, will live long enough to see what happens.
>> MODERATOR USTINOVA: Thank you. If no one has anything to add, we will get to the questions.
If anyone has any questions, please raise your hand. If anyone online has questions, write them and Ana will read your questions.
Do we have any questions?
No one has any questions?
Okay, I will ask my question.
We actually covered a lot of aspects at once. But I like the futuristic part, especially the endgame you see in the movies and games about the future, where, if you have medical insurance with more coverage, if you are richer and you have, as I said before, the costly implants, you will get the medical attention and you will live longer than the person who doesn't have them.
What do you think? Maybe it's a very cynical question, but will the Internet of Bodies actually mean that companies, not governments, will regulate who will live and who will die, depending on what kind of influence it has in their body?
Do we move from government regulation to company regulation in terms of technology? If anybody has anything to say, you are welcome to add something to that.
>> JAMES AMATTEY: If I may?
I think it has to go hand in hand. We cannot just regulate the companies and leave the governments. The governments also have to regulate themselves.
Of course, in this era where we have a lot of wars and disruptions to global peace, governments can now use some of these things as tools to programme, to re-programme people and give them extra abilities.
And you know, it's just a matter of limiting how far we can go with this, right?
Because we do not want a situation where anybody can do anything.
On one hand you have to regulate the companies that are producing this. On the other hand the government also has to regulate itself to prevent itself from using this as a weapon, rather than as a tool, right?
Because if you take a knife, a knife in the kitchen is used for cutting vegetables but when you put the knife into the hands of a killer it becomes a dangerous tool.
The technology in and of itself is not dangerous, but once it ends up in the hands of people with malicious intent, it can have dangerous elements to it.
I think, with the current regulation, it's both the governments at the national level, that is, themselves, and also the international bodies they are part of, for example the U.N., the ITU and all these regulatory bodies; it's up to them to regulate themselves and also the governments that form part of them.
To make sure that the technology does not get out of hand. Thank you.
>> Gabriella, you wanted to add?
>> GABRIELLA MARCELJA: Just real quick. I just want to draw a parallel here. I think what your question is asking, who will live and who will die, who is deciding this, I think this is already happening.
If you just think of the line in hospitals to get an organ transplant, right? It's the same thing right now.
You have a line, so you need to wait. Whether you are a billionaire or have nothing, there is a line.
>> (Off microphone)
>> GABRIELLA MARCELJA: Exactly. This is a bit of the organ transplant parallel, which I think is very, very similar. It's just a matter of augmentation, maybe fixing a problem rather than having a new organ transplanted. Because of course, the way I see it, and I don't want to say which sector is going to come up with this, but I can already imagine it, we will have fake implants.
So we will have people who will do this and sell to you in the gray economy: hey, can you sell me something today, and tomorrow it will be the same product but in this gray economy. It will happen; it's inevitable. People will try to copy, people will try to do business, in a way, and they will produce harm. So it is inevitable for this to happen.
It's like: you don't have money for the good-quality product? No problem, there is a cheap version. Yeah, maybe you will die tomorrow, but hey, this is your chance of getting augmented. I think this is going to happen if we go into this sector.
Because, as with organ transplants today, yes, there is a line. But my specialization is in criminal law, and I do understand the unfortunate situation underneath. We do have criminal organisations and so on who work on skipping the line, if you will. So this will happen. I think it's already happening in the system we have now, with the problems we currently have; it's just that the problem is going to be augmented, let's put it that way.
>> MODERATOR USTINOVA: Lev, you wanted to answer?
>> LEV PESTRENIN: Thank you. I wanted to add something about control. I think you are totally right. Now we see that private companies have a lot of control over our phones, over our devices, and sometimes this control is much stronger than what governments provide.
I think in this case there should be a balance, and control should come from all sides, from all the participants in this process.
Private companies first, the government second, and people third.
As was said yesterday in a panel discussion, knowledge is power, and it could help us to survive in the future.
So I think we should learn about new technologies, and we should be aware of how to control and manage them.
>> MODERATOR USTINOVA: Do we have any questions? No. So I guess, on this -- you have?
Okay, can you please give the mic to the person?
>> Thank you. Really interesting topics. And given the work I do, I think often of the risks and the challenges for controlling these types of technologies, as well as harnessing their power.
So I guess my question is: do we feel that there is enough understanding of the technology to be able to create the committees, organisations and frameworks to make sure we can make the most of what IoB can enable, while still protecting the individuals who ultimately will bear the risks of the implant, as well as the benefits? Do you know what I mean? I think I want to understand: do we have the knowledge and capacity, as an international community, to harness the benefits, as well as to make sure we are able to understand and control the risks of the technology?
>> MODERATOR USTINOVA: Basically asking, do we know enough to do no harm? Does anyone want to answer?
>> JAMES AMATTEY: Maybe I can answer that.
Yes. So I think it depends on where you are coming from. For example, I live in Ghana, in Africa, where, when it comes to IoB, we are mostly consumers. Most of the wearables we get are imported. Sometimes it is seen as a lifestyle thing.
Where, for example, BBLs are seen as a lifestyle thing.
But people do not fully understand the implications of the technology. People don't understand where this is coming from and where it could lead, right?
I think we need to position ourselves, or put more effort into awareness and education on what is truly at stake here, and how best we can move forward.
Because, like Anna said, some of these things are inevitable; they will definitely happen. It's just a matter of: are we ready for it when it happens, or are we going to allow it to overtake us, so that we then have to play catch-up on things like regulation, on trying to control the spread and the protection of these things?
Much like social media, we relaxed a bit, and then we had to play catch up.
I think a lot of that is happening in the A.I. space. We want to be able to make sure we are ahead of the trend, which is very difficult to do, I must say.
But we have to start from somewhere. We have to hope that we stay ahead, because once this overtakes us, it's actually -- it's very difficult to rein it in just by policy or regulation.
>> MODERATOR USTINOVA: Thank you, James. Lev Pestrenin wants to speak.
>> (Off microphone)
Thank you, is it okay? I think now it is not possible to have knowledge of all the world because --
( off microphone )
Specialists in artificial intelligence alone could not be confident that artificial intelligence is safe.
For example, engineers also could not be confident that artificial intelligence is safe, because these are multidisciplinary projects and technologies. I think if we want to have some control from people outside, the future is for multidisciplinary teams, groups and countries, or it's even very good to have friends with knowledge in different spheres.
So communication is our key opportunity to survive.
>> MODERATOR USTINOVA: Thank you very much. We are out of time. Thank you for a wonderful discussion. We covered lots of aspects. If you want to exchange contacts with the speakers, you are welcome. Thank you to our online speakers who joined us today. Have a good IGF. Thank you very much.