The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>>SPEAKER: I spent five years at Microsoft and the rest of the time at various universities. So recently I returned to the U.S. and joined the university at Berkeley. We also see that this is a good time for us to discuss probably broader issues about this revolution. So certainly in the past few years, we have witnessed the rise of ‑‑ what we would characterize as something like a perfect storm. A lot of factors have come together at the right moment to create something phenomenal. Some even call AI a new industrial revolution, in the sense that the industrial revolution used machines to replace manpower, but now AI is trying to use machines to replace brain power, mental labor, so in a different sense. On the technology side, we can see that several factors have come together.
First is computing power, right? The investment in parallel and distributed computing technology really enables us to ‑‑ the processing power can be scaled up tremendously these days. The size of the data center just grows proportionally to the size of the application. Second, it's the data. The availability of big data sets derived from the internet era ‑‑ those data actually encode tremendous information about human behavior and knowledge.
And the third is ‑‑ especially supervised learning ‑‑ which has enabled us to leverage precisely that computing power to learn from the data sets, from that tremendous information about human behavior and knowledge. In addition, the ubiquitous internet and the social networks, mobile networks, have really served as a very efficient infrastructure for quick dissemination of AI technologies whenever some good products are out there. They can quickly reach and be embraced by a huge mass of people. So the impact basically can be very quick and very large. So just to use the U.S. and China as examples ‑‑ probably most of the AI sensations recently have happened in these two countries.
AI really has become the fastest growing sector in the industry by almost all measures ‑‑ investment and size, startup companies and so on, businesses and products and so on. But both countries, if you look at them more closely, have their own strengths, and they are also very different in nature. From a technological standpoint, we can see the United States is very strong in the traditional, fundamental technology for AI, such as chip design and manufacturing, server operating systems for big data centers, and also computing platforms, such as for deep learning and so forth. Whereas China very much focuses on applications and products of AI, such as speech recognition, face recognition, ‑‑ very much probably due to its large market size and the many market opportunities there.
In terms of AI research, both the United States and China lead the rest of the world by a large margin in terms of papers published or patents filed each year. However, United States universities and companies focus more on fundamental theory, machine learning theory, technologies, and also the systems level, whereas AI research in China, again, is mostly about applications, as in the many applications of machine learning and so forth. In terms of investment in AI, based on the numbers of ‑‑ venture capital in the United States is certainly larger than in China, but China is really catching up very quickly.
It is already of the same magnitude as America's. And again, the investment in the U.S. covers almost all areas of AI technology, from the hardware all the way through systems and software to the applications and products, whereas China shows a strong focus on applications and end products, especially at the consumer level, for mobile networks. Nevertheless, the Chinese government has made a national strategy to strengthen its position on artificial intelligence, and plans to pour in tremendous government support, not just financially but also in policy and so on, in the next decade.
So we probably will see a potential change of the balance between the U.S. and China. And also ‑‑ so we know that the process of technological advancement is actually something that can take a long time. And such a transformation, the acceptance of technology by users at scale, really requires a gathering of all the stakeholders, all nations ‑‑ and its applications and how the technologies can potentially transform and better our lives.
In the past 20 years since I graduated from UC Berkeley, I have witnessed the development of this technology. These technologies were invented 30 years ago but were not really ready for general applications probably until now. Today we really see more and more companies worldwide applying AI technologies at scale to the mass market. For example, speech recognition, face recognition, self‑driving ‑‑ speech and facial recognition are already integrated into our cell phones, our smartphones, and smart mobile payment throughout China and many other markets has become part of our daily life. This is just the beginning ‑‑ in fact, we really see this as just the beginning of the era.
We will witness very quick deployment of many, many AI technologies at massive scale in the years to come. So today I would, you know, like to thank the IGF and also the University of Zurich and the academy of ICT for creating this opportunity to gather people here to discuss AI, not only its impact on the IT industry, but also its far‑reaching implications for our society and the world.
We hope that, just like with the internet, through really multinational and multi‑stakeholder discussions and collaborations we will be able to reach international consensus on the potential governance of AI technologies so that they can really best benefit us and the next generations in the future. That's all, thanks.
[APPLAUSE]
>>URS GASSER: Thank you very much. Good morning, everyone. My name is Urs Gasser. I have the pleasure to moderate this panel. I'm with the Berkman Klein Center for Internet and Society at Harvard University. I'm also Swiss, by the way, so I'm sorry that we give you such a cold welcome today with the snow outside. But I hope you will get some fondue later this week that will warm you up.
Despite the architecture of this room and the panel setup, which doesn't really invite dialogue or a lot of interaction but is more designed for statement after statement, we will not do statement after statement. We will really try to have a conversation, and we have a fantastic line‑up of speakers, who were already introduced by Chia. But I thought maybe we could start with a quick round of self introductions. May I ask my colleagues to briefly highlight the few things that you're working on, how they connect with the panel here today and the topic of social responsibility and ethics in AI. And secondly, maybe after that very helpful high‑level introduction, to share with us your favorite AI‑based app or technology that you are using and why you like it. So Danit, maybe we start with you, please.
>>DANIT GAL: Hi everyone. My name is Danit Gal. I'm originally from Israel and based in Beijing. My line of work is the safe ‑‑ fail‑safe design of autonomous systems, as well as inclusion. I work in several countries. I chair the outreach committee for the global initiative on AI ethics, and I'm really, really passionate about embedding ethics into the design. So instead of having back and forth about values working on the side, really making sure we build technology that can encompass a multitude of values in order to become really inclusive. My favorite use of AI technology ‑‑ that's a tough one. It's going to sound horrible, but I really enjoy having fast commutes through airports because I fly a lot. So having the ability to use fingerprint scanning and matching it with facial recognition really helps me get through airports fairly easily. So there are really positive sides to that kind of technology.
>>PING LANG: Hello, everyone. I'm Ping Lang from the Chinese Academy of Social Sciences, and my background is international studies. I work in the Institute of World Economics and Politics. For the past several years, I've been focusing on cyberspace governance from the perspective of international studies. So we study cyber governance, or cyber diplomacy, and AI is an important pillar in this digital revolution. We are also focusing on how a new technology like AI will affect the social sciences and maybe global governance and international relations. So that's why I'm here today.
My favorite app, I think, is Google Translate, which is the most ‑‑ which I think is the only one that we can use in China. I use Google Translate a lot, and sometimes we just scan a picture and it automatically converts to the language I want. But I think it still has some problems when we scan a lot of sentences or paragraphs. So I think Google Translate maybe also has some room to improve.
>>IRAKLI BERIDZE: Thank you. All right. Good morning, everyone. First of all, I would like to thank the organizers of this event for inviting me here. We already have long‑standing cooperation with them, and I am extremely thankful for this type of cooperation, also recognizing how much this community is actually doing for the good application of AI and the aspiration to do more.
My name is Irakli. I'm working for the United Nations, namely for UNICRI, and I am heading the just‑created Centre for Artificial Intelligence and Robotics. Just a very few words about the center. The center is focused on actually bringing multi‑stakeholder cooperation into fruition ‑‑ so bringing together the governments, the private sector, academia, and others interested in it ‑‑ and also running educational, training, and mentoring programs for different stakeholders, including government officials, diplomats, and others.
And at the same time, soon we're going to start doing country assessments and matching the available technology to solve the world's pressing problems, like the UN Sustainable Development Goals and so on and so forth. As far as the favorite app, yes, I have a lot of favorite apps. Of course any translation app is really helpful, or hotel suggestion apps ‑‑ I'm also travelling a lot and staying in different hotels, and having good suggestions helps. My daughter's favorite app is Musical.ly; they make little music clip videos. And something just happened recently with that app: it was acquired by ByteDance, and my daughter was very happy when I told her I know the people who own this app very well.
>>KAREN MCCABE: Good morning. I'm Karen McCabe. I'm with the IEEE. It's the Institute of Electrical and Electronics Engineers, but don't let that define what IEEE does. We work in many technical spaces ‑‑ 42 technical societies. Think of them as industry sectors or domains, if you will. There I'm a senior director of technology policy and international affairs, which basically means I get involved in a lot of initiatives, programs, and partnerships. Several that I am working on right now: one is the Internet Initiative, related to the IGF. We also have a new program in digital inclusion through trust and agency. And more so why I'm here today is to talk a little bit about our global effort on the ethics of autonomous and intelligent systems.
We heard a lot in the opening remarks about the very quick and large impact potential of AI, and, based on the title of the session, we really look at the social impact and the social responsibility associated with that ‑‑ the impact it will have on jobs, the way we live, the way we work. But I think a major focus of our global initiative is that we can't lose sight of that human impact and well‑being. Hopefully we can talk a little more about that. Thank you.
>>URS GASSER: And your favorite AI based application?
>>KAREN MCCABE: I have to say, it's Google Translate. I tend to use it more than I have in the past. So it's very handy.
>>URS GASSER: Thank you very much. How about you, what's your favorite application?
>>SPEAKER: So I think ‑‑ you know, I use a lot of apps and work in this area, so it's probably unfair for me to say anything, but I'll say something on behalf of my kids. It also says why it's important to have a dialogue like this. This will probably affect many of our lives, and I already have a lot of habits and traditions that we keep. The technology may not have as big an impact on us as on our kids. It's just amazing these days ‑‑ I have a 16‑year‑old and an 18‑year‑old.
Probably their favorite app is playing with Siri, the speech recognition. They ask all kinds of questions, you know, like what's the score of the Seahawks, whether they lost the game or not, or what's the time, or how to get movie tickets online. It's just amazing to see how they have fun with the cell phone, right? And it just totally hits me that the technology is going to change their lives. They're going to have a completely different life than ours. Their knowledge, their social behaviors, including their customs, will be very different from ours. We have to think ahead about the impact, not just on us, but on our future generations.
>>SPEAKER: I fully subscribe to that, because my children also really enjoy playing with Siri, and with Alexa as well. I have not figured out which is better or not; basically both have advantages. My son wakes up every morning and asks for the scores ‑‑ how did the Warriors do ‑‑ and is very satisfied with the answers as well. But they're really growing up with this, and their life is extremely different, and their perception is shaping up in a different way. We should really pay very close attention to how the new generation will be coming up. For us, this is something we acquired. For them, this is something they've been born into. So that's a very, very different aspect, yeah.
>>URS GASSER: Thank you. And this is already a great segue into the first question I hope we can discuss a little bit. It's actually a very optimistic start to the panel. Usually we are quite quick to point out the risks and challenges that come with AI, whether it's concerns about privacy or issues around bias and the like. And I'm sure we will take these concerns seriously and address them during this panel conversation, but let's stay for a minute or two, if I may, with the opportunities, as you already introduced them ‑‑ the promise of the technology, how it may improve our lives, whether it's transportation or recommendation systems or the way we communicate.
What are, for you ‑‑ maybe starting with you ‑‑ some of the biggest opportunities that you see in terms of the social impact of the next wave of technologies? And particularly, since you have the benefit of living in China as well as a deep understanding of what's happening here in Europe and in the U.S., how do some of these opportunities play out globally? What are your reflections?
>>SPEAKER: For me, I think the biggest opportunity is really harnessing the technology to create meaningful impact. I think that we're beginning to really understand the value of data and are wanting to collect it, and this is something that applies to both developed and developing countries. I think that the smart or intelligent use of data could really help countries ‑‑ developing countries ‑‑ leapfrog and kind of close the gap in terms of development. I think that kind of use ‑‑ one that really takes into consideration the inclusion perspective and lends itself to the cultural and societal aspects of each country ‑‑ if done right, could meaningfully help countries. ‑‑
>>SPEAKER: I was wondering, since you mentioned the Sustainable Development Goals, and there have been a lot of conversations, also of course here in Geneva, particularly at an event earlier this year ‑‑ (Audio fading in and out.) And issues related to health and well‑being and the like. Where do you see some of the biggest potentials for using AI to help build a better ‑‑ (Audio fading in and out.)
>>SPEAKER: Thanks for asking this. Certainly one thing that comes to mind when you are working for the United Nations is the UN Sustainable Development Goals and the chance to use technology like AI to apply ‑‑ you mentioned the AI for Good Summit, which was initiated by the ITU, and it was a fantastic undertaking actually earlier this year, where a very large community from the UN and beyond came together and started thinking about how to actually use these applications for good. Now, I would not want to point out any one of the ‑‑ I think AI is able to contribute to all of them and have a major impact.
But basically, one ‑‑ or maybe two ‑‑ of the bigger issues where I would really like to see breakthroughs would be health, applications in healthcare, really making breakthroughs there, and also eradicating poverty. And I think in both cases, if we manage to do that, that would be a really great achievement in the short term. And in the long term, I guess all of the goals would definitely benefit from any of the breakthroughs that come from applying AI as a tool and having benefits out of it.
>>URS GASSER: Thank you so much. China has taken a very systematic approach to thinking about the deployment and development of AI, just looking at some of the recent documents that were, you know, released by the government. Where do you feel, within the Chinese context, the opportunities are? We mentioned a few areas. How does that play out in a very large country like China?
>>PING LANG: Maybe I can take a few minutes to introduce China's policy on AI.
>>URS GASSER: Please do so, yeah.
>>PING LANG: In China, the internet and the digital revolution have really been a major focus of the Chinese government. Since the year before last, the Chinese government has launched a series of initiatives and policies like Internet Plus and Made in China 2025, and also, last year, the national informatization development strategy to boost the development of AI. And just this July, the Chinese government issued a new document called the Next Generation Artificial Intelligence Development Plan.
The document says it's trying to seize a major strategic opportunity to advance the development of AI. This new plan, which will be implemented by a new plan promotion office within the Ministry of Science and Technology, outlines Chinese ‑‑ China's objectives for advancing AI in three stages. First, by 2020, China's overall progress in AI technology and applications should keep pace with the advanced world level, while the AI industry becomes an important economic growth point.
The second step is that by 2025, China should have achieved major breakthroughs in AI and reached a leading level, with AI becoming a primary driver for China's industrial advancement and economic transformation. Ultimately, by 2030, China intends to have become the world's premier AI innovation center. At that point, China believes it can achieve major breakthroughs in research and development and occupy the commanding heights of AI technology. So if we look at this plan, I think it's not just a technical or industrial development plan but also includes social construction, institutional restructuring, global governance, and other aspects. In other words, the task we are facing is not to achieve revolutionary technological breakthroughs in a particular field or industry, but to vigorously promote comprehensive changes resulting from technological developments. So I think in China, the major challenge for the Chinese people is to achieve economic restructuring. AI, together with the digital revolution, is becoming a major economic growth point to realize that economic restructuring and also to help people have a better life. Thank you.
>>URS GASSER: Thank you very much. It's fascinating. And Karen, I'm wondering whether you want to comment on that or provide some sort of complementary perspective. What we just heard, I think, also resonates if you go around many different regions and countries, where there is of course the question of what the impact on the digital economy is, what it means, how we can use these technologies for growth. But in the introduction you also immediately made the point and said, well, we also have to think about the impact on the human being and on well‑being.
And I was wondering, would you be willing to share a little bit more? Yes, there are institutions; yes, there are systems. But what does it do to us as human beings? And again, staying a little with the opportunities, with a little segue to the challenges of course later on. Please.
>>KAREN MCCABE: Sure, thank you. You know, initially, when you think of these ‑‑ the use of artificial intelligence ‑‑ though at IEEE we're actually trying to stay away from the term artificial intelligence, and now we use autonomous and intelligent systems. I think sometimes the label brings about some more scary or negative connotations. I think there's a lot of opportunity, as we noted so far on this panel, in the impact it can have across all kinds of industry sectors and all parts of the world. But if people don't necessarily trust that technology ‑‑ and I don't want to go to the dark side too soon, not that there is a dark side necessarily.
But it has a tremendous opportunity, when you think about agriculture and health and learning and education. But we also need to look at it from the trust factor. There are concerns about what's happening to the data because of the massive amount of data capture and use, which could be extremely impactful and very beneficial. But if people aren't informed about how their data is being used, or if it's not being used in a good way ‑‑ in the sense of the data use that can really expedite the development and benefits of AI ‑‑ we can run into some challenges there. And when we look at the impact of AI on the evolution of jobs and things of that nature, we also need to be very cautious about the impact on humans when it comes to that.
We are going to be in a transition period. There's going to be an evolution where there might be some gaps, as people in the job market and education may not necessarily be skilled or have the capacity, or there's a transition where AI might be impacting some of the jobs, and there's going to be job transition. And that can really impact well‑being ‑‑ not only the impact on people's economic livelihood, but it impacts you emotionally as well.
So when we're looking at autonomous systems, intelligent systems, we're looking at the tremendous opportunity that is at our feet and what that can mean for our future generations, but also thinking about the impact on well‑being, not only from a medical health perspective but your mental well‑being as well.
So I think there's tremendous opportunity in that. Today, we hear so much about the challenges that are out there with health and mental well‑being, and it's almost at a crisis through many parts of the world. And I think artificial intelligence and autonomous systems can really help with that situation, if we sort of build trust into that, and when we're building the systems, we take into consideration that level of impact. We really need to look at these things very holistically in that sense.
>>URS GASSER: Great, thank you Karen. You also mentioned in the introduction this aspect and the notion of inclusion. And I would like to stay with that theme, which was also mentioned by Danit. How can we make sure that the next wave of technology provides similar opportunities to people around the world, also to the ones who are currently on the side of the digital have‑nots? How do we close some of these digital gaps? What are your reflections?
>>SPEAKER: I think that also relates to the biggest challenge I see, and obviously the biggest challenge is also the biggest opportunity, which is kind of a normal paradox. I think the way we construct our discussions around autonomous and intelligent systems, especially machine learning AI, is that we tend to talk about it in a polarizing sense. We talk about the U.S. and China. We focus on the two big players and kind of contrast them with each other. But I think that is a really missed opportunity for a technology that could fundamentally serve billions of human beings, and I think one of the major things that we lack in inclusion is the ability to empower other countries, smaller countries too, to really take the technology and harness it to their benefit.
I think that we are maybe stuck in a zero‑sum‑game mindset when we think about two countries, winner takes all. And I think my best solution for feasible and sustainable inclusion is to move away from that kind of zero‑sum‑game mentality. Whoever gets the best technology, that's great. But we really need to start thinking about who does not have the technology, who should have it, and how they could employ it. Even if you have the best technology, if you don't share it, if you don't profit from it, if you don't distribute it, it doesn't mean anything.
>>URS GASSER: You want to jump in?
>>SPEAKER: Yeah, maybe I can jump in with a comment. I'm in full agreement with what you said. The question would be how the smaller countries are going to embrace that if, in most of the smaller countries, the understanding of AI is very limited ‑‑ especially the understanding of any of these technologies which are coming out ‑‑ and who is going to champion that: are the bigger countries going to share it, or should some intergovernmental organization do that, or someone else? And this is a rhetorical question, obviously.
I'm not asking that. But one of the solutions which we saw could be practical, and certainly could contribute to that process ‑‑ something our center is really going to start implementing ‑‑ is to start pilot projects in countries on the practical application of these tools, to solve practical problems for the benefit of the countries and the people. And see that this works, and then make sure that countries in other parts of the world will see that these problems can be solved like this, and scale it up, get others excited, and actually make that move.
But then the ‑‑ the companies who have the technology available would actually be willing or inclined to share it with the countries which need it to solve particular problems. So that is the type of scalable approach which we thought could be one of the things we would definitely like to try out in practice.
>>URS GASSER: Before we turn back to Danit, I was wondering, from a computer science perspective, and as someone who also lives in both China and the U.S., where do you see the opportunities for countries that are not already in the AI mix? What kind of road map would you foresee? Is it putting emphasis on education and building capacity for young engineers? How do we get to the point where everyone can benefit from this technology equally?
>>SPEAKER: I think Danit has provided a very, very good perspective. Right now, the U.S. and China really have been leading some of the development of AI technology, or at least advocating for it. In fact, we really hope our products, or whatever the results are, can really benefit the broader international society, the other countries. Just through the internet, right, it really gets disseminated to other countries very quickly. But that does require a little bit ‑‑ how do I say it ‑‑ the recognition by some of the countries, by the leaders of those countries, that such technology can really affect their society.
And there would need to be some investment, not just in terms of technology, but also in human talent. I think China has done ‑‑ it is not really an accident that China was able to catch up, you know, to follow very closely with the AI technology, because they have been investing a lot in the computer industry for a long time, starting from the internet era. China was a bit behind but really trying to catch up, with a lot of companies investing in the past 10, 15 years. And when suddenly some of the technology becomes applicable, these companies have the talent, the people, and the resources to put those technologies in place and scale them to the mass market.
So that might be a lesson that can probably be learned by other nations as well, if they want to emulate the process to catch up with the technology.
Also, the opportunity for all nations ‑‑ the great thing about AI, as mentioned before in our discussion, is that if you do not invest, you are definitely left behind. But if you do invest, you won't be left behind by other people by a large margin. And also because of that, you see there's a very interesting way AI technology is patented: all the major companies like to share, so there's a lot of open source. That's from a technical point of view, right? All the platforms, especially computing platforms ‑‑ a lot of them are open source. That really allows AI to be disseminated very easily to many, many countries, to the benefit of many, many nations, if those nations are willing to learn, willing to try. So the bar is actually not as high as with other technologies. It's not as prohibitive as other technologies, like manufacturing or others. So I actually see really great opportunities there for, you know, all the nations.
>>URS GASSER: Danit, did you want to comment on that briefly?
>>DANIT GAL: Just a brief point. I really agree with what you said. And we are very fortunate to have on the panel two bodies that embody that approach, which I call effective incentive alignment. We really need to create incentives for powerful companies and governments to team up with developing ones. Obviously, they're not going to do it out of the kindness of their hearts, because they're running a business. In that sense, reaching that kind of effective alignment of incentives to make AI more inclusive is something I think they could really speak to, because this is what they do from the governance and the technical side. And I think that they embody the kind of approach to ethics and responsibility that is the overarching theme of our panel.
>>URS GASSER: Thank you. Before we move to the challenges and the risks, which were already introduced briefly by Karen for sure, maybe one more methodological question to ‑‑. We've heard already in this initial round about the large scale that we are dealing with: that AI is really likely to affect so many different parts of our lives and many different industries, and has this transformative potential and power. So I'm wondering, as we look at the opportunities and later the challenges, and see the scalability and the order of magnitude of what's potentially happening, what's coming at us, or what we need to interact with at least ‑‑ how do we even assess impact at such a large scale?
So, looking at it more from a scientist's perspective now, or an academic perspective, what methods of social impact assessment do you work with in your centers and in the academy, and what can we learn from the past? Is AI something that requires an ecosystem perspective, like the environmental movement and environmental impact assessment? What are the frameworks you're working with, maybe from your perspective?
>>SPEAKER: It's an excellent question. First of all, we are really at the beginning of the process, and therefore, whatever we are doing, we are testing it out. This is something which is unprecedented. We haven't had anything like it before, and obviously to assess the impact of it would also require some new approaches. And what has happened throughout this year, of course, is that you see more and more conferences, more and more people getting involved, communities also getting larger and bigger, and we're testing out different types of methods for how this could be assessed.
One of the things which we would like to see is how countries would use the technology for the benefit of solving different problems, right? And what I mentioned with the Sustainable Development Goals would be a really good assessment: to see how much these technologies would really be able to solve, and how quickly this would happen, and how quickly we can bring the message to the larger part of the world that this can be done, and this can be done really in a responsible way, and this can be done in a way that really benefits the countries and benefits the people, because that is, at the end of the day, the most important aspect of what we are doing, right?
So therefore, yes, we should really go on and test out different methods and see how we can bring the benefit, but at the same time take into account the risks. But I think that's something which you would likely want to discuss later in the panel as well.
>>SPEAKER: I think the impact, from my perspective, the perspective of the social sciences ‑‑ one thing is that AI might improve the production efficiency of the economy. But on the other hand, I think the most important thing, the challenge of course, is that machine intelligence comes into our social life. Social science studies people's behavior: how people interact with each other, how people control themselves and are responsible for their actions. If artificial intelligence comes in, then, for example, identity, a kind of bias, these new things ‑‑ new issues ‑‑ will affect social governance. That's my point.
>>URS GASSER: It just reminds me that we are ‑‑ it feels to me ‑‑ less well equipped to even measure and track some of these impacts on our social fabric. We have metrics and measurements for economic impact; we can argue whether they are sufficient for this new world of AI. But definitely when it comes to social and mental well‑being, for instance, we don't have standard metrics that we use. And I know that IEEE has also invested quite heavily in exploring this question of new ways to measure impact as machines enter human lives even more.
So this obviously is a great segue to the challenges part of our panel. And Karen, since you already introduced some of the key words ‑‑ and I liked very much your starting point of looking at it as a trust challenge. Yes, we have a data privacy problem. We have the question of bias. So there's a long list of specific challenges, but I think it was very helpful to look at it as a trust issue. And I was wondering whether you can perhaps expand a little bit on this notion of trust, why it matters, and also how we can break it down into chunks of problems, maybe.
>>KAREN MCCABE: Sure. Thank you. As I was alluding to in my opening remarks, technology is amazing, and it's rapidly increasing in its use and its positive development. But it also can put fear into people; they don't necessarily understand it. I think we're also raising a lot of awareness about the use of our data. You know, with the headlines today, you can't open a newspaper or read your local news feed without seeing some type of breach or concern over data ‑‑ and not to get into that issue, but it does make people pause sometimes about the uses of data, the use of this technology.
And you know, one of the concerns could be that if there's fear and concern about data and technology ‑‑ not understanding the technology, what's happening to my data ‑‑ there might be a slower uptake of the use of technology that can really benefit human beings, societies, governments. So along with addressing these issues, we have to look at it from that overarching concept of trust. And I know there are many definitions of that, and it's contextual and cultural to a certain degree as well. But how can we help fill that gap, if you will, or close that gap? Technology is advancing; we're coming up with great apps and solutions and systems that can really benefit many sectors ‑‑ all of society, in fact.
But yet there is sort of this "I'm not quite sure what's going on." For sure, there's definitely a population, probably more Western, that, you know, just sort of gives in to the technology. There are convenience factors associated with it; we just plug in and use it. But we also have to look at it especially in the context of the IGF, and one theme is digital inclusion and internet inclusion. As we are bringing the internet to people who are underserved and don't have access, what other issues are we introducing along with that as well? Part of what our responsibility should be is learning, and helping from our learnings when we do that, so hopefully they don't experience some of the same discomforts or challenges that many of us have.
But what it kind of boils down to, when you're talking about digital identity and the use of these technologies, is what that is really going to mean for well‑being and for the human factor. IEEE's mission statement is about advancing technology for the benefit of humanity. And that takes many forms and many aspects, in many industries, in everyday life, but we are really working hard to ensure that we're focusing on that benefit‑for‑humanity aspect and the well‑being aspect of it.
>>URS GASSER: Great. Because you're also involved in the development of the technology we're talking about, what are you most concerned about?
>>SPEAKER: So in terms of ‑‑ great, that information ‑‑ really, it's the technology and the people, right? That's from the side of the users affected, the people affected by the technology. That's the vertical, but there's also a horizontal direction. To move fast and have people benefit from the technologies, AI technology in particular, there's another place where trust has to be bridged: it's really between industries, where this also poses great changes ‑‑ for AI, opportunities but also changes. These days we see, from some perspectives, that some of the especially knowledge‑based disciplines or fields can really benefit from advances in computing or AI technology ‑‑ such as, for example, medicine, right?
A lot of diagnosis is image‑based analysis, prognosis. So I've learned that when we try to, you know, bring that technology or disseminate it to those disciplines or fields, sometimes there are barriers in between. I think we tried to get help, to work with Harvard Medical School, and somehow it goes through. Basically, there also has to be trust across other fields. Also, the automobile industry: the trust in, say, the self‑driving technology ‑‑ would it put them out of business? So again, I hope that people can get out of this mindset of a zero‑sum game ‑‑ "they will take our jobs." Rather, we should build trust in the technology so that you can actually have a win‑win situation, and in fact the industry will benefit from all the new technology rather than, you know, the jobs getting taken away.
So how to use the technology, combined with traditional industries, to create more opportunities, make the industries more efficient, achieve higher standards, quality, and so on ‑‑ I think that's the kind of challenge we're facing today: having AI technology make a bigger impact across different industrial sectors as well as aspects of human lives.
>>URS GASSER: Thank you. And this is a great segue. Irakli, you already mentioned the impact of AI on work and labor. I was wondering, can you share a little more about where the discussion currently stands? I mean, the studies that are circulating propose very different numbers for, you know, what will happen to the future of employment, how many jobs will get lost; others are more optimistic and say, as with previous technologies, it's not only about destroying jobs, it's also about creating new jobs. Where do we stand? What do we know, and where does the knowledge come from?
>>IRAKLI BERIDZE: I think this is a really big issue and certainly requires a larger discussion. Having said that, I would say the challenges are not only with the jobs; there are many, many things associated with this technology. But let's talk about the jobs. There are many different assessments and many different predictions. One that was just released said something like 800 million jobs would be lost very soon, in a few years ‑‑ by 2030, it was. I mean, let's say really what is happening and who's going to be affected by that.
Let's look at the developing world, for example, and how this is going to impact the job market there, and then what other sorts of chain reactions it's going to have. We're not only talking about the issue of the jobs here; I think jobs are also very much associated with the issue of migration, and this is associated with the issue of security and peace in the world at large. So we actually need to look at the really larger picture.
If we look at the job issues, what is happening in the developing world? We have interaction with many governments in the developing world, and the discussion related to automation, or technological automation, is mostly nonexistent there. One thing that really needs to be done is to bring proper understanding to the field there, to the countries, and they would really need to realize how this is going to impact them. The second thing, of course, that we need to bear in mind is the rate of automation, how quickly it will go. And this nobody really knows, because nobody would have expected that bitcoin would reach $19,000 within a matter of days ‑‑ and so a lot of people regret not buying bitcoin, or people will regret that they bought it. Basically, the rate of acceleration is really unknown and very difficult to predict.
And the second aspect would be how we are really going to match up our preparedness with that rate of acceleration. How quickly will we react to that speed of automation? We really need to be working very closely with all these developments, but certainly one thing that needs to be done ‑‑ and needs to be done right now and very quickly ‑‑ is creating a different understanding of these changes, bringing the knowledge to the countries and to the wide spectrum of the world, and finding many solutions. Because right now, what we have is really two sets of solutions at the table. One is universal basic income type solutions, like taxation and so on and so forth.
And the second one is training and education, right? And basically, none of them are bulletproof. It's very difficult to sustain the universal basic income type of solutions ‑‑ obviously, of course, there is something very interesting in it, but it's very difficult in principle to apply.
The same thing with retraining and education. On the education side, you would need to fundamentally change the entire system of thinking behind it. When we're talking about how our kids are born into this AI type of environment, we need to think about what sort of skills they will require, but this requires really a global movement rather than selective movements. Therefore, these are massive jobs which need to be done.
>>URS GASSER: Ping Lang, is there a moment where you want to weigh in as well?
>>PING LANG: I totally agree with our panelists. I still remember that a quite famous Chinese VIP made a prediction some time ago. He said that over the next two decades, people will lose 80 percent of the current jobs. But personally, I'm quite optimistic about the jobs, because when we lose some jobs, we will have some new jobs. Maybe AI might affect employment in the manufacturing sector. But for the service sector, I think we don't need to worry too much, because people need to live, we need to entertain ourselves, and a lot of new jobs may come up. Just like the technological revolutions over the past centuries: every time we have some new technology, a new round of ‑‑ a new kind of jobs will come to us. So I totally agree with my colleagues that the education industry might have a big change. Thank you.
>>URS GASSER: Before we open up for Q&A ‑‑ so please be ready in about a minute or two ‑‑ Karen, do you want to speak to the educational challenge or opportunity in front of us?
>>KAREN MCCABE: Briefly, because I know we want to have time for the audience Q&A and discussion. You know, it's always good when we don't agree ‑‑ you have a little more of a dynamic panel maybe ‑‑ but we do agree. I think education and capacity building is going to be fundamental. But as in any revolution, or however we have evolved over the many times when technology has come in, we have a generation that's being born into it, so they're familiar with using it now.
Whether there's ‑‑ and I really couldn't speak to it that definitively ‑‑ whether there's a gap between our education systems and how they're working on preparing our young children as they're going through school, how the education system is using this type of technology could be a gap. I know when the internet was rising up, and still in some schools, there was a gap ‑‑ we were teaching with traditional white boards, black boards, if you will, and then the technology came on board.
There are still challenges with that, because I can speak just from my kids' perspective. You know, the homework ‑‑ I saw it increasingly become more online. So if you didn't have access or you didn't have the equipment in your home, it was a challenge, right? So, same thing: we have to be cautious that that doesn't happen with these other advances in technology, with autonomous systems or artificial intelligence ‑‑ that we're not leaving this gap where we're expecting people to use it, but yet when the children go home or the worker goes home, they don't necessarily have access to it.
Because by doing that, we kind of keep this gap ‑‑ an unfortunate gap ‑‑ in place. So I don't think there's necessarily one magic answer to any of this. I think it's going to take a lot of institutions and systems kind of working together, if you will. So therein lies a little bit of a challenge, but if we take the example of what we're trying to do at the Internet Governance Forum and how we've taken a multi‑stakeholder approach with that, I think if we apply that to this, we can definitely address some of these challenges.
>>URS GASSER: In that spirit of working together, let's work together. So Larry, if you don't mind introducing yourself briefly. Of course, be reminded, we are being recorded. I'd also like to invite our remote participants to submit their questions, and we will ‑‑
>>LARRY: I'm Larry, the cofounder and CEO of Safe ‑‑ I'm also a journalist with CBS News, and I might write something about this. Every new technology brings about a moral panic, of course. And 23 years ago, I wrote one of the very first booklets on internet safety. And I have to say that many of the things I predicted actually did not happen, and many of the things that I didn't predict, like Russians interfering with elections and things like that, did happen.
So what I'm saying is that it turns out there was reason to be concerned, but we actually were concerned about some of the wrong problems. Having said that, Urs mentioned a few issues like privacy and bias, and mostly the panel has focused on jobs, but I think it's important to address what's out there in the ecosystem ‑‑ the kind of thing Elon Musk is talking about. I wonder if you would address some of the common concerns that people have, where they may be relevant or irrelevant, and maybe some of the things you're thinking about that many of us may not even be worrying about yet.
>>URS GASSER: Who would like to take on this challenge?
>>SPEAKER: I'll step up. I think that one key concern that we have ‑‑ and I think that has been very much present within the panel ‑‑ is that we want to build trust in our technology. We're scared of the implications of the technology, but we don't really talk about what the technology is. It's not magic. There are people creating that kind of technology, and there are many ways of creating that kind of technology. If we want to build trust in the technology, we need to interact with the people designing it. And if there are risks embedded within certain uses of the technology, then we can design it to do better, or we can utilize it in different ways.
One of the main things that we have that artificial intelligence does not is creativity. And this is humanity: we adapt, we adjust, we develop. So in that sense, I think something that a lot of technical people are talking about is really making sure that we design the technology well and talk to the people who design the technology, because it is a human outcome.
>>SPEAKER: If I might suggest: when I give speeches on internet safety, I like to talk about what the risks are, or what people think the risks are, and what is actually true. And I want to hear your perspective on what people are worried about, specifically about bias and privacy.
>>SPEAKER: I won't be able to comment on privacy, but there's one thing, really from the research perspective, that I'm a little bit concerned about. The internet really is connecting people, right? But AI is actually ‑‑ it is emulating some human capabilities, abilities, skills. So that could potentially ‑‑ again, it's also mostly software based, so it can very easily, how do I say, get replicated, disseminated through the internet or whatever.
So some of the skills or technologies can basically end up very easily in some people's hands. Just like any technology, it's a double‑edged sword: you can use it for good or bad. So that could raise some concern ‑‑ some of the machine capability falling into the hands of the wrong people. And it can very easily fall into those hands, unlike physical materials, which are usually much easier to monitor. So those would be the technological concerns I have about some of the technology falling into the wrong hands. That also raises how people can breach privacy, breach security. You know, people can use it to interfere with elections and so forth.
>>SPEAKER: If I may, just to interject. Obviously, this is a huge problem, and security breaches are definitely identified. A few weeks ago we were in China, and the founder of ‑‑ was giving us the fact that every day right now, 300,000 pieces of malicious code are created to breach one or another system. So within a period of three days, we're talking about almost 1 million pieces of malicious code being created. And some of them are by states, and some of them are for criminal purposes.
But this is creating a huge issue. And once you add machine learning technology to it, which is going to serve as an amplifier effect, we might actually end up one day with billions of these types of code created every day. And that would be very difficult to deal with. So this is going to create quite a lot of pressure on security ‑‑ quite a lot of pressure, actually, on our entire society.
>>SPEAKER: To pick up on what Larry challenged us to address, too, right: this is an example of someone actually having bad intentions and creating software to achieve harm, right, or to do harm. But what about some of the unintended consequences? Let's assume that even if we have good intentions, we still build systems that are systematically biased against certain populations. How concerned are you about that from an engineering perspective? And maybe also from a social impact perspective?
>>SPEAKER: HP had a facial recognition system for their PCs. It turns out ‑‑ and I believe the engineers really built it for the best ‑‑ it doesn't work very well with African American people, dark‑skinned people. So that caused quite a social or public setback for the company. Those are the things ‑‑ basically, we can do things out of good intentions, but potentially some get perceived as unexpected, unfavorable products. Those are the things that could raise eyebrows. And maybe there are other aspects we haven't imagined today. That's why a dialogue or discussion like this tries, I guess, to have many people find out about other things we should potentially be worrying about, today or in the future.
>>SPEAKER: I really like your phrase, the moral panic. It occurs to me that the Chinese people ‑‑ the Chinese public got to be impressed by artificial intelligence, I think, last year, when AlphaGo came out and beat the most famous Go player in China. And it made people think that when we have machine intelligence, it can memorize lots of things much, much better than us, and it can calculate very fast. So what's the meaning for people, then? Many things we can do have been replaced by the machine.
So what is there for people in the future? I think that's the fundamental reason for the moral panic, for people when they get to know artificial intelligence. And I think another source of the panic is from the dramas and the films that we watched before, where the machines take control of the human world and the humans totally lose. That's a scenario that is not happening right now, but we wonder whether it will come true in the future. So maybe some fears.
Another thing: some days ago I watched the TV news. The female robot, Sophia, was granted the first robot citizenship by the Saudi Arabian government. And during the interview, Sophia said she wanted a family. And I think that's the moral panic, the concern that we got from those films and dramas. So when we come to artificial intelligence, especially for the experts or scholars or researchers, we need to take a forward‑looking approach to this technology. We might be concerned about giving some emotions to artificial intelligence ‑‑ because I'm not a scientist, I'm not sure whether that will happen. But if they have emotions, or they come into our social life and interact with the real ‑‑ with humans, maybe in some sectors we should strictly forbid artificial intelligence from being used. Because my field is international studies.
I am also thinking that, just as was said, technology has always been a double‑edged sword, just like the internet. If terrorists use artificial intelligence, or artificial intelligence is used for military purposes, what will that bring to us? So I think for those consequences we really need to take a forward‑looking view and be concerned. And of course, as social scientists, we also need to know how artificial intelligence will develop. That's my response.
>>URS GASSER: Thank you. We have one more question.
>>RAMONE: Okay. My name is Ramone. I am the Swiss representative at the general assembly of the ‑‑ an international federation of ‑‑ societies around the world.
And I have three observations; you can transform them into a question afterwards. The first thing is, I was in Paris in 1989. No internet, no social networks, no big data, no IoT, no cloud and so on. And I do some clearing up at home because I am retired, and I looked at a paper, the recommendation of the ‑‑ UNESCO, I think ‑‑ on ICT and education. It was four pages, 15 recommendations, and you could apply it today. But we didn't do anything during 28 years. We were only distracted by the evolution of technology. Everything is already in these recommendations ‑‑ very old recommendations, but quite right for now. You can learn at home, you can learn at a distance, you can have e‑books ‑‑ which didn't exist in '89. And so, okay.
So I think we should be a bit wiser, to take the content and things like that, and not only look at the exponential evolution of technology. That is my first remark.
By the way, I have here a nice article from the Institute for the Future, with projections for 2020, 2030 and so on. And you have a map with six trends, actual trends, and ‑‑ competencies to acquire for the future. Nothing to do with the actual situation ‑‑ in any country. Nothing. Completely ‑‑ educational systems nowadays are producing handicapped people, because we do not have the right ‑‑ we are not in the right vision. That is my second point.
The third point is another silly thing, but I have to say it here when you are speaking about artificial intelligence, which I define as the opposite of natural intelligence. It is a text that we wrote with some colleagues and presented at WSIS in June, and we are now taking it to the CSTD and the General Assembly of the UN. The title of this text is very simple: Human Digital Rights and Responsibilities. And we thought that if the human being delegates to AI, to networks or robots, or whatever, it is the end of the species. We have to keep the power and the responsibility of using AI, ICT, and things like that. Okay, it is not to say it is forbidden, but you have to balance, and not only ‑‑. And there was the three-day conference in June, before the WSIS Forum, the AI for Good summit, okay, where everything is nice. In Geneva, during many decades, there was a restaurant with a blackboard saying, tomorrow ‑‑ is free.
Sometimes I think we are exactly in this situation. So that means we should be a bit more visionary, more critical, and not make the same mistake as with the Security Council. Sorry to say that in this building; I am from Geneva. But the countries selling the most arms are those with the ‑‑ to keep the peace. So do not make the same mistake a second time with robots, AI, and so on. I think we already have ‑‑ here. If you like to use AI or ICT in any direction, for care, for education, for security, for everything, okay, you have the 17 Sustainable Development Goals. So show that what you propose as a usage of the technology does not make things worse by checking it against those 17 goals, okay? And I think if we can declare that we will pay a lot of attention to human digital rights and responsibilities, that is a key issue for the future, and the future is just now.
>>SPEAKER: Thank you. The future is now, but not equally distributed. So if I may, let me take the previous two comments, and also the comment by Larry, and turn them into a question to the panelists. I would agree with both commentators that we have had disruptive technologies before, and that society has coping mechanisms and establishes frameworks, standards, and criteria for how we deal with a new thing that interacts with us.
Yet I also have the feeling, based on our previous conversations, that the next wave of AI-based technologies is different from previous technologies. You already made a comparison to internet technology and where you see differences. And Karen, maybe I can ask you again, because in your previous comment you highlighted and replaced the term artificial intelligence and started to zoom in on the question of autonomous systems, and other panelists also mentioned what that may mean in terms of shifts of responsibility.
We also talked about scale, and someone mentioned the speed of the transformation that is ahead. Can we have one round on what may be different about AI compared to previous technologies, and what that might mean for how we deal with this technology? Okay.
>>SPEAKER: I think it is the algorithmic aspect of it that captures the speed of the technology, but also the use of that technology in very vital systems. You are talking about critical infrastructure, health systems. It almost seems the impact can be expedited. So some think that is where the difference lies, in the sense of just how much it comes down to the data: how much data is going to be collected and how that data is going to be used in such an exponential way. And I think that is where a lot of the concerns are, the speed and the scale.
>>SPEAKER: Maybe I can just mention a few of the points raised in the discussion. First of all, I do believe that it is different; we have never lived in 2017 before. So yes, the analogies with previous revolutions are there to look into, but we have never been here before. We have never had so much data collected, never had so much computational power, and never had so much money invested in it. And we have never had that many people around the globe. The second big difference, obviously, is that while other technological revolutions replaced physical power, this one is actually replacing the cognitive ability of the human being. Whether we will get there or not is a different story. But it is certainly challenging. So it is certainly very different.
Then there are other questions about what we need to do about it. We should not stifle innovation, because the innovation is going to bring a lot of opportunity. On the other hand, we should be very careful how we apply it, because we do not want to turn into something very different. Tim Cook mentioned some time ago: I am not worried about machines thinking like humans; I am worried about humans starting to think like machines. So we should preserve that humanity in us, whatever we do; otherwise, we will be replaced by something else.
>>SPEAKER: I think artificial intelligence is different from other technological revolutions because it is a kind of intelligence. Just before I came here, I looked up how the term artificial intelligence is defined, and I think it is defined like this: artificial intelligence, also called machine intelligence, in contrast to the natural intelligence displayed by human beings, is the term applied when a machine mimics cognitive functions that humans associate with other human minds, such as learning and problem solving. So I think that is the major difference between AI and other technological revolutions. Human beings have been the major actors in our society, but artificial intelligence can also communicate with human beings, and maybe in some areas artificial intelligence can also become an actor in the governance system. And that will cause a lot of problems.
Maybe in some fields artificial intelligence should be strictly forbidden. But for most of the areas where artificial intelligence could be applied, I think that is the problem for AI governance. That is my point.
>>SPEAKER: I think something interesting to consider is that we are at a point of convergence, where a lot of technologies have matured to the point where they can provide us with meaningful changes that did not exist before. I think there is a tendency for humanity to anthropomorphize, to think about technology as something that mirrors us, that is similar to us. And in that respect, artificial intelligence or autonomous systems are similar to us, because they are shaped based on human intelligence, only made stronger.
And that creates a lot of fear, moral panic, and suspicion that this technology is going to replace us or deny us certain things. I think there is a fundamental misunderstanding of where the technology is right now, and also an underestimation of where it could be in the future. That kind of uncertainty gap creates a lot of panic. I think artificial intelligence is different because it is designed to think based on human intelligence, only in different ways.
If we look at AlphaZero, which devised incredible moves in chess that we had never thought about, this is something that scares us, because here is a machine that was designed based on human intelligence but went beyond it. However, this is very rare and very advanced. And I think the really good thing about having this panel at the IGF is that we can combine where the technical developments are with the concerns. If we have that kind of conversation, I think there will be less room for moral panic and more room for constructive and inclusive discussion.