The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> OLGA CAVALLI: Hello.  Good morning.  Bonjour.  Good morning.  We will start in -- if I see that clock over there -- in three or four minutes.  Thank you very much for being with us this morning in this first main session of the 13th IGF here in beautiful Paris.  My name is Olga Cavalli.  I come from Argentina, and I am the academic director of the South School on Internet Governance, and my dear colleague, Vladimir Radunovic, will be collaborating on this session with all of you, and we have a beautiful group of panelists that I will introduce in some minutes.

     Let's keep in mind, if you want to take the floor, you have to press a button.  Maybe you can test it now.  It's a button to the right of the mic.  It turns on the red light.  When it's on -- so if you want to take the floor when we have time, the idea is to make this session really interactive, to have input from our distinguished panelists, input from you, from the audience, so don't be shy, share your thoughts and ideas with us.

     Also, let's keep in mind that we have one hour and 20 minutes for this session, but we have been told that we have to leave the room five minutes before, so I know that the dialogue will be very active and interesting, but there will be a time when we have to wrap up.

     Also, we have remote participation.  We have our remote moderators that will let us know if we have some inputs or comments from remote, so we will start in two minutes.  Thank you very much. 

     Okay.  Okay.  It's 10:00 a.m. sharp, so let's start our main session.

     As I said, for those coming into the room, my name is Olga Cavalli.  I come from Argentina, and my dear colleague, Vladimir Radunovic, will be co-moderating this main session with me, which is about emerging technologies, which are changing our lives.  When we say the concept of emerging technologies, we may think about many things: artificial intelligence, machine learning, robots themselves, and new jobs done by robots, not by us, threatening our jobs; what will happen with our professions in the future; what will happen with mobile telephony; how they will change our lives; how our homes will change with all this technology and stuff everywhere.  So the idea is to shape this dialogue with our distinguished panelists and with you into some questions that we have prepared, basically on the concepts of transparency, ethics, and security, and before starting, I would like to introduce our panelists.  Thank you very much for being with us, and also thank you very much to the organizers of the session, who invited Vladimir and myself to be co-moderators.  It's a big honor for us.

     So here on my left, I have distinguished panelist David Redl, Assistant Secretary for Communications and Information and Administrator of the National Telecommunications and Information Administration of the United States Department of Commerce.  Welcome, David.

     Layla El Asri.  She's a research manager at Microsoft Research.  She's from Canada.

     Satish Babu.  Welcome, Satish, and welcome, Layla.  He is the chair of APRALO at ICANN, and he comes from India.

     My dear friend Maarten Botterman.  He's the chair of the Dynamic Coalition on the Internet of Things, he's a member of the ICANN Board, and he lives in the Netherlands; right?

     And Lorena Jaume-Palasi.  Lorena.  She is the founder of the NGO Ethical Tech Society, and she's based in Berlin right now, but she's from Spain; right?

     Thank you very much for being with us this morning, and I will give the floor to my dear friend Vladimir, who will introduce some new dynamics for you to share your ideas with us. 

     >> VLADIMIR RADUNOVIC: Thank you, Olga.  Good morning to everyone.  Welcome.  As Olga said, we need dynamics at this panel because these are the emerging topics, or the emerging topic, at the IGF.  We were just discussing before the session that actually last year we didn't have that much discussion on AI and all these things.  It's popping up, so we need your inputs on that one.

     We don't have much time for discussion, but we do want to make it as interactive as possible, so that means that we encourage you to press the mic, or raise the hand and then press the mic, whenever you actually want to intervene, pose a comment or a question.  But at the same time, since all of you or many of you are connected or trying to be connected and working from a computer or a mobile phone, use it smartly this time for the session.  So go to menti.com, type the code 17207, and respond to the question.

     The first question is -- there will be a couple of questions throughout the session.  The first one is: what are we talking about?  What are the emerging technologies?  We mentioned some of them in the script.  It's artificial intelligence or AI, it's Blockchain, it's IoT, Internet of Things.  Then there is VR, AR, virtual reality, augmented reality.  I see some of you added quantum computing and so on.  So go add your thoughts on what we should be talking about today, and we'll get back to that in a minute when the word cloud is there, to comment a little bit, right.

     And to start and map the field, since this is an emerging topic as well at the IGF in general and the governments are dealing with that, there are a dozen governments which already have strategies on AI, for instance.  They're mainly focused on jobs and development, the economic aspects, a little bit on security, but there are many other things that we want to touch upon today, which is transparency, accountability, even ethics to some extent, and we want to see what the possible governance models for the emerging technologies are.

     So I want to start with a question today, actually.  Since the states are also seeing emerging technologies as something important, there is sort of a race over who is going to dominate in the new field, the new technologies.  How do states actually look at the emerging technologies?  How do you approach that from, let's say, a light governance approach?  What is the example of the U.S., and what does the NTIA maybe work on in that field?  David. 

     >> DAVID REDL: Sure.  Well, thank you for having me.  It's great to be here in Paris at the IGF.  This is a group and an event that we've supported at NTIA since 2006, the first meeting in Athens, and even before that, as it was generated through the World Summit on the Information Society, so it's exciting for me to be here representing NTIA, the National Telecommunications and Information Administration, of the United States.

     I'll give you a little background to give you some color commentary on sort of where my comments are coming from.  For anyone unfamiliar with my agency, NTIA, we're a small part of the U.S. Department of Commerce.  We have a big role when it comes to Internet policy.  Our approach to policy work has been guided by our longstanding commitment to the multistakeholder process, free and open Internet, minimal barriers to global exchange of information, and throughout the government, we believe in these sorts of bottom-up, transparent multistakeholder approaches to Internet policy because that's how you end up getting lasting and really permanent change in the space, so that's why I'm here today to talk about the work that we're doing at NTIA and across the U.S. government.

     I'm also excited to talk about the promise of these emerging technologies that are, I'm hoping, starting to populate on the screen behind me.  NTIA works with other agencies in the Federal Government, as well as other multistakeholder groups, to develop policy in areas like privacy and cybersecurity, and those have a direct impact on emerging technologies, like artificial intelligence, that we're all here to talk about. 

     We also play a significant role in the expansion of connectivity.  Part of our mission at NTIA is to expand the use of spectrum and broadband around the country because most of these emerging technologies will be heavily reliant on being connected to the Internet, so having parts of the world or parts of our countries that lack basic connectivity to the Internet will hinder the ability of these technologies to expand and grow and to be brought to bear on the challenges that face all of us.

     Domestically, NTIA has made progress on a number of these issues through multistakeholder processes, and during these processes, we act as a neutral convener, bringing together representatives from stakeholder groups and fostering discussion on the policy problems.  Outputs from our multistakeholder processes have taken different forms, including best practices and industry codes of conduct, but what matters to us most is that the stakeholders themselves develop them on a consensus basis.

     Last year we convened a collaborative effort that produced recommendations for guidance on how to make sure Internet of Things devices were upgradeable and patchable, and prior to that we worked on improving coordinated vulnerability disclosure.

     Currently we're working on bringing stakeholders together to talk about software bills of materials and how to be more transparent about the software that's in the devices we use every day.

     We believe a stakeholder-driven policymaking process is key to fostering ingenuity and promoting technologies, whether they be established or emerging.  These processes enable development of controls and best practices in a manner that's as flexible and speedy as the technologies that we're seeing today.

     We've also been engaging with stakeholders to better understand how to adapt consumer privacy to today's data-driven world, which will only become more important in the coming years as technologies evolve and billions of connected devices come online.  That will lead to vast amounts of data being generated, and we need to be prepared to act accordingly.

     We're looking at how we can provide high levels of consumer protection while giving businesses legal clarity and, importantly, the flexibility to continue to innovate.

     With AI still in its early stages, the U.S. government is focused on research and development, removing barriers, particularly regulatory barriers to implementation, and training our workforce for the future.  We're also taking steps to leverage AI and machine learning to improve government services.

     NTIA is engaged on a number of AI-related activities across the Federal Government.  Earlier this year, the administration formed a White House Select Committee on Artificial Intelligence to coordinate efforts across the Federal Government around AI.  The committee has asked for feedback on potential updates to our National Artificial Intelligence Research and Development Strategic Plan to ensure that U.S. R&D investments remain at the cutting edge.

     I'd also like to mention that our sister agency at the Commerce Department, the National Institute of Standards and Technology, considers AI a strategic priority.  NIST, as they're known, is uniquely positioned to deliver tools to measure and understand AI technologies and their underlying data to broadly address the performance, trustworthiness, and reliability concerns that may otherwise stifle innovation.

     NIST is working with academia, government, and industry to develop standards for the design, construction, and use of AI systems.

     As we engage with our colleagues domestically on these and other emerging technology issues, we're excited to work with our international partners on expanding adoption of, and trust in, artificial intelligence technologies.

     Along with the U.S. Department of State, NTIA is participating in the OECD's Experts Group on Artificial Intelligence, and we're also actively following developments in the G7 process, especially with Japan's leadership, and the G20 digital economy consultation process.

     The Experts Group at OECD is comprised of academics, government officials, and technologists who are developing high-level principles for how to enable artificial intelligence adoption in all of our economies.  We believe this work has the potential for making a meaningful statement from governments on how to approach policymaking for an AI future.

     I think that's a pretty good overview of where we are in the United States.  I'm excited to be here and to hear the perspectives of my fellow panelists, and I look forward to further questions. 

     >> OLGA CAVALLI: Thank you very much, David, for sharing with us the activities that the United States government has undertaken in relation to emerging technologies, and also, thank you for reminding us that if we don't have an infrastructure that enables this, we won't have this in many parts of the world, so this is something important to keep in mind. 

     >> VLADIMIR RADUNOVIC: Olga. 

     >> OLGA CAVALLI: Yes. 

     >> VLADIMIR RADUNOVIC: One interesting note is what I noticed when you were mentioning the topics and the angles that you have to cover: it's connectivity, privacy, security, transparency, consumer protection, trust.  It seems like it's the same old things now packed in a different framing, which is quite interesting.

     I guess we'll get back to that, which means we can use some of the things that we already discussed previously. 

     >> OLGA CAVALLI: Exactly.  Thank you, Vlada, for that.

     And by the way, let's focus on transparency and accountability.  What will happen when all these emerging technologies are surrounding our daily lives, our work, our industries? 

     So I would like to go to Satish.  He's an experienced colleague in Blockchain.  We have heard a lot about Blockchain, that it will change our lives and bring transparency and accountability for everything that we do in life, and that it's the new Internet that we're entering, but maybe you can share with us your views about it.  Thank you. 

     >> SATISH BABU: Thank you very much, Olga, for the opportunity to be here today with all of you.  I see that you've mentioned Blockchain and cryptocurrencies, and I'll very quickly comment on two things, one from the technical perspective as a programmer of many years, the other from the trenches, what may be called the grass-roots -- what this means to us -- and then pose a question for the government.  From a technical perspective, what Blockchain has done very briefly -- it has done many things, but most important to me is that it has broken the cyberphysical barrier, and that needs to be understood definitely.

     The cryptocurrency perspective, which means -- earlier, as a programmer, when I was programming an e-commerce site, the last bit of it, the checkout, is dealing with the so-called real world because that's money, and that used to be a separate step with lots of security and stuff like that.  That is because the programmers' domain had never had any money inside it; money was outside this whole cyber world, in the physical world. 

     What the Blockchain has brought in is that every entity on the Blockchain, whether it is an individual or an entity or a smart contract, has got an address and a balance of money.  Today when you program an e-commerce site, it's just a matter of a transfer.  You don't have to go to a third party; it just bridges that cyberphysical divide, and this is a very important thing that it has done.
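
     (To illustrate the point, here is a minimal sketch of such a checkout-as-transfer, assuming the web3.py library against an Ethereum-style node with hypothetical unlocked test accounts; the endpoint, accounts, and amount are illustrative, not the speaker's own code, and a real system would handle keys and errors properly.)

```python
from web3 import Web3

# Connect to an Ethereum-compatible node (hypothetical local endpoint).
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))

buyer = w3.eth.accounts[0]     # hypothetical unlocked buyer account
merchant = w3.eth.accounts[1]  # hypothetical merchant address

# The entire "checkout" step collapses into one on-chain value transfer:
# no separate payment gateway, because every address carries a balance.
tx_hash = w3.eth.send_transaction({
    "from": buyer,
    "to": merchant,
    "value": w3.to_wei(0.05, "ether"),  # the order total
})

receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("Payment settled in block", receipt.blockNumber)
```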

     Looking from the perspective of the grass-roots, one of the communities I work with is small fishermen.  For a small fisherman to get a loan, he has to make five trips to the bank, with a local leader or parish priest, over a three-month period to get a loan, and he has to have the collateral.  If you have crypto lending, which is now coming up, this whole thing can be done in maybe ten minutes, certainly within a day.  The promise of the Blockchain is actually very exciting to many of us, but it leads to a question.

     If you look at which parts of the world Blockchain and cryptocurrency are legal in and where they are not legal, you see a very sharp north-south divide vis-a-vis cryptocurrencies.  While the developed nations support open access to cryptocurrencies, the developing world does not so much.

     The role of the government as a steward, the stewardship role, is actually quite critical.  Many of us feel they should open up, the government should open up.  In my country, India, there is a ban on cryptocurrency, but for Blockchain, the field is open.  Some of us feel they should be more liberal, not only there but in many countries -- though, I mean, there is adequate reason to be cautious from the government side.  This is because a lot of people are going to lose money because of the scams, you know; many of the ICOs apparently are also gray-area kinds of scams.  Some of them are genuine; some of them are not.  The whole sector's got a bad name.  Governments are naturally very cautious about it, so the question is: how are we going to achieve the promise of the Blockchain if the governments are going to hesitate to legalize some of these technologies?  I will leave it there and listen to my fellow panelists for more.  Thank you. 

     >> OLGA CAVALLI: Thank you very much, Satish, and also for bringing the perspective of cryptocurrencies, which is in our minds here and is also being watched very carefully by regulators, central banks, and other stakeholders in our society.

     Lorena, I would like to ask your perspective, being the founder of an NGO focused on technology, your view about accountability and transparency related to artificial intelligence and these emerging technologies. 

     >> LORENA JAUME-PALASI: Thank you.  Thank you for your question and thank you for having me here.

     Well, I'm sort of cautious when it comes to this discussion, on thinking that transparency is going to solve all the problems that we have and all the potential conflicts that we have.  Transparency is not a value in itself; it's a tool.  It's something that we use for a specific purpose, so when we talk about transparency, we need to first be clear: transparency to whom and for what purpose?  And it's only the first step -- a second step is needed.  If it's transparency to have more accountability, then you need to provide the tools to have a check and balance and to make people -- the actors behind intelligent systems, so to say -- accountable for it.  So transparency is just one very first step in a more complex set of rules and norms that we need to make AI feasible in a way that potential risks are contained on the one side, and the potentials of AI and these new technologies are used on the other side, because I think that there's a lot of potential behind there; but just because there's so much potential out there does not mean that it does not need to be evaluated and overseen.  But overall, I think that when we talk about AI, we are having a discussion that is still not at the peak of a more concrete, more sophisticated debate, and I think that's needed.  We are very much concentrated in a state where we're thinking of AI as intelligent; we are humanizing the technology.  There's a lot of fear behind this technology, and this is sort of focusing the conversation on the assumption that when everything's transparent, then it can be contained, and the transparency implied behind that term is very unclear to me.

     Some people say it should be the code, and I don't think that for Civil Society having a transparent code is going to help them on a very basic level.  For developers, for the technical community, of course it's interesting.  But when we talk about transparency on the other side -- we also think on a political level, where there are many expectations from the political side -- transparency is meant as a sort of explanatory declaration: thus, this technology does something that is like human action, and what are the potential risks?  And I think that assumption, that implication, is good enough to have a more social discussion about AI, because in the very end, when we talk about accountability in artificial intelligence, what we mean, actually, is not to contain the technology in itself but to understand the human use of technology and how the potential human conflicts that are emerging through technology, or that are being amplified by technology, are being dealt with. 

     >> OLGA CAVALLI: Thank you very much, and you're opening the space for some other comments that we'll make in a moment about ethical implications of artificial intelligence.

     Before starting this session, we didn't know whether to say machine learning or artificial intelligence -- we were thinking about which was the best way of naming it, and we thought that artificial intelligence was better.  Thank you very much, Lorena.

     Any comments from Maarten or Layla?  If you want to just --

     >> MAARTEN BOTTERMAN: On the ethical aspect -- the ethics expert is the one that comes up next -- this all comes up enormously fast, and there's no way we can preempt everything that's going to happen, so how do we make sure that we're not creating problems in the future that become unresolvable or really need major investments to resolve?  Basically, I think taking ethics into account from the outset is crucial there, because developers and employers need guidance, and the main guidance is to continue to consider that it's all about people in the end.  And if you look back to the past, we see a period of rapid development in which time to market was the main gain because there was most to win, and I think when we move from where we are to more intelligence, like in artificial intelligence, et cetera, we cannot afford that.  We need to be more pro-people in the development itself.

     Now, sometimes the sounds in the industry are that this will stifle innovation, and I think it's our time to find, in platforms like this, ways forward in which ethical approaches, ethical ways forward, are not there to stifle innovation but in a way to guide and enable innovation.  So ethical considerations are things that we increasingly need to address from the outset, and we need to talk about what that actually means, because "ethical" is a word that has different meanings in different cultures and different legal frameworks.

     Now, also, ethical goes in two ways.  One is: does it give people the transparency, which is a crucial element to make something ethical, and user choice?  Do we secure it well enough?  If it's not secured, you can make it do whatever you want, but it won't hold.  And last but not least, and increasingly also, as David indicated in his opening, privacy by default in the thinking of developing tools and services is on the one hand.

     The other hand is that it's available where it's needed as well, and I think later this week we'll have more sessions where we talk about achieving the Sustainable Development Goals, in which technology will play a major role as well, so --

     >> OLGA CAVALLI: Thank you.  Thank you very much, Maarten.  Before passing to the next question, I would like to have a sense from the audience.  Any comments, questions from the audience?  You are very quiet. 

     >> VLADIMIR RADUNOVIC: In the meantime, we did manage to locate the remote moderators, so where are the remote moderators?

     >> OLGA CAVALLI: Someone is pointing to you. 

     >> MODERATOR: Olga, we do have a question on the first row. 

     >> OLGA CAVALLI: No mic?  No mic.  Okay.  Can you say it loud? 

     >> AUDIENCE MEMBER: (Off microphone)

     >> OLGA CAVALLI: Oh, no connectivity.  I thought it was no mic.  Sorry.

     >> We have another question on the left.

     >> VLADIMIR RADUNOVIC: We have another comment here. 

     >> OLGA CAVALLI: Yes, please.  Siva, welcome.

     >> SIVASUBRAMANIAN: My name is Sivasubramanian.  I'm from the Internet Society in Chennai.  All these new technologies and advanced technologies, emerging technologies, are exciting and promising, but are we getting so excited with emerging technologies that we are making technologies that ought not to be obsolete become easily obsolete?  Why is it impossible for us to call an Uber unless you have a mobile phone?  And sooner or later, it's going to be impossible to drive unless you have GPS, and you can't access your own data unless you have Internet and you're connected to the Internet, and in many ways, we're forgetting -- there is no backward compatibility in the technologies that are being developed. 

     And then the main point is that what ought not to be obsolete becomes easily obsolete.  We have to develop technologies with all of this in view, and from that perspective, we'll also have to preserve the older technologies and make the older technologies compatible with the present technologies. 

     >> VLADIMIR RADUNOVIC: Thank you.  I think it's quite important when it comes to developing countries in that context, particularly because of this dependence on the older technologies as well and trying to leapfrog to the new ones; right?

     To get to the question on the floor, a couple of interesting points that you also raised in the meantime: there's a big human dimension in all of the code, and we're talking about code more and more.  Connectivity is something that we used to be discussing; now we're focusing on code, and I think, Maarten, you mentioned a couple of times, and the others as well, developers, so the people that are basically developing the code.  And one of the questions is: who do we ask for that transparency and accountability?  Is it actually the developers, or is it someone else?  And what is a different mechanism on top of transparency?  How do we ensure that we have more choice once we have transparency?  A couple of interesting questions that you raised.

     Before I pass the floor back to you to see whether there are other comments, can I ask the tech guys just to move on to the next slide.

     So we had quite an overview of technologies over here, with someone promoting 5G heavily, which is good, but also many, many other aspects of the technology, and the next one will be the risks vs. the opportunities, or the potentials, of the technologies, so if you can move to the next slide, I would appreciate it.  Yeah.

     So you can go back -- not to vote but to express your opinion on whether, based on at least these technologies, you see them as high or low potential and high or low risk, and then we'll --

     >> DAVID REDL: Vladimir, while people are voting, I'd like to chime in about values.  This is an important point you raise, and it's one we've talked about in the United States, frankly.  Look at autonomous vehicles.  My dad's '67 Camaro is never going to have autonomous technology in it; he would never allow it to happen.  As we look at these things, there are ways to look at the different technologies, ones that we are choosing to adopt and ones that we are choosing to retain because we don't want to adopt, and I'm not sure that they have to be an all-or-nothing proposition.  You know, when it comes to AI, I have an Amazon Echo in my house.  My son -- I hear comments on fear.  My seven-year-old son loves it.  I don't think he's afraid of Alexa.  He actually embraces it pretty well.

     While she has taken over some aspects of controlling our house -- thermostats, some light switches -- it's not all aspects, not all the light switches and, frankly, not all the thermostats.  To the point you're making about the value determination of how you want to employ technology in your life, I think as we look at the technologies that are emerging, it's important to remember they're not going to be ubiquitous; they're going to be individual decisions.  Rarely is there a technology we are forced to interact with without options.  I think that's an important thing to remember as we make these value judgments. 

     >> OLGA CAVALLI: That's an interesting comment.  Yes, they are not going to be ubiquitous, but sometimes I, at least, have the feeling that you cannot do new things without some tools.  I was in Spain two weeks ago, and we wanted to make some visits, and we couldn't get in because everything had to be booked online three or four days before, and I didn't realize that because I had no time, so I said, oh, every time I go to a place, I have to check if I have to get my ticket online.  So I totally agree with you, but it's changing so rapidly; I think that the main challenge we have is that these changes are impacting our lives so quickly.

     You want to introduce the next question, Vlada, or do you want to check the -- oh, it's moving. 

     >> VLADIMIR RADUNOVIC: We can get back to that after a while when we get more comments.  You can go on with that, if you wish.

     >> OLGA CAVALLI: So we have another question for our panelists and for our audience: how are ethics to be considered from a policy perspective?  It's in our minds all the time: what will happen with autonomous machines, what will they do, will they perform well, do they know what they do?  They are machines in the end.  Can ethical considerations be enshrined -- oh, that's very difficult to say -- enshrined in these technologies, and are there relevant approaches that could be shared as best practices?  And I would like to, again, call on Lorena for her comments about ethics and algorithms and artificial intelligence. 

     >> LORENA JAUME-PALASI: Well, thank you.  No, you cannot enshrine ethics in code.  There's no way to do that.  Why?  Because the way code is made is a very inductive way, at least the way we're working with algorithms right now, be it simple algorithms, be it more complex algorithms being used.  So the nature of an algorithm is a mathematical nature.  The nature of ethics is a deductive nature; it is a language-based, social, reference-based idea, which is just totally the opposite.

     What algorithms can do is pretty much statistics.  It's pretty much an approximation, and it's not an approximation of reality but an approximation of what a specific coder has perceived as the reality.  So when we do statistics, it's not about quantifying in an objective way.  When we do statistics, what we do is enshrine in mathematics our social concepts of optimization, of fairness, of effectiveness, of any specific context, because mathematics is a language on the one side.  But what these mathematical formulas can do is very inductive, in the sense that they can only show what we are trying to compute from that perspective, that subjective perspective, on the one side, and it does not necessarily mean that this is the reality.

     On the other side, it is -- it is something that does not understand, that isn't able to contextualize, while when we human beings do ethics, ethics is pretty much about contextualizing, about breaking down the specific context whether a specific law applies to a situation or not.  This is the reason why a judge needs to know a lot about the whole context to understand how to decide on things, and this also applies for ethics.

     So technology is unable to contextualize.  What we can do, of course, when it comes to technology, is try to develop a systematic methodology to scrutinize technology, to look for ethical gaps, to try to understand what the metrics behind it are that might have an impact on a specific part of an ecosystem where this technology is being applied, but that's something different from enshrining ethics in code.

     Now, what we also need when we talk about ethics is two things.  First, ethics is not law; it has a different function.  Ethics is the dimension where society comes to terms, where society needs to decide what it thinks is fair, is just, and whether it needs to be enshrined in law or not, but that's a source that is always in flow -- it's like a river, it doesn't stop, it changes constantly -- and it's very much dependent on the culture.  And what ethics means is also very different with regards to the ecosystem of these technologies, and it's not only the coders.  When we talk about applying artificial intelligence, we usually have a set of people that starts with a mathematical formula, which is usually not made by the coder but maybe by a mathematician.  Then it's translated into code.  Then we have data scientists in the process.  But then we also have the bank managers, the marketing managers, the people from a company or from a government deciding to apply that technology in a very specific part of a process.  So the people involved in that process, who might not be coders and might not know anything about statistics but are interacting with this technology, are also shaping the technology and are also making decisions, and they also have to share in accountability for that.

     So when we talk about ethics with regards to professional ethics, that is much needed, because we have ethics for law, we have it for psychology, for medicine, but we don't have professional ethics in marketing, we don't have professional ethics in engineering, and this is something that we need to start thinking more and more about, in the same way we've been thinking about it when it comes to psychiatrists or psychologists or lawyers. 

     >> VLADIMIR RADUNOVIC: A quick comment before passing on to Layla or back to you.  So you basically said it: we kind of enshrine ethics in the code in a way, but if you boil it down to particular examples, like back doors in a software, or programming the duration of the software -- we had this case with Apple in Italy -- or collecting the data, these are all decisions which are in the code, because the code is actually what we want the software to do, right?  But it's good that you broke it down into a couple of areas of responsible people in the whole structure, so I think that's an interesting question again on a particular example: how do we make sure there is no back door -- there is no, you know -- particular examples. 

     >> OLGA CAVALLI: Thank you, Vlada.  I understand from your comments that it's a layer, but it's soft; it depends on the different stakeholders and the stage of development of that technology.  It's very interesting how you phrase it.

     Layla, your perspective about this ethical dimension from a private-sector view. 

     >> LAYLA EL ASRI: All right.  Thanks for having me on the panel.  I am going to agree with Lorena on a lot of points and follow up with a few points as well.

     First of all, I agree that ethics is really complex because it changes over time, it's different across different cultures, et cetera, so it's not something that everybody agrees on.

     And second, Lorena was right when she said that the big question is, when you build an AI, how can you formulate ethical considerations into a mathematical formulation, which is what every AI basically boils down to -- numbers and functions -- so how can we make ethics a mathematical expression? 

     So one example of success within the research community in AI is fairness.  Fairness is something that most people agree on: algorithms should be fair, and by this I mean that they should treat different groups equally.

     So let's say you have a model that does automatic speech recognition, like the models that you can find in Cortana or Siri; when you talk to them, they should understand what you say, and the idea of fairness here is that the model that you build should work equally across different genders, ages, accents.  If you want your model to be accessible, if you're going to make it accessible for everybody, then it should be fair across the groups.  So the mathematical formulation for this is: you look at the error that your algorithm makes, and you make sure that the probability of making an error is the same for the different groups of interest.  So that's one example of success in building ethics into a model.
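
     (A minimal sketch of that formulation, with hypothetical toy data: compute the error rate per group and check that the gap between groups stays within an agreed tolerance.)

```python
import numpy as np

def group_error_rates(y_true, y_pred, groups):
    """Error rate of a model's predictions, computed per group."""
    return {g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)}

# Hypothetical labels, predictions, and a group attribute (e.g., accent).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

rates = group_error_rates(y_true, y_pred, groups)
print(rates)  # {'a': 0.25, 'b': 0.25}

# Fairness check in the sense described above: error probabilities
# should be (approximately) equal across the groups of interest.
gap = max(rates.values()) - min(rates.values())
assert gap <= 0.05, f"unfair model: error-rate gap {gap:.2f}"
```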

     Then the second aspect that I would like to talk about is building models so that they can work with humans, and here ethics plays a role as well.

     So at Microsoft, we are trying to have -- we have a really human-centered vision of AI, and we build models that help humans in decision-making, and so the ethical responsibility that we have here is that our models should communicate efficiently with the humans that they work with.

     In a domain like healthcare especially, if you have a model that helps you make a decision, it should be very clear about its own uncertainty, and that's a big technical challenge, because right now our machine learning models are not very good at communicating their uncertainty, so that's a big topic, and that's what's going to help us make our models more ethical in the ways that they work with humans.
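
     (One very simple way to sketch this idea -- with hypothetical numbers, and as only one of many approaches to uncertainty, alongside techniques such as ensembles or Bayesian methods: have the model defer to a human whenever its predicted probability is below a threshold.)

```python
import numpy as np

def predict_or_defer(class_probs, threshold=0.8):
    """Return the model's decision, or defer to a human when the
    model's confidence (max class probability) is too low."""
    results = []
    for p in class_probs:
        label, confidence = int(np.argmax(p)), float(np.max(p))
        if confidence >= threshold:
            results.append(("model", label, confidence))
        else:
            results.append(("human", None, confidence))
    return results

# Hypothetical class-probability outputs of a diagnostic model.
class_probs = np.array([[0.95, 0.05],   # confident: the model decides
                        [0.55, 0.45]])  # uncertain: routed to a human
for who, label, confidence in predict_or_defer(class_probs):
    print(who, label, round(confidence, 2))
```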

     And then another big topic is interpretability: can you understand what your model does?  Can you understand when it fails and kind of why?  Our models are black boxes right now.  We give them data; they give us an output.  We can't tell what's happening in between because of the way they're built, because they learn statistical relationships between patterns, basically.  They learn very differently than we do.  We tend to have a very object-oriented mind.  If I see what's around me right now, I'll see this glass, I'll see this bottle, I'll see objects, but the models that we build think very differently.  They won't learn those concepts naturally; they will learn to see the world kind of differently, and they will be able to accomplish a lot of tasks through the learning capabilities that they have, but we need to acknowledge that we kind of live in different worlds, us and the AI.  We don't perceive the world the same way, so we need to build some sort of interface so that we can understand what the AI algorithms are doing and we can kind of understand the way they perceive the world.
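
     (As a toy illustration of such an "interface" into a black box, here is a minimal permutation-importance sketch -- one common interpretability technique, not necessarily the one the speaker has in mind; the "model" below is a hypothetical stand-in we can only call, not inspect.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: we can only call predict(), not read it.
def predict(X):
    return (2.0 * X[:, 0] - 0.1 * X[:, 1] > 0).astype(int)

X = rng.normal(size=(1000, 2))
y = (2.0 * X[:, 0] > 0).astype(int)  # ground truth driven by feature 0

base_acc = np.mean(predict(X) == y)
for j in range(X.shape[1]):
    Xp = X.copy()
    rng.shuffle(Xp[:, j])  # destroy the information in feature j
    drop = base_acc - np.mean(predict(Xp) == y)
    print(f"feature {j}: accuracy drop {drop:.3f}")
# A large drop for feature 0 and a tiny one for feature 1 reveals
# what the black box actually depends on, without opening it up.
```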

     So accountability through interpretability, as well as communicating uncertainty, is what is going to help us in building AI that can help people efficiently in automating decision-making, and so there's a lot of ethical responsibility in making sure that when we deliver an AI that will work with humans, this AI is reliable, the human can understand what it does, and the human can also understand when it's not sure about its predictions, so that the human can make the final decision.

     So those are kind of the two aspects of building ethics into AI: building ethics into AI through fairness metrics, making sure that the AI will treat different groups equally -- and maybe there will be more examples in the future of building other ethical considerations directly through the data and through the metrics that we build our algorithms to perform on -- and also building efficient and safe collaboration between AI and humans. 

     >> OLGA CAVALLI: Oh, that's very interesting, especially that concept of we don't know how they learn.  We know how we learn, but we don't know how they learn.  That's a little bit scary. 

     >> VLADIMIR RADUNOVIC: A quick comment, which struck me a little bit, is the reality check of, for instance, fairness -- to what extent the business models matter.  One thing is programming; the other thing is business models.  If you have a huge Spanish-speaking society which can pay for a software, it's probably more likely that speech recognition is primarily going to recognize the Spanish accent in English than that of someone from a smaller community, so the question of reality is also something we should reflect back on.

     But I know there were a couple of comments here and then probably back there.  Olga. 

     >> OLGA CAVALLI: I think Maarten wanted to add something. 

     >> MAARTEN BOTTERMAN: Yes, very much so.  I think you highlighted it very well.  Ethics means something different whether you're developing in a national context or whether you're doing it at a global level, because at the global level you have to deal with multiple values and different legislative systems, et cetera, which means that if you develop technology for the world, it's not one guidance you have, so you need to make sure that it's transparent, that its user has a choice, and that it's clear what that choice and its consequences are.

     As Layla was saying, well, fairness is a generally accepted principle, and I think you see the same for privacy as being a generally accepted principle, so you can take that into account.  But if you move forward to AI -- and I see it out there being almost as much of a risk as a potential -- how does AI learn, how does AI learn to become even more intelligent so that it's beyond our intelligence?  And I just want to refer there very quickly to what Mo Gawdat said.  He's the former chief business officer of Google X, and he has stepped out and is now doing a project which is called One Billion Happy, because he says that in the end, AI -- the deep AI thing -- will learn from our behavior, and if it looks at us, what will it learn?  So maybe happiness is a good guidance there. 

     >> OLGA CAVALLI: Maybe not everyone behaves the same way. 

     >> VLADIMIR RADUNOVIC: You know, it's like with kids, you have to be the role model, so that's the same thing.  I think Satish wanted to comment, and then we can get back to the people in the room. 

     >> SATISH BABU: Thank you very much.  Two points really quickly.  The first is about accountability and this whole dichotomy between developers and users.  As far as Blockchain and crypto go, the foundation they are built upon is open source, and open source is actually a kind of community-oriented peer-production process, and it does take into account some of these accountability issues.  But when you move to algorithms, you find that many of these are beyond scrutiny by the public.  Civil Society has no scrutiny powers over the algorithms.  Even more complicated is the fact that algorithms are driven not only by code but also by data, and the data is probably more important in the long run than the code, and on the data, again, Civil Society has no -- or even governments don't have any -- oversight on what is really going in.

     The second point is about embedding ethics into software or code.  Now, there are two interesting projects that are happening.  One is the Moral Machine from MIT, and any of us can take that test.  You can go online, and you're actually calibrating the engine.  You're presented with a number of ethical choices, and the more people take the test, the more moral or ethical the machine becomes.

     The second is an initiative from IEEE called Ethically Aligned Design, where you're talking about embedding ethics into the design process itself, not just the coding or subsequent maintenance.  So it is probably very hard to teach an algorithm compassion or empathy, but it's not as if people are not trying, and there is a fair degree of success at this point.  Thank you. 

     >> OLGA CAVALLI: Thank you, Satish. 

     >> VLADIMIR RADUNOVIC: David wanted to comment. 

     >> DAVID REDL: I wanted to chime in on a couple of things that were said.  I like what Layla said: as we look at these, AI is a tool; people are the ones making decisions.  AI can help you make a decision, but ultimately, we have to remember that it's a tool being employed by people to meet an end, and it's the people that we should be looking to to make sure they're responsible parties in this, not the technology itself.

     I wanted to say, to Lorena's point, one of the more interesting things you noted was that we have ethics for lawyers, we have ethics for, I think you said, psychologists and doctors and others, but we don't have ethics for marketers, and that they can't be done in law.  I don't know that I 100% agree.  I am a lawyer by trade, so, you know, take that with a grain of salt, but even legal ethics vary wildly from state to state.  I'm licensed in two jurisdictions, one is New York State, one is the District of Columbia.  In the District of Columbia, I can make a loan to my client in order to perpetuate their lawsuit.  That would get me disbarred in the State of New York.  And this variation takes place across the United States.  I would note, where you mentioned marketers not having some code of ethics, when it comes to American law, we very much do.  Section 5 of the U.S. Federal Trade Commission Act says you can't market in unfair and deceptive ways, and the U.S. comes after you if you are unfair or deceptive to your consumers, so I think there are ways for us to take a look at some of the models that are currently in law, and the fact that we look at, let's say, ethics -- you know, I think it's more of a legal construct than an ethical construct.  We look at ethics in very different ways across all the different jurisdictions that we have, and trying to find a one-size-fits-all solution here, I think, may be particularly challenging. 

     >> OLGA CAVALLI: I think Lorena wants to react to that. 

     >> LORENA JAUME-PALASI: Yes, because I think it's important to make a distinction between ethics and law.  When we talk about what law says -- even though an ethical implication or an ethical origin was the reason why a specific law was created -- it's still law.  And one of the interesting things about law is that it applies independently of the ethical values and the moral intentions of people.  That's the point of democracy.  Thoughts are free.  It doesn't matter what you were thinking; it matters what you did.  So I think it's important to make this distinction: when we talk about ethics, we do not talk about what the regulation is in different countries, but we talk about what the common values shared in a society are, and that is something previous to law, something that runs parallel to law, and something that is additional to law.  And when we talk about these ethical values on a global level, it's not the government explaining what the regulation is; it's more about what we citizens of this planet think as a common denominator that defines what we expect as moral value -- what the expectations are, not from a law perspective but really from an ethical perspective.  We know, actually, there are many situations where we think that things are unfair but enshrined in law, and that's an example that shows that sometimes law might be legal but not legitimate from a moral perspective.  So I think it's crucial to not expect ethics to become a substitute for law, because that's not the point of ethics, and it shouldn't be the point of ethics.

     >> OLGA CAVALLI: Thank you, Lorena.  We will give the floor to our audience.  Please say your name and your comment or question. 

     >> VALICELA GYOZA: Good morning.  My name is Valicela Gyoza (phonetic) from South Africa.  My take is that you cannot divorce accountability from a human being.  AI, as you stated, is a tool, and whenever decisions are made by a machine, depending on what type of decisions, ultimately a human being has to take responsibility for that.  You cannot point to a machine and say that this decision was taken by the machine, because then there's no recourse in that regard.

     And certain decisions, when they're made -- you know, emotional intelligence comes into play, and in terms of AI, there's no emotional intelligence.  As human beings, like you said, we think differently.  You look at the situation, look at the environment, and then you make your decisions, which, you know, in terms of AI, doesn't happen.  I don't know how we're going to get to that point where we can safely say that we should just look at the decisions made by AI and take them, but my take is that there needs to be an element of human intervention in terms of decisions that have been taken, to be able to say, I'm looking at this, and maybe this was wrongly decided.  I think somebody stated that nobody knows how these decisions are made in terms of AI, so it's important to really understand the dynamics involved in that.

     And my other point is that earlier the gentleman spoke about the issue of technologies -- that we're adopting all these amazing technologies and not thinking of what will happen, you know, in the future, whether we maintain them or not -- but as the world moves, you are forced, as a person, as a company, to actually adopt what's happening.  You look at the banking system, you know.  It's really impossible to do banking if you don't have the Internet.  Everything requires you to use the Internet.  Even if you want to stay off the grid, you have to be online, so I think we need to see, you know, at a global level, how we balance the two, to say that if you want to stay offline, you can stay offline, but I think at the moment everyone is forced to move in the same direction.  Thank you. 

     >> OLGA CAVALLI: Thank you.  Thank you very much.  We have another comment from our colleague. 

     >> MARICELA MUNOZ: Thank you very much for a very interesting panel.  My name is Maricela Munoz from Costa Rica.  I just wanted to say that it's very interesting to listen to the different panelists and their perspectives, you know, law vs. ethics, and I really appreciated what Lorena said in terms of the common values of society vis-a-vis laws, because, for instance, we have not been focusing on certain industries, like the military industry, which is using AI to develop lethal weapons, for instance, and the technology is evolving so rapidly that we as states have not been able to come up with regulations to limit the development of these technologies and to try to have this preventive approach to avoid human suffering.

     And another panelist said the human dimension is very important and the human being needs to be at the center of any developments that we make, and I think that, you know, it's different to talk about developments of AI in the health ecosystem vis-a-vis the military industry, for instance, and there are certain superpowers that are currently developing these types of technologies, autonomous lethal weapons, et cetera, et cetera.  I think it's important that we keep debating over law vs. ethics, but as Lorena was pointing out, I think we have to have a very holistic approach and make sure we're preventing unnecessary human suffering and making sure that ethics is at the backbone of the development of codes and algorithms, which cannot substitute for the emotional intelligence, as my colleague from South Africa was saying, of a human being.  So I just wanted to point that out, and it will be very interesting to hear your perspective on this very particular issue of the military industry.

Thank you. 

     >> OLGA CAVALLI: Thank you very much.  We have several requests from the floor from the audience.  Just to let you know that the lights are very strong, so you have to wave very strongly so that I can see you.  We have others before. 

     >> VLADIMIR RADUNOVIC: Two in the back over there -- three in the back and over there, so we're trying to catch it.  Try to be concise, and don't forget to introduce yourself. 

     >> RAJENDRA PRATAP GUPTA: Yeah.  This is Rajendra Pratap Gupta from India.  So firstly, this is not just interesting, this is also very important.  What you call new technologies appear to me to be necessary technologies for a larger part of the world.  Now, drawing your attention back to the theme, which says the Internet of Trust: from the work that I have been doing with a multilateral body on defining guidelines for digital tools, I feel there is a big challenge in terms of gathering evidence for these new tools, so I think if the forum really focuses on the evidence for these new emerging technologies, this will not just help in adoption and scale-up but will also finally lead to what we call the Internet of Trust.  Thank you so much. 

     >> OLGA CAVALLI: Thank you very much.  We have you. 

     >> TAYLOR BENTLEY: Thank you very much.  My name's Taylor Bentley.  I'm from the Government of Canada, so my intervention actually was both supported and made a little bit problematic by my friend from Costa Rica over here.  I agree with a lot of what's being said on the panel.  Maarten and I have spoken about this before, you know: saying ethics is the enemy of innovation is a strawman argument.  It's more dynamic than that, and I think that's a theme -- that dynamism of technology, of people, of artificial intelligence, of ethics adoption, everything -- and so how we deal with that is not by preempting all of the variations that exist between those but, I think, by establishing mechanisms that are flexible enough to respond.  And I agree with Lorena that they require a little bit more flexibility than law can necessarily build in, and a good example is on things -- you know, the Internet of Things and the privacy concerns and security concerns that are raised.  The mechanisms actually exist in law; sometimes we just haven't found them yet.  You know, we have laws related to textile labeling that could be applied to the Internet of Things, and so the question is: where does the conversation take place where someone can say, oh, well, actually, I'm an expert in textile labeling, and this is what works in labeling, this is what people care about?  So I'd really be interested in the panel's perspectives on what mechanisms can ensure that flexibility and that dynamic and expert level of conversation, so that we can address these changes as they come.

     And, you know, as I said, the military aspect of that is a little bit more complicated maybe, but I'm interested in the panel's views on these mechanisms.  Thank you. 

     >> OLGA CAVALLI: Thank you very much.  We will give the floor to the panelists once we have all the comments from the audience and also wrapping up with some comments about governance, how these technologies can be governed.  We have your comment. 

     >> ALEXANDRA: Hello.  I'm Alexandra (?), and as the name might suggest, my question would be how the emerging technology of today interplays with other topical subjects, like environmental impact and sustainability in the long term.  For example, we have some technologies, like Blockchain, which use a lot of energy, and energy mostly relies on fossil fuels today, and then we have many other technologies, like our smartphones, which use mostly unsustainable materials, so do you see a discourse kind of happening there, and do you think people involved in emerging technologies are really seeing what is at stake and taking action on it? 

     >> VLADIMIR RADUNOVIC: So we have one in the back over there or two.  They're in the back.  And I think we can close there.  One, two, three, four. 

     >> OLGA CAVALLI: Four, and that's it. 

     >> VLADIMIR RADUNOVIC: Now tweet.  Move to tweet.  Okay.  I think the lady in the back.  Yes. 

     >> OLGA CAVALLI: Renata.

     >> RENATA AQUINO RIBEIRO: Hello.

     >> OLGA CAVALLI: Oh, sorry.  Happy birthday, by the way.  It's your birthday today. 

     (Applause)

     >> RENATA AQUINO RIBEIRO: Thank you.  Thank you, everyone. 

     >> OLGA CAVALLI: That's the beauty of Internet. 

     >> RENATA AQUINO RIBEIRO: So it's my birthday.  I get to choose something I really like.  I want to talk about makeup. 

     (Laughter)

     And I did this provocation in the discussion of this main session.  Many of the emerging technologies for facial recognition, and for biometric technologies in general, are flawed and are biased, so at the Internet Freedom Festival, a collection of artists gather once a year to do makeup to fool biometric technologies, because human rights defenders everywhere are being targeted by facial recognition, and we need to answer it in our own way, with the technology we have at hand, which is powder and lipstick.  So it's amazing that such a forward-thinking, future-looking trend has to be discussed within an environment of what makeup means in an urban city, an airport, and so on.

     And, actually, we need to change who we are to be recognized by a facial recognition system, or to be hidden from it, so whatever laws you may be thinking of drafting, or whatever recommendations you formulate, do know that there is a resistance, and do know that the resistance is organized and getting better and better, and also creating technological makeup.  Thanks. 

     >> OLGA CAVALLI: Thank you, Renata.  Happy birthday.

     The lady on the left. 

     >> AUDIENCE MEMBER: Thank you.  I'm from the United States.  I wanted to ask about Cloud computing, an emerging technology we're very excited about in the United States, and about the emerging regulatory challenges it faces, such as data localization, in many countries all around the world; I know Mr. Redl knows this well from the work he's doing at Commerce.  How do we respond to the arguments pushing for data localization, which puts this model under threat and would undermine the whole model of an open, interoperable, secure Internet if countries continue down this path? 

     >> VLADIMIR RADUNOVIC: There was one lady over there and one gentleman over there, and I think we can close --

     >> OLGA CAVALLI: With these two last comments. 

     >> VLADIMIR RADUNOVIC: So lady in the back.  Yeah.  Thank you. 

     >> IOANA BURTEA: Hi.  I'm Ioana from the Media Legal Defence Initiative, and I have a question that follows up on my fellow Canadian's comment a couple of comments ago.  We've seen a lot of talk today about regulation and about possible ways we can take preexisting mechanisms and adapt them to all of these emerging technologies, especially artificial intelligence and algorithm-based initiatives.  My question, or comment, is that we have a lot of different points of implementation for these mechanisms.

     Satish, for instance, mentioned data sets that are increasingly problematic, with potential for unconscious bias, so at the development stage we have one potential point of implementation.  But, as David also mentioned, there's the human element: the people who are the ultimate tool users.  We can see that with things like chatbots, where the user interface has led to some questionable chatbot comments cropping up, or even with simple machine learning technology that has been co-opted by the public for things like deepfake revenge pornography.  So my question for the panel is: at what stage do you think regulation or regulatory practices would be most effective at promoting ethical practices in artificial intelligence and other emerging technologies? 

     >> OLGA CAVALLI: Thank you very much. 

     >> VLADIMIR RADUNOVIC: There's one gentleman in the front in the middle, yes, and I think that's it.  There were two other comments.  Let's see if we can accommodate them.  We have maybe ten more minutes for discussion.  Here. 

     >> AUDIENCE MEMBER: Thank you.  I'd like to ask the panel how you see the economic dimension of emerging technologies, because we're talking about markets, very concentrated markets, digital platforms, some of them with almost three billion users.  We're talking about industries that are near monopolies.  I think this is a discussion, and often I don't see it in the panels, because we're not talking about something abstract; we're talking about services that are provided by Microsoft or Facebook or Google, and I see that the OECD and other international organizations are trying to discuss how to think about antitrust in the digital world.  When we're talking about risks and emerging technologies, we have to see them as a market and ask how governments, law, and ethics can contribute to promoting competition and protecting users in this scenario. 

     >> OLGA CAVALLI: One more comment, I think.  Yes, please, go ahead. 

     >> AUDIENCE MEMBER: Thank you very much to the committee.  Russian Federation.  My question is directed towards Lorena, first and foremost.  As the speakers already said, the ethical aspects are very important, and we don't have an ethics-for-all guidebook or Ethics for Dummies, because different groups and different communities, depending on their origins, their professions, their religious views, can, of course, hold different ethical views and ethical norms as a whole.

     This begs the question of who sets the ethical norms that are embedded in algorithms.  You said that software developers and technology developers create these algorithms, and often there are other specialists involved, for instance psychologists, who are not technical specialists.  Will their ethical considerations be taken into account, and what will underpin these ethical norms?  For instance, the consumers and the software and technology developers might have different views on these ethical conundrums.  So at this early stage, when we still have some time left to grapple with all of these problems -- the problems arising from the fact that there are different ethical views to be taken into consideration -- how do we grapple with that? 

     >> VLADIMIR RADUNOVIC: I know there are a couple more questions, but I'll ask the technical people to move to the next slide, please.  There will be an opportunity for you to leave a short comment, a sort of tweet box, so put in anything you didn't manage to mention.  To wrap up -- I think we have ten more minutes -- we are back to the panel.  Basically, what you all touched upon was governance, and that goes well with wrapping up from a governance perspective, so I will start with David, and then we'll just go around the table with quick comments on all of that from the governance perspective. 

     >> DAVID REDL: Sure.  I'll be brief.  In trying to sum up as many of the comments as possible, I think there are two things that came up that are worth tying together.  One is the question of how we deal with the potential for bias, how we deal with data sets that are unethical by whatever standard we decide on, and the other is the issue of cross-border data flows that was brought up.  I think it's important for us to remember that machine learning is only as good as the data set you feed it, and if we're going to have limited data sets to feed into machines for artificial intelligence, you're going to get limited responses, limited and biased by the set of data you're plugging into it.

     So the more we see done around the world to limit the availability of anonymized data -- data that is stripped down and able to be used in a way that comports with privacy but also makes machine learning smarter and faster -- the more I think that's ultimately to our detriment as a society.

     The second thing -- and I'm going to pivot; I know nothing about makeup, so my apologies to the woman in the back, there are just some things that are beyond my ken, and this is one of them -- is the question that came up of at what point you regulate.  The U.S. government, when it comes to new technology, has since the advent of the Internet and even before that taken a pretty hands-off approach to regulating technologies.  I think the best way to describe it -- and I've used this expression before -- is that in some ways the U.S. government has adopted the Silicon Valley ethos of move fast and break things, but we've taken it a step further and said: move fast and break things, but if you break it, you bought it.  That's where our legal regime comes in, and where the strong laws that the Federal Trade Commission, the Federal Communications Commission, and our Department of Justice enforce provide a backstop for what happens when things start to go off the rails.  So we believe pretty strongly in the United States that if we keep our hands off and let innovators innovate, we're going to get some pretty amazing technology out of it, but that doesn't mean there aren't guardrails in place in the form of U.S. law. 
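     A minimal sketch in Python of the data point above, on an invented synthetic task: a classifier trained on a data set that under-represents one group largely reproduces that skew, performing noticeably worse on the group it rarely saw.  All names and numbers are illustrative, not from any real system.

     import numpy as np
     from sklearn.linear_model import LogisticRegression

     rng = np.random.default_rng(42)

     def make_group(n, shift):
         # Toy data: two features, with the label driven by the first one.
         X = rng.normal(size=(n, 2)) + shift
         y = (X[:, 0] + rng.normal(scale=0.5, size=n) > shift[0]).astype(int)
         return X, y

     # Group A dominates the training set; group B is barely sampled.
     Xa, ya = make_group(2000, np.array([0.0, 0.0]))
     Xb, yb = make_group(20, np.array([3.0, -1.0]))
     model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                      np.concatenate([ya, yb]))

     # On fresh samples, accuracy drops sharply for the under-represented
     # group, because the model's decision boundary was learned from group A.
     Xa_t, ya_t = make_group(1000, np.array([0.0, 0.0]))
     Xb_t, yb_t = make_group(1000, np.array([3.0, -1.0]))
     print("group A accuracy:", model.score(Xa_t, ya_t))
     print("group B accuracy:", model.score(Xb_t, yb_t))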

     >> OLGA CAVALLI: Thank you, David.  Maarten. 

     >> MAARTEN BOTTERMAN: Sure.  Thanks for that.  So many interesting questions; they deserve another hour, and we don't have that.

     So first, on transparency -- and basically Taylor Bentley from Canada brought it up -- the question is how you help people, and there are things out there already: as with everything, if you're doing something digital, an analogue probably already exists in the tangible world, like the washing labels on clothes.  Labeling and certification of new technologies will help us make better informed choices as users.  That's one thing.

     Second thing, technology isn't good or bad in itself.  Whatever we build may have a good side but can be used for bad too, and very likely the other way around.  Privacy is an example of that.  Of course we want what we do on the Internet to be private, and we don't want others to get at it.  At the same time, we do want certain perpetrators to be caught, preferably before they act, and that collides with anonymity.  How do we deal with that?  That will be an ongoing discussion for years to come, for sure.

     How can ethical norms become effective?  David gave a good start there.  He says, well, basically, if you break it, you bought it.  That is in contrast to the European side, where legislation is more principle-based, and I think ultimately these kinds of legislative models will grow together more and more, and the first area where we're likely to see that is privacy, in my belief: we move towards a principle-based front end with a harms-based back end.

     And for now, I'll leave it there, with one last remark: the GDPR has a big impact on the world.  I think the GDPR is there because industry wasn't able to take up the privacy-enhancing technologies that were already there in the '90s, and there was no clear demonstration that it would, so something needed to be done.  That interplay, again, is what I think will ultimately lead to a worldwide approach: a principle-based front end for guidance and a harms-based back end, so if you still break it, you pay. 

     >> OLGA CAVALLI: Thank you, Maarten.

     Layla, your comments. 

     >> LAYLA EL ASRI: Yes.  I will comment mostly on the issues of bias in data and the governance of AI within companies.  There are two forces working towards this within companies.  One is in research.  The research community, which I am part of, is building standards for data sets and data reporting: how you can report on a data set that you just built and made available for other researchers, so that you tell them about potential biases that might be hiding in there, and about the different groups your data represents, so that they know what your data can and cannot be used for.

     There are conferences organized around those topics.  There is the FAT* Conference, which stands for Fairness, Accountability, and Transparency, and it sells out every year.  That is to say there is a lot of interest within the research community, because everybody agrees that AI is becoming better and better and more and more ubiquitous, so we need to make sure that its use is ethical and that its design is ethical.
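     As a hypothetical illustration of the kind of data set reporting described above -- in the spirit of proposals such as "datasheets for datasets" -- a published corpus might ship with a small structured record like the following.  Every field name and value here is invented for illustration; real reporting standards may define different fields.

     from dataclasses import dataclass
     from typing import Dict, List

     @dataclass
     class DatasetReport:
         # Hypothetical reporting fields for a released data set.
         name: str
         intended_uses: List[str]
         out_of_scope_uses: List[str]          # what the data must NOT be used for
         groups_represented: Dict[str, float]  # group -> share of samples
         known_biases: List[str]
         collection_method: str

     report = DatasetReport(
         name="example-dialogue-corpus",
         intended_uses=["training task-oriented chatbots"],
         out_of_scope_uses=["medical or legal advice systems"],
         groups_represented={"en-US speakers": 0.9, "en-GB speakers": 0.1},
         known_biases=["predominantly North American phrasing and topics"],
         collection_method="crowdsourced transcripts, manually filtered",
     )
     print(report)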

     Then across companies and within companies you have a lot of ethics committees.  Within Microsoft there is an ethics committee called AETHER -- it stands for AI and Ethics in Engineering and Research -- and it is consulted on AI-related decisions, whether we sell AI to partners or put AI in our own products.

     There is also the Partnership on AI, which is a big cross-organization institution.  It gathers Microsoft, Google, Facebook, and lots of other companies and startups working on AI, and its role is to work on ethical AI, to make sure all those companies can come together and agree on the right way to do things, and also to educate about AI -- about its uses and how it works -- to empower people to know more about AI, because the goal that we have, and that we share with governments, is to build trust around AI.

     I remember reading an interview the French president, Emmanuel Macron, gave to Wired, in which he said: if I build an ecosystem so that businesses flourish and decide to come to France, and at the same time I make sure that all the citizens can trust the technology these companies release, then I've done my job, I've succeeded.  I think he said exactly the right thing: we need to build trust, and everything we're doing in terms of governance of AI within companies like Microsoft is around this -- building trust with users, building trust with citizens, so that this technology serves the greater good and can be used to help people be more efficient, make better decisions, make more educated decisions.  Those are my comments about the governance of AI within businesses.

     There's a lot of work being done in research, and this work is transferred to executives within companies -- that's the pipeline right now -- and then there is also the work done with governments to make sure that regulations happen when they need to happen.  That's basically my comment on this. 

     >> OLGA CAVALLI: Many thanks, Layla.  Two minutes to go.  Lorena and Satish.  Lorena. 

     >> LORENA JAUME-PALASI: Very quickly.  We already have regulations.  There are tons of regulations regulating algorithms.  They don't say "algorithm"; they might say "method," they might say something about statistics.  The Basel Accords are an international agreement, and they contain very specific regulation of how the banking and finance sector should do its algorithms.  But that is not the point.  We don't need to regulate algorithms, because it's not about regulating the algorithms but the human object of the technology, and we must be careful not to make regulation too technology-oriented, because that is not sustainable: as some of you said already, many of these technologies become obsolete very quickly.  It's more about the social conflicts behind them that need to be addressed within regulation, and, to be honest, there are not many new things being introduced into our society by these new technologies.  There is one thing, though, that I think is challenging.  AI is a collectivistic technology.  Algorithms do not understand individuals; they only understand finely granular collectives.  This means that the way it operates, it classifies individuals into specific types of profiles.  Moreover, it lays a fine technological infrastructure over things we never thought could be changed or converted into infrastructure.

     When we think about Google or Facebook, how come nobody's thinking about infrastructure?  How come we don't think about Google as information infrastructure, or about Facebook as social infrastructure?  Many other aspects of society that we never thought of as infrastructure are about to become infrastructure because of the application of AI.  And this is also a limitation of democracies and of many legal cultures: they do not understand collectives, they only understand individuals, and from a legal-dogmatic point of view the way they try to address harm is from an individual point of view, as we did with consumer protection, with privacy regulation, and with many other laws.  That's a challenge; that's a tension from the regulatory perspective.  But if we think about these new technologies as infrastructure, and start looking at the best practices -- or bad practices -- that we have in infrastructure regulation, we might start asking other questions.  One of them might be privacy, but moreover, when we talk about infrastructure, we talk about the common good, about which parts may be commercial and which may not, about the sustainability of the infrastructure, about what happens if Google disappears.  Who is going to keep this type of infrastructure running?  There might be crucial moments where we need this type of infrastructure, because it could be a catastrophe for society if these types of services somehow disappear out of the blue.

     So -- yes.  Sorry. 

     >> VLADIMIR RADUNOVIC: They'll kick us out.  Thank you.

     >> SATISH BABU: Thank you.  So we see two kinds of thoughts emerging on how to engage with these challenges arising out of new technologies.  We have to keep in mind that this is just the beginning -- we are only at the zeroth iteration of these technologies, and things are going to be very different in the future.  One is to take defensive approaches like the one Renata mentioned, wearing disguises.  Unfortunately, the algorithms today can see through disguises, beards, false hair, and all kinds of things; that capability has been built for national security.

     The other is to engage proactively in a governance model, which could even be called multistakeholder, because we have multiple forces pulling us in different directions, each representing some stakeholders, and it is important for us to start this dialogue.  The IGF, as was mentioned here, gives us an excellent place to start it, and we have to see how we can engage continuously.  This is not a destination we can reach; it's a journey, a continuous process.  Thank you. 

     >> VLADIMIR RADUNOVIC: Thank you.  There are many important comments over there, too, which link to your comment on the IGF.  In particular, one is that it might be a good idea to have a Dynamic Coalition on AI or emerging technologies.  The other is that we should make sure we link discussions on emerging technologies with Internet-related issues, because it's the IGF.

     And at the end, Gonzalo, you have a couple of tweets' worth -- you don't have more time than that -- to wrap up the discussion from the rapporteur's point of view.  Gonzalo. 

     >> GONZALO LOPEZ-BARAJAS HUDER: Okay.  So, basically, one of the main ideas is that ethics is not law; ethics varies by country, by location, by individual.  One of the questions addressed was who are really the people setting the norms of ethics, who is driving those decisions, and what those norms are based on.

     Transparency is not a value in itself; it's just a tool, a very first step to contain the potential political risk of using artificial intelligence.  It was mentioned that emerging technologies are not going to be ubiquitous, so adoption will be an individual decision, but a comment was also made that somehow we are forced to go with the trend and adopt those technologies.

     Also, artificial intelligence and humans live in two different worlds, and we need an interface between them.  And technology is not good or bad in itself; it depends on how we use it.

     So these are some of the relevant messages that we heard through the session. 

     >> OLGA CAVALLI: Thank you, Gonzalo.  Thank you, Maarten; thank you, Layla; thank you, David; thank you, Lorena; thank you, Satish.  Thanks to all of you for your very good participation and active dialogue.  I think we all deserve a big applause. 

     (Applause)

     Thank you.  Thank you very much.