The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>> MODERATOR: One minute.
Okay. Good morning to everybody. And welcome to the Best Practice Forum on IoT, big data and artificial intelligence.
I'm here with Simon and Della. I'm co‑facilitator with Simon. And Wim is supporting the team from the Secretariat.
I will give the chair to Solomon who will explain about the BPF. Thanks.
>> Thank you, Titi. And thanks for joining us here. Thanks to our panelists. I want to give a brief intro about the BPF and how it started. We all know that these three emerging technologies are actually shaping our lives and shaping our future. There are some positive impacts, and there may be some negative uses of these technologies, and challenges as well. We don't know yet what will happen, and these technologies are evolving gradually. So we had a plan to start some basic guidelines, so we can follow some standards and do not end up in a wrong direction.
So from that observation, we started the BPF. We have a mailing list and had a lot of meetings there, and eventually we came up with a pre‑IGF document. You can find it on the website, and based on that, we would like to start the discussion today. We tried to figure out some best practices, best recommendations, and some intergovernmental perspectives.
So the details are in the document, and we will probably discuss them here. I will not go further. I will give the mic to Wim to say a little bit more.
>> WIM DEGEZELLE: Okay. Thank you, Titi and Simon. I'm working with the IGF Secretariat and supporting the work of the Best Practice Forum on IoT, big data and artificial intelligence. It may be important to reiterate the concept of a Best Practice Forum. A Best Practice Forum is not a single meeting organized at the IGF, or a workshop. No, the idea is that Best Practice Forums continue their work throughout the year.
They try to bring together specialists and stakeholders to discuss a specific policy topic, and in this case, it was the combination of the three technologies. I think it was a little complex in the beginning how to combine them, but it might be good for you, before the discussion, to really focus and know that we are focusing on the place where the three technologies overlap ‑‑ or not so much overlap, but combine with each other or are used collaboratively on the Internet. That is really the focus of this discussion, because there are an endless number of applications of the three which have nothing to do with the Internet. So I would like to make that focus clear.
As Simon mentioned, throughout the discussions the BPF has identified a number of practices that are in our draft document, put forward as ideas, and that should also be discussed here and hopefully completed and made more detailed based on this discussion. I'm not going to go through them in detail, but some of these best practices are, for example: be very clear on what you are talking about, because it's very easy to say Internet of Things or artificial intelligence when nobody actually knows what specific part of this whole field you are talking about. Another best practice that was discussed is to try to be technology and time neutral when you discuss best practices, because it's very easy to focus on one specific issue with one specific technology and try to fix that today. We call that a best practice, and tomorrow this best practice or this guideline has become relatively useless.
Other points that were addressed in these best practices underline the importance of collaboration and cooperation with as many stakeholders as possible.
Also think about ethics and human rights if you think about guidelines or guiding principles or best practices for these technologies.
And, yeah, the other important points are transparency, and making sure that these principles are used to support small businesses, so that small businesses can use these technologies and there is good competition between new players and the old players.
This is just a rough overview of what we did. I think the rest of the morning should be way more interesting, as we have, I think, a very interesting panel with different views.
I would just like to repeat again: the Best Practice Forum has been working and has put out its draft report. The input that comes from this meeting will be incorporated in the document, and the document will be published as one of the outputs of the IGF 2018 soon after the meeting. So that means definitely between now and the end of the year.
The idea is that this is not a nice document that has to be archived as a record of what happened at the IGF. No, the idea is that Best Practice Forums give a kind of good overview, or produce outputs, that can be used to inform policy debates that go on in other places. So my question would be, for all of you: take what you hear today, take the output document, and use that as background information when you discuss best practices on these new technologies.
I think it's time for me to be silent and hand over to Alex, who will introduce and lead the first part of the discussion.
>> ALEX COMNINOS: My name is Alex Comninos. We have governments, civil society, academia and business. We will ask each panelist to speak for two and a half minutes on best practices and experiences within their sector or stakeholder environment. And we will start with Nobu Nishigata. Can you tell us about best practices and experiences with regards to AI, big data and IoT?
>> NOBU NISHIGATA: Good afternoon, my name is Nobu Nishigata, and I'm from the OECD. Before coming to the OECD, I was with the Japanese government, working on the development of AI principles for research and development. The OECD is more of an evidence‑based organization, so I'm rather looking forward to hearing from you, including the floor, about best practices so we can build on them. The OECD is working on the development of principles for AI which would foster trust and adoption at the same time.
So actually, I can talk the whole day to introduce the best practices that we found in our analysis, but that's not the way to do it.
Let me raise some points that we found as challenges. To me, the biggest challenge with these technologies, including IoT and big data together, is obviously how to look at the opportunities brought by them. We can expect more opportunities, but we also found ethical challenges and challenges on the personal side.
So in that sense, the biggest challenge is going to be enabling innovation and more opportunities while, at the same time, mitigating or minimizing the risks of the technology. It is not only for AI: once we get new technologies, like biotech or other things, we have to think about these things.
So at the implementation level, a short way to say it is that it will be risk management for technologies: how can we manage the risk in the same way across the whole globe?
So this is the big question, and the OECD is the one developing the principles after the discussion at the G7 in 2016, when in Japan we had the meeting of the ministers who administer Internet affairs and communication affairs, and they proposed to proceed with a discussion on AI in international fora.
So let me just say that there are many, many best practices already, and maybe it's time for a collective effort, necessary for us to share the direction toward the better development of technology for our society.
Thank you.
(Off microphone comment).
>> ALEX COMNINOS: I want to go to Taylor Bentley from ISED and the Canadian government.
>> TAYLOR BENTLEY: I'm with the governance team at our group, ISED. It started for us in 2016, with the largest DDoS attack in history, which leveraged insecure IoT devices used in homes and in businesses. So this is our main focus.
What do we do? How do we respond? Consistent with Canada's best practice of taking a light‑handed approach, and really just trying to develop a framework rather than, you know, legislation on IoT, we partnered with other folks and joined a process that we were a founding partner in, with the Internet Society; CIRA, the dot‑ca operator, which has done fantastic work on security issues; CIPPIC, which is at the University of Ottawa; and CANARIE: the Canadian Multi‑stakeholder Process on Enhancing IoT Security.
It really served as a forum for all Canadian expertise to be leveraged. Canada is always looked at as a champion of the multi‑stakeholder approach for international governance, and domestically it works for very practical reasons. You get the experts in the room together, right? The government does not monopolize expertise in this space. In fact, we are learning quite a bit: the more academics we engage, the more industry partners we engage. And second of all, you get their buy‑in, because they are actually co‑developing the approaches to this common problem with us. With all of us together.
So this has been a really important initiative for us. It's ongoing. We are about eight months in. Planning on developing a year one report by February or March 2019. And then it will likely continue on from there. I'm happy to answer all kinds of questions. I have been talking about this quite a bit through the IGF and really happy to be here on the panel today and to contribute to this Best Practice Forum.
Thank you very much.
>> ALEX COMNINOS: Perfect on time. And next we are speaking to Imane Bello, who works on AI and human rights.
>> IMANE BELLO: I have only one point: the need to advance literacy on AI. I wish that the term AI stops being used, and that we start being more precise about what exactly we mean when we talk about AI. If we want to achieve trust, which is the theme of this year's IGF, we need to nurture understanding and facilitate access to skills, knowledge and inclusion.
I'm not going to dwell on the demand for system transparency, especially when it comes to public actors that are clients of systems, and systems that are used to have a significant impact on people's lives, whether or not those decision‑making processes are solidified. My question is: how do we collaborate in a multi‑stakeholder approach? How do we collaborate in a way that's inclusive, and when and how do we start tackling the challenges of these systems?
Data bias is an issue ‑‑ I think we can all agree on that, but it's also a tool to detect discrimination. So what kind of partnerships do we build? How do we enhance understanding and inclusion?
Thank you. I'm happy to take any questions.
>> ALEX COMNINOS: Okay. So we will be taking questions for the next round after we are done with the next two speakers. Next we have Peter Micek, representing civil society; he's General Counsel at Access Now.
>> PETER MICEK: Unfortunately, from our perspective as a human rights organization working at the intersection of human rights, emerging technologies, and vulnerable and marginalized populations, we see more worst practices than best, especially when it comes to the Internet of Things and the massive new tranches of personal data that these devices and sensors create and collect.
So we do see that more devices means that more sensitive personal data will be produced and collected. The devices themselves are often largely insecure, raising the risk of breach, exploitation and leak of personal data.
We support initiatives like the IoT bill the governor of California just signed into law in September, which requires manufacturers to have reasonable security features. That is clearly broad language, but we think it's a good step, and the law only applies to devices sold in California, of course. California is a leader in regulation, which can reverberate globally. There are instances where the risk of harm from the cybersecurity of deploying IoT devices is too high to justify their use in the first place.
There are situations where connected devices should not be used, including children's toys, gadgets and tech aimed at young people, as well as in many medical devices and situations.
Moving to the AI questions, we have seen a lot of movement on ethics and AI and fairness; however, we have seen scant attention to human rights. Probably this is due to the fact that the human rights community has only recently begun to consider the full range of risks of AI, and there's considerable uncertainty in how to conceptualize these risks. We have also seen worst practices. A new machine‑enabled enforcement of community guidelines led to the removal and deletion of hundreds of thousands of videos and entire channels documenting atrocities in Syria. This is crucial human rights documentation that in many cases was lost forever due to this new implementation, and it has led to a lot of multi‑stakeholder efforts to try to retrieve and repair the damage that's been done.
We have therefore put out a new paper building on the learning so far and showing how human rights can complement existing ethics efforts. Human rights do provide a common framework and agreed‑upon forums that provide access to well‑defined remedies, all of which we believe are a more concrete foundation to move forward urgently to address the risks.
Thank you.
>> ALEX COMNINOS: Thank you very much, Peter. And now offering a perspective from the private sector is Dr. Mike Nelson, a tech strategist from CloudFlare.
>> MIKE NELSON: Thank you very much. Since we have such little time, and my job is to be provocative, I'm going to try to speak in tweets. I recently spoke at a UN conference and gave five myths about the Internet of Things, summarized in tweets, and five myths about artificial intelligence. So search for "six myths" or "five myths," where the number is spelled out.
Let me give you three ‑‑ four very important ones.
First off, I really think it's important that we need to look at the whole system. This paper is special because we are looking at the combination of AI, big data, and the Internet of Things.
Solving a lot of the problems with the Internet of Things is not going to happen if we just focus on the things. We need to look at the network, the gateways that connect the things, and the artificial intelligence programs, the machine learning programs, that can be embedded in the cloud and will control the things. Second point, a big myth: this isn't about a few big companies. These opportunities with these technologies are something that even the smallest companies will be able to use as we build out platforms like Amazon Lambda and CloudFlare Workers. These are technologies that allow the entire network to serve as a computer, to control the devices and to process the information as a system.
The third point is already made very eloquently.
Stop using the term "artificial intelligence." There are 15 different definitions and they are all confused. What we focused on in our report, and what the Internet community is focused on, is big data related to machine learning.
And that's really where the interesting applications are for the Internet.
And then the last point, another myth: that we can standardize and solve this problem. There are so many different types of things and so many different types of applications that there's not going to be one way to make everything work.
Certainly not globally.
So let's think of many different approaches to making this system more secure, and making it more accessible to more types of businesses, institutions and governments. I just want to focus on those myths and urge you to read the report. Even if you only have ten minutes, scan it, see if there's something in there that you want to reinforce, or see if there's something that you really want to reject.
And I look forward to your comments because we're not supposed to be panelists. We are supposed to be discussion catalysts.
>> ALEX COMNINOS: And thank you for catalyzing that discussion. I think a common thread seems to be the need to unpack the terms, and to perhaps be critical of them. There was a question whether IoT, big data and artificial intelligence is too broad a topic. What it points to is an ecosystem that suddenly brings issues to the fore and makes AI more meaningful, makes big data more meaningful and makes the Internet of Things more meaningful. The devices collect the data, and the data feeds the machine learning.
So now I would like to turn to the audience. From the audience, we will take questions for the panelists, as well as reflections on best practices, and I will take three questions at a time, starting with the gentleman in the corner there.
>> AUDIENCE MEMBER: Thank you very much. What you said, the last speaker, is very important. And I think that the most important thing to solve is trust, at all levels. You will have the Internet of Things, so you have things attached to us. How shall we trust the sensors? How should we trust the applications, to use them? Big data, the same. It is all about trust, and I think this is the first thing that we have to address.
>> ALEX COMNINOS: Could I collect a second question?
>> AUDIENCE MEMBER: Hello. I'm here on behalf of a department of the Council of Europe, but I also work back in Romania. So I have two points here. One is: whenever you discuss AI, don't first say "let's not use AI" and then make the whole session about AI, because indeed, we are replacing the terminology "digital" with whatever is the buzzword.
It is important to make sure that all stakeholder groups are involved when we design the technology and the future, and I'm highlighting here the youth as a sector, and youth organizations, to be part of the process. I'm also highlighting that young entrepreneurs right now, whenever you debate about AI, have to make sure that they fit in the box, because everything that gets funding for innovation has to be AI and blockchain or anything of the kind.
Thank you.
>> AUDIENCE MEMBER: Hello. Thank you all. Going off the first comment, I agree and think that building trust with the public is going to be very important. And I have a comment on number six, the best practice proposing to make privacy and transparency a goal. I think that's good.
I think we should extend that to also emphasize trust more broadly, and the need for that to be employed by governments and other stakeholders as well, in addition to just businesses. I think having governments take the initiative in providing transparency is also going to be important.
>> ALEX COMNINOS: Thank you kindly. Mike?
>> MIKE NELSON: I'm very glad that two of you emphasized trust, because that was the one thing I didn't emphasize as much as I wanted to in my two and a half minutes. Transparency is a key to that, but another key thing is competition and choice. People will trust systems more if they know they can leave one system and go to another. The machine learning tools that are being developed are often developed in the cloud, but you have to move your data into the cloud.
One of the problems today is that's not always easy. Look at what was just announced last month with the Bandwidth Alliance: a number of cloud services have agreed that they are going to work together so users of these tools can move data back and forth between different tools and get the answer they need from the best tool they can get.
There might be a need to actually use two or three different tools, and we don't want everybody locked in to one cloud service. The other thing I would say is that trust is so important that it's probably the thread that ties together this whole report. Several of these principles are about increasing trust. So thank you very much for touching on that.
>> Yes, I appreciate it. It's the first point popping up after the introductions. I support what Mike said about the cloud. I would like to take the cloud a little bit further and include people in that: both the people that use it, and the organizations that offer it and should secure it.
So if the people that use it are informed ‑‑ as the gentleman took out of point six, making privacy and transparency a goal helps people to use it and to understand it ‑‑ and if organizations take that responsibility, and not only users, then I think we can create trust around it.
Tech innovation is so fast that it's not easy for people to fully understand it, but let's make sure that we take it into account from the outset.
>> ALEX COMNINOS: Taylor?
>> TAYLOR BENTLEY: All great points, and plus one to all of them. Mistrust is a toxic emotion, but it comes from places of ignorance. I was really sad to hear, though a little bit sympathetic, that a national IGF has a session coming up, "Is Technology Moving Too Fast?"
And that is understandable, because it does feel fast. But they were talking about security issues that the IGF has been working on for 20 years. It's difficult sometimes to understand and to see what's going on. This is a very complicated ecosystem and community, and so how you fix trust is not necessarily transparency ‑‑ or, I would say, we're not limited to transparency. It's conversation. It's dialogue. It's coming to the table, being very open‑ended. I have been very happy to see the response from my Government of Canada colleagues, including from security and intelligence departments, who come very candidly to these types of conversations.
So I think the more we talk to each other, and understand each other, and empathize with each other, the more trust that we'll engender.
Thank you.
>> ALEX COMNINOS: Before we move back to the audience, we will move to a very important stakeholder group, and those are the people who join from the Internet. We have a remote question. Please pass it on.
>> This is from what is the key ‑‑ on not getting solutions from businesses.
>> ALEX COMNINOS: Do we have another remote question?
>> Yes, on AI: where is the best university to study these technologies for francophone people in Africa?
>> ALEX COMNINOS: Okay, we will move to panelists can answer the question and then back to the audience.
>> PANELIST: So as I understand the first question, it is about the unique challenges for the African community when it comes to these main issues. I will focus specifically on IoT, but I think it's applicable to all of them: they are the same issues for all people who want to feel comfortable, enabled and empowered by our technology, and we are all facing the same challenges. I know that in Canada, rural connectivity is a strong challenge that is omnipresent and continuing, and there are a few IoT‑enabled solutions, including community networks and mesh networks, to address these problems.
I think the more that we can all work together across international ‑‑ like, across international borders, the more that we can ensure that we are all part of the solution, because it's a mutual problem and a mutual solution.
>> ALEX COMNINOS: I will move to Peter, but first I will step outside of my role. I'm from Africa, South Africa. So I would say, returning in some sense: we are in an era of ubiquitous computing, available on edge devices and in the cloud. Sixty years ago, when AI was new, you would have to go to a research institute, check punch cards there, and buy time on the mainframe. That mainframe is now the cloud, offering its own privacy, human rights and access concerns. We have a number of cloud services to use, and I think the challenge will be whether those cloud services are hosted and domiciled in Africa; some of the cloud providers are moving there.
Amazon's cloud services were conceptualized in Cape Town, but we are only now getting services coming up there.
And there's immense computing power that's collective. So I think there is a chance to kind of leapfrog certain developmental obstacles to powerful computing. And we'll move over to Peter.
>> PETER MICEK: Thanks. Just two points to add. First, I am concerned about electricity and water use as data centers are increasingly built, in places that are perhaps not best suited to host them. But more importantly, I think data protection is a key challenge across Africa, and is perhaps more pressing than in some other regions.
And Africa did just hold a data protection summit in the last couple of months, which is an excellent step towards enforcing the convention on cybersecurity and data protection, which unfortunately has not been widely ratified, much less implemented into national legislation. Data protection laws and the exercise of basic data protection rights, along with privacy rights, are essential to addressing many of the risks that will arise from the spread of new sensors and IoT tech, and can have a direct role in mitigating risks posed by machine learning and similar technologies.
>> ALEX COMNINOS: Mike?
>> MIKE NELSON: Just a real quick add. He asked about challenges, but some of these challenges are opportunities as well. The fact that the infrastructure is being built for the first time means that it will be built with leading‑edge technology, wireless. The fact that there's a huge investment going into infrastructure in Africa means that roads and bridges will have sensors built in, will be more efficient, and we will be able to monitor them.
And I think the most exciting thing is that because of the younger demographic in most African nations, you have a whole lot of people coming into the workforce, being trained on the latest technology. Even more important, they have a young person's mindset, willing to try new and crazy things that might just be game changing.
(Garbled audio).
>> Just bringing the best technology to Africa is not good enough. It's more important that there are people on the ground who understand what they are dealing with and the technologies in use, and are able to match them.
The price of technology means big investments may be needed, but small investments can help too; yet for both, you need to be able to guide that. So capacity building is a major priority moving forward from here.
>> ALEX COMNINOS: We have a question here and a question here and a third gentleman in the back.
>> AUDIENCE MEMBER: My name is Hutra, from Germany, acting as a MAG member in cybersecurity. I would not like to overstress the risks and benefits of AI, IoT and big data, but still, I was very glad to hear Peter mentioning children as a vulnerable group in this area. My question to the panelists is: how far do you think we could come with implementing the principle of safety by design in all these developments, especially in the Internet of Things, in order to build trust and safety, not only for children but for all users?
Thank you.
>> ALEX COMNINOS: Thank you.
>> AUDIENCE MEMBER: Thank you. So one of the things that is always mentioned when it comes to emerging technologies is the exacerbation of divides across the board. One of the things that I'm actually interested to learn is: when the technology decision making seems to be hetero‑normative and Western‑centric, what are the best practices to put on paper so that this is an inclusive process and does, in fact, account for the different considerations?
Thank you.
>> ALEX COMNINOS: The person at the back.
>> AUDIENCE MEMBER: Thank you, chair. My name is Roth. I run my own consultancy. I think at this point in time, it seems like the whole economy is changing from being money‑based to being data‑value‑based. And if it is true that data becomes more valuable than money, it will be a major game changer. That means that in this discussion, at this point in time, we have only one chance to do it right; otherwise we may miserably fail at several points.
I just came from a session on IoT security, with presentations from Canada, from the Netherlands, from the UK, and the question was: what can the IGF do? Two out of three said, let's not wait for the IGF to come back next year and see where we are.
And here I am at this session focusing on IoT, big data and the third one ‑‑ sorry, AI.
Why are these people not aware that this work is going on two rooms down the lane from here?
So there's a tremendous amount of outreach to do. The Internet Engineering Task Force is working on all sorts of solutions concerning the new Internet and the new Internet architecture. Why are they not here in the room? Why are they not on the panel? They have several solutions for the questions we are addressing. Where is the major industry that's actually developing these tools?
Yes, Mike, I know you are here, but where are the bigger representatives? Where are the consumer organizations? That brings me to my final point: this is such an extremely difficult and hard question to tackle for societies, because we have many societies and many countries. So who can actually play a leading role in this discussion? Because, coming back to my second comment, I think we are only going to get one chance to get this right.
>> ALEX COMNINOS: So we have themes around the currency of data, and bias, and also many competing norm‑setting agendas happening in Paris. I also want to know why we, as a panel, weren't invited to the Paris Peace Forum or to sign the Paris agreement.
Who would like to tackle this?
>> IMANE BELLO: On the question about best practices related to inclusive practices, or whether or not different considerations are taken into account: when we talk about emerging technologies and their impacts and consequences on the exacerbation of the digital divide, I think that there are numerous efforts that have been made by the technical community and by civil society when it comes to data discrimination and the need to advance transparency.
So if we talk about that, there are two main points being made. First, there is the work that is done on making sure that the historical training data sets are as diverse as they can be, as of now. And then there is also the monitoring work that is done on the outcomes of the applications of machine learning systems.
So you have several technical efforts in force, and then you have the work of civil society making sure of and monitoring those efforts.
>> ALEX COMNINOS: Mike.
>> MIKE NELSON: I will take issue with your money‑to‑data‑based economy. I think we are moving to an insight‑based economy. This feeds into the earlier question about diversity and having as many views as possible: your insights are going to be flawed if you are not looking at the whole picture. CloudFlare has eight data centers in Africa, and we will soon have 15. That helps us understand how the Internet is used in Africa, to make it faster and more secure. We are trying to hire people from as many places as possible. Please apply.
This is really important, though, because we are going to end up with the wrong answers if all of our insights are based on data that only comes from a small subset of the Internet users and the data that's out there. I love those two questions. We can argue later over a beer about how the economy is going.
I will refer everybody to an amazing article that came out yesterday on Singularity Hub by Peter Diamandis, on how the insurance industry, one of the most profitable industries in the world, will probably be completely undermined by the technologies we are talking about. Because if you can detect the risks before they happen, if you can prevent the accidents, suddenly there's less need for insurance, or at least the premiums go down dramatically.
>> ALEX COMNINOS: Peter.
>> PETER MICEK: Sorry. I just wanted to pick up on one element of the questions. As far as how far we can get and who can play a leading role, I think a lot of it depends on who the lead is. So I do think government plays a very important role in conveying legitimacy on the process, a multi‑stakeholder process, leveraging its own networks to ensure that it has a diversity of views, and leveraging its own diversity within its government priorities, for instance, Canada's priorities on gender equity and gender‑based analysis plus.
As to how far we can get in implementing secure by design: it's the consensus of that group, a consensus of a fully representative group of all positions, that helps us try to find that elusive balance of security and innovation that we're striving for. And trust always comes out of the ‑‑ I think it's almost a byproduct of that. Because you can trust that the process is done with legitimacy, done in good faith, and represents all of these views. Obviously that's exceptionally difficult and requires a lot of cold calls for myself.
You know, some of them go unanswered, but you do your best and, as I say, it's not an easy task, but it's one that we are all committed to.
>> ALEX COMNINOS: Thank you. That was Taylor, for the record; I switched the names in my head. I will now open to the audience, unless someone from the BPF wants to address the question of why the initiatives aren't on board. I would say that BPFs are available for anybody to get involved in, and perhaps this does point to an issue of IGF education and preparedness: many come to the IGF but don't know all the forums that are available to them and how to get involved.
>> AUDIENCE MEMBER: Thank you. I'm Claire Milne. I'm a consultant in the UK but I work internationally, and I would like to start by answering the question from the gentleman at the back. You asked where the consumer organizations are. Well, here's one representative of some consumer organizations. And we only wish there were more. It's not that we're not concerned, but as I'm sure you all know, there are terrible funding shortages and people get diverted onto shorter‑term priorities.
But actually, there has been a publication from a consortium of consumer organizations specifically on IoT security, and if people don't know it, hunt on Consumers International or BEUC or ANEC and you will find it. I'm among both its supporters and its critics, because I think it needs a lot more work, but I'm pleased it's there as a good start.
But what I actually wanted to say is thank you for wanting to abolish the term "AI." I would like to go further and say I think we should abolish the word "trust," and I'm sorry that it's actually in the title of this conference. But I don't want to just abolish it. I would like to replace it with "trustworthiness."
Because we do not want blind trust. We need devices and systems and people who are worthy of that trust. And what is particularly important about the term "trustworthiness" is that it's very much more specific than "trust."
If you think about your own lives, you do not tend to trust the same person to drive your car and to look after your children. You trust ‑‑ well, it may be your spouse in both cases.
(Laughter).
But I won't say it's impossible. If you happen to go to a wider circle, you may well look for two different people to do those two different things. And when we put AI, IoT, and big data into the same bucket, which we are doing in this forum here ‑‑ I do agree we need to do that ‑‑ at the same time, we need to recognize that it gets to be an enormous bucket with so many applications in it. So then my question for the panel and for the room is: what are actually our priorities, specifically for trustworthiness?
And we hear a great deal about security. We hear a lot, though maybe a little less, about privacy. We hear less again about the dangers inherent in cyber‑physicality, and I'm particularly interested in that myself. I have a wonderful old washing machine now, but I'm afraid it will break down one of these years, and when I buy a new washing machine, am I going to have the option to buy one that doesn't have an intelligent chip in it?
If not, then is my washing machine still going to work when the chip breaks down? Thank you.
>> PANELIST: If I can clarify: it sounds like we are covering everything, but we are more focused. We are looking at applications that use the Internet of Things, big data, and machine learning or artificial intelligence together. We are just looking at the overlap of those three technologies. So it's not everything; it's just that overlap.
>> AUDIENCE MEMBER: Well, right, but there's still a lot in it and I think we have agreed before that that overlap is getting bigger all the time.
And what happens to robotics by the way? Isn't that in there too?
>> PANELIST: No, it's not.
That was kept off the table.
>> ALEX COMNINOS: Can we take the trust one and then a round of questions?
>> PANELIST: Microsoft agrees with you; they talked about "trustworthy" about five years ago.
>> PANELIST: Yes, it's about engendering that feeling of trustworthiness. Just on your point about privacy and security: I think in the same way that AI, IoT, and big data are overlapping, privacy and security are overlapping. So we are definitely thinking about this. And also to the point about harm and physical security: thankfully, we have not had an incident yet.
If we did, then there would be plenty of post‑market mechanisms that governments such as Canada's could use ‑‑ you know, a consumer safety act, and a lot of provisions on things like mislabeling or deceptive practices that could be leveraged. But the key now is the urgency of doing as much as we can, as best we can, before that incident occurs.
So thank you.
>> ALEX COMNINOS: Okay. I am out of ‑‑ privacy by design is working on ‑‑ (Garbled audio).
So I'm going to take the gentleman in the pink and blue tie, then the gray sweater, and then from the Internet.
Okay.
>> AUDIENCE MEMBER: Thank you very much. I belong to the government of the Russian Federation. Let me just explain to you what's come to my mind.
By the way, this is a very interesting discussion, and the question was raised by the distinguished guest at the back regarding the leading role on that. It is extremely essential from our understanding. Why?
We know, we see that the world now, to some extent, is divided into no technologies and modern or emerging technologies. You mentioned robotics, or, well, clouds, big data, but who should monitor? Who should drill down on that?
On the one hand, we see that the world is changing rapidly enough. On the other hand, that gives us more threats of misuse of these technologies. So in this case, the issue of who would play the leading role on that is quite clear. The UN system and their respective organizations could probably construct and ‑‑ (Garbled audio).
And by the way, just yesterday, a resolution was adopted by the Third Committee of the General Assembly on behalf of Russia, China, the BRICS countries and many, many others regarding the use of communication technologies for criminal purposes.
It precisely mentions that we take note of the potential for emerging technologies, including, by the way, artificial intelligence, and communication technologies to be used for criminal purposes.
So the only trouble ‑‑ (Inaudible).
The second issue. It would be my pleasure to invite such opportunities to Russia. Unfortunately, we are not French‑speaking countries. Sometimes it's difficult to enter. The same can be said probably for the UK and the U.S.A. We have to start from the kindergarten, probably, to understand what artificial intelligence is.
>> AUDIENCE MEMBER: I'm a developer, and I would like to continue for a bit on the trust issues. Back in the days when the Internet was invented, it brought the promise of decentralized wealth ‑‑ an idea I got from a book, which I think was called "The Promise of the Internet."
But in the end, the power of the Internet is centered around a few big companies like Facebook and Google. And I believe the technologies we are speaking about here right now are a new revolution in Internet technology.
Another feeling I also have is these big Internet companies value their short‑term profits over the health of their users and there were a few panels about that yesterday.
Are the health of users and decentralization being considered as best practices?
>> AUDIENCE MEMBER: This is a question from Phi Gearia: how can we scale up low‑cost IoT technology for effective healthcare delivery in developing countries? Thank you.
>> ALEX COMNINOS: I want to flag that we are running out of time, and the last 20 minutes are about next steps moving forward. So definitely do address the questions, but if there are any next steps or open issues that you see, you can also comment on that, and then we go to the audience.
>> MIKE NELSON: I will pick up the last point about healthcare. A lot of the most exciting, well‑publicized information on the Internet of Things is coming from the healthcare arena. We are involved in one project where we are helping to secure Fitbits so people can use them for monitoring their athletic activities, and because they are using our service, that data is secured as it travels across the Internet to the Fitbit servers that collect it and make it more useful to the user.
But what's really exciting for the Internet of Things is often the most boring. That sounds strange, but some of the biggest changes are going to be in logistics and supply chains. You know, Fitbit ‑‑ you see that on people's wrists ‑‑ but think of making sure that the supply chain for pharmaceuticals is secure, so each pill bottle has its own sensor and you can know whether it got too hot and the medication is no longer effective. That's really important, but it's almost invisible.
This is true in general of the Internet of Things: it's the infrastructure stuff, the behind‑the‑scenes stuff, that will make a lot of money and change our lives whether we realize it or not.
>> Actually, I had some comments.
(Off microphone comments).
>> I hope someone takes on the question from our colleague from Russia.
>> PANELIST: Two quick responses. I want to underline again ‑‑ I found that a very trustworthy intervention. These technologies are being placed into systems with deep historical legacies of marginalization and exclusion. Just to take one example, the U.S. criminal justice system, where we do see machine learning technologies employed in bond setting and in sentencing. The answer that people from these communities should be expected to volunteer their personal data to help create more robust insights or better data sets is laughable at best.
Basically, nothing has been done to deserve such trust historically. That's outreach that needs to happen, and I believe that the onus is on companies, developers, and governments. Companies can, as far as next steps, develop and implement human rights due diligence based on strong and robust human rights policies. They can join multi‑stakeholder initiatives with true accountability; the Global Network Initiative could expand to take on more of a role in this, and the Ranking Digital Rights indicators will develop further in this space. That's another framework that companies can follow.
As far as governments, I was in the Third Committee hearings on a number of resolutions, including those referred to by Russia; however, I basically had to hide my affiliation, because the word "closed" was on the TV sets outside of those committee hearings.
And these are not multi‑stakeholder debates taking place at the Third Committee of the UN General Assembly, and they are certainly not at the International Telecommunication Union. So these forums need to break open and really accept that more stakeholders need to be brought in, on an equal basis, before they are trustworthy to develop this policy.
>> AUDIENCE MEMBER: There won't be any inclusion without transparency. And as far as education is concerned, it is not sufficient. It's only a first step.
>> ALEX COMNINOS: Martin is going to wrap up and maybe we'll take some questions if we have time.
>> MODERATOR: I will stand here, and I have a roving mic. I think multilateralism continues to be very important to underpin whatever we do. But it's also widely recognized that it's a multi‑stakeholder approach that we seek, and no stakeholder can do it alone.
In that spirit, also, the output document that you have seen has been developed in preparation for this meeting. The Best Practice Forum is a collaborative, bottom‑up process, and the output document is to include an understanding of global good practice. (Garbled audio).
And how it can benefit Internet users and others around the world, and to ensure that these benefits don't come at the cost of ‑‑ and I'm sorry, trust. I like the word. Justified trust, or trustworthiness, is very much there.
So the intent is to move to Berlin next year and have an advanced document with a deeper understanding of what is needed to combine those technologies in such a way that they benefit Internet users and can be justifiably trusted.
You will have seen that there are a couple of principles that we have proposed here. There was a special reference made to number three, but there are eight principles.
We had some discussion in the group about whether we would need a kind of overarching principle, like ethical considerations from the outset ‑‑ knowing that we cannot determine where it goes ‑‑ to create a free, secure and rights‑enabling environment before these principles.
Whether that's needed or not, we had a discussion, and I will ask Mike to say a bit of his thinking on that as well.
And after that, I would like you to come back on the principles that have been proposed. If there are any you think don't deserve the attention, please let us know. If you think there are others missing, please let us know as well. And if you have specific remarks about the direction in which we should develop this over the coming year, that's the input we are seeking in the last hour.
Should we frame that overarching principle thing first?
Okay. So the first one: do we need an overarching principle? (Garbled audio).
>> AUDIENCE MEMBER: Thank you very much. I'm Alexander Lutz, the president of an association called CliMates, and there is actually something I would like to put forward. It's been touched on already by Peter, but I would like to emphasize it as a step forward I would love to see in the future. So it's actually about ‑‑ (Garbled audio) At the same time, we have said we have 12 short years on global emissions and how we save the planet. I would love to see this concern much more integrated into the tech and Internet community when developing the new technologies and the frameworks, and I would love to see it as a bigger point for the IGF to tell us in '19.
>> MODERATOR: Okay. Thank you very much. Any specific feedback on this one?
It's about being effective with these technologies and also supporting this purpose of sustainability, right?
>> PANELIST: I will be quick. I think it goes back to what was said before by the lovely lady right here: what are the priorities? In what framework do we want to work when we talk about the overlapping implications of machine learning systems, big data and IoT? So it's an open question. I don't have an answer, but I do feel that the environment and the planet should also be one of the many things we focus on.
Thank you.
>> PANELIST: And I would go further. My first job in Washington was working for Senator Gore. My mornings were spent preparing hearings on global warming and the coming climate crisis, and my afternoons were spent looking at how IT and the Internet could solve that problem.
I think our report can do both. We can look at the problems but also look at the opportunities for massive cost savings, reduced transportation costs, and more efficient supply chains with these technologies. That will help us save the planet.
>> MODERATOR: Thank you. So I think we will take away ‑‑ and I'm just looking at the members, but also at you ‑‑ that next to the principles, we also look at what we can achieve if we are getting it right. Please.
Your turn.
>> AUDIENCE MEMBER: Hello. My name is Sarah Ingal. I'm with the Youth IGF in Canada and I work with the Ontario Digital Service. Something that, as a person living in Toronto, has come up a lot in the past year ‑‑ the last few weeks especially ‑‑ is one very significant application of AI, big data, and ‑‑ I'm forgetting the third one ‑‑ all combining: smart cities.
And that's something of significant concern, because what we're seeing in that space is not only issues of data governance but the question of how IoT and these technologies transform not only our virtual environment, but our physical one.
And to the point about trustworthiness and security and privacy by design, I think we need to think about inclusivity and diversity by design, and I think they are subsets of a broader issue in the tech space around inclusivity. So I would really like to see in these best practices more focus on thinking about meaningful consent in the application of these technologies, and consultation with communities.
I would also like to refer you to some of the work in digital government happening around digital service standards and what that looks like in terms of inclusion, from the very beginning ‑‑ the design stage of technologies ‑‑ to their execution and further iteration.
And lastly, just the point that trustworthiness, and being able to trust both your physical and your virtual environments, is a massive privilege. So really, bringing in digital inclusion as perhaps an overarching principle may be useful for this conversation.
Thank you.
>> MODERATOR: Thank you very much for adding, in the way of principles, inclusiveness and diversity, and for also emphasizing that there are other areas where we need to look at the output and not just the principles.
Any reactions to that?
Any other points? Please. And then you. Wout?
>> AUDIENCE MEMBER: Thank you, Martin. Wout.
>> Can you turn on your mic, please?
>> AUDIENCE MEMBER: My name is Wout. I have been a consultant, including for the UN a couple of years ago, on different topics. My point is: reach out. As I said already in my comment, it's necessary to get other constituencies and stakeholder groups on board, and you are most likely only able to do that when you visit them on their own turf.
That is something which panelists here and others may be able to do ‑‑ not something a consultant can do, because you have the networks to reach out. And that is something which perhaps the IGF needs to get better at: reaching out to other stakeholder groups that may actually have the ideas on improvements or norms, or can just fill in what you need to have.
But reach out is extremely important to become more inclusive. Thank you.
>> MODERATOR: Yes, I think that's a good point. Please.
>> AUDIENCE MEMBER: I'm from Bosnia and Herzegovina. Thank you for this confusing but positive session. I had a chance to listen to Tony Chang, who gave an example of how, for CCTV, they built artificial intelligence software which shortened the time needed to make a summary of a football match. They used to need six people working half an hour to produce a five‑ to ten‑minute summary of the match.
Initially they shortened the period to ten minutes, with two people needed. After a couple of months, they needed no people, and ten minutes after the match, the summary was ready.
Recently, we had an example from Chinese television where an artificial intelligence, rather than a person, was the announcer, reading the news that somebody else had prepared.
In the sense of this last session: what's next?
Is the next step that we will have cameras all around, monitoring the whole world, with artificial intelligence making the ten‑minute summary and broadcasting it to us?
Thank you.
>> MODERATOR: Since I'm the rapporteur of this session and I have to write a summary of it, I wish we had that technology today.
(Laughter).
>> MODERATOR: Well, it's about the impact of this on the future of work as well.
We haven't defined that to be within the scope of this Best Practice Forum, but there may be thoughts about that.
>> PANELIST: Just a reference: I have been involved with a group called Innovation for Jobs ‑‑ with the number 4. Vint Cerf founded this group about six years ago and we have done about ten international meetings. We have been looking at this issue a lot, trying to determine how to generate new jobs as old jobs get replaced.
There was a panel on this topic. I wasn't able to go, but I'm certainly going to watch the videotape of it, and I would urge everyone else to. If we don't get this part right, if everyone fears the future, the techlash will get larger and larger. There will be more constraints on how we can use the technologies we are talking about, and we will miss a huge opportunity to save the environment, promote growth, promote peace. I mean, there are a lot of things that won't happen, and we may never know. So let's work on a positive vision for the future.
>> PANELIST: And if I may, include everyone when we do so.
>> MODERATOR: Okay. So we include this part of work towards next year?
>> PANELIST: The attitudes.
>> MODERATOR: The attitudes we will take forward. Thank you for that.
>> AUDIENCE MEMBER: You know, I'm working on developing principles on AI for trust and adoption at the same time. So let me introduce some findings, just looking at the proposed best practices and what, to me, is missing ‑‑ maybe it's covered by the other sentences. If you implement new technology, then the data changes so fast, and we have to think of the balance between the opportunities and the risks. And particularly, as a person, or as stakeholder groups, we have to think of the tradeoff between, for example, the accuracy of the AI versus the transparency and the trustworthiness of the system. It will be a tradeoff.
And particularly for AI, I would say the use of AI depends on the context in which you use it. If you ask Netflix to recommend a good movie for your mood, that's totally different from data collection for cancer detection systems.
So then, if you think of cancer detection, maybe the system wants more data about very sensitive areas. On the other hand, Netflix can easily get my data for a movie recommendation.
I don't know where it should go here, but maybe we have to think of these things when thinking of the implementation of the technology.
Thank you.
>> Be flexible!
>> MODERATOR: Yep. Yes, thank you very much.
Any final remarks? Otherwise, I think we can thank you for your excellent inputs, your active listening and we do take the points forward that have been mentioned in terms of the principles we have heard, if particular diversity, added to there.
And we have heard the importance of not only looking at the principles but also what are we going to achieve with it? So good practice, and that should be possible too.
So with that, we will produce as Mike said, a report within 12 hours after this meeting.
(Laughter).
You will be invited to contribute. I mean, this paper that you have seen has been prepared by a relatively small but very diverse group of people. It's not a closed group. If you want to participate, that's possible, and all you need to do is raise your hand. You don't even need to pay travel costs, although it's handy if you can connect to the Internet.
So do get involved. Get your comments back on the report as well. We look forward to seeing you next year with a document that may be convincing to a wider group, and that with you and us we can take out to a wider group, to contribute to a better combination of AI, IoT, and big data to serve human beings. Thank you very much.