The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>> XIANHONG HU: Hello. Good morning, everyone. This is Xianhong, representing UNESCO. Welcome to this session, and thank you for coming so early this morning. I believe we are all committed to listening to a very interesting discussion on the issues of Artificial Intelligence and Big Data. Over these last days, we have heard so many discussions and debates on these emerging technologies and their multiple implications for human rights and for achieving development goals, which also relate closely to UNESCO's core mandate to support our member states in building inclusive societies. The internet has continuously surprised us over the past decade, and it seems Big Data and Artificial Intelligence are now the most rapidly evolving, and most contested, technologies and terms.
We are really open in this debate. We have no assumptions, but would like to brainstorm, because it seems that technologies such as Big Data and Artificial Intelligence are fundamentally reshaping humanity's access to information and knowledge, changing the way we communicate, and also changing many professions, such as journalism, which UNESCO has been working with for years.
I was at a journalism conference a few years ago and engaged with journalism professors. It seems they are lost; they don't know what to teach journalism students, because it seems AI is now doing a better job than journalists, and that's a big challenge. I do perceive that these technologies can potentially benefit social and personal empowerment, but on the other hand, there are classical questions, such as journalism's role as a watchdog. Human values, I believe, should continue to prevail, whatever time of history we are in. UNESCO is advocating its Internet Universality norms at this IGF 2017, a concept conceived by our member states so that all technology is harnessed for human development. We need to defend those fundamental values: a human rights-based, human rights-oriented approach should always be taken when we deal with technologies such as Artificial Intelligence and Big Data, which need to advance human rights rather than block or threaten any of them, including personal data, privacy, and personal security.
The second fundamental value we preserve is that the internet should be an open technology, and the internet industry should be open too; we encourage open innovation. Openness is one crucial feature of the internet, and we believe it is a crucial feature for supporting our other fundamental values.
The third value of internet universality is accessibility and universal access. We take a very holistic approach to access that goes beyond physical infrastructure and includes the quality of access, the content, the literacy, and the capacity of individuals to benefit from this vast internet technology. The fourth one is about the governing process of internet technology. We have been engaging in multistakeholder discussion, and we believe that participation from different stakeholders, from different regions and groups in society, should be included in this discussion. The same goes for AI and for Big Data: they concern the daily life of everybody.
That's why we are continuing to trigger discussion on norms and principles and to explore the implications of these emerging technologies. I'm very proud that we have a very strong panel here and a good agenda. One speaker couldn't make it at the last minute, but I believe we will have a very interactive discussion with the audience and with remote participation, so I hope you will prepare your questions and comments to engage with us. I'd like to introduce our first speaker, sitting to my right.
Her name is Mila Romanoff. She is representing United Nations Global Pulse. She is an excellent expert and specialist in Big Data and privacy. I've known her for some time; she is now leading a UN inter-agency privacy group, which is how I came to know her work, and also because UNESCO is drafting its own internet privacy policy on data, I very much welcome her input. So, Mila, would you like to share your expertise on this? Thank you.
>> MILA ROMANOFF: Thank you very much, Xianhong. I'm very grateful to be presented as a speaker today, and thank you for the introduction. I'm using slides, but if we're not able to use them, I can go ahead without. I will start now to make sure we are not running out of time at the end. I'm a privacy and legal specialist at UN Global Pulse. Global Pulse is a special initiative of the Secretary-General on Big Data, and I will probably be telling you when to click through the slides because I don't have any means to ‑‑ perfect. Thank you.
So, Global Pulse is the special initiative of the Secretary-General on Big Data and Artificial Intelligence. Our main goal is the adoption of Big Data for development and humanitarian causes; right now, it has mainly shifted to the achievement of the Sustainable Development Goals. As we all know, Big Data presents terrific opportunities, which I will speak about shortly. However, going to the next slide, there are also risks presented by Big Data. We go to the next slides.
Big Data presents terrific opportunities for the achievement of the Sustainable Development Goals, which is actually the title of this panel. Some of the leading examples within our work are, for instance, the use of anonymized mobile data to understand traffic patterns and movement patterns of populations in humanitarian crises, such as floods, or how a disease could spread. Successful projects have proven, for example, that the use of anonymized mobile transaction data could help humanitarian agencies in delivering critical aid.
Another example is the use of public data, public social media data such as Twitter. All of us are familiar with Twitter, or with public social media such as Facebook; I'm talking about what is publicly available, not covered by privacy settings. Such data is also very important, very interesting, and very valuable in understanding what people are saying at the time of a humanitarian crisis. We performed a project in Nepal during the earthquake two years ago: people were tweeting about critical needs they were experiencing during the earthquake, and humanitarian agencies, by analyzing the data, could understand to which locations such critical aid should be delivered.
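[A minimal sketch of the kind of analysis described here, counting critical-need mentions in public crisis tweets by location. The tweets, keywords, and structure are invented for illustration; a real project would use the Twitter API and far more careful language processing.]

```python
from collections import Counter
import re

# Hypothetical public crisis tweets as (text, location tag) pairs.
tweets = [
    ("need drinking water and tents in Gorkha", "Gorkha"),
    ("roads blocked, send medical help", "Sindhupalchok"),
    ("we are safe, thank you", "Kathmandu"),
]

# Naive keyword filter for critical needs; real analysis would need proper NLP.
NEEDS = re.compile(r"\b(water|food|tents?|medical|shelter)\b", re.IGNORECASE)

# Count how many need-related tweets come from each location.
needs_by_location = Counter(loc for text, loc in tweets if NEEDS.search(text))
print(needs_by_location)  # -> Counter({'Gorkha': 1, 'Sindhupalchok': 1})
```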
On the development side, again, the use of social media data or, for example, anonymized postal transaction data, which we've done in collaboration with the Universal Postal Union, could also help us understand economic patterns, or how nations are dealing with economic crises. The same could be done, and was done, in collaboration with BBVA, the financial institution: anonymized financial transaction data could help us understand how fast an economy is recovering, for example, from a humanitarian crisis.
What you see on the screen right now shows that nearly every single SDG target can be monitored using anonymized and in fact aggregated data. We're not even talking about disaggregated data; we're talking about aggregated data for the achievement of the Sustainable Development Goals. However, even when we say anonymized or aggregated data, we need to understand the risk that comes with the data even when it's aggregated.
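[A minimal sketch of the aggregation step alluded to here, with hypothetical records and an assumed suppression threshold: individual-level rows are reduced to cell counts, and small cells are dropped to limit re-identification risk.]

```python
from collections import Counter

# Hypothetical anonymized mobility records, one (region, day) entry per device.
records = [
    ("Kathmandu", "2015-04-26"),
    ("Kathmandu", "2015-04-26"),
    ("Pokhara", "2015-04-26"),
    ("Kathmandu", "2015-04-27"),
]

K = 2  # assumed suppression threshold: publish only cells with at least K entries

counts = Counter(records)
aggregated = {cell: n for cell, n in counts.items() if n >= K}
print(aggregated)  # {('Kathmandu', '2015-04-26'): 2} -- only counts, no individual rows
```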
So, we can go into the next slide. Thank you. In addition to actually driving the technology for the achievement of the Sustainable Development Goals, a big part of our work is the privacy program. Besides our own operational privacy work, ensuring all our activities comply with privacy principles, we also do research on privacy. Because the Big Data and Artificial Intelligence space is developing so rapidly, we need to understand how it is developing and we need to test; we can't really just have one static policy or one static tool that helps us assess the risk. It's moving all the time, so we perform research and tests to understand how privacy itself is changing in the Big Data and Artificial Intelligence space.
That goes along with our innovation in this space as well, so we work with many UN agencies and also other partners outside of the UN, including regulatory authorities and the private sector, to understand new patterns in the privacy and data protection field. As was mentioned by my colleague Xianhong, we're working with UNESCO as well in understanding the privacy implications in the development space. So, next slide, please.
In terms of how we deal with privacy on the larger scale and from the policy side, including from my background, the United Nations has Resolution 45/95, which highlights in general terms the core right to privacy and establishes principles for how data should be handled. On a more granular scale, many agencies have already adopted and implemented their own privacy policies, and UNESCO, as was mentioned, is developing its own. OHCHR also, just two years ago, issued a note on how data should be disaggregated in accordance with privacy principles.
The United Nations Special Rapporteur on the right to privacy has also been established, other agencies have recently come out with privacy policies as well, and there are too many others to mention. One document that I think is worth bringing to your attention was just recently published: the United Nations Development Group guidance note. The United Nations Development Group is a consortium of over 30 agencies across the system that came together to help draft the note. It's available online on the United Nations Development Group website, and it's particularly relevant to this panel because it is the first instrument across the United Nations system to address not only privacy but also data ethics and data protection with regard to Big Data, and it covers the risks that come with Big Data in the context of the 2030 agenda.
So, if you're interested, please go to the website and review it. The idea of this note is to be a living document, so it can be updated as the technology evolves, and it incorporates the key principles of privacy and data protection as well as touching on data ethics. We can go on to the next slide. The UNDG note is also based on Global Pulse's privacy principles, which I mentioned earlier, and incorporates key principles on the right to privacy such as data minimization. And let me tell you, applying data minimization when it comes to Artificial Intelligence and the use of Big Data is quite a big issue, which I'm going to talk about. Another key question to be considered when we talk about Big Data is data quality.
Many times, populations can be excluded from research or from Big Data application projects, so it's crucial to understand data quality and also data adequacy when we use Big Data and analytics for social good. I will move to the next slide and be happy to talk more about the particular issues that come with Big Data. One of the biggest ones is understanding the risks that Big Data presents, and for that we developed a privacy impact assessment, which we are now turning into a risks, benefits, and harms assessment. What we're trying to do is test, in the same exercise, how Big Data could present opportunities as well as risks and harms. By understanding the likelihood of the risks and the likelihood of potential positive impacts, and combining the two, we can decide whether a given Big Data or Artificial Intelligence project can proceed. And let me tell you that many times we actually had to say no, even when we were using publicly available data. For example, publicly available tweets could be data that talks about individuals. But the key question here is: how do we treat groups of individuals?
In the development and humanitarian context, I think it's crucial to understand human rights in the context of groups as well. The particular exercise that you see on the screen right now goes into understanding group privacy and group harms. So let me pause here, and I'll be happy to answer any questions as we continue. Thank you.
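[A toy sketch of the combined weighing described above. The scoring, weights, and threshold are invented placeholders, not the actual Global Pulse risks, harms and benefits methodology.]

```python
def assess(likelihood_of_harm, magnitude_of_harm,
           likelihood_of_benefit, magnitude_of_benefit):
    """Toy risks/benefits/harms screen: proceed only if the expected benefit
    clearly outweighs the expected harm. All inputs in [0, 1]; the 2x safety
    margin is an assumed placeholder, not an official rule."""
    expected_harm = likelihood_of_harm * magnitude_of_harm
    expected_benefit = likelihood_of_benefit * magnitude_of_benefit
    return "proceed" if expected_benefit > 2 * expected_harm else "do not proceed"

# Public tweets naming individuals: high benefit, but non-trivial group-harm risk.
print(assess(0.4, 0.8, 0.9, 0.7))  # -> "do not proceed" under this toy threshold
```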
>> XIANHONG HU: Thank you so much, Mila. It's so good to know that the application of Big Data in humanitarian affairs goes hand in hand with data privacy and data protection. Sorry for my throat. Now I want to introduce our second speaker from the Council of Europe, Ms. Sophie Kwasny. She heads the Council of Europe's data protection unit, working on data protection standards and policy. I believe you will touch on some aspects already mentioned by Mila. The floor is yours.
>> SOPHIE KWASNY: Thank you very much for the invitation, and thank you to UNESCO for bringing in the privacy dimension worked on by the Council of Europe. So, I work in Strasbourg for the Council of Europe. The Council of Europe is a regional organization, as its name indicates, coming from Europe. But some of its instruments, some of its conventions, are open to the entire world. That's the case of the cybercrime convention and also the case of our data protection convention. So I would like to say, for the entire world: just like the web, which was created a few miles away from here, our conventions were born in Europe but were conceived from the start as open to anyone. So please do use them, if you want, in your own countries to ensure that data protection and privacy are protected in a stronger manner. Those conventions are there to help you, so do use them.
It's good to speak just after Mila, because indeed the potential and benefits of Big Data and Artificial Intelligence for humankind are huge. This is not questioned; it is what you will find in any document addressing the topic. But indeed some challenges come with it, so my take will be from the angle of one of the human rights: the right to data protection.
Another point that has to be highlighted, and I think it came very strongly from the European Commission in September, is that Big Data and Artificial Intelligence do not rely solely on personal data. You have vast processing operations that are based on non-personal data, and this needs to be facilitated. When you have Big Data and Artificial Intelligence relying on sensor data or atmospheric data, for instance, those, of course, do not bring any challenges to the right to privacy. Another category of non-personal data that can be processed in a Big Data context is anonymized data. There, we should be a bit more cautious. Indeed, in all data protection laws, it is recognized that anonymized data is not personal data, so you do not need to afford it the protection of a data protection framework.
Personal data is qualified as data that enables the identification or identifiability of a person. So the possibility of reidentifying a person leads what you would consider anonymized data to fall, again, under the category of personal data. That is one of the risks of Big Data: anonymized data may lead to the reidentification of a person, and in that case, you need to apply the protection framework.
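[To make the re-identification risk concrete: a small sketch checking k-anonymity over hypothetical quasi-identifiers (the records and attributes are invented). A unique combination means a supposedly anonymized row can still single someone out.]

```python
from collections import Counter

# Hypothetical "anonymized" rows: direct identifiers removed, but
# quasi-identifiers (birth year, sex, postcode) remain.
rows = [
    ("1981", "F", "1201"),
    ("1981", "F", "1201"),
    ("1954", "M", "1209"),
]

def k_anonymity(rows):
    """Smallest group size over the quasi-identifier combination.
    k == 1 means at least one person is unique, hence potentially re-identifiable."""
    return min(Counter(rows).values())

print(k_anonymity(rows))  # -> 1: the 1954/M/1209 row is unique, so this dataset
                          # falls back under the personal-data framework
```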
In our work at the Council of Europe, although our convention is quite an old instrument ‑‑ it dates from 1981 ‑‑ we have worked for the past years on modernizing it. This is a global trend. We've seen it with the OECD guidelines, which also dated back to the 1980s and were revised in 2013. We've seen it with the EU framework, with the regulation that will become applicable next year. And we see it with Convention 108: we've been working on its modernization for a few years now, and hopefully next year we will be able to deliver the revised text. In this revised text, one of the aspects considered by the experts was the ability to address challenges posed by new technologies, and so some of the wording of the modernized convention aims precisely at addressing the core challenges of new technologies.
If I can refer specifically to Big Data and Artificial Intelligence, I'll mention some of the novelties of the modernized convention that are aimed at responding to them. The first is that under the article on the rights of the person, all of us, as what we call data subjects, would have a right not to be subject to a decision significantly affecting us that is based solely on automated processing of data, without having our views taken into consideration. That's the first right that is new in the convention. Second, we would have the right to obtain, on request, knowledge of the reasoning underlying the data processing where the results of that processing are applied to us.
And finally, we would have the right to object at any time to the processing of personal data, unless there are conditions that may restrict this right. So, this is the first part of the modernization that aims at addressing Big Data processing. Then there are new requirements on those processing the data that also aim at better protecting us in this ecosystem: the obligation of transparency placed on the controller, which is now very strong in the modernized convention, and a new series of obligations that have emerged in the data protection sector in the past years. Those are data protection and privacy impact assessments, privacy by design, and privacy by default.
All of those are necessary tools to implement in order to better protect persons in a Big Data context. So much for our general text, our convention. If you look at the convention, it's about 20 articles, so it's a general legal text; it doesn't enter, for instance, into the level of detail that you have in the EU regulation. But in complement to the convention, its committee, which meets in Strasbourg, adopts sectoral texts. Earlier this year, nearly a year ago actually, because it was in January 2017 for data protection day, the committee adopted guidelines on the protection of persons in a world of Big Data. You can find those guidelines on our website if you're interested. What is very interesting about them is that they translate the data protection principles we've had for decades and illustrate them in a Big Data context, but they also bring some novelties that had never appeared in Council of Europe instruments in the field of data protection.
That's, for instance, the notion of ethical and socially aware use of data. That's really new, and that's crucial in this environment. I think what we've seen at this IGF is also how the controllers, the private sector themselves, are very active in addressing human rights challenges in the Big Data processing environment. This is part of this ethical awareness, which is really positive and to be welcomed.
The role of human intervention in Big Data processing is also highlighted in those Big Data guidelines, as much as the other aspects I was mentioning on transparency and privacy by design. So, I invite you to have a look at the text; it was the first international instrument on the topic trying to address these challenges. Let me also briefly touch now upon Artificial Intelligence specifically. It's one of the priorities of the Council of Europe for the future, and not solely in the data protection field. My colleagues working on freedom of expression, which you know very well, in the committee on media and information society, will look into Artificial Intelligence from that angle. The ethics committee will be looking at it also. Another example is our colleagues working on the efficiency of justice, who will prepare guidelines on predictive justice: when you have Big Data and Artificial Intelligence and you apply them in the justice sector.
What are the benefits for the efficiency of your justice system, and what are the challenges? We'll be contributing to this work. So, as you see, a lot is going on on this topic, and I hope that by the next IGF I'll be able to present further work of the Council of Europe. Thank you again.
>> XIANHONG HU: Thank you very much, Sophie. We know the Council of Europe has always been a leading advocate of internet freedom, whether freedom of expression or the right to privacy, extending now to these emerging technologies. Now I'm introducing our next speaker, Ms. Nanjira Sambuli, sitting next to me. She's a young and brilliant expert based in Kenya, part of the Web Foundation, with extensive expertise in internet governance, and I do look forward to your take on this.
>> NANJIRA SAMBULI: Thank you, and good morning. At the Web Foundation, we concern ourselves with how to deliver digital equality for all. As my colleague said, when the web was invented not too far from here, the vision was that it would be for everyone, and by extension, many other technologies that have come to be are also envisioned to benefit humanity. But let me start by saying that with any new technology we think about deploying on humans, we have to remember that no technology will make up for the lack of political or social will to actually ensure human rights. So we may want to place a lot of hope in what Big Data and Artificial Intelligence will do, especially for those who have been left behind traditionally, but if the framework within which they're being deployed is a place where human rights have not been fundamentally upheld, these could very quickly become tools of oppression and further divides.
So in any discussion, and at the next IGF it may be virtual reality instead of artificial intelligence, we have to remember that if we don't have a strong framework of human rights being respected and upheld in the spaces where we're deploying these technologies, the technologies become weapons in the hands of those who have been keeping others oppressed. In that frame, when we think about how AI and Big Data are going to be used in developing countries or among communities that have traditionally been left behind, we have to assess how previous waves of technology have benefited these communities and where the gaps have been, so we can hopefully use any new technologies to start correcting the mistakes and shortcomings there have been so far.
One thing that comes to mind is that it's very sexy to talk about leapfrogging: let's go from landlines to AI. But those of us who come from developing countries are always the sites of experimentation with every new technology that comes about. So even when it comes to the data being collected, who has the agency to collect that data, and who gets to give informed consent about how data is used to make decisions about their lives, we more often than not happen to be sites of experimentation and not willing agents taking part in this. So I often try to call for a pause among the people who have the power, those of us in these rooms having these discussions who will probably go back and design projects using AI, Big Data, or whatever technology: think about the communities you're going to be testing that on.
They are human beings as well, and if their rights have already been systematically denied and we're coming in saying we are the saving grace with our white hats, we have to remember to start building a culture of consent, a culture of communities themselves being able to audit these technologies and question how these technologies benefit them. I think that's something we must always keep at the top of our minds. It's great that there's a lot of work, mentioned by previous colleagues, around conventions and principles, but at the end of the day they do need political and social will to actually be held in place, because you can have it on paper with no guarantee that it translates into practice. So I'll pause there for now.
>> XIANHONG HU: Thank you, very short and crisp and very inspiring. I believe we'll come back to you again. My next speaker is the gentleman sitting next to me, Mr. Tijani Ben Jamaa, who directs work in this area. I think I met Tijani at the first North African IGF, where I was impressed by his work in the region, so your insight and perspective will be appreciated here. Thank you.
>> TIJANI BEN JAMAA: Thank you very much, madam. The organizers told me that I have to comply with five minutes only, and I will. The previous speakers spoke about a lot of things that I wanted to say, so it will be easier for me and will simplify things. I will especially address data protection. As you know, internet users' data are massively collected. They are collected, processed, and analyzed to be used, for example, for scientific research, such as media research or economic and social research, to better understand the needs of consumers and to improve the quality of the services and goods provided.
So, in this case, collected data may be a source of innovation and growth. But collected data can also be used for personal attacks, for business hijacking, for political interests, or they can simply be sold, and they are sold. They may be used against us without our knowledge. So, how do we protect our data? Technical solutions, of course: encryption, et cetera. But the efficiency of those tools is limited.
The more and better technical security tools you have, the less risk you incur, for sure. But technical solutions reduce the risk; they don't eliminate it. There are also legal solutions, because we need regulation for data collection and data use. Again, the efficiency of those regulations depends on how they are applied, and also, as you know, no regulation can cover each and every case, so there are always ways to cheat.
Regulations are national, and the internet is global; this is another problem. These solutions need to be wider to have significant effect. As an example of a legal solution, the European Union tried to set a legal framework for data protection. It came up in 1995 with Directive 95/46/EC, and in April 2016, the European Parliament adopted the new General Data Protection Regulation, which will enter into effect on 25 May 2018. This will be a real move forward in data protection regulation.
With the GDPR, the General Data Protection Regulation, what will change? We will have more rigorous requirements for obtaining consent for collecting personal data. There will also be rigorous requirements for storing, processing, and analyzing data, for notifying data breaches, and for appointing data protection officers. The GDPR will have an extended territorial scope: it will be applicable to non-European entities, and to European ones not located in Europe, if they target European residents via the internet with goods and services or monitor their behavior.
If you don't comply with the General Data Protection Regulation, what will happen to you? You will incur fines of up to 20 million euros or 4 percent of the company's global turnover, whichever is higher. The GDPR applies to all kinds of data, including internet data, of course: internet content data and internet domain data, which we call registration data, or WHOIS data. Those data have always been problematic because we are presented with two conflicting fundamental values: transparency and privacy. Transparency, because we need the data to be public so that if there is bad use of a domain name, we can sue the domain name holder; and privacy, because those data are personal, and every person has the right to have his or her data not made public. So far, we have not been able to reconcile those two principles, and with the application of the GDPR, a serious problem will be faced if nothing is done beforehand, not only for European registries and users but also for non-European ones having transactions with Europeans.
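[A worked example of the ceiling just described; under Article 83(5) GDPR the maximum is EUR 20 million or 4 percent of total worldwide annual turnover, whichever is higher. The turnover figure below is invented.]

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound for the most serious infringements, Art. 83(5) GDPR."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

print(max_gdpr_fine(3_000_000_000))  # -> 120000000.0, i.e. EUR 120 million
```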
So, how do we protect our data, again? We said technical solutions and legal solutions, but you can have the best technical and legal tools, and if you don't behave in the right way, they will be useless. For example, suppose you have the best encryption: if you don't update it as necessary, it will become obsolete and useless. And you may have the best regulation, but if you accept and consent to everything that is presented to you, of course the regulation will not be useful to you.
So, we need technical solutions, yes. We need legal solutions, yes. And we need the right user behavior. Users should be aware of the risks and of how to behave on the internet. These are very simple and well-known behavior rules that I will mention here, though the list is not exhaustive at all, and we really need to make users aware of the risks. I think users don't have to put the details of their lives on the internet; only necessary and non-personal things should be put there.
They shouldn't use untrusted sites and platforms, and they shouldn't respond to unknown senders. They have to use the best security tools, technical and legal, and they have to update these tools as necessary. I think that changing behavior and being aware of the possible harm helps a lot.
In conclusion, I would say that private data protection has always been a concern, and it is more and more important as the internet becomes more and more part of our daily life. There is no absolute data protection, but we can reach a reasonable level of protection if we make use of the available up-to-date technical and legal tools and, more importantly, if we change our behavior as internet users. Thank you.
>> XIANHONG HU: Thank you, Tijani. User behavior is key in addressing new technologies as well. Now I'm going to our last speaker, Mr. Frits Bussemaker, sitting to my right. He is the chair of the Institute for Accountability in the Digital Age. There is a flyer here on my desk, but that doesn't prevent us from talking about artificial intelligence. He has long experience in the IT industry, so I think your point of view will be very useful to us. The floor is yours.
>> FRITS BUSSEMAKER: Yep, it's on. Good morning. Frits Bussemaker. Thank you for having me on this panel. For the last 30 years, I have been part of an international business community of digital leaders and CIOs, which is now a global federation, and despite our different cultures and the different ways we are organized, we all see that we have one thing in common: the world is changing from one of command and control to one of connect and collaborate. It's changing from a hierarchy to a network. We also experience differences in behavior, but what we note is that one of the big questions digital leaders have, specifically across the various cultures, is what is possible and what is not possible. We see people on the internet having access to a big data lake, which will probably become an ocean soon.
So, it's very tempting to make use of that data, and we continue to see examples of organizations then having an issue with the solution they try to bring to that network. The question is: we talk about consent, we talk about ethics, we talk about values, but the translations and definitions of those words differ per country and per culture. So, we decided to see if we could do what Hugo Grotius did in the 17th century when he wrote Mare Liberum, and ask: do we need a law of the internet to make it accountable? Therefore, with the backing of both UNESCO and the ITU, we will organize a summit at the Peace Palace in May 2018 to discuss what accountability and democracy mean on the internet, and also how multistakeholders and business will make use of that. Basically, we want to organize a discussion on internet values and question what those values are before we can talk about the implementation of solutions. Thank you.
>> XIANHONG HU: Thank you so much, Frits. So now, I think the floor is open. Firstly, I wonder if we have any remote participation? No.
>> XIANHONG HU: Okay. Otherwise, anyone who wants to make a comment? Yes, I saw three hands. Maybe we'll go from left to right. Okay, the gentleman here, you are first, and then we go to the center.
>> GABOR FARKAS: Good morning. My name is Gabor Farkas. I'm a member of the Internet Society here in Switzerland and president of Active Mediation. My question concerns Big Data. The cost of data acquisition for the public sector, I would imagine, is one of the biggest issues in order to have data to work with. In the private sector, you have big companies like Google and Facebook who have means of acquiring data that go beyond those of the public sector.
Google uses the data it collects, with AI, to determine the risks of certain pandemics, and it transmits information concerning flu, for instance, well before public officials are able to determine that such a pandemic or epidemic is going to start and where it's going to spread from. I was wondering what kind of collaboration could be implemented in order for the private sector to perform this public service, use the data it's collecting very efficiently, and allow the public sector to use it to improve conditions so that society can benefit.
>> XIANHONG HU: Thank you, a very relevant question. Maybe we'll collect a round of questions and then go back to our panel for feedback. Yes, go ahead.
>> BANABE LUCAS: Thank you for the floor. I am Banabe Lucas from Brazil, and I'm here with the CGA's program. My question is for Ms. Mila Romanoff. In your presentation, you said that we need to ensure data quality. Basically, my question is: how can we do that properly? And how will we know that this data is relevant to the SDGs? Thank you.
>> XIANHONG HU: And I think there's another one.
>> Good morning. My name is Jeffy, from the Humboldt Institute for Internet and Society in Berlin. I would like to address a question to Ms. Romanoff and Ms. Kwasny. Thank you so much for these insightful presentations and the range of perspectives on the issue. I concur with Mr. Jamaa, who said that implementation matters in many things. It's important to have the principles laid out, but implementation is important and sometimes also the tricky part. We see this in human rights, where the Council of Europe has taken great steps to really think about the implementation of human rights in an all-encompassing manner, and we see it as well in the design process of new technologies, where it's quite tricky to make the very good principles you mentioned operable. So, to conclude, my question would be: do you also think about the implementation problem in the design process, and what would your recommendations and activities in this regard be? Thank you.
>> XIANHONG HU: This gentleman here. And I also saw a lady, so we'll go to you and then the lady.
>> I'm from Pakistan. I think a very important issue raised by all the speakers is the misuse of data for purposes of oppression. I understand that work is being done in Europe to have laws implemented so that people who use this data for bad purposes are penalized, but are there any initiatives elsewhere? Because there are many populations, in Africa and in all the other places where this is misused a lot, to whom these laws probably don't apply. So, are there initiatives on the platform of the UN that are applicable globally, to all countries, instead of only some regions?
>> XIANHONG HU: Thank you. And the lady there, please introduce yourself.
>> Sorry. My name is Hugh, from ‑‑ my question is somewhat similar to the previous question. About the convention on the protection of data, could you explain more about Convention 108? That's the question.
>> XIANHONG HU: Thank you. And the madam there; I'm sorry I missed you just now.
>> Thank you. I'm a little bit perplexed by your comments about how people should behave online, that they shouldn't put information online and should be careful about this. Of course, being careful doesn't mean there still won't be huge amounts of information out there, especially as the platforms are designed to affect your mind in such a way that you want to release information, that you get a rush from making information public. They are scientifically designed to do this. So, are we saying you're advising major platforms to change the way they're designed so that they are no longer addictive? I don't think this is realistic, so I just did not get where you were coming from on this. But also, even though I missed the Council of Europe's session yesterday on information literacy, I would like to know from the Council of Europe what you're doing on the data literacy dimension of information literacy. Because basically, this is what we need to do: from the earliest stage of our educational systems, make sure that our populations understand these issues and are prepared in such a way that they can deal with them.
Now, the colleague from Humboldt, I think, mentioned something quite interesting, as did the gentleman from Pakistan. Is it time for the African Union to come out with a very clear position, so that if people exploit African data, there will be consequences? I know that many young people in my country are having to tick boxes saying they are in compliance with the European data law, because their applications, their blogs, their websites, et cetera, are visible there, so they're going to have to comply. I think it's time for us to wake up, rather than asking people to remember that Africans are human beings and should not be exploited, and to actually take it into our own hands; we have models here.
>> XIANHONG HU: Thank you. I'm taking the last three questions, and then I will go through our panel to address them. So, the lady here, the gentleman here, and the young lady over there. Go ahead.
>> Thank you very much. I'm Helga Milling from Austria. I was particularly interested by the presentation of Mr. Bussemaker. You mentioned Hugo Grotius, and I find it interesting that a concept from the 17th century is being debated again; as far as I know, the internet industry is looking specifically into the matter. So I would like to know whether the Council of Europe or the United Nations are looking specifically into this topic. Thank you.
>> XIANHONG HU: Okay. Thank you. The gentleman.
>> Good morning, my name is Zihao Jing, from China. Indeed, data privacy is a key issue. In China, we also have big problems in this regard, from my personal observation. There is a huge underground black market selling and buying data among different players, mostly companies, but criminal organizations are also taking advantage of such data. And you really don't know where they get these data. Some of those data are leaked by big platforms as a way of making profit; some are leaked by personnel of the companies and platforms in their personal capacity. So, in my view, international norms and national regulations are very important, but the most important part is enforcement of the law. People do this in a very stealthy way, and law enforcement really has a lot to overcome to bring justice and get these people arrested. And such activity is massive in scale, so it's very difficult. Thank you.
>> XIANHONG HU: Thank you. Thank you for sharing this useful experience. So, lady, please.
>> TEREZA BARTONICKOVA: Hi, there. Tereza Bartonickova, Oxford Internet Institute. We are mentioning privacy and the protection of data, which is great, and the biases of the gathered data sets were also mentioned. I will just play devil's advocate here and ask: if anonymization is dealt with, let's suppose, shouldn't we share even more data to allow dynamic processing of the data sets, thereby improving the accuracy of the AI systems and the processing we want to help? Shouldn't we share everything in order to make AI more rewarding, let's say, in simple terms?
>> XIANHONG HU: Thank you. I think now I will go through the panel one by one; you can pick the questions related to you, if that's okay.
>> SOPHIE KWASNY: It's okay, but it's mission impossible: I've had several questions addressed to me that would each need a lot of time, and I have two minutes. So, about enforcement, I think it's a key matter, but the system is now built in a consistent manner: if I mention the GDPR, you will have strong sanctions, and you can consider collective action as well; it needs to be done, so we need to enforce the principles. Gentleman, I will see you afterwards to respond to your question on Convention 108, and the same for you, because otherwise I wouldn't leave any time for the next speakers.
>> XIANHONG HU: Thank you, Sophie.
>> FRITS BUSSEMAKER: I'd like to make a personal remark about the offline values we have used for years, how we grew up before we were online, versus the values of digital natives. Assessing how to behave online should also take into account how this new generation of digital natives is now behaving. We have to accept it's changing. We may not like it, but that is what's happening.
>> MILA ROMANOFF: So, I agree with Sophie: there are plenty of comments and questions that are really easy to answer but hard in terms of the timeframe, so please come up to me afterwards; I'd be happy to elaborate more. Just some general messages. Thank you so much for the comment with regard to the implementation of Big Data in the public sector, and how we can do that. In general terms, we need to make sure that the private sector, regulators, and the public sector are brought together more to cover the gaps in existing regulations, which would help people implement Big Data-related projects in the public sector. We need stronger regulations.
And I don't mean stricter or less flexible; I mean more relevant to data in this space. I mean making sure that all relevant stakeholders are speaking to each other, which I think is currently missing. Two, thank you for bringing up implementation; that's why I brought up my point about assessment tools and about engaging more stakeholders in the process. What I mean by that is that once we have principles, one of the keys to implementation is to actually understand the risks and develop mitigation strategies. That's how we can apply the principles that were developed by Convention 108, by Resolution 45/95 of the United Nations, and by all the guiding principles that are part of the United Nations system. So that's one of the recommendations, but of course there are other steps to be taken.
My message number three is that, in general terms, we need to make sure we don't speak about one right only, let's say the right to privacy. We need to talk about the right to privacy in the context of other human rights; that will help us in the achievement of the Sustainable Development Goals and in building more knowledgeable and educated societies.
With that, we do need to make sure that we educate and bring awareness, as well as digital literacy, to populations in developing countries. And in educating populations, we need to make sure we're educating them both on the risks of data misuse and on the benefits of data use. Thank you.
>> XIANHONG HU: Thank you. Tijani.
>> TIJANI BEN JAMAA: Thank you very much. Let me answer the question about data use. I never said we shouldn't put information on the internet; that would mean giving up the internet, and it would be catastrophic. The internet is a wonderful tool if we use it in the right way. I just said that we have to be more rational in our use of the internet, because we are all complaining that our data are everywhere and we don't have any privacy now, but we are giving that data away. We are giving unnecessary data. We thought we were sharing only with our friends, but everyone can access this information, these photos, these videos. So don't come afterwards and say, oh, our data are collected and used. That is the only thing I said. I also said that we have to use the best tools, technical and legal. Unfortunately, people don't use the technical security tools, and they are very important too. So, it is not a matter of not using the internet or not putting information on the internet; we need to put information on the internet if we want to use it. Thank you.
>> NANJIRA SAMBULI: Just real quickly on the question of African positions. Around 2014, the African Union started a process on cybersecurity and data protection, and I'm sure you will be familiar with the nonstarter it was, but it also shows that certain processes and limitations of these conventions are such that they were designed to protect businesses and not necessarily human rights, and that's the source of these political discussions. But when it comes to new technologies, I want to wrap up by saying that the Web Foundation has put out a paper on how we can adapt policies, having already sampled what's happening with AI in Kenya, Nigeria, and South Africa, so I'd be happy to share that with anyone interested. Thank you.
>> XIANHONG HU: Thank you so much. You have already done the impossible in answering so many questions briefly. I think the intention is really to raise more questions rather than find one answer in one session; the discussion has just begun. Maybe to respond to the last question from our young lady there: someone once told me, "Xianhong, forget privacy, it has been lost forever in the digital age. Just get over it." I'm not convinced, completely not, because I believe these kinds of values are not just my rights; they define our humanity. As UNESCO, an intergovernmental organization, we always stand in a neutral position to be a platform bringing together different expertise and stakeholders and hearing different regional and national voices. So we aim to be such a convening platform to serve and continue the discussion. I myself am going to organize another session at the upcoming forum in March about Artificial Intelligence and Big Data, and I hope you will continue this discussion there.
You can also leave me your name cards and e-mail addresses after the session. We have a mailing list on UNESCO internet policy to push out information about events and initiatives from us and our partners. Thank you again. I'd like you to join me in applause for our active panel as well.
(applause)
(Session was concluded at 3:09 AM CT)