IGF 2018 - Day 3 - Salle XII - WS269 Do(not) touch: self-regulatory safe harbor of social platforms

The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

>> MODERATOR: Good morning, everyone.  I believe we should start our session.

Thank you for joining us this early morning. We can see that not many people have survived to this hour, and it really is too early, so thank you so much for being here for what we believe is a very important discussion of an urgent and timely topic. My name is Olga Kyryliuk, and I am the CEO and founder of The Influencer Platform. We will first hear presentations from our speakers and then open the floor to questions from the public. If you have any questions, keep them in mind and put them to our speakers later on.

Coming back to the issue, I would like to mention that for a long time nobody cared how the social platforms were regulated. Now they have probably reached their peak. Yet those who were calling for regulation have hardly been aware of what it should look like, or whether it is even possible for national governments to extend their regulation to companies which are often located abroad and which host their services in countries other than those trying to regulate these spaces.

No one, neither the regulators nor the companies themselves, has found the right solutions yet. Even in his opening speech, President Macron said we have to regulate more. Now our speakers have a lot to say on this issue. So as not to overuse my rights as the moderator, I would like to address my first question to Claudio Lucena, who is from Portugal. Claudio, in your opinion, what went right with digital social networking and what went wrong?

>> CLAUDIO LUCENA: Thank you, Olga. I think we could have started with an easier question for the first issue of the morning, so thank you for that very provocative first statement, or first question. And thank you also for the effort you put into putting this together and for convening our fellow panelists.

I will not use up a lot of time, because I was helping Olga structure this, but I would really like to give you an introduction to the path: how we moved from a time when, as the title suggests, we thought about a digital world where "do not touch" was the absolute word, to a world where, on Monday, we heard what to my knowledge was the strongest statement by a head of state in the history of the IGF. We have moved that fast, and that unequivocally, down the path of regulation. I would start with a couple of the key statements, the tweetable phrases, I had prepared here, which were echoed by Mr. Macron on Monday. I will share them with you so we can see the path we are on.

One of them, I don't know if you were here on Monday or have already had time to reflect upon his words, but one of the sentences was, pretty much as the translation here rendered it: "regulation is inevitable. It will be done with or without you." And then again: "social platforms start to be described by some as a menace or as a threat for democratic societies."

And then another interesting comment: that he was already starting to hear disapproval at the sole mention of the word "regulation."

So, taking Olga's provocation, let's start by asking: why does this happen? Where were we, where are we now, and what are the possible paths for us in the future?

When we were starting to use these tools, what we were exchanging back then was basically information: cats, family videos, personal pictures, general messages. So the amount of information, and the sensitivity of the information and of the externalities we were seeing in the environment, were not so delicate. Even if we felt external effects from the use of social media in past decades, they were not as delicate as they are today.

But suddenly the external effects became more evident and stronger: economic developments, social developments, cultural developments, behavioral developments; people started to behave differently over these two decades. Jobs were created on the new platforms, jobs that were touched upon in a youth session here about the future of jobs: content production and dissemination-based jobs that simply did not exist 20 years ago. A whole new environment started to be forged out of these social digital platforms.

And with that came the evident feeling that this world could not be absolutely out of control, as was suggested back then by John Perry Barlow. But now it seems we have touched a key point, because over the past two years, interactions based on, involving, or absolutely connected to these digital platforms have started to interfere directly with democracies.

A bit before that there was already the explosion of cybercrime, and cybercrime already called for stronger regulation, for stronger points of intervention from states, if not absolute regulation; but cybercrime had at least started to develop a structured framework in legal terms. Now, when democracies start to be touched, it seems we have reached a point at which this interference cannot go without oversight. I am trying to avoid using the word "regulation" in a harder sense every time, because, exactly, we do not yet know how this is going to be built. This form of intervention or interference in democratic systems has so far taken a couple of different profiles, a couple of different forms.

It can be a more direct, profile-based interference, as now seems a bit clearer to have been the case in the American election. Or it can take the form of sending mass messages without necessarily profiling characteristics, as seems to have been the case for us back in Brazil in the recent elections.

But the thing is, this digital space now mediates a kind of interference that touches directly at least Western representative democracies, and it now seems we are moving in no direction other than toward some kind of intervention.

Now, where do we go from here?

We have witnessed, a couple of times, the CEO of ICANN mention the fact that, as a community, we are doing something that has never been done: we are building policies about things that were never done before in the history of mankind. And I do not want to be naive enough to suggest there is a hidden message there, that we will simply take this message and things will go on without deviating much from where we are heading.

But in essence, the statement is true. We are now thinking about what we have to do, and about the dynamics in which we have to do it. We have regulated other businesses, other economic activities of global reach, before.

But this one has characteristics we have never seen. It is wider, in the sense that it touches more human activities than anything before. It is deeper, in the sense that it does not only touch all those activities; it touches them very deeply, it transforms all of them. And it is quicker, faster than ever. So it touches all activities, it goes deep down into each of them, and it is very fast, simultaneous in most cases.

So I think these are characteristics that render these new challenges different than what we had before.

Where do we go now? Fortunately, President Macron touched pretty much the same key as the one I am referring to when he said: we need to learn how to regulate together.

This sends a message that we are probably going somewhere closer, or at least the suggestion is to go closer, to something we know very well at the IGF, which is the multi-stakeholder model, with its pitfalls but also its advantages. Our fellows Daniel Lee Thompson and David Moore have published in the "Wall Street Journal" a sketch of multi-stakeholder oversight of platforms; there is nothing very developed in it yet.

I will leave you with this so we can go on with our fellow panelists and their comments. The fact is, we are moving toward stronger oversight of these platforms at least. And the movement from the platforms themselves is also interesting. In the beginning they absolutely defended the untouched environment, which very much reflects the American innovation ethos of not touching something that is still under development.

And it is clear that they now see that the movement toward some oversight, some type of monitoring, is under way. If you check the news all over the world, they are partners in these movements and in assessing developments. They have already shifted their minds; they have understood that things are not going to stay as they were at the beginning of times forever. So the call and message I want to leave at the end is: maybe it is time to recognize that movement, acknowledge it, and find ways to meaningfully contribute to it, because it will result in some kind of harder intervention, monitoring and oversight by governments, but one that can also be shared with other stakeholders.

That's pretty much the scenario I wanted to paint, Olga.  Thanks.

>> MODERATOR: Thank you, Claudio. And a quick follow-up question from me. You are saying that some oversight is inevitable when we talk about the regulation of social platforms. To me it sounds a bit confusing when government officials say that we need to regulate together, but then use the term multilateral rather than multi-stakeholder. So I am confused: do they not understand that these are two different concepts of regulation, or are they deliberately sending confusing messages to the audience? And is there a place for the social platforms to get engaged in this new kind of regulation being proposed by governments?

>> CLAUDIO LUCENA: Just a quick answer, and then we can debate a little bit afterwards. If you mean whether the social platforms will engage in this kind of exercise, definitely. They are well organized enough to weigh in on any legislative process around the world. What I am not sure about is our ability as civil society to weigh in and discuss this as well.

Maybe never on an equal footing, but somehow.

>> MODERATOR: Thank you. And now moving to our next speaker, Nicolas Diaz Ferreyra. Nicolas will share with us his thoughts on the role of risk awareness in social networks. What is that, and why is it needed? Please, Nicolas, the floor is yours.

>> NICOLAS DIAZ FERREYRA: Thank you very much for the invitation. Yes, I think we should go back to a more general question: why do people disclose so much personal information on the Internet, especially on social networks like Facebook?

From psychology, you can identify different theories that relate this to the personal characteristics of people: namely, that people with high levels of narcissism, or people seeking popularity, disclose more private information.

And that is partially true, but I think we should remember that computers are social actors, and they modulate to a certain extent the perception of our personal information. What I mean is that our digital data is intangible, and we perceive it through the interface of media technology.

And media technology moderates our perception of the value of that information. If we translate this to the real, offline world, I think we are more attached to our private information in an offline context. For example, if someone stops me in the street and asks me for my passport, I will not give it away just like that. Even more, I will have a visceral reaction, this burning feeling in the stomach that something is happening.

Online, unfortunately, that visceral connection with our data is not happening. The only moment we feel that connection with our data is when something bad happens, that is, when a risk materializes.

In terms of risk assessment, or risk awareness, I would say awareness is quite low. Why do I say this? If you look at the interface of a social media platform like Facebook or Twitter, they look quite flawless; there is no risk element there. Even more, if you look at the privacy policy, at least I have tried, and I could not find the word "risk" in any part of the policy.

And yet, in the real world we interact with risky situations almost daily: crossing the street in the right place, or buying products or services. We have high risk awareness of what can go wrong, and even more, we have instruments that tell us how risky it is to perform an action or to consume a product. For example, health warning labels on cigarette packages tell you that smoking gives you cancer. It is not the same to say "smoking is cool" as to say "smoking gives you cancer."

And the designers of this technology are failing to create that risk awareness. What are the risks, for example, of ‑‑ (Garbled audio).

And the list goes on and on and on.

But it is true: you do not find any kind of cues about what can go wrong. Often social platforms present themselves as spheres free of any risk, and that actually modulates your perception of, and your attitude toward, using the platform. And of course, if you think there is no risk, you will disclose more information.

When the risk factor comes into play, you are likely to share less information. So I have come to the conclusion that we should demand more risk awareness from the platforms. I don't know whether this should be included in the policy or, as I do in my research, through the introduction of preventative technologies in social media platforms, so people can be more aware of what can happen with their data. I am not talking about what the platform can do with your data; that is another discussion and, of course, we should have it as well.

I am talking about the other risks of exposing their lives on the Internet.

So I think we should concentrate more on developing instruments for risk assessment, both at the user-centered level and at the service level.

I think risk is something that has been neglected for a very long time, and it is a crucial aspect of regulating the Internet.
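To make the kind of preventative technology Nicolas describes a bit more concrete, here is a minimal sketch of a pre-posting risk nudge; the detection patterns, warning wording, and example draft are hypothetical illustrations, not any platform's actual feature:

```python
import re

# Hypothetical patterns for obviously sensitive items in a draft post.
# A real system would need far more robust detection (classifiers, NER, etc.).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(r"\b(?:\+?\d[\s-]?){7,15}\b"),
}

def risk_warnings(draft: str) -> list[str]:
    """Return human-readable warnings for risky content found in a draft."""
    warnings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(draft):
            warnings.append(
                f"Your post appears to contain a {label}. "
                "Once shared, it may be copied or passed to third parties."
            )
    return warnings

# Example: the nudge fires before the user hits "share".
draft = "Call me at +33 6 12 34 56 78 or write to jane@example.org!"
for warning in risk_warnings(draft):
    print("WARNING:", warning)
```

The point of the sketch is the placement rather than the pattern matching: the warning appears at the moment of disclosure, the same way a health warning label appears at the moment of purchase.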

>> MODERATOR: Thank you, Nicolas. So you believe that risk should be embedded in the regulations when we talk about social platforms?

>> NICOLAS DIAZ FERREYRA: Absolutely. I think that as consumers or users of media technology, we have the right to know what can go wrong.

And I think we owe people that chance; we should work harder in that direction. Because, as I said, we have it in the offline world, in our daily life: when we buy a car, there are standards for producing cars, and those cars have to comply with security and safety standards as well.

So I think we should develop those standards, we should develop those best practices, and they should be built around risk awareness, yes.

>> MODERATOR: Thanks a lot for this very interesting point of view. A no less interesting point, I think, will be presented by our next speaker, Salvador Camacho, who is cofounder and CEO of Kalpa Protection Digital in Mexico and cofounder of the GGWP Foundation. Salvador, can you please share with us: can domain names serve as a way of gaining independence from social media, and what could that look like?

>> SALVADOR CAMACHO HERNANDEZ: Thank you, Olga.

Well, first of all, thank you, everyone, for being here. I know it's early, but thank you very much for your time. Between real life and digital life, the boundaries are getting more and more difficult to observe.

Every day we are getting more involved in the digital world and embracing it along with our non-digital life.

So first of all, I feel we need to raise general awareness of what happens in digital life and how it impacts our real life, our analog life. We need to raise that awareness: most people here are aware of this, but outside these rooms, maybe 90% of people are not aware of the impact social media has on their lives.

First, I feel these boundaries are becoming harder to observe. Second, we need to start building this awareness among people who are just getting connected to the Internet, or rather who feel they are connected to the Internet via social media. For me that is a huge problem, because they come to understand the Internet as just a social media app, for example Facebook.

There are a lot of people in Mexico getting connected for the first time, but they are getting connected to Facebook, not to the Internet, and that is a huge problem, because they are giving their data away. They don't have a clue about all the data management, and there is no risk assessment around this. There is no way that people who are not well trained in these issues can know that they are giving their data away, data they feed to these platforms every time; the social media use it to give them, yes, maybe something useful, but that data is what the platforms live on.

So from this, I have tried to reach a conclusion around domain names. I say this because in our industry we have seen that one of our major competitors in the market is not other industries or other domain names, but Facebook, social media. Why? Because it's easier. It's easier to open an account on Facebook to promote your business or to promote yourself, as, I don't know, an influencer or a speaker or an attorney, for the service that you provide.

It's easier to have a Facebook account than to buy and to grow a domain name.

So I believe that is a pretty interesting issue. Why? Because, to put it in an analog way, a lot of people are building their houses on somebody else's land. They only have a room in somebody else's house, instead of having their own house.

So, yeah, we have been saying this, and we believe that having a domain name can be part of becoming independent from this social media kingdom: a place where we can have not only our own land, our own space and our own house, but most importantly, our freedom.

Why? Because we can manage it. We decide what we put there, even though we are then responsible for the data that we receive, yes. But we need to start getting out of Facebook, getting out of Twitter. Yes, use them, but be aware that we are giving them data. That is a very important issue.

I also want to state that technological tools are neither bad nor good; they are just tools, and what we make of them is up to us. As an example, in its early days Facebook was very naive: only for sharing photos, GIFs and memes. But right now it's a whole different world on Facebook. There is a lot of fake news, a lot of political content. It is also changing the way we think.

So we need to be aware that technological tools are just tools; they are not good or bad. And we also have to address this paradox: the more information we have, the more ignorant we are becoming as a society, because we are flooded with information. We are scrolling a Facebook feed, and sometimes we are reading very interesting political news and suddenly we see a cat falling down, or a meme about, I don't know, "The Walking Dead" final season or whatever.

So we are at a moment when we are flooded by information, and we need to start addressing what kind of information we need or want. The conclusion we have reached is that we have to be aware of what we want.

If we only want to have fun, that's okay: have fun on Facebook, have fun on social media. But if we want to go further, we need to be aware that social media are using our data, and maybe we need to start asking them to pay us for the use of that data, or at least to tell us why and how they are using it.

So, yeah. A lot of people last year started saying that data is the new oil. I don't believe that, because oil is going to run out; it is a non-renewable resource. Data? We can have data about everything! About walking, about talking, about sharing, about memes, about everything.

So data is infinite. And my last conclusion is that maybe we should start getting paid by social media for the data they are using.

>> MODERATOR: Thank you, Salvador. I had never thought of this comparison: having a domain name as a real alternative to having a profile on Facebook. Because it really is so easy: you just go to the social platform, you create the account, it takes a few minutes and that's it. To have a domain name, you need to take extra steps and pay for it, and it is in our human nature to opt for the easier way. And that is why Nicolas was saying we need more risk awareness and we need to talk to people, to give them information about how their data is used, because for many, going to the social platforms is simply the choice of convenience.

So thank you once again for this interesting presentation. And now I want to pass the floor to Natalia Filina, who is a member of EURALO, the European Regional At-Large Organisation, and of the Internet Society. Natalia, what do you think about the future of the regulation of social platforms? Should we, or could we, stay with self-regulation as it started, or should we really reshape this regulatory environment?

>> NATALIA FILINA: Good morning. Thank you, Olga, and thank you all for being here this early morning.

I represent the private sector here, and I would like to give a short speech on behalf of all of us end users.

The number of positive opportunities offered by social media platforms is truly tremendous. We understand that not just equal access to the Internet, but equal access to social media platforms, can give us a lot of opportunities for a very comfortable, very interesting new digital life. Often instead of real life, but that depends.

So we now have diverse worldwide communications, the ability to gain knowledge, to work online, to exercise our freedom of expression, and to get alternative news from different people on the Internet. I think this is very important today.

And it is very important that everyone can become an influencer. We can become the source of real changes in our society.

For example, we can take up humanitarian issues and be leaders on them. But there are two cherries on top of this cake: our content and our personal data.

Who are the creators? We can say we know who we are. But who is the owner? We want to say we are, but we cannot be sure, because, honestly, we know that everything we send to a social media platform, all of our content, starts living a new and interesting life without our participation, without guaranteed respect for our privacy and copyright.

And at the decision of moderators from social media platforms, social media can immediately delete our accounts, and we lose our content: the photos and texts we created and posted before.

So Internet companies are the bosses. They have all of it, and they have the authority. And we know this story: once upon a time these guys were in a small room, thinking about how to build and create a platform just for people; but now we know they have a lot of money, a lot of benefits, a lot of billions, earned with our content and their social media platforms.

I think these guys must be under control. Who does it? The states, with laws, of course. But it is not a simple structure. I think there must be something on top, with our different organizations underneath. For example, the Global Commission on the Stability of Cyberspace: this is a collaboration of several states, working on coordination and defining the responsibilities we therefore have for state and non-state actors in cyberspace.

But we don't talk enough about this. Someone must remind us that states may want to use social media platforms to control us and, of course, to manipulate us and our opinions, unfortunately.

We as end users don't care about the billions that Internet companies can get or already get. I think we can say that.

The states and the European Union are discussing taxes for Internet companies, because of advertising and the use of our personal data. But we care precisely about our privacy and copyright, and I think we need some insight into behavior to solve the real and sharp problems of social media.

We can talk about sexual violence, about pedophilia, and other harms. We never know exactly how our personal data is used. And all participants need a clear information field, I think, which always depends on our digital culture; as Nicolas said, all of our data relates to our personal insight and character. Yes, I agree: we need the ability to respond to abuse and immediately escalate when we see a problem, some illegal content, for example.

The role of the states and technical companies should also be to create a system of digital education for society, equal and free for everyone. This challenge is even more important when we look at the statistics: around 17% of all social media users are teenagers and kids under 18. And we cannot forget about people of vulnerable ages.

We are not just talking about protection; we need to start the process of education, as part of the social activities of the states. And as Olga asked me: can we imagine a situation of fully self-regulated social media platforms? I can say I don't think so, because self-regulation is free of form and rules. End users and owners alike love freedom so much, and it is an old story: two subjects create something, and a third immediately thinks up how to steal it, use it, or get some benefit from it. We would need the power of love.

And self-regulation is a form of absolute freedom, and we can understand that is an impossible situation. I think one's freedom ends where someone else's freedom starts, and the law should restrict the actions of the owners of large resources and big data.

>> MODERATOR: Thank you, Natalia. And our next speaker is Catherine Garcia van Hoogstraten, from the public sector, working on tech, cybersecurity law and policy at The Hague University of Applied Sciences. I know you are researching regulatory public/private partnerships, and we are really interested in your thinking on this.

>> CATHERINE GARCIA VAN HOOGSTRATEN: Thank you, Olga and thanks to all of my colleagues that are sharing their knowledge and input on this very important topic.

I would like to start with a bit of a wrap-up of what has been mentioned, especially by my colleagues, some of them lawyers and technologists. Interesting. "With or without you," as Claudio mentioned, is not only the title of a song; it is something very much connected, to a large extent, to my research, but most importantly to one of the goals of this workshop.

Yes: democracy is at stake, and we need to learn how to regulate together.

So, yes, there are externalities, and yes, of course, if we look, for instance, at the scale of cybercrime, this calls for regulation. Absolutely. But also for a risk awareness approach, as Nicolas has discussed. If we look at the Paris Call for Trust and Security in Cyberspace, which was launched recently, it makes strong reference to key elements connected to this workshop: namely the need to manage Internet security risks and, I would add, a multi-stakeholder collaborative approach to cybersecurity.

It can be observed that there has been a shift toward this multi-stakeholder collaborative approach to cybersecurity, which is the field of my research. However, can public/private partnerships be a solution? That is one of the questions of this workshop.

I do understand that for public/private partnerships to become a solution, they should allow the government and many other stakeholders, such as the key Internet service providers and civil society, to pool resources and know-how to tackle key issues, for example cybersecurity and the fight against cybercrime. In the course of my research on public/private partnerships concerning cybercrime at The Hague University of Applied Sciences, I assess the regulatory and governance frameworks impacting the effectiveness of public/private partnerships on cybercrime in Europe.

And my concern is that PPPs, I would like to use the acronym, which is simpler, are a notoriously complex phenomenon in terms of the roles, responsibilities and governance of the public and private partners. To what extent is there a regulatory framework? That is the second question you put to me for this workshop.

Is there a governance framework for PPPs that upholds public values, meaning obviously privacy, security, and accountability? What models of PPPs collaborating on cybercrime have been more successful than traditional models of governance? And, of course, what roles and responsibilities can and should different parts of government play in comprehensive PPPs, in my case to combat cybercrime?

So one of the preliminary conclusions I can draw from my research is that there is definitely a piecemeal approach to the legal framework for PPPs, which implies challenges: challenges, as already mentioned, about roles and, of course, about the different parts of government involved. For instance, I am very much focused on and curious about the roles of law enforcement and the private sector: what roles do they actually play in PPPs?

There is also a fundamental gap between the expectations of partners in terms of responsibility and authority.

On the other hand, there is an evolving liability regime, as we can see and assert; there are jurisdictional challenges; and there are questions about the nature of, and the ways in which, information is exchanged within the framework of PPPs, for instance if we look at the specific cases of e-evidence and data forensics.

Finally, there are the challenges facing cross-border investigations and cross-border data transfers, and the impact these PPPs have on the fundamental rights of citizens, which can undermine public values.

For example, if we look at one of the regulatory frameworks on which my research is based: the Council of Europe has its Convention on Cybercrime, otherwise known as the Budapest Convention, a framework of procedural law for the purpose of public/private partnership. However, a precondition for international and public/private cooperation is that criminal justice authorities have the necessary powers to investigate cybercrime and, of course, to secure electronic evidence.

Such procedural powers, corresponding to some of the articles of the Budapest Convention, namely Articles 16 through 21, must be clearly defined in criminal law and subject, of course, to conditions that meet rule of law requirements.

So these are some of the challenges I have so far encountered in the course of my research, and I think they really go to the heart of the question: is there a legal framework, and to what extent is it working for the aims of PPPs?

>> MODERATOR: Thank you so much, Catherine, and now we have around 15 minutes for your questions and comments to our speakers.  So the floor is yours.

Anyone from the audience?  Yes, please.

>> AUDIENCE MEMBER: Good morning, everyone. I have been working on the regulation of platforms for many years, and there is always one basic question: are the platforms a kind of chief editor or not? Answering this question is, from my point of view, fundamental. What do you think?

And the second question, since the subject is also self-regulation: do you think it is possible to self-regulate those giants or not?

>> MODERATOR: Thank you for the question.  Who wants to take the first one?

>> CLAUDIO LUCENA: Can I start? Once again, what a way to start the conversation.

Let me make two things clear here. First, our approach is much more of a Western-world approach, because we don't have other elements, in spite of the fact that we have a certain diversity on the panel. We have looked at a technical solution, a legal framework, and two perspectives from two other business fields. So I think it's a fair variety, but it is one way of looking at what happens with content on digital platforms, and that is what we are concentrating on.

And the other thing, which goes a little more directly to your question: I think it was Henry Louis Mencken, the American journalist, who said that for every complex problem there is a solution that is extremely simple, extremely clear, and measurably wrong, because there is absolutely no way of addressing a complex issue with a simple approach.

I think this idea of seeing the platform as a chief editor tries to take us back to a world where functions are known to us and very well established. We know, from around the world, what the roles of a chief editor are, what liabilities fall upon his back, and we know how to deal with this, right?

So yes, the functions seem to be those of a chief editor, and yet no chief editor in the world is subject to that scale of curation. Now, do we just apply the system we know, the liability system for a content curator or chief editor as we have known it for the past 100 years? Or do we let them automate this task? Because this is another alternative, and it is already happening on many private platforms: this automation of curation, where certain kinds of decisions are already being taken on an automated basis. That would already be a system we do not know, because we have a liability system for when a person curates the content, but we do not have a reasonably developed liability system for situations where this content curation, monitoring or oversight is automated. Or do we move to a place where we try to understand the new characteristics of an actor that does content curation on an absolutely new basis?

I am a little more in favor of exploring a new alternative. I am not very satisfied. Intermediary automated curation will happen to some extent no matter what, because of the scale, but I am not very satisfied with the rules under which this automation is happening. I am trying to see if we can build a third alternative.

>> MODERATOR: Thank you.  Anyone else on the panel who wants to comment on the question?

>> NICOLAS DIAZ FERREYRA: Thank you very much for the question. We had a very interesting talk last night with Catherine about this, and it seems that either we do everything or we do nothing: we want to automate everything or nothing, regulate everything or nothing.

So I think we should start thinking in a more systemic way about which areas we can demand, for example, enforcement in; where we can automate decision-making processes; where we should put more control; and which areas can be self-regulated by the service providers. This take-it-or-leave-it approach is not taking us anywhere, and we should definitely put more emphasis on inspecting the specific areas where we can work, carry forward, and make some progress.

>> MODERATOR: Thank you, Nicolas.  If there's no one else on the panel, then we can move to the next question, if there's any in the audience.  Yes, please.

>> AUDIENCE MEMBER: Hi. Michael Candy. I just wanted, as a comment and maybe a question, to raise an idea I recently saw from Jonathan Zittrain at Harvard. He says that if these are really platforms and not editors, we should be able to design our own algorithm; that is, with sliders, say how much we want from our friends' feeds and how much from the "New York Times" or from different areas, design our own feed and get that, and then even share our algorithm with others, so that we are in charge of the algorithm rather than something we cannot see or understand.

So I just wanted to raise the idea and get some reaction.
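To make that idea concrete, here is a minimal sketch of such a user-tunable feed; the source categories, slider weights, and relevance scores are hypothetical illustrations, not a description of any real platform's internals:

```python
from dataclasses import dataclass

@dataclass
class Post:
    source: str        # hypothetical categories: "friends", "news", "sponsored"
    text: str
    relevance: float   # platform-side score, assumed given

def build_feed(posts: list[Post], sliders: dict[str, float], size: int = 10) -> list[Post]:
    """Rank posts by the user's slider weight times the platform's relevance.

    The sliders dict is the user's own 'algorithm': it can be inspected,
    saved, and shared with others, as Zittrain's idea suggests.
    """
    scored = [(sliders.get(p.source, 0.0) * p.relevance, p) for p in posts]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for score, p in scored[:size] if score > 0]

# Example: mostly friends, some news, no sponsored content.
my_algorithm = {"friends": 0.7, "news": 0.3, "sponsored": 0.0}
feed = build_feed(
    [Post("friends", "Holiday photos", 0.8),
     Post("news", "Election coverage", 0.9),
     Post("sponsored", "Buy now!", 1.0)],
    my_algorithm,
)
for post in feed:
    print(post.source, "->", post.text)
```

Sharing my_algorithm with someone else, or loading their weights, is exactly the transfer of control the question describes.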

>> MODERATOR: Thanks a lot for this.  Somebody else?  Comments, questions?

>> AUDIENCE MEMBER: Thank you. My question to the panel: I was glad that Nicolas made that last comment about the choice between leaving it alone and full regulation. I want to know how the panel thinks about different options for regulation, even given the big mess of data and possible pushback from the platform side. We have ex-ante regulation and we have ex-post regulation, so those are some alternatives. And of course, given the big number of things to look into, you can also consider other mechanisms, like taking samples of decisions made by an algorithm, giving them to a human being, and looking at how those decisions were taken.

So how do you think this could provide some options for a way forward?
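One way to picture the sampling mechanism the questioner describes is the following minimal sketch; the decision log, sampling rate, and field names are hypothetical, and a real oversight scheme would also stratify by action type, region, harm category, and so on:

```python
import random

# Hypothetical log of automated moderation decisions.
decisions = [{"post_id": i, "action": random.choice(["keep", "remove"])}
             for i in range(10_000)]

def sample_for_human_review(decisions, rate=0.01, seed=42):
    """Draw a random sample of automated decisions for human auditors."""
    rng = random.Random(seed)  # fixed seed so the audit is reproducible
    k = max(1, int(len(decisions) * rate))
    return rng.sample(decisions, k)

audit_queue = sample_for_human_review(decisions)
print(f"Sending {len(audit_queue)} of {len(decisions)} decisions to human review")
```

The regulatory questions then become who sets the sampling rate, who employs the reviewers, and what happens when the sampled decisions turn out to be wrong.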

>> MODERATOR: Thank you for the question.  Who wants to take it?  Nicolas?

>> NICOLAS DIAZ FERREYRA:  Yes.  Thank you very much for the question.

I think we should demand more risk information from the platforms. Nowadays, the risk component is really missing. In privacy you have, in theory, two instruments that are really important in this equation: on one side the GDPR, and on the other side the platform's privacy policy. The GDPR has a more risk-oriented semantics; Article 33 introduced the risks to the rights of data subjects, so it is a more risk-oriented instrument. But on the other hand, the policies: I mean, I checked Facebook, Twitter and I don't remember which other ones, to see if I could find the word "risk" in the policy, and it is not there.

So in that case, as consumers, we are not informed of what can go wrong: whether there can be data leakage, whether we can be hacked, whether our information can be disclosed to third parties. At the end of the day, this moderates our perception of the product we are consuming, or using in this case. And I think that is not fair.

I think we need more information about risks, whether in the policy or in the interface; we can be creative and think outside the box. I have worked on this in my research, with my research team. I mean, I am not a lawyer; I am a computer scientist and I develop usable privacy technologies. But I truly believe this is a path that can be explored and worked on jointly with all the stakeholders and Internet components.

>> CLAUDIO LUCENA: Let me say, it is a fair question given the theme; if you consider the theme, a very fair question: let's look at a model and see what it looks like.

A fair question demands an honest answer: we don't know. We haven't fully devised or designed that model. There are a couple of elements that haven't been given enough attention, such as the ex-ante elements, the risks that Nicolas has just mentioned, and education, a short- to long-term element that was mentioned by Natalia, which also addresses the question from our fellow here on the left. Because yes, weighing in on the design of how things look to you, or how algorithms behave in relation to you as a person, is a very good option, but it takes awareness. And it is a kind of tech awareness that is not present in every end user as things stand now.

So education could be a way to achieve that. These are ex-ante elements that we have not been considering.

There is also something I want to make very clear: I am not an activist, or at least I am not wearing an activist hat here. I am from academia, and I want to come up with a solution that is reasonable and implementable. I do not and cannot believe in simple bad and good. I don't think that at some point someone said: you know, let's see how we can interfere in democracies 20 years from now, or how we can help spread terrorist videos, or how we can broadcast crimes live to increase the audience. This is not fair, this is not reasonable; it did not happen that way.

Yet corporate decisions over time made these situations happen. They are here, and now we have to deal with them. So I think using the same elements and analogies is a good way to start, but using the same elements we know as the drivers of the solution won't work. I think we have to cook this a little longer and come up with a better solution that has not yet been invented.

Here's the thing: maybe 20 years ago, the voice of civil society and academia, including the channels through which we could make it resound, was not there. I think we now have a better opportunity to weigh in.

>> CATHERINE GARCIA VAN HOOGSTRATEN: Finally, I would like to add to what my colleagues commented, and I will still put it within the framework of solutions to address your question: the PPPs. Provided, of course, that there is transparency, as I mentioned in my presentation, on the roles, responsibilities and governance of these PPPs, I think a workable regulatory governance framework is also doable.

For this, I think we have to really look at the use cases. What I have found in the course of my research relates mostly to cybercrime: there are some specific partnerships working, especially in the framework of the Council of Europe Cybercrime Convention.

So once again, the call is to look at use cases and determine whether there is transparency about these roles and responsibilities and, all in all, about the governance framework. And I just want to use this last minute to invite you to the lightning talk: Nicolas and I are continuing this conversation, putting together a lightning talk on awareness by design at 1:00 on the basement level, minus one. So please join us to continue this conversation.

>> MODERATOR: I would really love to continue this discussion, but unfortunately we have to wrap up. I want to thank all the panelists for making it to Paris and participating in this discussion, and thank you all for coming here, listening to us and sharing your questions and comments on the issue. Of course, the loudest arguments for or against something come from people who have no desire to compromise, but I really believe this is an area where we have to compromise and work together to find the most efficient solutions.

Thank you once again and have a good day.