IGF 2016 - Day 4 - Room 6 - DC on Platform Responsibility


The following are the outputs of the real-time captioning taken during the Eleventh Annual Meeting of the Internet Governance Forum (IGF) in Jalisco, Mexico, from 5 to 9 December 2016. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 


>> LUCA BELLI: Good morning to everyone.  Welcome to this third meeting of the Dynamic Coalition on Platform Responsibility.  We are here today to discuss some quite relevant issues, and particularly how platforms -- and also other intermediaries, but particularly platforms -- can behave in a responsible manner, based on the principles according to which states have a duty to protect human rights, business entities, among which platforms, have a responsibility to respect human rights, and they jointly have a duty to provide effective remedies in case of human rights violations.

We have a very intense panel today.  We will start with Toby Mendel, who is the director of the Center for Law & Democracy and who will present the recommendations for responsible tech that they have elaborated.  Then we will have Megan Richards, who is principal advisor at DG Connect.  We will have Wolfgang Schulz, research director at the Humboldt Institute, Karmen Turk, from the University of Tartu and also from Trinity, and then we have Barbora Bukovska, who is senior director for Law and Policy at Article 19, and last but not least, we will also have Ilana Ullman from Ranking Digital Rights.

To kick-start the meeting, I would like to ask Toby to start by providing us some insight on their excellent work on their recommendations on responsible tech, so Toby, go ahead. 

>> TOBY MENDEL: Thank you.  It's a great pleasure to be here.  This is the main product, and here are the recommendations of it.  I have two copies of each.  I don't want to take them back to Canada with me, so afterwards come take them.  It's available online at www.responsible-tech.org.  The focus is on the human rights responsibilities of private-sector online intermediaries, and we define that broadly: access providers, platforms with content spaces, whatever, really.  We meticulously avoid getting into the issue of state obligations in this piece of work.  We do have another product, a piece of research on that.  It's from 2012, so it's a little dated, but I think the main things remain correct and relevant.  I have one of those, and I don't want to take that back to Canada either, so please take that.  Okay.  Somebody's got it already.

We're aware, obviously, of the very close relationship between private-sector responsibilities and state obligations and state performance, but we tried to avoid that as much as possible because it's an endless rabbit hole, basically.

The work is the result of a one-and-a-half-year process of study.  We had as partners the Center for Internet and Society in India, the Arab Network for Human Rights Information in Egypt, the -- I'm not going to try to say that in Spanish, but it's the Center for Studies on Freedom of Expression and Access to Information in Argentina, Open Net Korea, and the Canadian Internet Policy and Public Interest Clinic, so partners around the world.  We had an advisory panel with a lot of leading lights, both from the tech side and from academia and Civil Society -- not governments, actually -- including the UN Special Rapporteur on freedom of expression, and the result was an analysis ranging over quite a few issues, with recommendations as well.

I'll just comment very briefly on what Luca has already laid out: whereas states have obligations, the language that's emerging for companies is responsibilities, and I notice that in the DC PR recommendations, you distinguished between shalls, which companies must do, and shoulds, which companies should do.  I would just like to say that this is really an emerging area.  It's been emerging for a while.  We know what obligations mean for states.  They're in many cases legally obliged to respond to those obligations, and not to do so is a breach of their legal obligations under international law.  I think that it's much less clear for companies, and of course, there's a horizontal application of rights, so states have obligations to place obligations on companies, and companies will have a legal framework, but it's still a very, very important emerging area.  So my main focus will be on recommendations, and I'll highlight some of the differences between our recommendations and DC PR's recommendations.  I'd start by noting that one of the big challenges we faced with this was trying to make recommendations which are globally applicable and applicable across the full set of private-sector intermediaries, and I noticed that in your recommendations you kind of distinguish in some places between the big players -- you don't quite define it like that -- and perhaps there's only one really dominant player.  That's my view.

But anyway, there's not more than four or five of them, and they do control public spaces, public communications.  We didn't do that because we wanted to be universally applicable, so we didn't have that kind of distinction.

Also your work focuses on Terms of Service, ours looked at all of the activities of online intermediaries.

Just a couple of comments on your principles -- you know, there's a huge amount of overlap and concordance between what we say and what you say.  One thing I notice is that you define a legitimate law as a law which meets international standards on freedom of expression.

Now, I think that Canada's a pretty good country with a pretty good record on human rights, but we have been held to be in breach of human rights standards on numerous occasions, often involving our laws, so many Canadian laws do not correspond to that standard.  The result of your use of the term "legitimate law" is that you're actually saying that companies are not supposed to obey illegitimate laws, and that means that tech companies, even in a place like Canada, let alone the vast majority of the world, would have to refuse to obey the law, and I don't think that's a reasonable ask to make of them, so I think that's kind of a structural issue with your recommendations.

A lot of commonalities and some differences, which I will point out as we go along.  Our recommendations cover six different areas: expanding access, net neutrality, content moderation, privacy, transparency and informed consent -- we grouped those -- and responding to state attacks on freedom of expression.  We're not going to address the first two because we're looking at platforms and those are not so relevant for them, as you asked.

Okay.  Let me start first with transparency because it's really a cross-cutting theme.  In terms of Terms of Service, many of the same things that you said.  Going beyond some of the things you said: Terms of Service are the basis of the legal relationship between platforms and their users, and as a lawyer, I understand that sometimes those things need to be rather complicated because they need to be legally sustainable or defensible or sound, so we called for companies to produce summaries or guides that would not be legally binding but that would, for a layperson, render those Terms of Service, which are legal documents, comprehensible.  We called for them to be made available in all of the languages in which companies provide services.  We had quite a long discussion about exactly what languages.  If you're providing services in a language or doing something in that language, you should also have your Terms of Service in that language.

We called for companies to help users understand the Terms of Service, by providing a help desk, for example; to consult prior to making major amendments to those Terms of Service; and, like you, not to opt users in to new services without consent.

In terms of reporting, still under the transparency hat, we talked about three types of transparency reporting, which could be in one report or in three different reports: about take-down requests, about requests for user information, and also about suo motu action, or self-initiated action, where the platform acts itself to take action, which also should be reported.

We called for strict transparency on procedures for responding to government requests.  We also called for companies, within reason, to legally challenge transparency restrictions where they were not in line with the local legal framework, including the constitutional framework, and to explore alternatives to reporting where reporting was legally restricted, for example in the form of warrant canaries -- the company publishes "we had no user requests today" every day, and when that statement disappears, that means it did have one.

Okay.  Going back to the content side of things, very topical as always, with some recent developments in that area that captured some attention here.

We basically accepted that it's up to companies as private-sector actors to set the content standards.  You know, obviously they're required to obey the law, by and large, but if they want to set content standards over and above the law, that's fine.  I was chatting with Patrick.  You know, if you want to have a -- a platform that discusses horses, you know, you can say only stuff about horses is available on this platform, and if you talk about cows, we're going to remove it or whatever.

We said, however, that those rules needed to be based on objective standards, that they should not be ideologically or politically based, and that you should consult with your users about that.

We understood at the time that this was not a fully developed standard and that a lot more needs to be done to understand what it means.  For example, we have today this announcement about the terrorism initiative, and one might argue that that was politically connected -- obviously, there was state pressure to generate that standard -- but we recognize more needs to be done on that.  That's probably the most significant recommendation we made in that area.  Obviously the open standards applied to that.  We put most of our due process stuff inside of that -- you know, the general due process standards.

We also placed an obligation on companies to apply their content rules consistently.  It's okay to have content rules, but you can't be arbitrary in the way you apply them, and that requires some attention and effort and resources in the way that those standards are applied, so, you know, if you're going to set up content standards, be prepared to apply them properly.  Yeah.

We had a little bit more on that, but I'll skip that for now.

On state attacks, I don't think we had anything really radically new.  We talked about human rights impact assessments; we talked about providing user information only where that was legally required, so not voluntarily cooperating beyond what's legally required -- no one else does more than what's legally required, so I think that's a fair standard.  We called for some pushback, sort of advocacy pushback, legal pushback -- in other words, subject to courts and legal limits -- and in extreme cases we called on companies to consider breaking the law.

I mean, obviously, that sounds a bit radical, and for an individual, when you break the law, you're subject to the law, but with companies, I think it's more of a power struggle about the law, because we're talking about a law that is illegitimate under human rights standards by definition, and breaking the law is a challenge to the state.  Of course, if you lose the case after breaking the law, you may have to pay something or do something, but it's a way of fighting back, and in extreme cases perhaps to leave.

Privacy, which I have purposefully left for the last of my comments.  My view is that the whole privacy space is a big mess, legally and practically, and I have published my views on this and will be happy to share them.  Within the privacy citadel of Europe, there's massive confusion over privacy issues, even to the point of massive confusion at the level of privacy and information commissioners about the difference between privacy and data protection.  Okay.  And that's within the strongest part of the world on this issue.

There is, obviously, a basic rift on privacy between Europe and most of the rest of the world on one side and the U.S. on the other, and there's the practical impossibility of realistically achieving informed consent, which is the foundational basis for the application of privacy rules -- I mean, it's just not going to happen.  Even if you make a user click an "I agree," stand on his head, and sing a song, the person is not going to understand the things that they're signing up for.  I arrived at my hotel a few days ago, I signed on to the Internet, and I clicked, of course.  So I think that what is required here are regulations.  This myth that we're operating under -- I just don't think we can continue with it.  The Privacy Commissioner of Canada is having a consultation on this right now.  When I gave my comments to that, I said very, very strongly, this is an area which needs regulation, and you need to be leading on that.  They were very, very happy to hear that.  I was a little uncomfortable with making them so happy, but I think it is the only way to go.

In terms of our standards, obviously, the --

>> LUCA BELLI: (Off microphone)

>> TOBY MENDEL: Yeah.  Obviously the transparency stuff: in addition to the regular transparency rules, when companies make claims about privacy, they should be clear on them and then they should deliver on those claims.  Sometimes companies make very bold claims about the privacy interests that they're protecting and then don't do that.  That's not acceptable.  We called on them to educate users about security and about the importance of privacy, and to immediately inform users in cases of a privacy breach, which hasn't always happened.

In terms of data minimization, we agreed with many of your recommendations on that.  We called for companies to limit the ways in which they process data.  Automated processing is much less of a privacy invasion, culturally or socially speaking, than human processing.

And we called for companies that use privacy as a basic business model -- you know, trading your privacy for the service -- to consider providing options for customers who would prefer to pay so they could opt out of that privacy tradeoff.

In terms of security, again, we shared many of your points.  We believe that research is a very important public interest area of privacy invasion, let's say, or a balancing of privacy, and we think that anonymizing data for research purposes is in some cases not enough: either you can't anonymize the data, or researchers need to have data that's not anonymized.  In most other areas of life, governments have ways of giving researchers access to data that is not anonymized -- which is a privacy-invading exercise -- by placing them under a sort of cone of secrecy, and we think the companies should consider the same thing.

We had quite a few recommendations on the right to be forgotten.  Basically, briefly, we called on companies to do what the European Court of Justice failed to do, which is to establish very clear standards for how they are going to balance.  I think it was a radical failure of the European Court of Justice to hand this thing over to search engines without even giving them clear direction on what the standards were; the court not having done that, the companies should.  Thank you. 

>> LUCA BELLI: Thank you very much, Toby, for giving us a lot of material to start this meeting.  As Toby was mentioning, there are a lot of initiatives like his that are trying to tackle this issue of how to concretely allow platforms and intermediaries to behave responsibly.  There are a lot of Civil Society-led initiatives, and we will also hear about some government-led initiatives during the meeting, but what we have tried to do, after having elaborated a set of recommendations last year on Terms of Service and human rights -- by the way, they can still be updated and modified, so thanks a lot for your critique, because it is something that we take on board and we could actually work on together.

Besides that, last year at the Center for Technology & Society in Rio we decided to provide some concrete evidence, some data, because there are a lot of initiatives to provide guidance but very few that actually analyze the state of human rights -- whether they are respected or not by intermediaries.  We decided to produce this booklet on Terms of Service and Human Rights, which you can freely download at internet-governance.fgv.br, and which also includes in the appendix the recommendations on Terms of Service and Human Rights that we elaborated last year.  The study is basically an analysis of 50 platforms' Terms of Service, and the principal investigator of the study is there at the end of the table, so if you want to complement my points, feel free to do it.  The basic consideration, as Toby was mentioning, is that Terms of Service are the basis of the relationship between the user and the provider, but we consider it goes a little bit farther than this.  It's not only the basis of the relationship; it's also a sort of private regulation.  
It's private ordering that puts the platform provider in the position of a sort of private sovereign: it has a quasi-legislative power, because it unilaterally defines the rules that have to be respected on the platform; a quasi-judicial power, because it can decide conflicts among users using the rules that are unilaterally defined by the platform; and also a quasi-executive power, because it can algorithmically implement the rules that are unilaterally defined by the platform.  So our main consideration was to understand how those private rules impact the human rights of the users as they are already defined by international standards.  The first part of the methodology of our study was to identify relevant human rights standards, binding standards, and also the Council of Europe Guide to Human Rights for Internet Users -- the study has been produced in partnership with the Council of Europe, so it was precisely meant to feed the Council of Europe's work and also to implement the guide.

So after having identified the main documents with which we could work, we reduced all the elements of these documents to yes-or-no questions.  We then ran the analysis with three different teams that were working independently, and we crossed the results of these three teams, which were not speaking amongst each other so that they could not influence each other, and we included only the results that were a clear no or a clear yes, with an agreement rate superior to 75%.

And then we elaborated some conclusions.  Let's start with some general conclusions, and then I will get into some more interesting details.

So the general conclusions are that the documents, the contracts, are usually quite vague and include some very technical terms, which are sometimes very difficult even for lawyers to understand.  It is not very easy to identify all the documents that regulate the relationship between the provider and the user, because the documents are not very well referenced and sometimes not easily accessible.  And not all the information that is necessary to form and express one's own consent is included in the documents.  For instance, the platform providers frequently say that they will share personal data with third parties, but without identifying who those third parties are and how the data will be utilized, so that makes it virtually impossible for a user to express informed consent on how the data will be utilized, and it turns the Terms of Service into a sort of blanket waiver on data collection and processing by the operator.

So to get into some more concrete results, we have defined the study with the same three pillars of the recommendations, meaning freedom of expression, privacy protection, and rule of law and due process, and the due process part is a very important one because I think we are the only center that has done a study tackling due process.

So starting with some considerations on freedom of expression, we have noted that 70% of the platforms allow users to report abusive content, but only 48% state that when content is removed, the content owner will be notified, so your content could be easily removed without you being aware of the reasons why or when it is removed.

And 88% of the platforms explicitly state that they can terminate accounts without justification, and that is a concern not only for freedom of expression but also for due process, because you cannot know why your account has been terminated.

With regard to anonymity and pseudonymity, which have been highlighted as enablers of freedom of expression by the latest report of the United Nations Special Rapporteur on freedom of expression, only 32% of the platforms allow users to use an anonymity feature, and, therefore, it is very hard to express yourself without other people knowing who you are, which makes it extremely hard to fully enjoy freedom of expression.

With regard to privacy, 66% of the platforms explicitly state that they will keep on tracking you on other websites and that they allow third parties to track you within their website, basically using social plug-ins like the famous Facebook or Twitter buttons.  That means that every time you are on a third-party website, Facebook or Twitter are tracking what you're doing and can see perfectly what you do.

And 80% of the platforms state that third parties will monitor your activity on their platform, without saying who those third parties are and what they are doing with your data.

And 62% of the platforms state that they will share data with third parties, again without stating who those third parties are and which kind of data will be shared.

With regard to due process -- and then I would like to conclude with this -- we have also stated clearly in the recommendations that an essential element is that an individual has to know the rules of the game.  You have to know what the rules are and, if they're modified, how they're modified.  Only 30% of the platforms commit to notifying you when the Terms of Service are modified -- and we have noted that on some platforms the Terms of Service are modified basically every week -- while 12% explicitly say they will not notify you.

Another area of concern is that 26% of the platforms include a condition that is a waiver of class actions, so if any of the conditions disrespect your rights, you cannot, by contract, raise a class action, and that is something that is illegal, I think, under almost every consumer protection law.  And finally, to conclude, 86% define a specific jurisdiction where the platform user has to go to seek redress, which is generally California, and that makes it extremely hard for a user to have access to justice.

So with this, I would like to conclude and to open the floor -- maybe we can have a couple of questions or remarks, because I know we have here in the room other people that have done very good studies and initiatives on this, so please, if you have any comments. 

>> YAN: Hello.  I'm Yan from the French Digital Council.  I have a question for the two panelists who have expressed themselves so far.  It is on how you have been taking issues of B2B relations into account in your work, because we have talked a lot about how platforms' relations with users should be regulated from the consumer side of the market, but there are also issues that are sometimes similar, notably issues of transparency with suppliers, so I would like to know whether you think there are things that are different from B2C relations that should also be addressed.  We have been working at the French Digital Council on these issues for about three years now, and a lot of entrepreneurs have contacted us saying that they had issues in the past where they didn't understand why their turnover was diminishing and how they were referenced by a platform, for instance, or issues of stability of APIs, or when platforms use open innovation models to develop services, so I would like to know how you have been working on these issues. 

>> LUCA BELLI: Maybe I'll take another two comments and then I'll hand over.  Please. 

>> ALEJANDRO PISANTY: Thank you.  My name is Alejandro Pisanty from Mexico.  I have a very general, broad comment.  These things that you are looking at are actually very deeply rooted in the way the Internet has been built.  Going back to one term, the public sphere: there has been great hope that the Internet would provide this new great space for deliberation for the whole society, but one particular point is that we have built it on private property.  Almost all of the Internet works on private property, whether you look at the cables, the servers, the software and the services, or the hosting of your information.

The alternative is worse. 


I mean, you can think of the alternative and see countries that are actually trying to build that alternative, so bringing in the state is something that will have to be done with extreme care.

There is the possibility of solving parts of the problem you are dealing with in the coalition by agreement, by a covenant.  One could take the GCIG concept of a new social compact, but one of the risks there is that you will actually only be protecting the people who sort of live in the good countries, and the bad countries will have even more incentives to do their stuff.  Thank you. 

>> LUCA BELLI: I'll take another comment. 

>> BERTRAND DE LA CHAPELLE: I'm Bertrand de La Chapelle, the executive director of Internet & Jurisdiction.  I have just one question and then asking for a clarification on something you said, Luca.  You mentioned the situation where an account is taken down without notification of the user.  There's an ambiguity on the notion of notification because it covers two dimensions, one the actual notification that the decision has been taken, but there's also the notification of why the decision has been taken.  Did you explore the distinction between the two? 

>> LUCA BELLI: Maybe you can reply directly to this and then take another comment.  We analyzed when a notification was provided on the removal of content or on the erasure of the account.  When the Terms of Service explicitly state there will be no notification, it means there will also be no motivation, so for sure a notification should have an indication of the content and an indication of the reason, and an indication --

>> BERTRAND DE LA CHAPELLE: So you actually meant --

>> LUCA BELLI: No notification at all.  Another here. 

>> AUDIENCE MEMBER: I wanted to talk about privacy, because you both talked about privacy breaches by these companies and also the need for them to be transparent about what they capture, what they store, and what they share with others, and also that any breaches of security should be immediately publicized rather than covered up, but I just didn't understand whether you are arguing against their business model as such, and what the alternative to this is, because, you know, I quite like this German recommendation.

I don't know if there are any German speakers, so sorry about that.  (Speaking non-English language) The idea is that they should only capture what they need, and the burden of showing what they need should fall on them, and also that they should be liable for any security breaches -- so if you hold data which you don't need, or can't demonstrate that you need, and somebody sues, you should be liable.  Is that what you are talking about when you talk about recommendations in terms of privacy? 

>> TOBY MENDEL: So maybe I'll start with that one.  I didn't go into all of the things -- especially, I didn't delve too much into the ones which are similar between our two sets of principles, focusing instead on where we were being more innovative.  So we don't go against the business model, but we do otherwise generally support this principle: if your business model relies on reselling private data, then it is an operational need of your business and you can justify it in that way, but if you're collecting data just randomly, in case you may possibly need it in the future, or just because you can't be bothered to tailor your collection tools, I think you would be in breach of this rule, and we talk about that -- we have a whole section on data minimization.

Just to comment on what came up earlier in this meeting: it's not just that the Internet is built on private property rather than a public space model; it's built on a privacy-invading model, in which, for a lot of Internet services, we trade our privacy for the service, and that's a model.  It has some benefits -- you know, my kids probably like it because they can get stuff for free -- but it has some huge costs.

There was never any public debate about that.  We're probably irrevocably locked into it now, but we got there by corporate decisions that involved no public debate, and there would have been other options, so for me that's sort of the biggest public space issue with that.

On the B2B question, we do call for transparency, in the privacy space especially -- transparency about the types of third parties that platforms will share your information with and how the information may be used.  We actually had quite an intense internal debate about that: some of my staff felt that that was not far enough and that we should have said much stronger and more constraining things, while I and some others felt that we weren't quite ready to go there yet, but we definitely realize it is a whole area where more needs to be done.  And then, finally, on notice: to be honest, it's interesting looking at your statistics.  I'm shocked at the low level of commitment to that and that there are groups that refuse it.  I don't know whether it's a resource constraint issue.  In our recommendations, we're fairly sensitive to resource constraints and obviously technological constraints and legal constraints -- companies have to respond to those things -- but absent those, I really can't see any reason whatsoever why you wouldn't provide both kinds of notice.  The first kind of notice, for termination of your service, is perhaps unnecessary, because your service isn't there, so you kind of know that it happened, but obviously you want reasons. 

>> LUCA BELLI: Just to provide two quick replies, to Yan's comment and to Alejandro's.  To Yan's comment, we haven't analyzed the B2B relationships because the goal was to analyze the impact on individuals' human rights.  And to build on what Alejandro was saying, yes, for sure, the Internet has been built on private relationships, but there is a big "but."  We all know -- and it has also been clarified by the Human Rights Committee in its General Comment No. 31 of 2004 -- that states have an obligation not only to ensure and protect human rights but also to protect individuals against violations by other individuals or other private entities.  This has been made very clear by the Guide to Human Rights for Internet Users of the Council of Europe, which was one of the main documents we analyzed.  It does not only state that states in Europe have an obligation to protect individuals against violations of their human rights by other individuals or private entities; it also precisely states that human rights prevail over Terms of Service, and that is a very good point on which this work has been built.

So having said that, I just -- I see there are two more comments.  I will ask you to be very, very quick so then we can go to the second segment.

>> CHARLIE: Hello.  I'm Charlie from the French Digital Council.  It was just to ask for clarification.  You mentioned some kind of tradeoff on privacy, and I understood that consumers or users should be able to pay in order not to make this privacy tradeoff.  Could you explain more on this?  Because it feels like some kind of slippery slope, like how should you --

>> (Off microphone)

>> CHARLIE: Like should you pay for human rights, so if you could explain more on this. 


And my second request for clarification is about the fact that the private sector should be legitimate in setting their own standards, as long as they are not politically or ideologically based, but I can't find any standards which would not be politically based, actually.  And also, what do you think about the idea of community standards that would be constructed by the whole community on platforms?  So thank you. 

>> BERIN SZOKA:  Berin Szoka, TechFreedom.  Since the Dynamic Coalition is about platform responsibility, don't platforms have a responsibility to stay in business, to provide better services and tools to their users?  And, you know, if Twitter took all of your suggestions, especially on things like data minimization and providing choice to users and cutting back on advertising, they might very well simply go out of business.  So this is all very fine talk, but if at the end of the day you wind up losing platforms that are actually -- in the case of Twitter -- more protective of speech, and you have one less option in the marketplace, who's really won? 

So high level, how do you think about the responsibility of platforms to invest and innovate and thrive and provide choices to users, and how do you balance those with all the things that you're talking about in here that come with real economic costs?  And I don't just mean notification -- that's a relatively marginal cost, even though it's true that the teams at companies that do content moderation are extremely expensive.  I'm talking about things like justification; those are expensive.  I'm talking about things like data minimization and things that cut to the heart of the business models.  It's easy, as we say in the U.S., to Monday-morning quarterback, to second-guess the decisions that companies make, but quite frankly, you're not in the boardroom.  Look at Yahoo -- Yahoo went out of business, right?  They're selling themselves to Verizon, and things might have been different if they had been more bold in experimenting, and it may have been with things that made people in this room uncomfortable, but maybe they would have been a third, stronger player out there in the search market.  So I'm just curious to hear your thoughts on those balancing issues. 

>> TOBY MENDEL: On that one, we're not in the boardroom, but on our panel we had quite a few boardroom people.  I didn't keep saying it as I presented the standards, but most of them which have cost implications say "subject to reasonable resource constraints," and none of this -- just to reiterate, and I already kind of said it in response to Barbora's question -- none of this cuts to the business model.  We're not saying minimize data collection if that data is part of your business model.  So if you're reselling data and that's your business model, go ahead and do that, but don't collect stuff that you don't need beyond that need.  We accept that basic model as the way things are -- I mean, I made a comment about that as the public space and the direction we've gone that wasn't discussed publicly.  I think that is a huge problem, but we are in that space now and the businesses are operating that way and that's fine, so I don't think that we're posing a threat to the business model.  Read our recommendations, and if you have specific business comments on them, I'd be very happy to receive them.

On the opt-out and paying for human rights, I mean, there is a kind of contradiction here, because you have accepted, okay, that when you sign up for Facebook or Twitter or whatever -- you know, I don't know what I gave away when I signed up for my Internet in my hotel, but something, probably. 


My first born or something.  I have no idea what it was.  They'll claim it when I leave the hotel: can we have your child?  No, okay, I'm being a bit facetious, but the theory here is that you have consented, okay, and when you consent to the use of your privacy, it is not a breach of your human rights, so you're not paying for your human rights.

What we're saying there is, instead of having only one way of, let's say, using Facebook or being a user of Facebook, offer another option.  One is you can give them your first born, and the other is you can say, I'm going to pay, you know, $10 a month or whatever it would be for this service, so you're given a choice for the service.  That was the idea there.

And in terms of objective standards, I mean, we wanted to push that idea out there because we think that companies have, you know, free choice as companies to set the standards on which they operate, but because of the freedom of expression implications, not unlimited free choice.  There needs to be some proper basis -- and I recognize that's a completely uninformative term -- some basis for the content rules.  Family values, okay, as undefined as that is -- that was the example we gave.  And I don't think that's political; I think it's a social choice.  It can become political, obviously, but I don't think it's inherently political.  You know, if I want to set up a discussion forum to talk about horses, okay, and I'm not going to let people talk about cows, that is not a political choice in my opinion; it's a fair thing to do.

So -- but we need to go a lot further in terms of defining it, and finally, I don't think that community values or community standards are in any way a productive way to go with this.  I don't think it's going to bring us anywhere, you know -- it's a possible quagmire. 

>> LUCA BELLI: One brief comment and then we'll pass to the second segment because some speakers have to go away.

There is another option I have to mention, which is to utilize technical standards, not only contracts, and there are some very good examples: look at the My Data movement, or look at the Solid project by Tim Berners-Lee.  These are technical standards that allow the user to decide which type of his data can be used, so I think there are also technical solutions that should be taken into consideration.

I would like to ask Megan to start the second segment.  Thank you, Megan. 

>> MEGAN RICHARDS: Thank you, Luca.  Am I on?  Nice to be here, of course, and a very interesting discussion.  A couple of points were raised already in the first session, and I don't want to belabor them, but I think one of the reasons this is such a complex and difficult issue to address is exactly because these are private-sector actors, primarily, and people who participate in these activities do so voluntarily.  There's no obligation to do this.  There are other fora to express yourself and exercise freedom of expression.

There's an element of caveat emptor.  On the other hand, we absolutely insist in Europe, and in many other jurisdictions, on strong and secure consumer protection provisions for anyone using a service or a good or whatever it might be, and those can include things like human rights protections, et cetera.

So this is what makes the whole issue so very complex.

On the other hand, the private sector has an interest too in making sure that its consumers stay with it, that they have trust in their products, and, therefore, I think this is why we have a very interesting approach and a very interesting issue here.  We have to look at the private sector's interest and their commitment; the government's interest and their commitment, and I see I'm the only one speaking from a governmental point of view.  Let's put it that way.  Everyone else is on the side of the angels and the good. 


So we have to really look at how this relates.  Governments as well have an interest and obligation of protecting their citizens, et cetera, et cetera.

What I thought I would do in this context is briefly explain where we are in Europe with respect to platform responsibility, which is a huge area to address.  We have, for example, the eCommerce directive, which has existed since 2000.  Of course, things have changed very much in the technical sphere in 15 years.  Nonetheless, Article 15 of the eCommerce directive says quite clearly that there is no general obligation to monitor the content if you're an information service provider, and then it goes on to a whole series of provisions -- I'm using that as a gross simplification of what the eCommerce directive says -- but it puts it into the context that these are platforms, there's no legal obligation to monitor, and Member States cannot impose special monitoring obligations on platforms.

This is quite different from what we have in, for example, the audiovisual media services directive, where there's an editorial obligation and activity, and these services have a real obligation to make sure that what they put out has been vetted.  And, again, if you want to have a newspaper that only looks after horses and no dogs, or only chickens and no turkeys, I don't know. 


This is governed by different rules, so you have quite different rules and regulations in areas that are perhaps -- I don't like to use the word "converging" -- but that have many similarities, and the similarities are becoming greater and greater as we go forward.  So this is, again, something that makes the whole issue rather complex.

So one of the things we have done in Europe is -- well, let me start again.  The other element that I wanted to go back to, relating to consumer protection, is that there is a whole series of competition laws in Europe, and again, I'm speaking about one specific jurisdiction, and I think jurisdictional issues are going to be raised in this context.  So if you have abuse of a dominant position, then there are certain competition law provisions that allow you to take certain actions, but let's go back to the other issues.

On the digital single market approach, we have a whole series of initiatives looking at how to make a digital single market in Europe.  One of the things to be looked at in that context was the role of online platforms.  What were online platforms doing?  Was there a dominance effect, was there an abuse of dominance, how were the relations B2B, how were the relations B2C, and what were the implications for the digital market?

As many of you in the room know, this was immediately conceived as an attack against American platforms, which are used even more extensively in Europe than they are in America, believe it or not.  I think something like 90% of searches in Europe are on Google; whereas, in the United States they're only about 70%, so you see there's a real difference. 

A couple of things.  In that online platforms assessment and review, there was no decision at the end to go to regulation.  It's extremely complex.  There are many things.  Everyone said the eCommerce directive is there; it exists.  We're looking at how and what and whether some things need to be addressed, but one thing that was identified in the online platform assessment was the B2B relationship.  This was identified as something that had the potential to be looked at in much more detail, maybe with better provisions.

Now, in theory, businesses are independent and have the capacity to negotiate and contract for themselves; they don't need consumer protection per se, even though some of them are consumers in different ways, so this is something that we're still looking at.  That's one thing.

Then the other is that we have a series of codes of conduct from various companies, most of which are online platforms.  One relates to protection of children online, so we have a code of conduct for online platforms and other Internet companies which relates to illegal child abuse pictures or statements or websites.  That's one aspect.  No one disagrees with that.

Then, more recently, something called the European Internet Forum was established, under which a voluntary code of conduct was developed with Commissioner Avramopoulos, who's responsible for cybercrime and things like that, and this online platform code of conduct was aimed at taking down illegal hate speech online.  This is because in Europe we have very clear -- and I know the Americans think very restrictive -- provisions on hate speech.  Just as an anecdote (Speaking non-English language), which you will probably appreciate: in the hearings on the IANA transition in the U.S. Senate on the 15th of September, Ted Cruz ranted and raved against the European hate speech provisions because he said they were a violation of the First Amendment of the U.S. Bill of Rights.  They restricted freedom of expression of Americans.

Now, I know he did it entirely for political reasons, but if you have that kind of approach, you have -- and I'm looking at Bertrand now -- the jurisdictional cases, which are very complex in the Internet world.  So what happened in this case of the code of conduct?  A number of online platforms agreed to get together and develop this code of conduct on countering illegal hate speech.  They're Facebook, Google, YouTube, and Twitter, and I'm going to read a little bit -- I don't like to read usually, but to make sure it's absolutely correct: freedom of expression is not limited; it is only illegal hate speech which is limited, and those that have signed up so far have agreed that, to the extent they are informed of illegal hate speech online, they will take specific action to make sure that those who have posted it will be informed and that they will take down that content.

So even though the European Court of Human Rights has said that the right to freedom of expression can include speech that offends, shocks, or disturbs the state or any sector of the population, it does not extend to protection of expressions that incite hatred.  Now, one man's incitement of hatred is another man's freedom of expression, particularly for Ted Cruz, but --


 -- so that's another case.  So on the 31st of May, that's when this code of conduct was established, and the commitments are to combat the spread of illegal hate speech online in Europe.

This is particularly important, I think, given the current political situation, not just in Europe but in the United States, in other countries.  We've seen a rise of populism, we've seen a rise of unfortunate comments which may be racist or sexist or xenophobic, whatever they may be, so I think that's one of the reasons why there was this particular concern, and in Europe as well, we have had, of course, a huge issue relating to refugees and migrants, which, of course, has also started some kinds of antagonistic hate speech online.  Yeah, don't worry, I'm coming to the very end.

So by signing the code of conduct, these companies commit to continuing their efforts to tackle illegal hate speech online.  They have a series of internal procedures, discussed with all the parties, and the position really is to ensure, in line with the framework decision on racism and xenophobia, that the majority of valid notifications for removal of illegal hate speech are acted on in less than 24 hours and that the content is removed or access to it disabled, if necessary.

So that's really where we are on that.  That's just one element.  Thanks. 

>> LUCA BELLI: Thanks, Megan.  I would ask Wolfgang -- I know he's leading -- he's chairing the Council of Europe group of specialists on intermediaries -- to provide us some insight on what the Council of Europe is also doing on this.  Thank you. 

>> WOLFGANG SCHULZ: So thanks so much for including us.  As you said, I will not talk about my research on this issue but a little bit about our deliberations in the Committee of Experts on Internet Intermediaries set up by the Council of Europe, and it's great to share that, especially with colleagues like Karmen, who is on the same committee, so you have access to all our ideas.

We are at a preliminary stage, so what I will talk about is not something we have agreed on right now but just ideas we have tossed around.  Our mandate is twofold.  The first task is to prepare a draft recommendation by the Committee of Ministers on Internet intermediaries, and the second is to prepare a study on the human rights dimensions of automated data processing techniques, especially algorithms, and possible regulatory implications.  It's obvious there are a lot of interconnections between those two tasks, but they are different products we are working on here.

I will briefly talk about some issues we have discussed when it comes to the first one, the draft recommendation on Internet intermediaries.  Our idea right now is that we address both the states, on the basis of their duty, and the intermediaries, on the basis of their responsibility -- a term that is not so easy to understand in human rights methodology -- but nevertheless, that's our starting point, because we are basing our work on the Council of Europe's documents, and they clearly state the responsibility of business entities.

The first point I would like to bring up when it comes to the duty of states responds to things we can see in some countries of Europe right now: informal requests by the government to somehow clean the Internet of things that might offend the majority of the population.  In Germany, for example, it's not so clear, when the minister of justice visits Facebook, whether it's just about hate speech under German regulation, as Megan pointed out, or whether it's about "we want to get rid of this hatred in the net," which is a completely different thing.  I think it's likely that we come up with a recommendation that there has to be a legal basis and that there should be no informal push by government to clean up the Internet, which I think is important given the fact that some governments, even in Europe, tend to go there.

I attended a conference in Germany recently, and one of the state prime ministers of Germany made this argument, saying Trump was possible because of Facebook; we have these right-wing parties in Germany, so we have to regulate Facebook to make sure that everything stays stable.  I think that's a very, very slippery slope, and this time really downwards, not upwards.  And that's not the general position in Germany -- I don't want to give the impression that everything is --


 -- but sometimes you have the feeling that it's this simple politician's syllogism that's working here.  Second point, in connection with the first one: there is more and more, on a European level but also on a national level, a trend to enforce self-regulation or to go to co-regulation.  I personally have done a lot of studies on co-regulation and self-regulation, and I think that this instrument can be very powerful; it can be very good as regards human rights standards because it gives the industry leeway in how to react to demands.  But nevertheless, there is an inherent risk, and the risk is a kind of blurring of responsibility.

And the second risk is that something you don't want governments to impose formally is done informally -- for example, monitoring of content.  So a recommendation is likely to be that we say that when Member States consider a self- or co-regulatory arrangement, they need to make sure there is no monitoring of content unless somebody has made the provider aware that there might be a problem.

And not surprisingly, because everything is based on Article 10 and other articles of the European Convention on Human Rights, we insist not only on a legal basis but also on the proportionality of the measures, and -- as was mentioned already -- on making sure that when there is a content removal request, it is really based on an infringement of law and not just on something that is offensive or shocking and things like that.

Seems obvious, but nevertheless, as political discussions are, I think it's important to state that anyway.

The last point as regards the duties of the state is transparency of take-down requests.  We believe that from a human rights perspective this is extremely important.  We know from the Ranking Digital Rights project that there has been, at least for some private entities, progress as regards transparency about take-downs, but with the states it's still a problem, and that's why we at least consider including recommendations as regards this kind of transparency about take-down requests as well.

And very briefly, on the responsibility of the intermediaries: one idea we are discussing is that we see it as a responsibility of an intermediary to engage in human rights impact assessments and to set up an internal procedure that makes sure that everything, including codes of conduct, is really in line with that, and not only for the products they have but also for the development of new services.

Secondly, a thing we have discussed is that Internet intermediaries should clearly and transparently inform their users about the operation of automated data processing.  That's an idea that is already in the General Data Protection Regulation -- a regulation I'm not really so fond of, like Toby said; in many ways it prolongs an old concept into a new age -- but there are some interesting things in it, and one is this transparency requirement.  That transparency requirement formally covers only personal data, and we want to expand it, because making clear to the users what the operation behind the processing actually is matters.

And finally, we talked a lot in our last session about access to an effective remedy.  That's a very important point.  Complaint mechanisms must comply with safeguards and include the right to be heard and things like that.  We elaborate on that, and that's very likely to be in our draft recommendation as well.

Finally, three cross-cutting issues and then it's done.  First of all, we talked in our first session, at least briefly, about what an intermediary is, and we believe that a kind of functional approach is called for, because it's not really feasible to say Facebook is an intermediary -- it does so many different things.  It can fall into the category of editing if it provides services in a specific way under the AVMS directive or a future AVMS directive or whatever, but other things are definitely not this kind of editorial offer, so it's very dangerous to take the intermediary types we have right now and try to associate them with specific forms of regulation, because most of them are hybrids.  So our suggestion is that Member States recognize that and apply a functional approach when a kind of intermediary function is in question; then the recommendation that we have drafted, or will be drafting, shall apply.

Toby already talked about this responsibility aspect and the shalls and shoulds.  That's something that we will consider deeply in our next meeting, I think; it has not been decided right now how we deal with that.  I just wanted to make clear that we are aware of these problems and that we have a similar discussion to the one you reflected from your efforts.

And my last point, something I believe we will discuss in the next session as well, is whether we want to have these general approaches, as you decided to apply -- to just come up with recommendations for all intermediaries -- or whether we consider that there are different types of intermediaries.  Something that has been discussed in many panels here at the IGF is that some intermediaries have a quasi-state function in a way and maybe should be bound to human rights in a different way than others.  Take the horses example, or cats and dogs, that is used very often, or the example of a Catholic search engine: should they, if everybody knows that they discriminate in a specific way, be forced to be neutral?  I don't see anything that makes that a requirement.  And, as was mentioned before, there's always the risk that a very, very tough set of requirements for intermediaries leads to advantages for the big market players, because it's costly to implement.  That's a tradeoff that we have to be aware of.  Thanks. 

>> LUCA BELLI: Thank you very much, Wolfgang.  And just in regard to the shall and should, the recommendations we stated last year clearly define this approach: "shall" is something already foreseen by international standards, and "should" is something that should be done and that some intermediaries already do. 

>> WOLFGANG SCHULZ: We will definitely take your recommendations into account and have done already.  

>> LUCA BELLI: Excellent.  Karmen, you can provide us your insight and more concrete detail based on your experience. 

>> KARMEN TURK:  Hi, everyone.  My name is Karmen.  I'm on the same committee as Wolfgang and Bertrand here, and Luca wrote me that maybe I should discuss some of the cases the courts have decided recently and what the new initiatives from the Council of Europe are, so I picked some, and I'll try to give you examples of those just to get the discussion going later on, because the Court of Human Rights and the Court of Justice have been quite active, starting from linking matters and so forth.  But now I'll take the Court of Human Rights.  The Court of Human Rights had not really dealt with Internet issues until 2012.  The first case they had was against Turkey, and after that, the next one was actually about intermediaries and liability for user-generated content.

Before that, I think, just as we were discussing, since we had the eCommerce directive, we were all feeling quite safe and sound in Europe, because we thought we already had the standard.  We had two very underlying ideas: that there is no general obligation to monitor, as was explained already, and that liability cannot follow if the service of the company -- the intermediary service -- is neutral and automatic and no knowledge can be attributed, so we would see no liability following, whatever the users would be doing on the platform.

In 2015, the Grand Chamber of the Court of Human Rights, comprising 17 of the highest human rights judges in Europe, made a decision a bit the other way around.  What they decided was that if on the platform there are users who resort to speech that is clearly unlawful, then the knowledge condition doesn't really matter anymore, so the platform would be obliged, on its own initiative, to remove that content in order to avoid liability.  That was the Delfi vs. Estonia decision.

Six months later, the Court of Human Rights did have an option for a redo.  That was Index.hu vs. Hungary.  The judges on the panel did try to mitigate what was said in Delfi, and they said that even though that decision exists, it would still be an extraordinary circumstance; in normal circumstances, where it's not about hate speech, not about direct incitement to hatred, not about direct threats to physical integrity, the normal notice-and-action procedure would suffice in order to perform diligently on the market.  So they did try to mitigate, and I think, as a second try, it was better than the first one, so we are looking at the Court of Human Rights hoping that the third one would take even more into account how the Internet really works.  Probably that's why the Council of Europe convened a committee to try to find a way for those two decisions to result in a recommendation for the states that would actually be doable for the companies without hiring hundreds and hundreds, or for some thousands, of people.  So that's the Court of Human Rights perspective.  That's 47 Member States.

With the 28 Member States of the European Union, we're still kind of coping well.  We have the eCommerce directive, so we are keeping those two principles: no knowledge, no liability, and no obligation to monitor all the information stored.  But the European Union has some initiatives that are of concern to many human rights lawyers, at least from the point of view of freedom of expression and freedom of association and all those other human rights that users would appreciate very highly, so I'll take a few.  Maybe you took note of the code of conduct for hate speech.  I know that Barbora has left, I think; I know from her point of view the process was quite questionable, because it mandates responsible behavior from online platforms to protect the core values.  Of course, this raises a number of questions: whose core values, which core values, what is a core value, is it freedom of expression or the right to privacy, and who is to make that decision?  How is the platform supposed to be responsible for everyone's core values?  It can't be done.

The next initiative, which was also mentioned, is the proposal to amend the audiovisual media services directive.  Generally, in audiovisual media services we are talking about the person who is the owner of the content, the author of the content, so it's a totally different thing from being an intermediary, which means you are not creating, you are just sharing or making available.  However, the definition in the proposal says something different.  It talks about video-sharing platforms, and the definition says that, well, you are a video-sharing platform if you are storing large amounts of user-generated content.  And, again, for those video-sharing platforms, there are separate obligations in the proposal.  The most important of them is that a video-sharing platform in specific is obliged to introduce measures to ensure protection of all citizens from illegal content.

So it will be fun to see how to protect every single citizen -- but if I'm not a citizen, then apparently I'm not protected, so it's quite interesting from that perspective.

Migrants, for example -- there are a number of interesting questions, I think, that you could think of.

And the final initiative, one that has made quite a number of academics quite active, is the proposal for a new copyright directive that was just presented by the Commission to Parliament.  There was actually a meeting taking place in Brussels on the 6th of December, so some of us who were not in Mexico were able to go.  The main problem with the copyright directive is, as was rightly said at the start, that the basis of everything we think in Europe about intermediaries is Article 15 of the eCommerce Directive: there is no general monitoring obligation; the state cannot oblige you to monitor.  The Court of Justice decisions have followed this to a degree, sometimes better, sometimes not at all, so let's keep that one.  The directive tries to circumvent it.  Whether that was the idea behind it from the start or not, it does circumvent it, that's for sure, because it imposes two obligations on everyone who is storing large quantities of user-generated content, and this, of course, just with regard to copyrighted works.  The first obligation would be to put in place appropriate measures and appropriate technologies to make sure there is no copyright infringement taking place on the platform, and the second, to conclude agreements with right-holders that would also force reporting of the misuse cases, and I'm not sure how that would even square with the General Data Protection Regulation.

The examples listed for those measures and technologies include effective content recognition technologies.  But we don't really have any content recognition technology that is able to make sure that something is not fair use, not a use by a library, not fair comment, not a parody of a copyrighted work, so it can't be done.  And if we take into account that every such technology would cut very deeply into privacy issues -- we are talking about deep packet inspection and so forth -- how can it be done in accordance with all the other fields of law around it and, of course, with the eCommerce Directive, which is still in place?  If the new copyright directive goes into force like this, the eCommerce Directive would no longer be applicable to the Intellectual Property field, at least.  I think the idea behind the eCommerce Directive was to try to be horizontal -- whether it has succeeded or not is another question, but trying to be is something we should cherish.  So I would like to end there and open it up for further discussion with everyone.

>> LUCA BELLI: Thanks a lot, Karmen, for bringing this feedback on the jurisprudence and the latest evolution with regard to the proposals.  I would like to ask Ilana to add a few more elements so that we have the last few minutes for debate among us -- describing the work Ranking Digital Rights has done so far, and particularly your latest survey.  Thank you. 

>> ILANA ULLMAN: So my name is Ilana Ullman.  I'm with the Ranking Digital Rights project, and we work with global standards on what we should expect from companies regarding free expression and privacy.  We produce a Corporate Accountability Index, and I have copies of it here if anyone wants one.  We have English and Spanish, and you can check rankingdigitalrights.org to see the documentation that goes along with it.

We produced the first index in 2015.  The next one will come out this spring, in 2017, and will evaluate 22 companies, six more than the first report, and another addition is that it's going to look at what we call mobile ecosystems in addition to Internet and telecommunications companies.

So the aim of projects such as ours is to identify gaps, to identify where companies could be doing better, for users, for advocates, for policymakers, for investors, and also for the companies themselves to identify changes that they could make to improve.

And so we also seek to identify regulatory elements that might be preventing companies from providing additional disclosures and scoring higher, and while this doesn't affect the companies' scores, it does provide additional context, which we include in the report.

So how we do this is we evaluate companies based on their public commitments and disclosures, since part of our project is also to encourage the companies to be more transparent about their policies.

So we look at the Terms of Service as well as other public documents, such as privacy policies, transparency reports, help center documentation, and more.  Our methodology, which, as I mentioned, you can read in full on our website, includes indicators grouped into three categories: governance, which includes indicators on grievance and remedy mechanisms and human rights impact assessments, among others; and separate categories for privacy and for freedom of expression.  There are different elements within each indicator, so for the earlier example about notification of content restrictions, we look at: was the user who posted the content notified, was there a justification, and is a user who is trying to access the content, other than the one who posted it, notified that they're attempting to access content that's been blocked? 

So we've also made some changes to the methodology since last year's index, so we have some new indicators this time around, including one on network shutdowns, which has been a popular topic here at IGF, as well as disclosure around data breaches, among others.

So just to quickly summarize a few things that we learned from the first installment: unfortunately, the big takeaway was that there are no real winners.  The highest score that we found was 65%, and across the board, there was significant room for improvement among all companies.  We found that users are really left in the dark about many company practices that affect free expression and privacy, and for us, even with a team of dedicated researchers working on this project, it was often really difficult to find documentation for a lot of the indicators we were looking for, if not impossible for certain companies.

However, there was also some good news.  We found that every company does at least something well, even the ones that scored less than 25% overall.  There was usually still at least one area in which comparatively they actually ended up doing a bit better than some of the others.

And, as was also mentioned before, transparency reporting is becoming more of a standard practice.

However, no company that we evaluated reports on content takedowns that result from Terms of Service violations.  That's still an area with significant room for improvement, and it would be really great to see companies come forward on that and be leaders in that regard.

We also found that all companies have room to improve their performance in the short- to medium-term, even without any legal or regulatory changes in their operating contexts.

So if you're wondering whether any of the changes we identified in the last report have been made, or how any of the new companies fare, just stay tuned for our next index release, which will be in March 2017, right before RightsCon, to find out the answers to that and more. 

>> LUCA BELLI: Thank you.  I see there is a comment. 

>> EMMA LLANSO: Hi.  My name is Emma Llanso.  I'm with the Center for Democracy and Technology, and I have a couple of questions for the panel about developments that have happened in the platform responsibility space just this past week.  First, on the hate speech code of conduct: earlier this week the EU Internet Forum released an initial report on their survey of how companies participating in this code have been responding to the content flags they received.  They tracked 600 flags submitted to the companies by about ten or so nonprofit organizations, to measure the rate at which the companies took down the flagged content, and they found takedown rates ranging from, I think, about 5% for one NGO to 70% for another.  There was a fair amount of variability, and not all the content came down, which is interesting, and I think there's a lot more to dig into there.  But then Commissioner Jourova made statements about how this clearly points to the need for companies to do more, which seemed to presume that all of the material flagged under this process was, of course, illegal.  So this is the dynamic I'd be very interested in others' thoughts on: on the one hand you have NGOs flagging content as illegal; you have private companies assessing whether they think the content violates their Terms of Service or possibly the law; and a government official saying this was illegal content.  No courts are involved, so where does this certainty that the content is illegal come from?  This system seems very different from what we typically understand to be the way to declare content illegal.

And then, as I think Toby mentioned, you are probably aware of an agreement among some of these same Internet companies to start sharing hashes of extreme and egregious violent terrorist propaganda amongst themselves.  My organization is very concerned about this agreement because of the way it can act as a centralized point of potential censorship of material across these platforms, which take up such a big share of the environment for speech online.

So I'm curious about the panel's thoughts on the responsibility of platforms in these kinds of projects as well: what should platforms be thinking about when they're looking at, on the one hand, how to improve their content moderation practices and address speech that has raised many concerns, particularly in Europe, and, on the other, the creation of these new centralized systems and processes that could absolutely be abused in any number of ways? 

>> LUCA BELLI: Thank you for the comment, and I take just two other comments or questions, and then we can have a quick round of replies. 

>> MARTA CAPELO: Hello.  Good morning.  My name is Marta Capelo. I represent the European Telecom Network Operators Association.  I have two short comments.  The core business model of telecoms is not based on data, and I'm not here to defend the platforms whose core business model is based on data, but I would like to highlight that sometimes in these debates we have a false assumption.  As some of the panelists have said, this is a private-sector environment; nothing is for free.  You always pay -- you pay with your data.

So actually, the opt-out option that you were referring to would, for me at least as a consumer, be very interesting.

And we need to have more clarity in Terms of Service, of course, but we need to understand that some restrictions on processing will also impact business models and, therefore, consumer choice and the availability of new services, so for us this is a very important balance that we need to strike.

My second point is about the focus on transparency, which I've seen mentioned by most people around the table; that is actually what we need to work on.

Consent is a privacy matter, but maybe it's not the right tool, because we see that the existing tools do not work.

In Europe, we are now also reviewing the Consumer Rights Directive and the Unfair Commercial Practices Directive, which contain measures on transparency.  Maybe in this exercise -- this is a suggestion for the Commission -- those could be adapted to the online world instead of doing sector-specific regulation on platforms or other sectors.  So for us, horizontal consumer protection rules focused on transparency are really the way to go.  Thanks. 

>> LUCA BELLI: Thanks a lot.  I think (Off microphone) has a comment, and then -- a very quick one from Bertrand. 

>> BRUNO BIONI: Hello, everyone.  I'm Bruno Bioni, legal advisor for the Brazilian Network Information Center, NIC.br, but I'm speaking for myself on this issue.  I would like to take a step back, because I'm a little skeptical about how we are framing this debate and building a normative framework aligning fundamental rights implementation with accountability.  I think we should be a little bit clearer about what this means.  The accountability reports and the CTS research project are great -- they are a first step -- but they address one aspect of human rights, which is reducing the asymmetry of information.  This debate is also about reducing the asymmetry of power.  That is what personal data protection is about, and transparency is meaningless if you are not able to empower citizens and consumers to have at least a real, true choice.  Just to sum up and raise some examples of dealing with that: consider the recent update to WhatsApp's terms of service, or the update to Uber's terms.  Okay, I was informed -- I know how they will monetize my data -- but at the end of the day, I didn't have any choice, you know.  I was not empowered to push back, and the WhatsApp case is a really good one because they shifted the business model only after they had captured a really large audience.  So I would like to hear some thoughts with regard to that issue.  

>> LUCA BELLI: Thanks.  We only have one minute to go, so I would ask --


>> (Off microphone)

>> LUCA BELLI: Yes, so Megan to you.

>> MEGAN RICHARDS: Just a very brief comment, and it relates to the comment that you made.  The report I saw was from a German NGO on the German hate speech cases, but maybe you're referring to something else.  There's a whole review of how the system has worked going on now, and it's supposed to be reported on, I think, even this week, but I --

>> (Off microphone)

>> MEGAN RICHARDS: On the 6th of December.  So anyway, I will take your comment back to my colleagues, because I don't actually deal with this directly, but I know what you're talking about, and I'll mention it to them.  Thanks. 

>> LUCA BELLI: So I will ask the panelists to give a tweet-like comment if they want to close the debate, and then we will -- Toby, did you want to say something?  (Off microphone) Do you want to say something?  Okay.  Please. 

>> AUDIENCE MEMBER: Thank you, Bertrand, because it was a very good point you were making about hate speech.  I can give an answer -- this is how I explained it to my clients after the Court of Human Rights' decision, because that case was about 20 comments.  One of the comments was "bastards" with an exclamation mark, and the Court of Human Rights said that, of course, all of these comments were on their face clearly unlawful.  So "bastards" -- that's the answer: everything that's not really nice and pink is probably hate speech. 

>> Very, very brief statement.  We intend to have a public consultation on our recommendations, so for everything that could not be said today due to time restrictions, please let us know.  Thanks. 

>> LUCA BELLI: And let me give a final reply also to what Emma was saying.  In the recommendations we made last year, we said precisely that in case of a takedown, the notification should contain not only the ID of the content and the justification for why the content has been taken down, but also a mechanism for redress should the content taken down not be illegal.  We also precisely stated that private mechanisms should be allowed in order to take down content, but they should not be the only way of having redress for human rights violations; they should be complementary to courts.  That's a point we should consider. 

>> TOBY MENDEL: To reiterate, don't make me take these back to Canada, please. 

>> LUCA BELLI: Thank you for the wonderful comments and great debate.  See you later.

(Session concluded at 1:07 p.m.)