The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>> BILL WOODCOCK: Just to let you know, the session time has begun at this point. We're waiting to get our full panel seated. We'll be another minute or two.
Okay. Welcome, all of you, to the IGF main session on technical and operational issues, content blocking and filtering: a challenge for Internet growth.
The session today was organized by the Technical Community's two MAG members, Sylvia Cadena, and at the opposite end, Sumon Sabir, who is the CTO of Fiber@Home, a broadband and backbone fiber provider in Bangladesh.
We're very grateful to Sylvia and Sumon for organizing this session. This is a rather rare event, for the Technical Community to have a main session at the IGF.
I would like to have Sylvia come up for just a moment to tell us about the process for remote Q&A and to help set the stage.
>> SYLVIA CADENA: Thank you, my name is Sylvia Cadena. Welcome, everybody, to the main session on technical and operational issues, about content blocking and filtering as a challenge for Internet growth.
We are welcoming your questions and comments using the system that the IGF Secretariat has incorporated, the speaking queue. So, you can go to the website of the IGF and, on the home page, there is a link where you can add your name and enter your questions. Of course, we will take a look at the raised hands, but it's preferable to use the speaking queue, so it would be really appreciated if you can do that.
There is also translation, as you know, in the UN languages for all the main sessions, so you can also share the transcripts and the audio of the sessions with your communities. There are 38 remote hubs following the IGF from different corners of the world, so your support in sharing the comments and the content of this session with your communities is really appreciated.
So, I give it back to Bill so we can follow up with the rest of the session. Thank you for being here.
>> BILL WOODCOCK: So, we have only 80 minutes, and we have a lot of valuable contributors on a complex topic, so I'm going to begin by having two of our MAG members set the scene, then we'll move on to an explanation of the technical processes involved in blocking and filtering, and then some commentary on the political issues and causes, then Civil Society's take on how blocking and filtering are working, and then we'll wrap it up with a look forward at how these technologies are viewed by youth, and how the next generation is likely to steer this debate as they gain control.
So, moving to the MAG, Danko Jevtovic is on the ICANN Board. He founded the largest ISP in Serbia in the 1990's and then managed the Serbian ccTLD registry, and Sumon Sabir is the CTO of Fiber@Home, as I mentioned, Bangladesh's largest wholesale carrier, and has been at the center of Bangladesh's Internet community for 20 years, leading the establishment of Bangladesh's IXP. So, Danko.
>> DANKO JEVTOVIC: Thank you, Bill. So, we will begin this session. The Internet has matured, and of course there is no more division between the physical and the online. And, because of that, content blocking and filtering is not only a reality that happens, but a necessity. So, we are hearing that more regulation is coming, and it's critical not to break the one Internet as we know it.
During this session we will try to put aside the multitude of reasons for blocking and filtering and, trying not to get into political issues, we will focus on the technical and operational side and try to discuss how these technologies are applied and what the intended and possible consequences of different technologies are.
The Internet is one network, and improper use of technical solutions by intermediaries, either at the connectivity level or at the hosting level, can also have unintended and unplanned consequences.
We feel that content blocking and filtering is sometimes necessary, but it should be executed as part of a due process, in a technically and also legally correct way.
The role of the IGF is to discuss all these issues, and we feel it's important to bring to light the perspectives of various stakeholders, and that's why we organized this main session, and also to promote best practices, to understand what should be done and what the possible consequences of different approaches are, and to do all that for the benefit of the Internet and, of course, for the UN Sustainable Development Goals.
Sumon.
>> SUMON SABIR: We have been doing content filtering for a long time. At home we use our home gateway to filter content to safeguard children. And, some offices are doing content filtering just to save man-hours and keep the concentration of the employees.
There is also some sort of filtering for several different infrastructures, but what we are mostly talking about here is the trend of blocking content in some countries. Some countries are actually asking ISPs to do the content filtering for the country, and there are some technical challenges in different areas.
We are noticing that content filtering is going on by filtering IP routes to a destination, but when you block an IP address it actually blocks many sites.
We are also noticing URL-based filtering, with many countries doing that, targeting destinations filtered for a particular country, but that is also a challenge because content is gradually moving towards HTTPS. And nowadays another popular mechanism of content filtering is DNS filtering, dropping or black-holing destinations over the DNS, which is pretty common. And, that has its own challenges, as it has become popular quickly.
And, in other respects, we also see governments going to the big content providers to ask them to remove some content, to take down content, and if you look at the reports of Google and Facebook and other content providers, we see that the number of requests is increasing and that they are coming from all parts of the world. And, if you look at the reasons, the main reason at this moment, over the last two years, is national security: that the content is in some respect creating a problem for national security.
So, these are the challenges we are facing at this moment. I will hand it over to Bill to move forward. Thank you.
>> BILL WOODCOCK: Now we'll hear from two members of the Technical Community discussing Internet blocking and filtering.
Peter Koch is a policy advisor at DENIC, the .de ccTLD registry, and has made valuable contributions to Internet governance in the IETF and the DNS operations community.
Then Andrew Sullivan, who was until recently the chair of the Internet Architecture Board, the IETF's governance group, and is more recently the ISOC President.
Perhaps Peter can lead us off with an overview of the differences between blocking traffic at the IP layer and blocking lookups at the DNS layer.
>> PETER KOCH: Thank you, Bill. Thank you, organizers. I appreciate the opportunity to talk about blocking mechanisms here, and I should say in advance that any mention of any technology does not imply any judgment of fitness for purpose or any endorsement by me or my employer.
So, when it comes to content blocking, or content as such, it is important to understand that on the Internet anything that is transmitted over the network is chopped into small pieces that we call packets, and they travel independently of each other. That means that any content usually does not appear at any place in the network in total, and therefore, with some exceptions, it cannot be inspected in total or judged in total at any point in time, except, of course, at the end points, which are either the source of the traffic or the sink, which usually includes the consumer or some device in front of the consumer.
Now, with that said, the methods that are used to limit access to certain content are suffering, so to speak, from this inability to have access to the full incriminated content, so some stand-ins are used, more or less. That is, instead of blocking access to particular content, many of these methods will limit or block access to certain hosts, to systems, to domain names, to the resolution of domain names and so on and so forth, and that will mean, and we'll come to that later on the panel, that we'll have certain side effects, and that will also affect the effectiveness of these methods.
Getting back to Bill's initial question, the easiest thing to do on the side of an enterprise, for example, understanding that the employees want to go to a certain, say, website to access content that the enterprise doesn't want them to go to, is to block the IP address of that particular website.
They can easily do that at their router, that is, the system that connects the enterprise network to the Internet, and then the traffic will not flow back and forth, and people cannot access that particular website, of course.
Now, without going into too much detail, of course content can be replicated, and there are certain technologies, in a way for the good of the Internet, mitigating load spikes and so on, that would exactly help distribute the content, so this IP address blocking thing is very easily done and also easily circumvented, and I will not mention that anymore in the next few minutes.
So, that is one method.
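[Illustration: a minimal sketch of the destination-address filter described above, using a reserved documentation address and toy logic rather than any real router or firewall syntax. It shows that such a filter sees only addresses, never the content being requested, so everything hosted at a blocked address is refused alike.]

```python
# Toy packet filter: drop anything destined for a blocked IP address.
# Hypothetical blocklist; 203.0.113.0/24 is a reserved documentation range.
BLOCKED_DESTINATIONS = {"203.0.113.10"}

def forward(packet: dict) -> bool:
    """Return True if the packet may leave the enterprise network."""
    # The filter never inspects packet["payload"]; it only sees the address.
    return packet["dst_ip"] not in BLOCKED_DESTINATIONS

# Both requests below ask for the same page; only the destination differs.
print(forward({"dst_ip": "203.0.113.10", "payload": b"GET /news HTTP/1.1"}))  # False: dropped
print(forward({"dst_ip": "198.51.100.7", "payload": b"GET /news HTTP/1.1"}))  # True: forwarded
```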
Another is that somebody decides that on the network there is some host distributing bad content, and instead of doing the filtering close to the user, at the enterprise, it could be declared that some ISP, or somebody at an Internet exchange point subject to some regulation, should, as it is called, no-route or black-hole a certain IP address or range of IP addresses. This is at the very low level of the Internet packet flow, of course, and it will affect all the packets going back and forth between anybody on the net and this address range, and it is still a centrally available -- sorry, an available mechanism, provided you have a regulatory framework that can identify these non-routing points. However, any such Internet exchange, any ISP, would have to do that on their own or be ordered to do it. So, the further away we get from the source of the content, the more entities would have to intervene or interfere with the packet flow and prevent packets from going back and forth. That's the one thing.
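[Illustration: a toy sketch of the no-route or black-hole idea, again with a documentation prefix. In practice this is done in router configuration or signalled via BGP, not in application code; the sketch only shows that traffic to anything inside the black-holed range is silently discarded.]

```python
# Toy routing decision with a "null route": destinations inside the
# blackholed prefix are discarded instead of being forwarded.
import ipaddress

BLACKHOLED_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]  # hypothetical range

def route(dst_ip: str) -> str:
    addr = ipaddress.ip_address(dst_ip)
    for prefix in BLACKHOLED_PREFIXES:
        if addr in prefix:
            return "discard"          # the black hole: packets simply vanish
    return "forward to next hop"

print(route("203.0.113.55"))   # discard: everything in the range is unreachable
print(route("198.51.100.7"))   # forward to next hop
```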
Now, there is another layer, of course, because usually, if you go to a website, and, mind you, it is not only web traffic or web content that is subject to blocking, there is lots of other traffic, and maybe I can mention one or two examples in the rest of my time, there is also the Domain Name System, and that's another layer. So, we are talking about the identifier systems on the Internet: we have the addresses, I mentioned IP addresses here, and we do have the Domain Name System. In Computer Science we always say that there is no problem that cannot be solved by another layer of indirection, so here is one. Domain names usually link to IP addresses, and there is also the chance that people try to block content by blocking the resolution of domain names, which means there is some interference ordered at the ISP level, for example, that will suppress the translation or the mapping of the domain name to an IP address. Now, if you cannot map the domain name to an IP address, and, as you've learned, you need the IP address to get to the content, then you can't get to the content in the normal way. We will talk about circumvention later this morning.
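[Illustration: a toy "filtering resolver" along the lines just described, with a hypothetical blocked name. The content itself is untouched; only the name-to-address translation is suppressed, which is what resolver-level blocking amounts to.]

```python
# Toy filtering resolver: refuse to translate blocked domain names.
import socket

BLOCKED_NAMES = {"blocked.example"}   # hypothetical domain

def resolve(name: str) -> str:
    if name in BLOCKED_NAMES:
        raise LookupError(f"resolution of {name} is suppressed by policy")
    return socket.gethostbyname(name)  # ordinary lookup for everything else

print(resolve("example.com"))          # resolves as usual
try:
    print(resolve("blocked.example"))
except LookupError as err:
    print(err)                         # the name simply will not resolve here
```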
There is also a method called domain name take-down, which would happen at a different stage. So, if you have a domain name that is -- sorry, if you have a domain name that people associate with bad content, and again that judgment can be discussed, that name could be taken out of the DNS, and then it would not resolve anymore. This would do nothing to the content, but again, if all you have is a domain name pointing at the content, the domain name won't resolve anymore, and there you go.
Let me maybe say one more sentence: there are different types of content, some that the consumer strives to get to, but we also have some content, like phishing websites or maybe malware, that supposedly the consumer isn't really eager to get to, which will have an influence on how hard a consumer or a user on the Internet will try to circumvent any of these blocking mechanisms. So, many of these mechanisms that I've described are used for both sides, for content that people would strive to get to, but also for malware mitigation, botnet mitigation, by blocking the resolution of certain names of systems that control botnets, for example, so this again is a double-edged sword. And, that's it. Andrew.
>> ANDREW SULLIVAN: Thank you, Peter. As Peter was suggesting, there are a couple of mechanisms that amount to blocking the end point where the content you're trying to filter comes from. So, you're not actually blocking the content, you're blocking the point where the thing came from, and if there is other content on that target site, you're just out of luck. It also can't be accessed.
And, this is an important feature of the way this works, because Danko suggested something at the beginning that I think is important to remember is true in one sense and not true in another sense. The Internet is one network. It is true that you get an Internet if you have this -- or you get the single Internet, the global Internet, if you have this common name space, if you have this common number space, but of course the Internet is made up of other networks. It is a network of networks. What this means is that you can't actually block end points from end to end in one place. You actually have to block them throughout the network, throughout all the cooperating networks. So, there are two pieces that are important to recognize about this. First of all, if you want to block something reliably, you have to get every network to block it themselves, because there is no center of control on the network of networks, because there is no center.
The other thing is that you have to block the end point rather than the content, except in one case. And, some of this, by the way, is discussed in a Request for Comments, an RFC, that was written a number of years ago by the Internet Architecture Board, RFC 7754, which is about the mechanisms of content blocking and filtering. I should point out that it is an informational document that tells you how this works, not that you should do it, and I think the document is actually pretty clear that it is not endorsing many of these techniques.
So, if you don't want to block the end point, if you don't want to block some host on the network, what else can you do? Well, the other thing you can do is try to do things in line in the network. You can do this at the network layer by having a sort of filter that looks for certain kinds of content and then blocks it.
As Peter suggested, this is really difficult, because the way that the content travels around the network is in little bitty atoms rather than in a nice convenient block where you can look at it, so what you have to do is reassemble it, or reassemble at least part of it, identify it, and then block that thing.
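[Illustration: a toy example of why in-line filters have to reassemble traffic before they can match content. The pattern, the stream and the 8-byte "packets" are all made up; the point is that a string split across packet boundaries is invisible to per-packet matching.]

```python
# The "bad" pattern can straddle packet boundaries, so per-packet
# matching misses it; only the reassembled stream reveals it.
PATTERN = b"forbidden-content"

stream = b"...an ordinary page that happens to contain forbidden-content inside..."
packets = [stream[i:i + 8] for i in range(0, len(stream), 8)]  # toy 8-byte packets

per_packet_hit = any(PATTERN in p for p in packets)
reassembled_hit = PATTERN in b"".join(packets)

print(per_packet_hit)    # False: no single packet holds the whole pattern
print(reassembled_hit)   # True: found once the stream is reassembled
```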
Now, some of the really sophisticated systems on the Internet can do this, but one of the things that's important to recognize is that with this mechanism, this mechanism of intercepting the traffic and looking at it, you can't tell whether that is a malicious activity or a beneficial activity, right. You can't tell whether this is legitimate or somebody intercepting your traffic. And, of course, we don't want people intercepting our traffic, and the reason we don't want people intercepting our traffic is because we all bank online and it is important that people not be able to intercept our traffic. So, what do we do? We encrypt that traffic using a mechanism called transport layer security, or TLS, and the latest version of TLS is specifically designed to make that kind of interception harder, which means that the filters that people put in line don't work as well as they once did, because termination can't happen outside of the host, and so you have an additional problem there.
Fundamentally, the difficulty that we have is that blocking content, which is this large-scale thing, and blocking the pieces of it on the Internet are really two different kinds of things, and the low-level technical mechanism of doing this is in a way incompatible with the policy goal at the macro level. This is a little bit like trying to prevent, you know, water from getting into your house's foundation by blocking every single, you know, molecule-sized hole that you possibly could. That is really what we're doing. We're blocking the individual molecules. So, I think that is one of the main ways we can understand this.
There is one more way that filtering and blocking happens on the Internet, and it's not technical at all. And, I think that this is important to remember, because this is a distinction that people don't always remember.
Take-down notices have no technical component to them really at all. If you get a take-down notice, what it really says is: hey, you, the person who has this bit of content online, you must, under pain of judicial penalty, take this thing down. You must stop publishing it on the Internet. That's the effect that many of these policies are intended to have, and at the technical layer we don't actually have a mechanism to do it. That's why there is a tension here. There is a technical feature desired, and we don't have the technical mechanisms to do it at the level that people want.
I'm going to run out of time, so I'll stop talking at that point and turn it over to Sebastien.
>> BILL WOODCOCK: Just taking a moment here to see if there are any questions yet at this point. We're going to break for questions at several points during the process.
Any questions on the technical mechanisms? One very quickly, and then for -- yes, please.
>> I am a member of the MAG. I have a question about this. Who can actually ensure that there is no misuse of this process of blocking? Are there, you know, any mechanisms by which we can prevent malicious actors from actually blocking, in whichever way is described, legitimate content?
>> No. Flatly, no. People will tell you all the time yes, but the true fact is no.
What we're doing when we block these things are two things. We're either blocking the end host where the target data is, where it's just a piece of -- from the Internet's point of view, this is just a bag of bits, right. There is no way of identifying, oh, it's this kind of thing. So, when you block a host, for instance, when you block it in the DNS or block its IP address, you're blocking everything that is on that host. And, that means that there could be other perfectly legitimate content there, but you can't get to it. So, that's the first problem.
The second problem is, there is no way, there is just no algorithmic way, to tell if something is a legitimate block or not. All you can do is identify the pattern of bits that you're trying to block or the host that you're trying to block it from. So, there is nothing, you know, the datagrams that flow around on the Internet don't carry an evil bit with them. There is, by the way, an RFC. It was a joke. You can look it up. The evil bit. An evil bit, like all the malicious packets on the Internet have to have the evil bit set, and all the non-malicious ones have to have it unset, and as soon as I describe that you're all laughing, so you get the joke. So, there is no evil bit and no implementation of the evil bit. So, there is no way in principle to be sure that you're getting it right. All you can do is best efforts at the judicial level.
>> BILL WOODCOCK: I'd like to seek a clarification of the question, actually. Are you asking whether there is a way for the Internet community to deal with governments that are asking to have things blocked that the Internet community feels are illegitimate, or are you asking whether there are ways for hackers to get into a blocking system and misuse it to block things that a government did not ask to have blocked? Is it one of those?
>> I was thinking about some malicious actors who specifically target specific domains or sources of information. But nevertheless, this answers the question; you know, if you can't ensure this, you also cannot do anything against that other side. Unless the judiciary intervenes.
>> BILL WOODCOCK: Peter, would you like to add anything?
>> PETER KOCH: Maybe because Andrew started these analogies, and all of them fail, actually, but I'll serve another bad one.
If you identify a certain corner of the town as the red light district and you block it, then people can't go in there, but that also affects the postman and the ambulance. The technologies that we've described are intent-neutral, and they are also agnostic of the nature of the block, so it cuts both ways, right: blocking works as an attack and blocking works as a mitigation, and whether it is one or the other is not a technical judgment. That's for somebody else to decide. And, to the other extent, as we explained, there are only a few places where the content is available in total, so that it can be judged in a way, but usually that needs human intervention and human decision, so that's nothing that the in-line technology can do, especially because, with domain and IP address blocking, we are at a very abstract level that doesn't look into packets, doesn't see what is going on; it just says, well, that is a street number and we know that something in that house is bad, so we'll close down the house or demolish it or whatever, and we can't make any distinction at that particular level.
Thank you.
>> BILL WOODCOCK: Now we're going to move on from the Technical Community to the governmental community. Sebastien Soriano has worked at ARCEP, the French communications regulator, for the past 17 years, most recently as its President and chairman of its board. He's also had the distinction of chairing BEREC, the European association of communications regulators, in 2017.
Perhaps Sebastien can begin with a discussion of how European leaders balance free speech with their obligation to protect the public.
>> SEBASTIEN SORIANO: Yes, with pleasure.
I like this expression from Stewart Brand: information wants to be free. It's quite famous. And, it's not free as in free beer, but free as in free speech. So, information wants to be free means that, okay, when it is about computers and networks, you cannot control everything. And, there is good news and bad news about this. The good news is that it's a great spread, it's a great opportunity for creators, for exchanges and so forth. And the bad news is that some people will use this freedom, this capacity, to do things that can possibly threaten society, and this balance between freedom and threats really depends on each country, because we all have our own picture, we all have our own sensitivities, we have more or less sensitivity about cultural matters in our country, so we all have a different approach to this balance, and I think this is something to always keep in mind. There is not one model to deal with this freedom of information, I'd say.
So, how do we face it in Europe, for instance? President Macron gave a speech at the IGF two days ago, and what he clearly stated is that there are several practices on the Internet that are not welcome in his mind. So, I'm just giving some examples.
The first one is using the Internet for cyber attacks. Another example is using the Internet to spread fake news, let's say, possibly to influence elections in other countries. A bad use of the Internet is possibly to not respect copyright, which can challenge the ability to create and to have a business model for creation. A bad use of the Internet is propagating hate speech, for instance, and you know that we all have different sensitivities on that. For instance, in the U.S. you have the First Amendment; we don't have it in Europe. So, I will not go into all the bad-use-of-the-Internet cases. I think you know them quite well. The question, then, is how to deal with it. And, the particularity that we have in Europe is that we have two strong principles. The first one is network neutrality, and the other one is the status of hosting companies and the limited liability of hosting companies.
So, I will not go into details regarding these regimes. Talking about Net Neutrality first: the idea behind Net Neutrality is that the Internet is a network of networks where there is no central control, and the risk would be that Internet service providers, which are a technical bottleneck in the ability of end users to get to the Internet, could take advantage of this position to reinstall a kind of control over the Internet.
And, the Net Neutrality principle says no to that. So, regarding powers, as a telecom regulator, and I know that Bill presented me as part of the governmental community, but I have to make a clarification, as in international organizations: ARCEP is independent from the government. I have to underline this. I think this is very important. And, so, telecom regulators in Europe are empowered to impose Net Neutrality, meaning first a non-discrimination principle in the management of traffic, and second the freedom of end users to use any content or service of their choice.
The second principle is the status of the hosting companies. We consider in Europe that if you host content and you're not aware of the fact that it's content that threatens society, to say it quickly, then you're not responsible for it. But, if a public authority with a legitimate objective asks you to withdraw the content, then you have to do it. So, that's the fundamental balance that we have in Europe. And, what is important is that this is done under a very clear process in terms of respecting human rights and the right to not be prosecuted in an arbitrary way. So, we have guarantees so that these mechanisms respect fundamental civil rights.
So, that was mainly what I wanted to tell you. Maybe just a word about something that was said by Emmanuel Macron, the President of France, in his speech. He was talking about creating a new status, not necessarily changing the status of all hosting companies, but possibly identifying a new category of online services that have a specificity, which is to accelerate the propagation of content. So, this is something in discussion in France, and possibly this could give ideas in Europe, and, to be concrete about this, the President has announced that there will be a pilot, an experimentation, on that subject with Facebook. So, teams from different French regulators will take part in this experiment, which will take place next year, and I hope that in the middle of next year we will have a more concrete proposal on this.
Thank you for your attention.
>> BILL WOODCOCK: I think it's important to note something that Sebastien has brought into the conversation by mentioning Net Neutrality, that blocking and filtering is sort of absolute, you may not reach this content, whereas Net Neutrality introduces a couple of shades of gray. One is preferential treatment where some kinds of content are given poorer performance, right. So, if you try and reach one website, you get the best performance that the network has to offer, whereas if you try and reach a competing website, your performance is artificially degraded. And, the other debate is around something called Zero Rating, where if you try and reach one website, you reach it with no additional impediment, but if you go to a competing website, you're charged extra for that privilege. So, it's not just an all or nothing, you can reach it or not reach it, but also performance may differ, and price may differ, and these are also implemented through technical means.
>> SEBASTIEN SORIANO: Yeah. Just to add on this. That is why you need a regulator, because it's not black or white, but you have to play with these nuances.
>> BILL WOODCOCK: I think that is one thing that we see as a fairly large difference between Europe and the United States. In the United States, this is very laissez-faire. The regulator has not been active in this space since the mid-1990's, and so a lot of these debates arise because of misbehavior among unregulated, market-dominant entities in the United States and the sort of overflow of that into the rest of the world, because so many of these companies are based in the United States and regulators in the rest of the world have a difficult time enforcing regulation on them; whereas in Europe, there are active privacy regulators and other kinds of individual-rights regulators in many countries, and those interests come into play. So, European regulators have a lot more work to do, but the European population also receives more benefit from that. So, I think what you find is that a lot of the problems that incite these debates come from the United States, but a lot of the most interesting conversation about how to resolve those problems is occurring in Europe.
So, take the opportunity again for questions from the audience. Any questions for Sebastien of things that have been raised so far? Ma'am in the red shirt.
>> Audience: Just one quick question. My name is Natash (Sp), I come from Croatia, and I am a current MAG member.
Just one question regarding President Macron's speech you mentioned, and the idea to introduce new services that would accelerate some parts -- some services. Some protocols, let's say, or services.
Is it going to be some kind of quality of service, in technical terms, or are some new mechanisms going to be used, or is it not yet known, still to be defined?
>> SEBASTIEN SORIANO: I think the idea is to recognize that on the Internet a blogger and Facebook are not in the same category. And, they don't have the same impact on society. And, so, the idea is to create a new definition of content accelerator that would be a kind of net with, you know, broad lines, giving the opportunity to catch only the big fish, and to define specific rules for these big fish, for these content accelerators.
So, what could be the rules imposed? This is where the question is more complicated. I would like to call your attention to a paper that was issued two weeks ago. It is very interesting about how to think about new types of regulation, especially for big tech, adapted to the Internet age. So, it's a paper from Chris Hu from the Tubler (Sp) Institute. And, on the side of content regulation, the idea would be, could be, not necessarily to oblige a social network or content accelerator to do this or that about the content, but to impose the obligation to have a due process about content, and the role of the regulator would be to verify and to audit the way the platform, the social network, the content accelerator is dealing with content, and not necessarily to micromanage how the platform has to deal with the content. So, it would be a way for public policy to make sure that some content, like hate speech, is dealt with in a proper way without taking the place of the platform and doing it with --
>> BILL WOODCOCK: Peter, did you have a separate question? Okay.
Any other questions at this juncture? Sure.
>> Audience: Thank you very much. My question is for Sebastien Soriano. When we speak of harmful practices on the Internet, for example hate speech, fake news, or interference in election campaigns, just giving the examples you mentioned yourself, this all has to do with freedom. It is a certain relationship to freedom of expression, and these are acts that have to do with other sorts of expression. The media, books, television, radio, those are also vectors of content. So, there is a difference between them and the Internet, because with the Internet the impact would be amplified, would be worse; otherwise, when it comes to freedom of expression, whether it is the Internet or the press, it's the same thing. Before the Internet there were the same harmful practices, but they just used other media.
So, my question is the following: are we only going to emphasize the Internet and deal with the other vectors differently; in other words, would there be a freedom of expression for the Internet for which we have to find a new status, while freedom of expression would remain as-is for the other vectors; or rather, would this be the first step in seeing how we can adapt freedom of expression in general, when it comes to the newspapers, satellite TV, et cetera, to our political ends? The phenomenon is basically the same.
>> SEBASTIEN SORIANO: Exactly. In fact, I agree with you: the balance between freedoms, in particular freedom of expression, and threats to society, that balance in fact does not depend on the technology involved, whether it is print, whether it's audiovisual, or whether it is digital. So, when it comes to the objectives and the balances, I agree with you.
There should be a certain common attitude to these different objectives and these different equilibria, but each technology has its specificities. The specificity of the Internet is its power of propagation. Information is free, so the capacity to propagate information on the Internet is specific, and therefore it might call for specific responses.
There is also good news, which is that it specifically allowed for a real boom in freedom of expression. It made it possible for democracy to express itself very forcefully in certain countries. But, when it comes to the way that we look for an equilibrium, there could also be a specific response for the Internet.
>> BILL WOODCOCK: Now we'll move on to two members of Civil Society discussing the social concerns with Internet content blocking and filtering.
Irene Poetranto is a doctoral student in political science studying Internet censorship and filtering at the University of Toronto, and a senior researcher at the Munk School's Citizen Lab.
Alexander Isavnin is a researcher with the Internet Protection Society, previously the CEO of Global Telecom, a Russian Internet service provider, and has been active in ENOG, the regional network operators group.
So, Irene.
>> IRENE POETRANTO: Thank you, Bill. Thank you to all of you for being here.
So, as Bill mentioned, I work with the Citizen Lab. We're an interdisciplinary research lab on cybersecurity and human rights at the University of Toronto, and we've conducted research on Internet censorship and filtering for a while now. You may have heard of the OpenNet Initiative, which was a project of the Citizen Lab and the Berkman Center at Harvard University, in which we investigated and exposed Internet filtering in over 70 countries around the world.
Through that research we found that democracies censor the Internet in different ways and with differing intensities, and that these practices are often justified by drawing on arguments that are powerful and compelling, such as securing intellectual property rights, protecting national security, and nowadays countering false news; preserving cultural norms and religious values, as well as shielding children from pornography and exploitation.
Since the end of the OpenNet Initiative, the Citizen Lab has continued to conduct research on free expression and on filtering. We have documented widespread use of Internet filtering technologies, many of which are Western-made and deployed around the world with few restrictions. For instance, Blue Coat is an American company based in Sunnyvale, California, and, being from Canada, there is also Netsweeper, a Canadian company based in Guelph, Ontario.
So, how do we test for censorship? We have two lists of websites: a local list for each country, and a global list that is comprised of a wide range of internationally relevant and popular websites.
So, what we found is that, you know, there is a lot of talk about the advances of AI and machine learning; however, accuracy is still an issue, resulting in under-blocking or over-blocking of websites. Under-blocking refers to the failure to block access to all the content targeted for censorship; on the other hand, filtering technologies often block content that they do not intend to block. So, that would be over-blocking.
So, typically censorship would be accomplished through blacklists, either through manual designation or automated searches, which have resulted, in our research, in incorrect classification, and also, because the filters are often proprietary, there is often no transparency in terms of the labelling and restricting of sites. And, as previously discussed, blunt filtering, like IP blocking, can end up blocking otherwise harmless sites that are hosted on the same IP address as a site with restricted content.
Also, when you rely on automation, this means that private corporations have control over access to information, without the same kinds of standards of transparency and accountability commonly found in government mandates.
And, as well, the danger is very explicit here when corporations that produce content filtering technology work along with undemocratic regimes and set up nationwide content filtering schemes.
So, I'm going to talk briefly about Netsweeper, because it's a Canadian company and I am from Canada. We have reports concerning the (Audio cutting in and out). In our recent report, published in April 2018, we identified Netsweeper, which is designed to (Audio cutting in and out). We identified a pattern of mischaracterization and over-blocking involving its use that may have serious human rights implications, including blocking Google keywords for LGBTQ and blocking non-pornographic websites through a mischaracterization of these sites as pornography.
In these reports we raised issues with the nature of the categories delivered by Netsweeper, including the existence of a category they call alternative lifestyles, which appears to have as one of its principal purposes the blocking of non-pornographic LGBTQ content, the blocking of HIV/AIDS prevention organizations, and of LGBTQ media and cultural groups, and Netsweeper can also be configured to block websites inside specified countries.
So, I would like to say, as academics, the struggle for research is real. Upon the publishing of one of our reports, Netsweeper filed a defamation suit against the University of Toronto and the Citizen Lab director, Professor Ron Deibert. This is because we identified that Netsweeper was used in Yemen: Netsweeper tools have been installed on, and are presently in operation on, the network of the state-owned and operated YemenNet, the most used ISP in the country. This report was published in 2015. Netsweeper sought $3 million in general damages, $500,000 in aggravated damages, and an uncertain amount in special damages.
In April 2016, they discontinued the claim in its entirety.
Regardless, we will continue to conduct careful, responsible, peer-reviewed, and evidence-based research, and we will continue to investigate Netsweeper and other companies implicated in Internet censorship and surveillance.
I'll end there. Thank you.
>> BILL WOODCOCK: Alexander.
>> ALEXANDER ISAVNIN: I'm from Russia (?), a country where we have had content blocking and filtering for six years. So, I will share my experience and the experience of my colleagues.
First of all, how it is organized in Russia. There is a set of federal agencies that decide what content is illegal in Russia, and then the Russian telecom regulator asks the hoster or owner of the resource to remove it, and if the content is not removed, they hand down the list of resources to be blocked to the ISPs, and the ISPs are obliged to restrict access.
Now about Civil Society concerns. First of all, if content blocking starts in your country, it will never stop. Content blocking in Russia was invented for protecting children from drugs and sexual abuse. The government at first (Audio cutting in and out) only three reasons for content to be blocked, but immediately after that another agency started deciding which content is (Audio cutting in and out) and piracy and counterfeiting, then more and more agencies were added, the central bank for cheating, and now there is a huge line of federal agencies waiting to be allowed to add content to be blocked (Audio cutting in and out). So, if you start, it never stops, and the list of resources will only grow.
There is no factual evidence from the last six years in Russia that this blocking successfully works for the purposes intended. There are no statistics about reduced suicides, there are no statistics about prevented terrorist crimes. There are statistics in the European Union showing that blocking and taking down pirate sites does not actually increase the revenues of content producers. And, for drug crimes in Russia, we have statistics showing that the number of drug crimes has even increased.
So, another thing. The ambiguity and unclear wording of the regulations related to resources to be blocked leads to real operational expenses. The take-down time restrictions and content definitions are not clear enough, so many hosts learn they have illegal content only after they've been blocked. So, a lot of operational expense goes into removal, into getting blocked resources removed from the list. Also, there is a great example: the most popular French video hosting site, Dailymotion, is blocked forever in Russia for pirated distribution of content, and there is no legal way to take it out of this list.
Also, (Audio cutting in and out) huge IP ranges started to be added to the list during the fight against the Telegram messenger, and the amount of over-blocking was about 95 percent. So, 95 percent of the sites actually blocked were ones that should not have been blocked, because of shared hosting and the blocking of IP addresses.
So, there is enforcement of the blocking: an ISP can be fined by the government for even one unblocked resource. So, ISPs are trying to over-block, to block a bit more than they actually need to, so as not to get fined.
Now we get to technical concerns related to Civil Society.
We now have about (Audio cutting in and out) IP addresses to be blocked, completely filtered, and about 100,000 resources to be diverted to a proxy or DPI for further inspection. So, it definitely requires ISPs to have more resources, first of all more computing power, more special equipment, so sustainable development of the Russian Internet is not (Audio cutting in and out), because ISPs fall behind in competition with others.
So, I also would like to mention that content blocking on Internet infrastructure, ISP infrastructure, breaks the main Internet principle of a simple network. This principle brought our Internet to its current success, and now it's endangered.
Also, full inspection and content analysis inside encrypted sessions is not possible. It requires much more computational power, from customers and even from servers, but for Russian regulation it doesn't matter: ISPs are obliged to restrict access.
The real (Audio cutting in and out) of all kinds just drop their resources and move to another one. But such resources to be blocked are not removed from the (Audio cutting in and out). Also, the content blocking systems are being abused by bad guys. In Russia, for example, once the list of resources to be blocked by (?) was uploaded, some ISPs just went and applied it on their internal infrastructure.
In another case, for the (Audio cutting in and out) when government blocking was performed in Egypt, the guys were injecting advertisement banners into legal pages using these blocking systems.
Well, actually, when you are talking about blocking, you cannot -- sorry, guys, you cannot talk only about technical issues, because it's usually a political issue. As I said, once the government has started, the only thing they actually block effectively is opposition sites, for censorship purposes.
Sorry, may I take one more minute.
>> BILL WOODCOCK: One more minute.
>> ALEXANDER ISAVNIN: Colleagues mentioned RFC 7754 as only a description, an informational document, but in our countries, and countries like ours, it's been used as if the IETF had allowed us to block content. I don't have Andrew with me each time to say that this is only an informational RFC, so I would like to suggest that when we start talking about possible blocking, we always say that blocking on the network infrastructure is an attack, is illegal, and should not be done at all. Maybe we should stop (Audio cutting in and out), I suggest.
Well, I am out of time. We have many more issues with the Russian Internet and Russian Internet regulations. I'm here with some colleagues. If you have questions, ask.
>> BILL WOODCOCK: A question for Irene and Alexander. You have discussed the legal costs imposed by these technologies and the technical implementation costs imposed by the technologies. I'm wondering whether, between the two of you, you could connect those dots just a little bit more and explain where those costs actually get paid from. How are those costs distributed and who is affected by them?
>> ALEXANDER ISAVNIN: In Russia, as in many countries, each law must have a financial impact assessment. So, all these blocking laws were passed with no further spending of federal budgets required. So, all of the technical side of blocking is being done at the expense of Russian ISPs. That is why I mentioned that sustainable development is no longer possible: they need to devote more computing power at this moment, more power and special filtering equipment, and actually we are looking in Russia at (Audio cutting in and out) equipment, and now, with the new package, all Russian traffic must be stored for six months and five years.
>> BILL WOODCOCK: Which means that the retail cost to end users goes up?
>> ALEXANDER ISAVNIN: Definitely. Even the regulation that is in force now does not actually work, because it's really hard to implement. Well, I think it's impossible, but I know that it is going to raise prices at least twice over.
>> BILL WOODCOCK: Do we have any more questions from the audience at this point?
Sir?
>> Audience: Good morning everyone, my name is Peter, from CENTR, the association of European ccTLDs.
One of the things that we hear a lot about these days is (Audio cutting in and out) I was wondering if some of the panelists could share some ideas on how this trend could affect the discussion on (Audio cutting in and out).
>> BILL WOODCOCK: Probably Peter and Andrew.
>> PETER KOCH: Maybe I'll go first. To expand, this is a new trend, so to speak, and maybe not everybody has heard about it.
There is the idea that the mapping of domain names onto IP addresses or other elements of Internet infrastructure would no longer necessarily be done through what we call name servers, that is, dedicated systems, but would instead work in a way that a web browser would treat like accessing a web page, which would actually make this translation or mapping of a name to an address look, to an outsider, very similar or identical to access to a web page.
Now, currently, when I mention DNS blocking, DNS queries can be identified on the network because they take other paths; as we say, they use different ports on the network. But with this technology that Peter mentioned, the mapping from a domain to an address would be indistinguishable from accessing a website, and even more so, instead of using particular name servers, this mapping could be offered by the same website that actually serves the content.
Now, what would no longer work, and Andrew mentioned transport layer security 1.3 and how it makes interception even harder, is anything that addresses blocking by DNS filtering, redirection and so on and so forth; that would no longer work, say, at the enterprise level. Currently, and I mentioned two different types of content, if an enterprise wants to prevent systems from, say, contacting botnet command and control systems -- well, I should say phishing websites, for example, not the other one.
So, this would no longer be interceptable, because it is encrypted, it looks like a website access and so on and so forth. So, some of these techniques would no longer work; however, there is of course, then, another central entity that could, and maybe already does, apply other filtering, because if all of these mapping requests go through a monopoly or oligopoly of a small number of providers, they could either be motivated from outside or would do it on their own incentives to do blocking and filtering. Again, there is this additional service that some of these providers offer and call safe browsing, and other names they have, but since the technology is (?), these would also be the first points to be addressed by a regulator of a benign nature.
So, that is not a clear answer. Opportunities on the one hand, but threats on the other.
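[Illustration: a sketch of a DNS lookup carried over HTTPS, as just described. It uses a public resolver's JSON query interface; the URL and its parameters are assumptions about that service and may change. To an observer on the path, this exchange looks like any other TLS-protected web request, which is why port-based DNS filtering does not see it.]

```python
# A DNS lookup carried inside ordinary HTTPS ("DNS over HTTPS").
# On the wire this is indistinguishable from other encrypted web traffic.
import json
import urllib.request

# Assumed endpoint: a public resolver's JSON query interface.
url = "https://dns.google/resolve?name=example.com&type=A"

with urllib.request.urlopen(url) as response:
    answer = json.load(response)

for record in answer.get("Answer", []):
    print(record["name"], record["data"])   # the resolved address(es)
```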
>> BILL WOODCOCK: Peter is describing DNS over HTTPS, which is one way of encrypting this traffic. The way that has been working its way through the standards process for a longer period of time and is more widely deployed by far is DNS over TLS. (Audio cutting in and out) is that it has to be provided by a single entity, which is typically not going to be your enterprise or your Internet service provider. It is instead probably going to be a content distribution network, and the CDNs are their own for-profit companies, so instead of having your choice of many different possibilities, some under your own control and so forth, these offerings are few, there are only a few of them, and they are very partially oriented, and very commercially oriented, often at the expense of individual privacy, particularly if you're not paying for it as a service. And, I believe none of the paid-as-a-service providers are advocating for DNS over HTTPS; all are advocating for DNS over TLS, and you can't have GDPR compliance without agreement, I assume, from the individual whose data is being collected. So, this is a complex issue, but it can be simplified down to this: there is a standards battle between DNS over HTTPS and DNS over TLS, and it looks like DNS over TLS, which was plan A in this space, is certainly going to prevail.
Andrew.
>> ANDREW SULLIVAN: This is a special case of the point I was making earlier, that the sort of filtering and blocking that you would do by intercepting a stream and then identifying the thing is just a special case. So, any attempt to do that is indistinguishable in many ways from a machine-in-the-middle attack; you really can't tell the difference between a machine in the middle that is legitimate and a machine in the middle that is (Audio cutting in and out) at the technical layer. So, that is the first piece. This is just a special (?) of that problem.
The second thing I will point out is that the reason we've got all of these things going over HTTP was in fact an earlier attempt at filtering and blocking. The Internet has a whole set of protocols, protocols for different kinds of things, and we found that corporate firewalls only allowed HTTP and HTTPS to go through them. What did people do? They put all their traffic through HTTP, and we encrypted all the traffic with HTTPS, which is the secure version of the web protocol.
The important thing to recognize here is that this is a feature of the Internet. One kind of protocol, one kind of communication, can go inside another kind of protocol or communication, and it looks like the other thing that is going through. So, what we gradually found is that, from the point of view of watching the Internet, everything is going over the web protocol even though a lot of it is not web traffic. So, we find that phone calls are going that way; we, you know, sort of gave up on SIP and went to WebRTC, and now it's just that kind of thing. You find that a lot of web services are actually HTTP services not using the web at all. And, I think that this is an important trend. We tried this. It illustrates that the technical mechanisms for filtering and blocking tend to encourage countermeasures that make things harder to filter and block, because people want some of that traffic. So, they just use a protocol that you can't afford to filter. We can't afford to do general-purpose filtering on HTTPS, because what that means is we don't have the web anymore, and that seems like a bad tradeoff.
Thanks.
>> BILL WOODCOCK: But they also take us further from the original intentional designs and more towards unfortunate compromises, so we're going away from what (?) had figured were the right ways to do things and towards forcing everything into one protocol, which is not (?) well for everything.
>> I would say I'm originally a techie, and sometimes we say the Internet is actually based on the same protocols, but actually the Internet is now billions of people, so it's very (?) that we can see from all this technical discussion, and sometimes in discussions about blocking and filtering we find that they are sometimes (?) needed and sometimes two (?) are trying to be applied, and one of the simplest solutions is actually coming back to the DNS that we spoke of, and domain names, and it often happens that people try to solve a problem with the blocking of addresses, of identifiers, and this is something that really needs to be understood. So, one (?) but also for (?) will be that when things are (?) (Audio cutting in and out) it has to be better understood before any technologies are actually applied, because of all the circumvention methods that we have seen in this discussion that can happen.
Thanks.
>> BILL WOODCOCK: All right. I would like to ask Mariko Kobayashi to conclude this session with her views on how these issues affect the next generation of Internet users and leaders and how she sees things changing in the future. Mariko is a graduate student at Keio University, and a member of the Board of Directors of (?).
>> MARIKO KOBAYASHI: I wanted to be here. Thanks for introducing me. I'm Mariko Kobayashi, from Keio University in Japan.
I would like to talk about what the impact is on youth and young people, and also (Audio cutting in and out). For my country, I would like to share a little bit of our recent (?). And, also, I want to talk about the next step of how we can (?).
First, the impact on the youth. I think there are three main impacts on the youth. First, there is an educational impact: this kind of DNS and other blocking can (?) the content; it can restrict access to specific resources or even to the news or similar content. Second, there is the free flow of ideas, like arts or content, (?) or something, yeah, those kinds of artistic activities, and the young (?) (Audio cutting in and out). And third, I think a growing number of young (?) are (?), and most of them use the Internet as, like, the (?) of their business.
This kind of restriction on (Audio cutting in and out) can also affect those areas of business. So, these are the three points I would like to refer to.
(Audio cutting in and out)
So, the government has called on the ISPs for content blocking of (?), I mean the pirate (?) sites, over the copyright of the comics. And, now we are still discussing it, including the several stakeholders, like the Technical Community and lawyers and Governments and the regulator. And, we understand it has to be withdrawn. Also, the Technical Community, I think, stands by the end-to-end principle, obviously the fundamental idea of the global Internet. And, so, from this discussion I realize this kind of discussion has been (?) and it causes, like, conflict between the (Audio cutting in and out), and so each side protects their own area, and how to discuss the next step.
So firstly, I think young people can sometimes take a neutral position, and we can connect the stakeholders and bring them into the discussion. And, the second point is, it is for the young technical engineers of (?), but I think developing or providing technology for this kind of issue is also a solution. For example, as Bill talked about before, some engineers work on this DNS over (?) in that same community, and it enables end-to-end encryption on the Internet. And then, not only that kind of technology, I think some young engineers can also provide technology which can distinguish between (Audio cutting in and out), for example, in the Japanese situation. So, yeah, that is my idea for the next thing we can do, what we young people can contribute to this problem.
Thank you.
>> BILL WOODCOCK: Thank you, Mariko.
We've had only 80 minutes to discuss a very complex topic that is engaging people around the world, but I'm sure that our panelists will be happy to engage with you further in the hallway, as we need to clear the room for the next panel.
Transcripts will be available online, and I think we're all very happy to continue the discussion at your convenience.
Do any of you have any last words? Sylvia?
>> SYLVIA CADENA: Thank you. No, I just wanted to thank you all for your participation and the views expressed in this conversation. There are (?) sessions at the IGF about the impact of content blocking and filtering, more on the side of the users and their experience of the Internet, but it's important also to see the other side of the coin, because the Internet is built by an ecosystem of organizations that are actively engaged in this process, and the more we continue the dialogue, the better the Internet is going to be. So, it's not a closed conversation. As Bill mentioned, it's not that often that the Technical Community can focus a session on technical issues in this type of forum, and we're going to continue to try to bring more technical content and conversations around technical issues, and we thank you for your attention and your questions.
>> BILL WOODCOCK: Thank you all, and please join me in thanking our panelists.