The following are the outputs of the real-time captioning taken during an IGF session. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> PAUL WILLIAMS: Very good. All right. So good afternoon to everyone who is here in Addis. Good morning, and good evening, whatever time it is wherever you are in the world. It is a pleasure to have you here at our session on Creating a Safer Internet While Protecting Human Rights. My name is Paul Williams and I will be chairing this session. We have an hour to talk about this in our workshop.
If I sound a bit croaky, bear with me. I flew in this morning from London, but absolutely determined to be here because it is lovely to be physically at these events. But I have decided since the Internet never sleeps then it is fine if I don't sleep either for a night. So I will be okay. If Amelia next to me nudges me at any point, you will know what's happened.
The most important thing today is that we are going to have six expert speakers talking to us and I'm determined to leave time for Q and A. So if you have questions, please store them up for after the speakers. I will try and take some from the room, from the people here and some from online which Amelia will be kindly monitoring. And Alex will be taking notes throughout for us to have a record.
So turning to our speakers. With me in the room is Felicia Antonio, who is with Access Now, which runs a global campaign against Internet shutdowns. Online we have Kazim Rizvi, a public policy entrepreneur and founder of an emerging policy think tank called The Dialogue; it is quite a late evening for him. We also have Irene Khan, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression.
Liz Thomas is with us as Director of Public Policy for Digital Safety at Microsoft. This is, of course, like the rest of the IGF, a combination of Civil Society, industry and Government, which is very important. We also have Bertie Vidgen, CEO and cofounder of the tech startup Rewire. And Sarah Connolly, a colleague of mine, is online from London, hello Sarah. She is from the Department for Digital, Culture, Media and Sport and is going to be talking about the UK's new online safety bill, which was launched this week.
So just a couple of minutes from me before we kick off. Of course, as we have been discussing all week, for those of you who have been here so far, the Internet and new technologies are transforming our lives in many ways for the better. They are opportunities to realize exactly what we are talking about today: Human Rights, freedom of expression, freedom of assembly. I am the Foreign, Commonwealth and Development Office director for Open Society, and I am realizing in my work that so much of this is now moving online.
And an open, interoperable, inclusive and secure Internet can support democracy and democratic processes. We all know there are risks and challenges. Earlier this week I was at an event about abuse and harassment of women and girls, which we know is increasingly moving online as a growing phenomenon, one about which we have some data but no proper baseline for how to tackle it.
We all know that online discourse can be toxic and discriminatory, and that this leads to self-selection of who is online and fear among people of coming online. We know that disinformation and misinformation are growing phenomena and can undermine the democratic processes I was talking about. And we know, of course, that people require access to the Internet in the first place in order to exercise the Human Rights we are talking about.
So there are both positives and negatives. And so it is absolutely right that we are here to talk about that today and we have a lot to talk about. So enough from me. Let me turn to our first speaker. So Felicia, we only have six minutes in total and I really want people to be able to ask questions. I'm going to ask you to limit your comments to three or four minutes, if that's possible. Over to you.
>> FELICIA ANTONIO: Thank you very much. It is a great pleasure to be here to discuss how we can make the Internet safe and how we can advance digital rights across the globe. I lead an amazing campaign called #KeepItOn, which means keep the Internet on. It unites over 280 organizations around the world. In 2016 we came together and said: there is a growing threat to online rights called Internet shutdowns. And we decided to coordinate and work together as Civil Society to amplify our voices and raise awareness about this growing threat to democracy. Unfortunately the practice has spread across the globe, and the number of Internet shutdowns we are documenting is increasing.
For instance, last year we documented 182 shutdowns in 34 countries, the highest number of countries in which we have ever documented shutdowns. In 2020 we documented 159 shutdowns in 29 countries. Even that is deeply problematic, because we were going through a global pandemic, when much of our lives moved online and we were being asked to work from home and take advantage of the benefits the Internet provides. Unfortunately, whole populations around the world were being cut off in 2020 and before.
And some of these shutdowns are still ongoing. We are here at the Internet Governance Forum deliberating on how to empower people to come online, to access essential services and to benefit from what the Internet and technology provide. Yet over 6 million people in the Tigray region have been cut off since November 2020.
And when the Internet is shut down, it is very concerning because a lot of Human Rights violations happen. It makes it extremely difficult for journalists and Human Rights defenders to document what is happening, and once that is not documented, the world might not see the gravity of it. It also makes it extremely difficult for people to communicate with their families. And we always say that Internet shutdowns don't happen on a normal day. They happen when you really, really need to be connected with people: there is a conflict happening, you want to reach out to your family, you want to see how they are doing. We have gathered stories from across the globe, and we recently published stories highlighting the impact of the two-year-long shutdown on the people of Tigray. These stories include people who have not been able to communicate with their families for years, and so they don't even know where they are.
Education has simply crashed. The health system is also being impacted. These are the impacts we are seeing across the globe, unfortunately, and prolonged Internet shutdowns amplify them. They violate the essential rights that we are all promoting as stakeholders across the globe. It is important for us to focus on the issue of Internet shutdowns. A lot has been done, and it has become a global priority.
Governments are speaking out against shutdowns, and stakeholders are looking at how we can work together to ensure that the Internet is accessible, open and safe. We have talked a lot about digital development and how we can invest in the digital space, and these acts are counterproductive: we know that countries lose money whenever the Internet is shut down, aside from all the Human Rights implications of Internet disruption. So I think it is important for us to continue to raise awareness about this and to talk about Internet shutdowns as a problem that we all need to address.
And so this conference has provided an opportunity for us to be here and highlight the challenges we are facing online. A shutdown is really a blanket form of censorship. For those of you who don't know what an Internet shutdown is: it is when authorities deliberately disrupt Internet access or electronic communication tools, making it impossible for people to communicate and share information with each other. It can be a blanket shutdown, with no Internet access at all, and we have also seen countries shutting down SMS and other forms of communication during shutdowns.
So the intent is clear: to silence a group of people or a population, or to cover up Human Rights abuses. And this goes on with impunity, because we are not able to document these abuses and hold whoever is behind them accountable. So I think it is important for us to speak freely about Internet shutdowns. Ignoring the problem does not mean it is not happening, and that silence is even more dangerous than whatever difficulty we face in speaking about it. Thank you.
(Applause.)
>> PAUL WILLIAMS: Thank you. So now I'm going to hope the technology works and turn to Kazim Rizvi. Are you there?
>> KAZIM RIZVI: I hope I'm audible.
>> PAUL WILLIAMS: You are audible. Please go ahead. Thanks.
>> KAZIM RIZVI: Yes. Thank you so much for hosting me. It is great to be on this panel and really good to talk about securing digital rights on the Internet. I won't take a lot of time, but I want to go back a little. Over the last five to ten years we have seen the rise of social media and the rise of the Internet, and with Web 2.0 a lot of people are using the Internet for various services, be it shopping, communicating online or connecting with people. As Internet use has risen over the last few years, there has definitely been a rise in the safety issues users are grappling with.
So Governments across the world, including in India, have been looking at how to make sure we have an Internet which secures and protects users while making sure that their rights, such as freedom of speech and expression and the right to privacy, are protected. It is often suggested that safety and privacy, or safety and speech, are binaries, but they are not. They complement each other. And it is possible, in the growing world of a new tech economy where more and more Internet users are coming online, to protect users while making sure their speech is protected as well.
We have seen that Europe is looking at new regulation, the Digital Services Act. Australia is looking at a new law. In India we saw the IT Rules 2021, and this year they have been amended. Across the globe, Governments are trying to come to a conclusion on how to make platforms work better in terms of content takedown and registering users' grievances. But at the same time we have to make sure that users' rights, such as speech, are protected.
And there has often been a debate about whether Governments are overdoing the safety issue, making the Internet safer at the cost of speech. There are debates about how safe harbor is critical to an open Internet, and the understanding of safe harbor is evolving. We understand that platforms are trying to do as much as they can in removing content. We need law that protects safe harbor and protects speech, and we also need laws that protect encryption; that is critical. At the same time, we have to find a balance with respect to how law enforcement works on the Internet, in terms of getting access to the right amount of data and working collaboratively with intermediaries and platforms.
I do feel there is a lot more synergy and collaboration required between companies, platforms and Governments to make sure that the Internet is safe while the speech and privacy of users are protected.
>> PAUL WILLIAMS: Thank you so much. So yes, we are going to go on to talk about that in the UK context actually when Sarah talks at the end. So for now, let's move straight on to Irene, our UN Special Rapporteur. I hope you are also somewhere. Not quite sure where you are geographically, somewhere online.
>> IRENE KHAN: I'm online and actually right now in Bangladesh, not far. Let me start by saying that so far we have talked about connectivity, safety and free speech. All three of them are protected by international Human Rights law. It is well recognized in the international community that the rights people have offline must be protected and applied online as well.
So I don't think there is a gap in the law. What there is, actually, is a gap in the practice of the law, in the application of international Human Rights standards. That is where problems start. Internet shutdowns don't just happen. They happen just before elections. They happen in the midst of conflict. They happen when people are protesting on the streets. And the safety of women is something that has emerged in the context of social media platforms.
On the one hand, women, marginalized communities, LGBTI groups, ethnic groups, those who are excluded from traditional society, are the ones who need and use online communications to organize, to express themselves, to advocate for their rights. And yet they are often the ones most at risk on those platforms. I fully agree with Kazim that there can be no tradeoff between connectivity, safety and freedom of expression. In fact, Human Rights are not the problem here. They are the solution. Without Human Rights I don't believe we can have a safe Internet.
It is through respect for Human Rights that the Internet will become safer and more equal. We are aware of the digital divide; I think that is one thing we learned during the COVID crisis, along with the importance of connectivity. We are making some progress there, but a lot more has to be done for marginalized groups. Those who suffer discrimination in the offline world should not suffer similar discrimination in the online world, and that is where attention needs to be focused. I mentioned online violence against women, but it is not just any women: women politicians, feminist activists, those who are speaking up, those who are looking to bring about change are the ones most attacked online.
And that, of course, is doubly dangerous because it leads us towards a less diverse society, and that can be in no one's interest. So there are some huge challenges here. I sincerely believe that some efforts are being made by companies, by the large platforms, though not necessarily by the smaller ones, and not enough by the large ones either. On the side of Governments, I think the trend is actually towards restrictions.
Now, under international law, freedom of expression is not an absolute right. International law recognizes that restrictions can be placed on freedom of expression, but it sets out the boundaries of those restrictions. They have to be lawful, which means they have to be very clear and not put discretionary power into the hands of the Government. They need to be necessary and proportionate, pursue legitimate objectives, and be interpreted narrowly. Yes, we need some regulation of the platforms, and I have called for smart regulation. We need to make platforms more transparent and more accountable, and make them undertake Human Rights due diligence and observe Human Rights standards in their own content moderation. Governments must not use censorship as a tool for managing safety online. In fact, doing so is counterproductive.
It serves neither connectivity nor safety nor Human Rights. We will need all three in order to make a safer, more connected world. Thank you.
>> PAUL WILLIAMS: Thank you very much. Very interesting. You mentioned the tech industry, platforms and regulation a couple of times, which is a fantastic segue to our next speaker. Let's hear from industry, then, and from Liz. Liz Thomas from Microsoft, over to you.
>> LIZ THOMAS: Thank you. I echo the thanks of others. It is great to be here and to speak with you all and be a part of this conversation.
So I really wanted to start by situating my remarks in the context of Microsoft's overall commitment to Human Rights. We see respect for Human Rights as core to our company's mission. We believe that people will only use technology they trust, and that they will only trust technology that respects their rights. And we want technology to be used for the good of humanity. That goes to some of the connectivity issues we have spoken about today as well.
Our overall Human Rights approach incorporates international Human Rights laws and norms, and as a company we aim to meet that commitment in a number of different ways, always looking to operationalize Human Rights. Speaking from where I sit in the company, one area where we operationalize this is our approach to digital safety. When we think about digital safety from a Microsoft perspective, as a company with a really diverse range of products and services, our approach is about striving to create safe online spaces while also upholding other important values.
In creating safe online spaces we enable people to enjoy and freely exercise their rights: freedom of association, the ability to express their views and to access information. And when we talk about preventing criminal conduct and content online, it is about respecting the dignity and rights of survivors. There are four interconnected pillars in our approach to safety, and Human Rights shows up in all of them. The first is platform architecture.
This is really how we think about designing, building and operating our services with safety in mind, including thinking about business models. It includes our overall corporate commitment to Human Rights, which I have already referenced, our commitment to building safety by design into our products and services, and understanding where risks may arise for our users.
The second, and I think a reasonably obvious one, is the pillar of content moderation: the way we set and enforce our code of conduct and other important standards, and how we think about the safety interventions we use on our services and how we deploy them.
We have a range of important considerations to weigh and balance to ensure that these measures are necessary and proportionate.
The third pillar is around community. That's how we help build spaces and empower users to build their own norms for behavior which are appropriate to the nature and purpose of the platform.
So we can set the rules and help set community standards, but we think it is important to help enable others to build safe, inclusive communities, so that everyone can enjoy their rights and have the important dialogue that Irene has already mentioned. We know that toxicity in online environments means some voices go quiet. It is a question of how we work together to build positive counterpoints.
The final pillar for us in the safety space is collaboration, and that is really at the heart of this kind of conversation at the Internet Governance Forum and other multi-stakeholder forums. We know that online harms are a complex, whole-of-society problem, one that requires holistic approaches. We have to approach these challenges in a multi-stakeholder way to make the most of different perspectives and expertise. We really benefit from hearing different perspectives and understanding the potential rights impacts of different harms and different choices. It is important that we are open to that feedback and those insights.
One of the observations I would make is that we have reached broad acceptance that Human Rights apply online; we are on a learning journey about how best to realize those commitments in an online environment. As has been touched on already in this conversation, advancing digital safety is a complex endeavor, and it can require some difficult tradeoffs and balancing. This is an active part of the conversation in the current regulatory environment, and I know Sarah is up next. We come together with a collective interest in ensuring that we have regulatory measures that are risk-proportionate and practical, and that really help support the rights of each country's citizens as well as protecting the most vulnerable. It is not an easy task, especially when you are trying to avoid unintended consequences, but that goes to the importance of multi-stakeholder approaches and the value of conversations like these. I will stop there, but it is a pleasure to be able to join the conversation.
>> PAUL WILLIAMS: Thank you very much. I am burning with a thousand questions, as I hope others are, but I'm going to hold back because I said I would open it up to the room. Hearing you speak like that, Liz: in all these conversations I find everyone wants to know what is actually happening inside tech companies on this stuff, how they are actually dealing with it. So it is interesting to hear you talk about the four pillars.
So let's move on. And I want to thank our speakers, by the way, for all sticking so far to the three to five minutes. We are roughly on time. So let's now go to Bertie.
>> BERTIE VIDGEN: Hello. Can I just check that you can hear me okay? Yes, I'm seeing some nodding. Awesome. Apologies for the camera being off; I'm somewhere without fantastic WiFi. My name is Bertie Vidgen. I'm from Rewire. I was asked: why should we use AI for online safety? Well, we shouldn't always use AI for online safety. We should look at every safety problem, see where AI can help make processes more efficient and more effective, and only then use it. AI is just a tool. We shouldn't go searching for problems we can fix with it; we should see whether it is useful for the problems we have. But there are some key traits of AI, true of nearly all automated digital technologies, that make it very effective. Speed: you are talking milliseconds to process any item. Scale: AI can handle huge volumes of content, up to billions of items.
Consistency, and this is a slightly controversial one: given the same input, you will get the same output. The challenge is that very minor adjustments to the content, perhaps literally swapping around two letters in a Tweet, can give you a very different result. Performance: in most settings we can approximate human levels of performance. And the really big one for me is that AI does not suffer harm; it does not face the terrible problems we have seen human moderators face. We believe that in the right setting AI has the potential to help, and I think there are two main use cases and benefits. The first is improving efficiency. What types of hate speech are you going to proscribe on a platform? How do you enforce that policy and make sure you are actually applying it as you need to? This is where AI can help. It cannot tell you what those policies should be; that is a human problem that subject matter experts and policy experts must weigh in on. But it can help you enforce them more efficiently, and this is fundamental for free speech and Human Rights.
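To make that consistency caveat concrete, here is a minimal toy sketch in Python. Everything in it is hypothetical, not Rewire's or any platform's actual system: a naive keyword filter returns identical outputs for identical inputs, yet swapping two letters slips past it entirely. Real moderation models are far more robust, but the same brittleness shows up in subtler forms.

```python
# Toy illustration of the consistency caveat: identical inputs always give
# identical outputs, but a tiny perturbation evades a naive filter.
# BLOCKLIST and naive_filter are hypothetical stand-ins, not a real system.

BLOCKLIST = {"attack", "kill"}  # stand-in for a real policy lexicon

def naive_filter(text: str) -> bool:
    """Flag text containing any blocklisted word (exact token match only)."""
    return any(word in BLOCKLIST for word in text.lower().split())

print(naive_filter("we will attack at dawn"))  # True: flagged
print(naive_filter("we will atatck at dawn"))  # False: two swapped letters evade it
```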
If you enforce your policies effectively, you can make sure that what you want left up stays up and what you want taken down comes down. One of the biggest issues is policies not being enforced in the right way. Many users have had their content removed or demonetized simply because it refers to the Holocaust or Nazis. They are the ones who suffer when policies are not enforced in the right way. Policy enforcement is very important.
The second area is improving effectiveness. You can embed AI across an entire service: content that could violate policies, you stop from going viral. This is safety by design. You make sure the way the service is used by your user base is safer than it would be otherwise. For me this is the really exciting possibility of AI. It is a step change, doing things that were not possible before, because we have this very scaleable technology.
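As a rough illustration of the two use cases Bertie describes, efficiency and safety by design, here is a hedged Python sketch of a triage pipeline. All names and thresholds are hypothetical assumptions: score_content stands in for a trained classifier, and the policy and cutoffs would be set by humans, exactly as he notes.

```python
from dataclasses import dataclass
import random

@dataclass
class Decision:
    action: str   # "remove", "human_review", "limit_reach" or "allow"
    score: float  # estimated probability of a policy violation

def score_content(text: str) -> float:
    """Hypothetical stand-in for a trained classifier returning P(violation)."""
    return random.random()  # placeholder only; a real model would score the text

def triage(text: str) -> Decision:
    """Route content by model confidence; humans define the policy itself."""
    score = score_content(text)
    if score >= 0.95:
        return Decision("remove", score)        # clear violation: enforce policy
    if score >= 0.70:
        return Decision("human_review", score)  # uncertain: a person decides
    if score >= 0.40:
        return Decision("limit_reach", score)   # borderline: don't amplify it
    return Decision("allow", score)

print(triage("example post"))
```

The middle tiers carry the point: the model absorbs the volume, people keep the judgment calls, and borderline content is simply not amplified rather than removed.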
So in the spirit of trying to keep to time, I'm going to wrap up here, but I would like to make two very important caveats about AI. The first: to get any of these benefits, we need the right AI. I have not had time to go into this, but it is not trivial to build good AI. It takes the right data and subject matter experts, and you are never done with the work. There is nothing more frustrating than someone saying a model is finished; it is only ever done for now. We are going to have to rethink and update the AI as we go.
The second: using AI is a big responsibility, and it carries risks, some of which we have already heard about. If we don't engage with those seriously, we risk something very dangerous. Thank you very much.
>> PAUL WILLIAMS: Very interesting again. Really interesting to hear you talk about the iterative process of AI. I think the normal person on the street often thinks that AI is a mysterious, magical thing. To hear you talk about the processes we all need to go through to make it useful in any particular circumstance, and how that develops over time, is really worth thinking about in this context.
So to our final speaker, Sarah. We have used the privilege of a UK-chaired session to have a UK speaker last to wrap up. Sarah, we have heard quite a lot of talk about regulation or not regulation, and we have heard Liz's industry view of taking Human Rights into account. Tell us about the UK perspective and how we are working on it.
>> SARAH CONNOLLY: Hi. It is really a pleasure to be here. And I should just start by saying how interesting the various talks have been and how much I agree with so much of what has been said already.
So I'm really looking forward to the discussion. But Paul, as you say, I'm here to talk a little bit about the UK's approach to online safety. I'm sorry that I can't join you in person; I had hoped to, and I'm very jealous that you managed to get out to a fantastic event. I can't join you because, appropriately, my team and I have been busy here in London working with Ministers on our online safety bill, which, after a slight delay, resumes its progress through Parliament on Monday. So there has been quite a lot of work over the last few weeks, and this week, as we gear up for that.
Over recent years I think it is reasonable to say there has been general recognition of the importance of developing online safety regulation, and many of you on the panel have made that point well. There is a need to balance tackling what we know is harmful online, some quite difficult content, with maintaining freedom of expression and the right to privacy.
Things like the livestreaming of the Christchurch terrorist attack in 2019, the pervasiveness of child abuse material, and tragic incidents of child suicide following the availability of pro-suicide content have made it really clear, certainly from a UK perspective, that Government action is long overdue.
And I do find it really encouraging that this realization isn't just in the UK. As Kazim mentioned, we have the EU Digital Services Act, we have Australia's Online Safety Act, and there is developing legislation around the world. One of the things I'm really keen on is that we keep having conversations between us about what that legislation looks like, so that we are really learning from each other as we go.
I'm conscious of the time. But I did want to just focus on a few principles of the UK legislation which I hope will be interesting for the conversation.
I mentioned a moment ago the importance we place on championing protections for Human Rights and fundamental freedoms at all stages. This is not about censorship or restrictions on free expression; the Internet needs to remain a vibrant place for robust debate and free expression for all users. That is why the bill has been designed to ensure that there are strong protections for freedom of expression in place.
Companies will be obliged to enforce their terms of service consistently. They won't be able to arbitrarily remove content, which can of course be a barrier in and of itself to freedom of expression. They will also have to set up appeals mechanisms, so users will be able to appeal those decisions.
Companies will also have a special obligation to protect journalistic content and content of democratic importance. To ensure that our legislation is enforced consistently and effectively, the measures in our bill will be enforced by OFCOM, an independent regulator, and the independence of the regulator means there will be consistency in that approach. It is worth noting that we believe our approach, which is centered around systems, processes and proportionality rather than reactive blocking and takedown requests, will both get better results and protect users' rights much better.
The legislation is well targeted and focused, and the independent regulator will provide guidance to companies as to how they can comply with those safety duties. Although our regulator will be able to issue fines and block access, we are clear that such powers shouldn't be taken lightly, and any measures to do that would have judicial oversight.
We think the whole package, the whole legislative vehicle, really underscores our commitment to creating an effective regime that tackles the problem of harm online while reiterating the Government's commitment to transparency and freedom of expression. It really feels, and this conference is part of it, like an exciting time to be working on online safety. I am pleased to be here on the panel, even if only virtually, with an excellent lineup, and I'm really looking forward to all of the questions and the answers the panel will give. Thank you very much.
>> PAUL WILLIAMS: Thank you, Sarah. That's a great way to wrap up the panel speakers. That balance you were talking about is very much how my instinct was to frame it at the beginning: the good and the not good. How do you make sure you don't stop the good when you try to stop the not good? You have to maintain that balance.
So we have just under 20 minutes. Do we need to go around with a microphone for people to speak in the room? Let me turn to the room first, I think. There are quite a few people here in front of me. Would anyone like to ask a question? About two, I think. Right? Two. So let's take two from the room. Any online? All right. Folks online, if you could write your questions in the chat, the people here to my right will be able to see them and pick them up. So let's start in the room.
And I think I saw the lady right at the back first. And we may need to give you a ‑‑ there is a microphone. Over to you.
>> AUDIENCE: I'm the Chair of the Internet Architecture Board. Thank you for the really interesting session and the different perspectives. We first talked a little bit about shutdowns and the importance of having open Internet access, and then I heard multiple calls for legislation. What kind of regulation do you have in mind?
And then my second question is a little bit related to what Sarah talked about: the problems you see are exposed or accelerated by the Internet, but legal actions against them already exist. Do you think that regulating the Internet is the right tool to fight these crimes?
>> PAUL WILLIAMS: Thanks. That was broadly about moderation and regulation of the Internet and how we do it, directed partly to Sarah. And then there was a gentleman in front of you. Would you like to ask a question, sir?
>> AUDIENCE: Yes. But I will happily wait to let the panel answer the first question.
>> PAUL WILLIAMS: I don't want to run out of time. I'm going to take two and then let the panel reply.
>> AUDIENCE: I wanted to ask the panel about the responsibility to protect Human Rights. Businesses have a responsibility to respect Human Rights. Governments have two responsibilities: to respect Human Rights, that is, not to infringe on them themselves, but also to protect Human Rights, which means to intervene in their own societies where the overall outcome is that Human Rights are not being fully vindicated in the way they should be. In the area of content regulation and dealing with the harms we have been seeing, many Governments have been placing a lot of responsibility on private sector actors to intervene and regulate content, often leading to expectations that go further than what the Government itself would be able to require if it acted directly. It is always understood that companies can go much further. And each individual company may not even consider itself to be infringing on Human Rights, because its service is only one service.
But the overall aggregate impact of the pressure on those companies across the market, from Governments insisting that those companies should intervene to deal with harmful material, could well add up to an overall impact on Human Rights within that particular environment. Maybe that discloses a duty to intervene to protect against things having, in aggregate, gone too far the other way.
So I'm asking whether the panel thinks that countries that aspire to lead the world in how to regulate harmful content should be focusing not just on specific measures to deal with that harmful content, plus generalities about how to protect freedom of expression, but should be just as attentive to the specific measures required to make sure that Human Rights are fully vindicated.
>> PAUL WILLIAMS: Thank you very much. So: risks of overregulation, and the need for specificity. Let me turn to the panel. I sense that some of this was directed at Sarah; the first question mentioned her by name. Sarah, I'm going to ask you to come in in a second, so consider that your one-minute warning. Would anyone else like to come in on these things? Moderation? The risk of overintervention? The need to be specific about what we are trying to regulate? Felicia, would you like to come in?
>> FELICIA ANTONIO: Thank you for your question. When it comes to the harms we see online, the responsibility lies on all of us, including Governments and companies. What has happened over time is that Civil Society organizations, individuals and even journalists flag harmful content to companies, so that they can moderate it and ensure it does not spread on their platforms. The challenges vary from company to company, but it is important for companies to lead, because this harmful content is on their platforms. We have seen situations where it has endangered people's lives, as in Myanmar, and we saw the proliferation of hateful content and misinformation on platforms in Ethiopia over the past two years.
And I think Civil Society has really been doing well in flagging this content. But we need more from the companies: to take responsibility for moderating content and ensuring that harmful content is addressed, bearing in mind that you have to understand the context and the nuances in these places, and to follow the guidance of Civil Society and of people who are advancing Human Rights. So it is a responsibility for all of us.
And no one should take advantage of the fact that there is harmful content on platforms to justify Governments taking the ultimate action of simply shutting down those platforms. That violates fundamental Human Rights. Most of us in this room are advocating for fundamental Human Rights, and it needs a collective effort to address the harms online while at the same time advancing the benefits that the Internet and other technologies provide.
>> PAUL WILLIAMS: Thank you very much. Let me just check if any of the other panel wants to come in on this. Okay. Irene.
>> IRENE KHAN: Yes, I wanted to intervene because I think it is very, very dangerous to focus too much on regulating content. I was arguing for regulation of process: the due diligence of companies, the clarity of their policies, their transparency, remedies for users and so on. The state should ensure that companies behave properly in that sense, but it should not intervene on content that is lawful but harmful. That is a very, very slippery slope. There is content that is unlawful, and what is unlawful in the offline world, a crime in most countries around the world, should be unlawful online too. But we need to be very careful, because many Governments actually declare certain activities unlawful that are fully legitimate under international law.
Look at freedom of speech in that area and you find a lot of material that is basically censored by Governments. And we need to be very careful about companies touching, or Governments telling companies how to touch, lawful but awful speech. That is where the risk is. Because if you look at true and false: there has been a proliferation of Fake News laws, yet falsehood is not prohibited under international law. We need to look at the degree of harm, and the response should be proportionate to that degree. Even in the offline world, not all harms lead to criminal law, for example.
There is a range of options open. Similarly for companies, and many of the platforms have been doing this, there is a range of responses rather than just removal. So I would be very careful not to overregulate in this area of content moderation.
>> PAUL WILLIAMS: Thank you very much. Proportionality extends to regulation, and moderation versus overmoderation. There is a lot there that I think Sarah, you and your team have been trying to grapple with in the new bill. Any thoughts on this?
>> SARAH CONNOLLY: I do. I have lots of thoughts. I can see that Kazim has his hand up, so I will keep this short. I completely agree with Irene. This is not about content regulation, though the two can easily get conflated. We are really careful in the UK to talk about systems and processes.
It is not about individual bits of content. It is about making sure that the systems and processes are balanced in two respects.
One is safety: to protect, in particular, children, and to do exactly as Irene said, where something is illegal, make sure it is not able to proliferate. The other is to make sure those systems and processes protect freedom of expression. So it is a balance, and it has been a difficult balance to navigate. I have been doing this for a long time, and we have struggled with exactly where that balance should be and how we should strike it. There are lots of strong feelings about the volume of things online that are harmful, if you like. Where we have landed in the end is that platforms' terms and conditions need to be taken into consideration: in effect, there needs to be clarity and choice, so people can choose what to see and which platforms to use, because at the moment they often don't know. In effect, Consumer Protection measures are the way we have managed to square some of that. But it is a difficult balance.
But I really agree with it not being about content but about the systems and the processes. Two other quick points. Someone in the audience asked whether Internet regulation is the best way of managing harms that also exist offline. The answer is that it is absolutely not the only tool in fighting crime, but there is something about the speed and reach with which things can proliferate online that needs to be managed. It shouldn't be seen as the panacea, the silver bullet that will fix everything; there are other issues at play that need to be dealt with offline. But the speed and the reach are what make the difference, I think.
And on the question about private sector engagement and overintervention, again I think it is about being really clear, including to companies, about the balance between various freedoms and safety, and making sure they are very conscious of it. In fact, the piece of legislation we are taking through is absolutely explicit about the need to balance freedom of expression and democratic content alongside managing harms online, particularly around illegal content and children.
But this is also a sector which has largely not been regulated. A bit like health and safety or banking, there are plenty of other very large sectors that have been regulated as they have grown bigger and had more of an impact on people's lives. It is analogous to those sectors, and it is about getting the right regulation as opposed to having no regulation.
>> PAUL WILLIAMS: Thanks very much. I think Kazim has his hand up as well. I might ask Amelia or Alex to summarize the conversation happening online; we will definitely catch up on it afterwards, and then we will have to close.
>> KAZIM RIZVI: Thank you. I agree we have to be careful of overregulation. What we need to do now, in this era of a hyper-connected world, is empower all stakeholders. We have to empower the platforms, users and other actors, including Civil Society, to make sure that there is adequate recourse for users while their speech is protected. We have to look at greater transparency around content takedown: how platforms carry out their terms of service, how they deliver on safety, and how Governments are requesting takedowns.
At the same time, in India we have seen recently that in the amendments to the IT Rules the Government has focused on improving transparency by platforms. So we have seen some steps taking place here in India. But we have to look at increasing the level of transparency by both platforms and Governments.
Just a couple of points. One on AI: we know that platforms are using AI to remove content, and a lot of the time the process is automated because there is so much content online. But we have to start building safeguards into the process of how AI is developed. We have to start looking at diversity by design within AI tools, so that we have people from across geographies and different cultures, whether in Asia or in other ecosystems, and the tools respond to cultural diversity and to different types of communities.
The people who code and develop AI have to represent the geography more proportionately, and that has to happen in how future AI engineers are hired and in the way content removal works. So we have to look at diversifying the AI developer community.
And my last point is about, again, greater collaboration, but also making sure there is a sense of purpose in content safety, in takedown, in the removal of some content. It has to be done in a way that protects speech and maintains privacy.
>> PAUL WILLIAMS: Thank you very much. Yeah, a very important point you made there on AI and diversity. We have only two minutes left in the session, which is one downside of having a brilliant panel. I'm going to give Bertie and Liz a chance to speak: one minute each, if you would, please.
>> BERTIE VIDGEN: Great. I wanted to say something about proportionality, and I agree with what we have heard. If you look at things like the advertising sector, which tracks user engagement, and at what happens with enforcing copyright law, which, as anyone who has uploaded a video to YouTube knows, is very, very active and very fast, we do have the technology to do what's needed for trust and safety. It is amazing what can be done with the tech we see being created. So I completely agree that we need to be proportionate and aware of the market effects, but in online safety we are starting from scratch, so we could see huge improvements in these technologies. Thank you.
>> PAUL WILLIAMS: Liz.
>> LIZ THOMAS: I will take the privilege of holding the mic last to say that there has been so much here that you could take in different directions. Coming back to the question of the role of regulatory measures, I think it comes back to the multi-stakeholder piece and the reflection that, on this panel and elsewhere at the IGF, we have the private sector, Governments and Civil Society, and across those sectors a whole range of roles, responsibilities and expertise. We have a suite of options open to us. It is about remembering the holistic approaches, in which regulatory measures have a role to play, and taking to heart the offline piece as well: these harms don't exist only online.
>> PAUL WILLIAMS: Thank you. So I think we are going to have to close it there. I knew the whole time that we would run out of time, because it is a fascinating discussion. And as Bertie was saying of AI, this conversation is going to develop over time.
Thank you to all our fantastic panelists for their different perspectives on this issue. Thank you also to the organizers, to Amelia and Alex, to the DCMS team in front of me, and to Sarah joining from the UK, given how much you are doing on the safety bill, which I think was introduced this week. It was interesting to hear you talk about that and about the UK trying to find the balance we have been discussing. Thank you very much to all of you. We will close it there, and I hope you enjoy the rest of the conference. Thank you.