The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>> JAN GERLACH: All right. I think, yes, we have a quorum. Hello, Rob. Wonderful to see you.
Let's get this party started.
Hello and good morning, afternoon, good evening, everyone. Thanks for joining us at Workshop Number 342, "People versus machines: Collaborative content moderation."
Due to the virtual format you are probably not in the wrong room. If you are, welcome anyway, please stay.
It is my pleasure to welcome you all. My name is Jan Gerlach, a lead public policy manager at the Wikimedia Foundation. I'm today joined by Anna Mazgal, Policy Advisor representing the European Wikimedia Communities in Brussels; and Justus Dreyling who leads Wikimedia Deutschland, the German Chapter of the Wikimedia movement.
We will be joined by other speakers.
In a nutshell, today we aim to explore the complexity of moderating Internet content and the implications for trust in the Internet. An easy task, right? The longer version, of course, is that some Internet platforms deploy technology to automatically moderate information. Others enable their users to participate in those moderation practices.
We want to discuss the kind of support, architecture, norms, and other systems that will be needed for communities of Internet users to be able to engage in those content moderation efforts effectively and safely.
Certain models of content moderation allow the users of a platform or a forum to ensure the quality of content and enforce their social norms. This means that community standards or rules can be enforced collaboratively, in a decentralized way. This can be very effective. Research on harmful content on Wikipedia, for instance, which you'll hear about shortly, has shown that content moderation by communities can work, but also that there are some aspects where platforms need to support them.
Different kinds of communities, including from different regions and backgrounds, may apply different quality standards to information they want to see in the spaces where they meet online. At the same time, public policy such as intermediary liability laws has a large impact on a platform's ability to hand over control to users, that is, to allow them to upload and moderate content in the first place.
This workshop, this session, explores the interplay of social, technical, and policy systems that enable a decentralized, collaborative approach to content moderation. In particular, the focus of the conversation will be on harmful or potentially illegal content: misinformation, incitement to violence, or terrorist content. Many policy questions arise around all these issues. The following questions seem especially pressing to me:
From a trust and democracy perspective, how can policy support participative content moderation that creates trust in platforms and in the Internet? From a perspective of freedom of expression, vis-a-vis harmful content, what kind of architectures promote people's ability to address disinformation, incitement to violence and other content that can harm society?
From a perspective of safety, of user safety online, where do users need to be supported through tools to address harmful content without being harmed themselves? I'm very excited about this conversation today and the perspectives that we'll hear from the various speakers.
With that I will hand over to my colleague Anna. Thank you.
>> ANNA MAZGAL: Thank you, Jan. Welcome, everyone. I am also very excited about this session. I feel that there are so many things in this conversation that can inspire us, but also provide practical responses or solutions to many problems that we witness on the Internet today. But it is not up to our community alone to solve this or to draw those lessons. Any occasion that involves us in the conversation about it is great, because we can collectively advance this thinking and collaboratively think about how to make the Internet a better place. To talk about it today we invited four wonderful speakers. I'm very excited to hear what they will tell us here, because they also bring very different perspectives, coming from different backgrounds.
First we will ask Robert Faris who is a Senior Researcher at the Shorenstein Center on Media, Politics and Public Policy at Harvard University and affiliate of the Berkman Klein Center for Internet and Society, to present the research that he coauthored with other wonderful researchers exactly on the topic of collaborative content governance and moderation and the example of English Wikipedia.
Then we will ask for a contribution from Marwa Fatafta, Access Now's Policy Manager for the Middle East and North Africa, Advisory Board member of the Arab Center for the Advancement of Social Media, and policy analyst at Al-Shabaka, the Palestinian policy network. Marwa will talk about Access Now's work on the issue of content governance, as it is called, I believe, but also hopefully bring some perspective on those issues from the MENA region, which she knows very well.
Then we will ask Mercedes Mateo Diaz, who is the lead education specialist at the Inter-American Development Bank, leading and contributing to the research, design, and execution of innovative education projects. Mercedes will touch on aspects that concern skills.
And Mira Milosevic, Director of the Brussels-based Global Forum for Media Development, will talk about architecture and influence of it.
As you can see, it is a very multidimensional discussion and I hope we can just dive right into it.
Just a quick point to streamline things, because we also hope that this is interesting to you and that you have questions and comments. If you have questions for the Panelists, please use the Q&A function in Zoom; there is also the chat available if you have any other points to make.
I will be looking into both. But just to make sure that none of the questions or contributions to the actual topic of our conversation are lost, it would be great if you can put them there, and we can answer either in writing or live, depending on what sort of question it is.
So that is enough, more than enough for me. I would like to ask Rob to start his presentation.
>> ROBERT FARIS: Wonderful. Thank you, Anna. Thank you, Jan. How is my audio? Is it okay?
>> JAN GERLACH: Audio is great.
>> ROBERT FARIS: Yeah? Super. I am going to share my screen and describe very quickly.
I will summarize a study I did with colleagues at the Berkman Klein Center on how well English-language Wikipedia is doing in moderating harmful speech. Jan did a great job of explaining the stakes here. This is one of the biggest challenges in the Internet space, on one of the most important platforms, and we are very fortunate to be able to look at this.
What we did is we wanted to understand how moderation worked, and we used two approaches. One was interviews with editors on the platform. The other was a quantitative analysis. Our conclusion is that Wikipedia is doing fairly well in removing the majority of harmful content from the platform, but it is doing better on the articles than on the talk pages and user pages associated with the platform. I'll describe how we came to that conclusion.
We started out by trying to understand what harmful speech is, which is almost impossibly complicated. This was an attempt to simplify it as much as possible. You can see that we failed in that.
But it does highlight that we focused on three principal types of harmful speech in this study. One was various forms of harassment. Another one is identity-based attacks. So attacks based upon, for example, race or ethnicity. The other was physical threats or incitement to violence.
This is an attempt to summarize how Wikipedia works. An article is written. Someone creates a revision to the article. If everyone is happy with it, you have an addition to the article.
Most of what I'm going to talk about is in this range. There are two ways in which reversions occur. One is that human editors look at it, decide this is not a productive edit, and revert the edit. It goes back to what it was before.
Or there are various machine tools that do the same thing. The one listed here, ClueBot NG, does reversions automatically.
Wikipedia, of course, is very transparent. So this article is the same as the original, but there is also a history of the edit that was made and the revision to the edit that is available to the public.
There are two other layers. One is when this history includes content that administrators feel is so harmful or egregious that they remove it from public view. So it is still there; administrators can see it, but the public is no longer able to see it. There is an even higher level where the content is suppressed so that even administrators on the platform can't see it. But we are interested in this area here: how does the combination of humans and machines do in removing harmful content?
To give you an idea of scale, if each one of these dots is a revision, the amount of things that are reverted is fairly small. Just this area. That area where things are reverted but removed from public view is even smaller. That highest level of editing where it is suppressed all together is even smaller.
So Wikipedia is huge; there is a lot going on there. But as a portion of that content, reversions are fairly small.
We looked at a sample of 100,000 revisions to Wikipedia to see how many of them were identified by the machine learning tool as being harmful in those three categories I mentioned before.
And there the proportions are pretty small. In the article space, fewer than one in a thousand revisions were flagged as harmful speech by the machine tools that we used: 92 out of 100,000.
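For readers who want a concrete picture of this screening step, here is a minimal sketch in Python of how a sample of revisions might be run through a toxicity classifier and the flagged rate computed. The function names and the 0.8 threshold are hypothetical, illustrative assumptions; the study's actual tooling and settings are not specified here.

```python
# Minimal sketch: screen a sample of revisions for harmful speech and compute the flagged rate.
from typing import Iterable, List


def score_toxicity(text: str) -> float:
    """Placeholder for a real classifier (e.g. a hosted scoring API or a local model)."""
    raise NotImplementedError


def flag_harmful(revisions: Iterable[str], threshold: float = 0.8) -> List[str]:
    """Return the revisions whose toxicity score exceeds the (illustrative) threshold."""
    return [rev for rev in revisions if score_toxicity(rev) > threshold]


def flagged_rate(flagged_count: int, sample_size: int) -> float:
    """Proportion of the sample flagged as harmful."""
    return flagged_count / sample_size


# With the numbers Rob cites: 92 flagged revisions in a 100,000-revision sample
# is a rate of about 0.0009, i.e. fewer than one in a thousand.
print(flagged_rate(92, 100_000))  # 0.00092
```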
>> ANNA MAZGAL: Rob, I'm sorry, but you keep fading away. We wonder whether you might be a bit closer to your mic or maybe you are covering it? I am so sorry to interrupt you.
>> ROBERT FARIS: Thank you for the interruption.
>> ANNA MAZGAL: Now, it is good.
>> ROBERT FARIS: I'm afraid there is nothing I can do. I can slow down, I guess.
>> ANNA MAZGAL: No, it's perfect.
>> ROBERT FARIS: Unfortunately, I'm having Internet issues today. I don't know why.
>> ANNA MAZGAL: I see, I see. It is fine. Sometimes the volume is a bit lower. That's why I decided to step in.
>> ROBERT FARIS: Thank you, thank you for that, Anna. If you lose me again, please do say so.
>> ANNA MAZGAL: I'll wave.
(Chuckles.)
>> ROBERT FARIS: Perfect. I'll keep an eye out for you.
In the sample of articles we looked at, a very small proportion had harmful speech associated with them. And where it did exist, this harmful speech was removed in a minute or less. So in the article space, the performance of the editors and the automatic machine tools together was very, very good.
We see that in the article talk space or on the user pages, the performance was not as fast in removing it. There was a very small amount of what the machine tools we used for the analysis had identified as harmful speech that remained on the platform.
But in general very effective.
We also looked at who was doing these edits amongst the editors. It is a very distributed process. The vast majority of people who had reverted harmful speech only did it once. So Wikipedia working as Wikipedia is meant to work.
When we talk to editors about the same process, what they said lined up very well with what our quantitative analysis said. First of all, the process was more effective on articles than it was on talk spaces. One of the principal reasons for that is that the machine learning tool does not operate on talk spaces.
They also said that fewer editors were focusing on talk spaces than the article space, which also makes sense. The primary objective of Wikipedia is to get the articles right. So most of the attention is paid there.
It was also harder to moderate the talk spaces than the article spaces because the conversation about the content is an important part of Wikipedia and it is much, much harder to distinguish between what is acceptable from a community standpoint from what is acceptable from an article point of view, which has a cleaner line. Is it accurate? Is it productive or not?
A key part that the editors confirmed is that the policy and guidelines are not necessarily interpreted and enforced the same way across the platform. This is more of a feature than a bug. Wikipedia has a decentralized model, which is very different from the large platforms. The editors themselves have a lot of latitude in deciding what is appropriate or not. And hence, there is necessarily less consistency in how this is applied on the platform. We understand that.
An important part of the Wikipedia model, of course, is keeping the community itself alive and well and nurturing the community. This is a principal challenge for Wikipedia.
There was no consensus among the editors on whether there were enough editors and administrators to keep the thing afloat. Anyway, we know from talking to people that that is an issue. There are other challenges within Wikipedia. The homogeneity of the editor pool is one of the central ones. It is a known problem.
The editors that we interviewed also said that the automated tools are very helpful and complement and support their activities.
Another thing that we did is we looked at how media covered content moderation across platforms as a measure of how well Wikipedia is doing. We did this with a tool we developed with MIT called Media Cloud. There is a lot more public attention on content moderation on the commercial platforms than there is on Wikipedia. We see that as a sign of it working well on Wikipedia as well.
So the conclusions from this is that English Wikipedia seems to be doing a pretty good job. This shouldn't take away from the damage and abuse that Wikipedians are feeling. Those that are experiencing harassment on the platform, this is not going to be very helpful to them. They probably don't feel that Wikipedia is doing that well, if you are on the receiving end of abuse.
Again, on the article space this is much more effective than on the user space. But to state that as a high-level conclusion, which we believe to be true, kind of glosses over the intense burden that editors bear in carrying out this process.
There is room for a lot more research in this space. How do content removal and account suspensions influence the incidence of harmful speech? In an ideal world we would have participants in the community who do not want to contribute harmful speech, and those incentives to keep people from doing it in the first place would be most important. We did not have visibility into that.
Another question is what does good performance mean? To really understand that we would need to compare how Wikipedia is doing to other platforms or in fact to compare it against how people are doing in real life. How is Wikipedia any better or worse than communities in real life space?
Another question which we are not able to answer is: what are the overall trend lines? Are things getting better or worse on Wikipedia itself?
So I will turn it back over to you, Anna. I very much look forward to the conversation and thank you for giving me the time to present this very quickly.
>> ANNA MAZGAL: Thank you, Rob. If you have been following the presentation, you may also want to check the Q&A section, where there are some questions. One of them was answered.
I am wondering if we can maybe quickly answer the second one. Rob, or perhaps Jan? Fran is asking: what is the scale of the problem? How many items per month or year need to be addressed as harmful?
If any of you would like to take this? We can also do it in writing if that is preferable.
>> ROBERT FARIS: I will tell you what I know, and others will have to help me with the other parts.
The data we have is that harmful speech occurs at a rate of roughly one in a thousand revisions. I don't remember what the overall number of revisions on Wikipedia is. It is very, very large.
So we have to keep two things in mind. One is that as a proportion of activity it is very, very small, and we can take that as a sign of optimism. But given the overall size of Wikipedia, it ends up being a lot of material. That's a cause for concern.
>> ANNA MAZGAL: Thank you, Rob, very much. Now we can move to the next presentation, Marwa, if we can ask you?
>> MARWA FATAFTA: Sure. Thank you so much, Anna, Jan, and also Rob and others for inviting me to this discussion.
I will start with how we think about this as a global organisation. At Access Now, our mission is to defend and extend the digital rights of users across the globe. This includes working to ensure that at-risk individuals and groups do not become victims of censorship or online abuse, whether through government regulations and laws or through corporate practices.
So one key question we focus on is: how can we protect freedom of expression in the era of online content moderation, especially since there is so much pressure on governments and private companies to address illegal content and all sorts of harmful content that mirrors the ills of our society: hate speech, harassment, incitement to violence, and disinformation?
Unfortunately, as we have seen in many countries and in many contexts, that is often done in a rushed and poorly crafted manner. Through our work in the MENA region and many other regions, in Latin America, the U.S., the EU, Asia-Pacific, and Africa, and also through the work of our digital security helpline, we know first hand, being in direct contact with human rights defenders, journalists, activists and members of the press, marginalized groups and minorities such as women and members of the LGBTQ community, how decisions related to content moderation and curation, what we call content governance, can affect the fundamental rights of those groups.
The central question is: how can we bring the fundamental rights of users at risk, and of all users, into the debate today and also into the wider discussion around content moderation? I want to highlight that content moderation is of course a very complex issue and there are no shortcuts or silver-bullet solutions. But one important principle for us, when we discuss content moderation and content governance and how decisions are made about content moderation and curation online, is that those making these decisions have a duty to consider human rights in their policies and decision making.
Under international law and the human rights law framework, governments are obligated to protect those rights and companies have the responsibility to respect them.
To assist in this process, we have published a number of reports, most recently a report on content governance providing recommendations on the issue for lawmakers, regulators, and company policymakers. I will drop the link in the chat for those interested in reading the reports once I'm done.
Basically, as I said, we would like to bring a human rights-based, user-centric focus to this discussion. We divide content governance into three main categories. One is state regulation: regulations that are enforced by governments through legislation. The second is self-regulation, exercised by social media platforms and others via their terms of service and content moderation and curation. The third is co-regulation, which is undertaken by governments and platforms together through either mandatory or voluntary agreements.
One example of that is the Christchurch Call to eliminate terrorist and violent extremist content, and another is the Global Internet Forum to Counter Terrorism. For today I wanted to look at state regulation and how it interplays with self-regulation when we look at one specific region, the MENA region.
When it comes to state regulation of online content, there are of course risks associated with the government exercising state power. And unfortunately, that is not restricted to the MENA region; across the globe, governments often react to societal phenomena like harassment, hate speech, or disinformation by rushing to enact legislation without having sufficient debate and without consulting different stakeholder groups, especially civil society organisations or even the users who, at the end of the day, are affected by those laws and policies.
This, of course, opens the door not only to government abuse or to failing to tackle the issues efficiently; it also puts users at risk, including activists, political dissidents, and journalists. For example, with the flood of disinformation and misinformation during the COVID-19 pandemic, several countries in the MENA region thought, okay, we can tackle the issue of disinformation online by enacting anti-fake news legislation: in Tunisia, Morocco, and Algeria, misinformation and disinformation are basically criminalised, and they tried to tackle it through online platforms, a bit similar to the German legislation, which provides a notice and take-down approach with a window of 24 hours to respond to government requests to take down disinformation.
Luckily, through working with other civil society organisations, we managed to get the bills dropped in Morocco and Tunisia, specifically because they opened the door wide to all sorts of human rights abuses and really restrict freedom of expression without taking into consideration the three-part test, or criteria, in international law under which freedom of expression can be restricted: legality, meaning the restriction must be prescribed by law; necessity; and proportionality. Unfortunately, the penalties in these pieces of legislation to tackle disinformation are quite disproportionate.
In other countries, like Jordan and Palestine, the government response to the pandemic was heavily securitized: the security agencies, or even the Army in the case of Jordan, were responsible for enforcing lockdowns and so on. The same agencies were responsible for tackling the issue of disinformation online.
Individuals suspected or found to be sharing misinformation, which is not necessarily malicious, or disinformation have often been detained or handed jail sentences without any due process.
Of course, that infringes on people's fundamental rights, especially their right to freedom of expression and free association and assembly online.
Moving on to self-regulation, which as I mentioned earlier is how platforms define what content is permissible or acceptable on their platforms through their terms of service. Often those terms of service, community standards, or rules are written in a unilateral manner; they lack remedy and transparency; and their enforcement is often opaque, automated, and fails to align with international human rights principles.
Even though there have been so many discussions around content moderation, and to be fair a lot of progress has been made on that front, we still don't know much about how companies, especially the giant ones, Facebook, Twitter, Instagram, YouTube, apply and enforce their terms of service, especially when it comes to automated decision making.
Unlike the research findings that Rob shared with us that explain how exactly content moderation is being implemented, unfortunately we don't have the same access to understand how the content moderation policies are being implemented in these platforms.
I want to really underscore how important it is to have a thorough and nuanced understanding of regional or local social, cultural, political, linguistic, and legal context, which the automated solutions implemented by the platforms fail to grasp in numerous cases, resulting in false positives and also arbitrary and discriminatory decisions.
I'll give you one specific example of this. Earlier this summer, in June 2020, we received a few reports of Tunisian users being banned from Facebook. They would wake up, try to log into their accounts, and receive a notification that they are not eligible to use the platform. Some of those users reached out to us through our helpline, and we tried to collect as many incidents or cases as possible.
We thought in the beginning, okay, is it a take-down under government pressure? Is it related to certain words, for example, again the result of automated decision making? Some of those users were well-known activists with a high number of followers on Facebook. Others were just ordinary users with private accounts, 400 followers or so. We couldn't understand why those users were banned from the platform all of a sudden.
Note that, aside from that one notification, they did not receive any transparency from Facebook regarding what they had done or what rules they had violated to be banned from the platform.
So we reached out to Facebook together with other civil society organisations in Tunisia and elsewhere, Article 19 and EFF and so on, to understand what content moderation policies were in place, whether this had been the result of automated decision making, and so on.
Later on what we realised, of course, is that the accounts were taken down as Facebook cleaned up a sophisticated disinformation campaign run by a digital communications company in Tunisia called UReputation, aimed at influencing the presidential elections in Tunisia in 2019 as well as other elections across the African continent. So Facebook removed hundreds of accounts and what they called assets, event pages, and other public pages connected to this campaign.
Unfortunately, when we reached out to those individuals who were flagged by the automation and consequently banned, they denied any connection with this UReputation company. Many of them were artists and musicians. In one specific example, an artist who had invested so much time and energy in building his art page on the platform had it taken down. Another person lost access to photos and information; that was gone as well. We can see clearly that automated decision making coupled with improper notification to users can have far-reaching effects, not only on freedom of expression but on the enjoyment of other rights, social, cultural, and economic rights as well, especially if you depend on these platforms for your living.
Some of those accounts were restored with our help and the help of other Tunisian organisations.
One take-away from that is how the decisions made by this platform eroded trust in the platform. The sentiment that we got, and actually a direct quote from the artist, is: I can't trust the platform enough to rebuild everything that I have done throughout the years; I can't invest again in opening up an account; I can't trust that I won't be collateral damage to some other automated decision made by this platform.
And another take-away is the imbalance of power between average users and the platforms. Some of the people affected have reached out to us as civil society organisations. But I want to note that there are many individuals out there who don't necessarily have access to remedy or access to the platform. They can contact the platforms using whatever processes are made available, but in many cases their calls are ignored.
And this is really, really harmful, not only on the individual level, for you as a user, but also on the collective level. These decisions that are taken single-handedly by the platforms, especially the giant ones, have a cumulative impact. In a way they decide who gets to be part of the public discussion and who does not, and in some cases they can potentially silence the voices of entire communities. One example: around the same time we were dealing with the Tunisian case, we received reports of Syrian accounts being suspended, accounts of human rights activists, those documenting war crimes by the Syrian regime. I believe the number back then was something like 6,000 accounts or so, but again, we don't have an exact number. We just know through our networks and through our helpline team.
The suspension of accounts belonging in this case to Syrian activists led to a sentiment among activists, not only in Syria but echoed in many parts of the region: we don't trust the platform; we feel we are abandoned, especially in a context where government legislation is extremely repressive or is used to censor and clamp down on political dissent and human rights activism. The only space where people feel they can be present, be active, and advocate for human rights is the social media platforms.
And not understanding the machine, what the process is, what rules they have violated, this again erodes their trust in these platforms. So we've seen, for example, hashtags like "Facebook censors the Syrian revolution" and "Facebook censors Palestine." Many communities, especially those oppressed and marginalized, feel that they are being silenced by those big platforms.
So what can be done about it? Sharing these two examples, the main red flag that comes out is one of transparency. We need to properly understand what is automated and how automated decision making processes take place, and how exactly these social media platforms apply the rules in their terms of service.
When they take a decision, they have a responsibility to also notify the users, in order for the users to understand what violations have taken place and be able to address them. When it comes to automation, it is also extremely important to have a human review mechanism. So when you are suspended or your content is taken down, it is important at least to have a person, a human being who understands your local and regional language and political and social context, review and verify whether that content indeed violates the community standards or not.
Another thing that is important for us is context. Understanding context is crucial, as highlighted; a one-size-fits-all approach doesn't work. Of course, there is always the need and pressure on platforms to act quickly, especially when there is incitement to violence or terrorist content, and they often decide to err on the side of over-censorship rather than revisit those automated decisions later through a more elaborate complaint mechanism, where a person, a human, can review and make decisions.
Before I hand back over to you, Anna: we talked about transparency and about the importance of context, and it is also important for those social media platforms to invest in non-English language content moderation; I can't highlight this enough. The last point that I want to mention is access to remedy. Users need a mechanism in place so that, when the machine errs, which is often the case, they are able to provide further information, for example information that counters the decision made by the platform, and so that when their fundamental rights are violated they have access to proper remedy.
I will end here. I'm happy to hear your questions and comments, and I will drop the link in the chat shortly. Thank you.
>> ANNA MAZGAL: Thanks very much, Marwa, for your very interesting and multifaceted presentation.
There are three questions in the Q&A which, for practical reasons of time that we also want to leave for the other speakers, I will not read out. But if you and the other Panelists could take a look, maybe you can answer them in writing. If not, we can take them on in the Q&A part after we let everybody speak.
So it is of course important, as you mentioned, to also think of content moderation as something that users take part in, even if not on a regular basis. This is where we would like to ask Mercedes for more perspective on what is needed for that part to work.
Specifically, we are also interested in how good content moderation can actually increase, or whether it can increase, trust in the Internet, which I think is also an important notion here. Mercedes, over to you.
>> MERCEDES MATEO DIAZ: Great, thank you, Anna, thank you, Jan, thank you Rob and Marwa for your insights.
I would like to place the discussion about content moderation in the context of the skills of the user. So the broader question here would be what kind of profile do you need to create good online communities and to improve collective intelligence?
I want to go back for a second to the people versus machines idea. We are seeing a profound transformation of our society and the economic model. Anna, I do believe that talking about people versus machines is kind of a false dichotomy because, even if we might not realise it, the digital transformation is not really about technology but about people and talent. And sometimes we tend to focus attention too much on the machine side instead of the human side.
So the point here is that, beyond regulations and specific architectures, content moderation depends ultimately on users' behavior. This is something that we can work on from a policy perspective if we educate students to become digital, civic, global citizens. To illustrate this, let me use two recently published pieces of evidence. For the first, let's move for a second to the world of video games. I am going to take you all outside of this kind of conversation and into another world, another space: video games.
There we see a lot of antisocial behavior. I want to talk briefly about the results of a study done by a team of researchers at Riot Games, the company behind the game League of Legends. Many of you know about it; it has about 67 million players. The team of scientists has been able to gather a lot of behavioral data. What they found is that 1 percent of players are trolls who account for about 5 percent of all toxic behavior. Right?
But the majority of the toxic behavior, about 95 percent, comes from normal or average players having a bad day. They also did some experiments and found that the combination of banning abusive players and at the same time giving them immediate feedback improved the behavior of about 92 percent of toxic players. This is quite significant.
Also very important: toxic behavior drives people away from the platform. It makes other players quit and never play again.
And listen to this number: if you encounter a toxic player in your first game, you are 320 percent more likely to quit and never play again. This is the first piece of evidence that I want to discuss to make my point.
The second one, alongside Rob's research that he just presented, is an article by Greenstein, Gu, and Zhu recently published in Harvard Business Review, looking at ideology and the type of content that different contributors and online communities produce on different platforms. They examine evidence from Wikipedia.
They look at articles about U.S. politics and analyze the factors that actually contribute to content moderation. There are basically two main contributors: the exit of the most biased contributors, and the moderation of biased contributors who start producing less biased content.
They find that, A, the shift in the composition of participants accounts for about 80 to 90 percent of the moderation (a rough way to formalize this is sketched after point C below).
B, that collective intelligence becomes more trustworthy when the mechanisms in place encourage confrontation between distinct viewpoints.
And C, and this is a suggestion that the authors make, that if the managers of platforms let the most biased contributors leave the collective conversation and they can be replaced with contributors holding more moderate views, eventually the content will be better or more balanced.
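To make point A slightly more concrete, here is one way a claim like "composition accounts for most of the moderation" can be written down. This is an illustrative, shift-share style decomposition under assumed notation (content shares w, slant scores s); it is not necessarily the exact specification Greenstein, Gu, and Zhu use.

```latex
% Let \bar{s}_t be the average slant of content at time t, w_{i,t} the share of
% content produced by contributor i at time t (zero if the contributor is absent),
% and s_{i,t} that contributor's slant. Summing over everyone present in either period:
\Delta \bar{s}
  = \underbrace{\sum_i \bigl(w_{i,t+1} - w_{i,t}\bigr)\, s_{i,t}}_{\text{composition: who contributes}}
  \;+\;
  \underbrace{\sum_i w_{i,t+1}\,\bigl(s_{i,t+1} - s_{i,t}\bigr)}_{\text{behavior: how individuals change}}
```

On this reading, point A says the first term, the change in who contributes and how much, accounts for roughly 80 to 90 percent of the observed moderation, with individual change accounting for the rest.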
So let me just recap for a second. Summarizing these two pieces, we seem to know that, A, toxic behavior in online communities is not necessarily about a few trolls but about the behavior of average people; that when you have had a bad experience it is very unlikely that you come back to that online community; and that avoiding this kind of behavior takes a combination of stick and carrot, banning plus providing feedback.
And B, it is the composition of the participants that explains the majority of the content moderation. This last point is critical to the message I'm trying to convey here, because content moderation seems to be to a large part a result of people's behavior.
So now, what are the skills that users need to avoid biased and radical content and antisocial and toxic content, and to provide good content for all? What are the skills that we need to thrive in an interconnected world and to contribute to creating good online communities and content?
The obvious ones are media literacy and digital literacy, with an emphasis on the aspects that have to do with citizenship, but also skills like collaboration, empathy, creativity, ethics, global citizenship, self-regulation, and critical thinking.
And about that last one: critical thinking enables people to have an informed, ethical engagement with information, with digital technologies, and with media content.
And as we are discussing here, people are not only consumers of content but producers of content, and critical thinking in that context is key to supporting individuals in contributing respectful and ethical responses. Going to the point Marwa was making before: even if you have an automated decision, if you place a human at the end of the process to review the mechanism, you still need that person to make a critical judgment of whether or not the content is going to be suppressed or the information is going to stay up.
So that is an important point. Empathy and ethics, as we said before, how to treat each other, how to respect each other, and how to recognize and respect difference, are also critical factors in the equation.
So now the question is: how are we doing, from an education perspective, at training people in these skills and to be good digital and media citizens? Let me briefly refer to the last PISA assessment, done in 2018, which just released its report on global competence. The OECD team tried to measure for the first time what they call global competence. It is a first attempt to measure things like respect for difference, sensitivity to other viewpoints, students' ability to distinguish between right and wrong, and whether they understand and can critically analyze intercultural and global issues.
The findings from this study showed that fewer than one in ten students in OECD countries was able to distinguish between fact and opinion, and that was based on information they were provided regarding the content or the source of the information.
So quite significant: fewer than one in ten students in OECD countries were able to distinguish between those two things when presented with the content and the source of that content.
They also looked at things related to wellbeing of 15-year-old students and social and emotional outcomes. They find that across OECD countries, just about two in three students reported that they are satisfied with their lives. About 6 percent reported always feeling sad. And almost a quarter of the students reported being bullied at least a few times a month.
And this is not irrelevant because remember that we said before that in video games, about 95 percent of bad behavior is explained by average people having a bad day.
So overall, education has not been particularly good at training us for these kinds of things. It teaches us what computers are good at: repetitive work, accumulation of data, and compliance with instructions. What we as humans are much better at, and cannot be replaced by a machine at, is interconnecting things that have not been linked before, facing situations that could not be predicted, using and understanding our emotions to solve a problem, or coming up with new ideas, et cetera.
So they are actually creating second class robots instead of first class humans. In this context, the context of the conversation that we are having today, this is of critical importance.
So just to conclude, I want to emphasize this idea that content moderation is the ultimate responsibility of the user. It is clear that platforms have a responsibility to contribute to creating good online communities.
In that context, it is important that online platforms systematically apply a set of rules and make sure that the content is good and acceptable, just as Marwa and Rob were explaining before.
But we shouldn't arrive at the point where it is the machine that defines the relationships between humans in a virtual space. Beyond the algorithms, it should be the humans using these platforms who define, through their behaviors, its final content. Thank you, Anna. I'll let you now continue with the moderation and with the next speakers and Panelists.
>> ANNA MAZGAL: Thank you, Mercedes. Super interesting. I love the phrase about second-class robots and first-class humans. I think it sums up the situation quite well, and it makes me wonder whether this strategy of having lots of second-class robots isn't written into the architecture of the Internet, or maybe that's just as good as we can get. Mira, I'm very curious about your contribution. Please go ahead.
>> MIRA MILOSEVIC: I'll just take a second. Thank you, Anna, and thanks everyone for inviting me to present here today.
It is really interesting and I agree with so many points raised by Robert, Marwa, and Mercedes. The point about the very small percentage of toxic players is really interesting, especially for identifying malicious actors on online platforms. That is something to take a look at.
The distinction between fostering good online communities and users being responsible for content was also interesting.
But to go quickly to my presentation: I manage the Global Forum for Media Development, a network of over 200 organisations in 70 countries that work on the promotion of freedom of expression, journalism support, and media development. I will give a quick perspective on what has been happening, especially in this period while the world seems to be closed but there is actually a lot happening.
There is an interesting contradiction in the way that we are addressing content moderation, especially harmful and illegal content, and in how we frame the moderation of content on our platforms in general. The approach from most governments and private companies has been countering harmful and illegal content, and there has been very little investment in supporting what we would define as ethical, and especially from the journalism perspective, trustworthy and credible content. That quote from Claire Wardle, if you can see it, is one of my favorites. It was taken from Twitter at the beginning of this pandemic.
And this is something that the Global Forum for Media Development and a lot of our members try to emphasize as a message every time we talk about issues in the online digital space.
Last year we launched the Dynamic Coalition on the Sustainability of Journalism and News Media. This year we launched the first report on the main issues facing media and journalism in the digital space.
You can find it on GFMD's website; I'll share the link later. It is also on the Dynamic Coalition's page.
There are a couple of case studies; all the articles there are interesting, but I will highlight the local case studies. One from the Balkans, by the Balkan Investigative Reporting Network (BIRN) and the SHARE Foundation, gives examples of media content being removed from platforms, especially through algorithmic decision making.
These are some of their conclusions and Marwa has also indicated some of these.
And as you can see, this is from their website, there is an increase in activity in terms of content take-downs. These are countries in the western Balkans and also central Europe: Bosnia and Herzegovina, Romania, Serbia, and others.
As we have heard, human rights activists, civil society activists, and especially media organisations are the most targeted in every aspect of digital rights attacks and breaches.
The next I'll try not to change the page. Yes.
The next local case study that we have presented in this report is from our member RNW Media from the Netherlands. They have created this fantastic network of civil society and journalism organisations that promote reproductive and sexual health and relationship literacy and knowledge building.
Unfortunately, in different countries, different ads are being removed or not authorized by Facebook on the assumption that this is adult content or otherwise suspect content, et cetera. Similar to the BIRN review, the removal of this content was not justified. You can see some of the ads here. Some of them have been classified as sexual publications and some have been classified as porn.
It didn't make any difference that these organisations have their own rigorous policies in terms of ethics and strict policies in terms of transparency, ownership of the pages, et cetera. That is one of the points I want to draw attention to: unlike in the video games that Mercedes was mentioning, the profile of the actor, their mission statement, their editorial and content standards and practices, are not taken into account when content is moderated on many platforms.
One of our members, Article 19, has started a campaign inviting human rights activists, journalists, media, and artists to report content take-downs and problems they have in digital spaces. There are a couple of really lovely videos that I recommend you take a look at. And as some of you have mentioned, content creators invest a lot of time, energy, and resources, and sometimes their accounts are just taken down and their content is removed. In the case of journalism and media organisations, investigative reporting stories sometimes take months to make and are very expensive, and platforms like Facebook, Twitter, and YouTube are very important distribution and promotion channels for those organisations.
So if the content is taken down and then it takes ten days to reinstate it, this is damage to the story. This is very visible damage for their sustainability, et cetera.
So this is from the Article 19 page that I recommend you take a look at.
For journalism organisations and media organisations we have several layers of issues with content moderation. We have content moderation in terms of moderating a single article, single unit of content.
We have the content curation question: what content is being promoted, what is being demoted, what content can be shared.
Then we have content monetization: in many countries around the world, even with millions of likes, shares, and subscribers on YouTube, journalism organisations do not have monetization opportunities. In other countries, even when they have monetization opportunities, they are disadvantaged in comparison with other content that is not serious and does not offer in-depth engagement. This is a question of architecture, which Anna has mentioned.
And then finally, who are the users? What is the profile of the actor that places the content online? For journalism organisations, human rights activists, and civil society, this is the most important issue, because they have invested years in adopting and practicing very high standards when it comes to their content. In the digital space this is not recognised in content moderation, and it is not recognised in content monetization either. Their sustainability is seriously undermined by their inability to monetize highly engaging content.
How much time do I have?
>> ANNA MAZGAL: I guess not much.
(Laughter.)
>> ANNA MAZGAL: Since we are nearing the threshold of 20 minutes left for Q&A. I also wanted to ask Rob to comment. So thanks for asking.
>> MIRA MILOSEVIC: Okay. Just a few minutes on this part. A lot of decision making in the online space takes us back to the architecture and the business and economic model of digital spaces. It is basically advertising, what we call programmatic advertising: billions of real-time advertising auctions that bring, as you can see, hundreds of billions to major digital platforms for interactions of three to ten seconds. This is what they make money on. This is what the architecture of the platforms optimizes for.
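Since programmatic advertising may be unfamiliar, here is a minimal sketch in Python of a single real-time bidding auction with an intermediary cut. The bid values and the 30 percent fee are illustrative assumptions only, not actual market figures, and real auctions involve many more parties and rules (exchanges, header bidding, first- or second-price pricing, and so on).

```python
# Simplified sketch of one programmatic (real-time bidding) ad auction.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Bid:
    advertiser: str
    amount: float  # bid for one impression, e.g. in USD


def run_auction(bids: List[Bid], intermediary_fee: float = 0.3) -> Dict[str, object]:
    """Pick the highest bid and split the payment between the publisher and ad-tech intermediaries.

    `intermediary_fee` is the share retained by intermediaries (an assumption for illustration).
    """
    winner = max(bids, key=lambda b: b.amount)
    to_intermediaries = winner.amount * intermediary_fee
    return {
        "winner": winner.advertiser,
        "price": winner.amount,
        "intermediaries_take": to_intermediaries,
        "publisher_revenue": winner.amount - to_intermediaries,
    }


# Example: three bids for one impression, resolved in milliseconds while a page loads.
print(run_auction([Bid("brand_a", 0.002), Bid("brand_b", 0.004), Bid("brand_c", 0.003)]))
```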
We are now seeing some very negative trends in the cases that Marwa has mentioned: digital rights activists, journalists, and media staff are being treated as terrorists in many cases for spreading what is sometimes defined as terrorist content.
On the other hand, we have advertisers that are now shying away from serious journalistic content. This is the latest brand safety floor that was adopted; you can see the medium-risk area includes breaking news and military content. Even when you are reporting on COVID you are not monetized. This requires more time to explain, but basically up to 70 percent of all advertising revenue in this digital architecture actually goes to so-called middlemen, which in this case is Google.
And this system actually prioritizes and gives advantage to very low-level engagement rather than serious political content.
Finally, just a couple of recommendations in line with everything that we have heard today. I would add looking at the business model and the economic side of the platforms, and looking not only at content moderation but at where the balance of market power lies, which was mentioned earlier, and how the markets are structured. Where there is an opportunity for market regulation, so that we have more competition, more options, more unbundling, more access, is something that we also need to look at when it comes to content moderation practices.
I hope I didn't go above my ten minutes. Thank you.
>> ANNA MAZGAL: Thank you, Mira. It is always difficult to watch the time when all the points made are important and really add to the context. Thank you for asking, but we are good here. We have more or less 20 minutes for Q&A.
We also wanted to ask Rob, what do you think of this all from the perspective of the study you did? But I suggest maybe first we go to questions so we make sure that the inspiration that people have and questions are not lost.
And then we will ask you, if that's okay, Rob, at the end.
We have three questions in the Q&A, and one that Mercedes already marked as one you would like to answer live. So maybe we can start with you, Mercedes, then go to the questions in the Q&A and one question that I will read from the chat. So after you, Mercedes.
>> MERCEDES MATEO DIAZ: I think that was a mistake, Anna. I was trying to tag the answer so I could actually type it, but then I marked it as live. I think I already answered: some people asked for the references to the studies and articles that I mentioned, and I already sent them.
And they were also asking about how you can develop those skills. Well, it would be a long answer, but basically the short version is that we need to work on that through the education and training systems. There is a full transformation and disruption of how we educate and train people throughout their lives in the context of today's digital economy and knowledge economy, et cetera.
And I think basically the answer is we need to rethink and revamp our education systems. That would be the short one.
I sent a link where they can find more references and information about how it can be done, what kind of programmes are included, how to redo the curriculum and structure, and the many disrupters that are in the market doing these things from outside the formal education and training systems, and so on. There is a lot of information there. Thanks, Anna.
>> ANNA MAZGAL: Thanks, Mercedes. That's great. Of course, we probably cannot cover everything in the short time that we have, but let's try to answer at least some questions. If I may, I will read a question we have in the chat. It is as follows: with content governance and decentralized content being local, what are some of the challenges involved, considering digital inclusion and cultural differences, in relation to policy frameworks?
This question was posted during Mira's intervention, so I guess it is to you. Of course, anybody who has anything to adhere is invited to contribute on this as well. Please go ahead.
>> MIRA MILOSEVIC: Sorry, Anna, I was looking at questions. I didn't hear the last bit.
>> ANNA MAZGAL: Of course. So shall I read the question again?
>> MIRA MILOSEVIC: Yes, because I can't see it.
>> ANNA MAZGAL: Yes, it is in the chat but okay: Content governance and decentralized content as local, what are some of the challenges involved considering digital inclusion and cultural differences in relation to policy framework?
>> MIRA MILOSEVIC: Yes, Marwa mentioned that. She said it can't be stressed enough. A big challenge is that big companies and digital platforms do not invest in markets where they don't see a potential return.
And so you have a lot of countries where they, of course, operate. There are a lot of users, and some of these countries base their public discourse and open discussions on these platforms. Unfortunately, when something happens and there is an issue, there is rarely a person that you can address. And again, as Marwa said, it mostly goes through international organisations. So there is no consistent redress model, notice of content take-down, et cetera. This needs to be taken into account.
The other layer, of course, is that for evidence-based policymaking, cultural and national differences in legal systems need to be taken into account.
While at the same time respecting international human rights norms and standards.
So all these things need to be balanced carefully together with the understanding the nature and the principles of so-called malicious actors. What we have at the moment is that platforms do not invest enough. This is an expensive thing. They do not invest enough because it is not at the moment in their business interest.
>> ANNA MAZGAL: Thank you, Mira.
Does anybody want to add something on this topic or can we go to some other question?
(There is no response.)
>> ANNA MAZGAL: I think we're okay here. Let me try to take them in the order in which they appeared.
The first one was: how prevalent are newspaper bots that gather data from news sources, and do these bots collect for harm? I think, Marwa, it was during your presentation. Of course, again, anybody who feels like answering, please go ahead.
>> MARWA FATAFTA: Yes, thank you. I actually am not sure I would be able to answer this question. I don't know if other speakers can contribute to it.
>> ROBERT FARIS: So we are at the very beginning stages of understanding bots and how effective pro social bots can be in a digital age.
Part of it is understanding how people are nudged in the right direction by things and people are doing interesting experiments with bots.
But I don't think it is ultimately going to be a solution to governance issues on the Internet, but I think well developed bots can help a little bit on the margins.
We will know a lot more in ten years. Please come back. Great question. Thank you for bringing that up.
>> ANNA MAZGAL: I hope we will be in a better place in ten years, but we cannot really know.
All right. Thank you, Rob.
So there is another question from earlier in our conversation, which is: in all assessments and decisions we make as to content moderation, be it by way of designing an algorithm or attended decisions, a certain degree of arbitrariness is inevitable. How do the present content moderation processes of Facebook, Google, and others deal with the constitution of the content moderation team or board? What is the extent of variety in how the team is constituted? What innovations are being contemplated?
What do we know about the processes that are not always apparent or not always explained and whether we can somehow reverse engineer the system to understand better how they work?
Go ahead anyone.
>> ROBERT FARIS: That's a hard question. Really good question. I was hoping that Mercedes, Marwa, and Mira would take this because they understand it so much better than I do, but given the silence, I'm jumping in.
This model of content moderation is fundamentally limited by this factor. Until we can find more effective mechanisms for decentralizing these decisions so that they fully reflect the interests and desires of the people being moderated, it is going to be imperfect. Facebook can try; I know they have efforts underway to improve the diversity of their efforts and to bring in more regional voices in making these decisions.
But doing this at the scale they are attempting is always going to be unsatisfactory, in my view. That's one perspective on it. It is more opinion than fact, but there it is.
>> ANNA MAZGAL: Thank you, Rob.
I think we also have a question that somehow follows up on this topic. It is a question about what specific architectural interventions on non-collaborative platforms like Facebook could be beneficial in giving users more of the responsibility to intervene and reduce toxicity in the community. What can Facebook learn from Wikipedia?
Great question. What can Facebook learn from Wikipedia?
>> MARWA FATAFTA: I can answer this. I think one take-away from the Wikipedia experience, and also one of the conclusions of Rob's study, is of course the question: What is the price of having a decentralized approach that is context-dependent versus being consistent and having to moderate content with speed and at scale?
When I was reading the paper and thinking about how that would apply to other, bigger platforms like Facebook, YouTube, and Twitter, one thing is that Wikipedia is of course different from those platforms in terms of the content and also the pressures they receive, for example, government legal requests to take down incitement to violence, terrorist content, and so on.
I'm not sure exactly how that decentralized approach can be applied to those huge platforms. Does that mean, for example, that we break down those big platforms so that communities have more input and are able to contribute to content moderation policies and the implementation of those policies, as in the Wikipedia model? I'm not sure. Of course, that is a huge policy question, looking at how these companies will be regulated in the future.
And I also want to note here, building on Mira's presentation about the business model of those companies, that users do have, let's say, a responsibility to share content and to interact with other users in an ethical manner. But let's not forget that the platforms have a business model that basically builds on and profits from sensational content, including harmful content, hate speech, and harassment.
So in that case, I would be very cautious in placing the responsibility on users themselves. But I do think that these platforms have a responsibility to invest resources. Let's not forget that these are among the most profitable companies in humanity's history, and 90 percent of the user base of those U.S. companies is outside the U.S. So they are making money off people's data through advertising, and the business model allows for the spread of harassment, disinformation, and hate speech. And again, looking at how they tackle these issues: when you compare Facebook's and Twitter's response to the U.S. elections versus other elections around the world, I think that speaks volumes about how and where they are investing. The investment in their policies, and the changes made to those policies, often follow economic interests and the fear of being regulated or held legally liable.
Which, of course, in many parts of the world that is not the case.
So again, you know, how do we strike the balance between putting responsibility on users, with users effectively becoming part of content moderation policies and their implementation, and what exactly the role of those giant companies should be?
It is a difficult question. And of course, there are no short answers to it. There are different approaches and different opinions on this issue.
>> ANNA MAZGAL: Thank you, Marwa.
If I may make a comment on that as well, I very much agree with you that it is a complex issue.
Also, we have a comment that was sent -- sorry, received in the chat -- that kind of follows up on this. The point is that the common excuse is that there is too much content to moderate, yet the company has a lot of money and does not spend enough of it on safety, which includes moderation. So on one hand we have that. On the other hand, we have the question of whether we would in a way be providing free labour to someone who is already very wealthy, which of course touches on inequality around the world. That also brings us to a question that was asked, again to all Panelists: How do we engage the stakeholders to build local content in diverse languages at the community level to enhance -- sorry, I lost it.
To enhance ecosystems as a platform? How do we engage stakeholders to build content in diverse languages to enhance the ecosystem as a platform? Would anyone like to take that?
>> MIRA MILOSEVIC: If I may, if no one else wants to go ahead with that?
There are a couple of issues here. This also refers to the question about architectural interventions on platforms like Facebook.
Depending on the tradition of shaping and regulating journalism and information ecosystems, we have at the moment different policy discussions around the world. At one end, in the U.S., we have this new report from Congress about the monopoly position of the big platforms.
In Europe you have the tradition of freedom of expression and of the media, but media pluralism is also a very important element of media policy. That supports the notion that citizens need to have credible information, not only from one source but also from sources in different languages.
Then we also have Australia, which is at the moment considering new pieces of regulation in relation to platforms.
So, to get back to Europe: there is a tradition there of supporting content in different languages. And at the moment, unfortunately, quality content and information, especially in digital spaces, is not commercially viable.
So there are a lot of different models. You have the public service broadcasting and public service media model, which funds a lot of this content around the world.
At the moment that is the largest sum of money in the marketplace, so to say, around 35 billion, and it funds content in diverse languages for communities, including small communities.
Unfortunately, this depends on which country they are based in, whether there are systems of state subsidies, and whether those subsidies are allocated in a transparent and fair way. And finally, of course, in some countries philanthropy is funding this kind of content, and membership and philanthropic support for this public interest content, especially in different languages, is growing in importance. So this is something also worth looking at from the policy perspective.
And finally, and then I am not going to speak anymore, our sector also has very little capacity to participate in all these discussions. We are fortunate to cooperate with organisations such as Wikimedia and AccessNow to do things together. There was a question about how we can help as individuals: join maybe ten or 12 of the different mailing lists that deal with freedom of expression and media freedom advocacy around the world. There are not many of us. Help in any way you can, because we are facing a lot of policy and regulatory decision-making over the next couple of years, and all the voices here today need to be heard.
>> ANNA MAZGAL: Thanks so much, Mira. Since we are nearing the end of our session I would like Rob to add his observations and comments after the discussion. And then we will quickly wrap up and we can continue this conversation online afterwards.
Go ahead, Rob.
>> ROBERT FARIS: Thank you, Anna. And thank you, everyone, for the conversation. I learned so much today from Mercedes and Marwa and Mira, listening to you.
I have two thoughts. One is, I think that we should continue to consider what we want the Internet to be. It is not an easy question.
When we talk about content moderation, we are asking: What do we want the collective spaces online to be? I like to separate it into two buckets. One is implementation. So once we understand how we want it to be, how effective are we at actually putting these measures into place? How transparent are we? That's hard, particularly at scale. But I don't think it is the hardest problem.
The harder problem is defining the standards by which we are going to interact with one another and deciding who is going to decide. So we've inherited this system that evolved over time. It is a mix of platforms, a little bit of government and a little bit of users. But ultimately we need standards that we can live by that work both at scale and at community levels. Who is going to do that?
I think we see one model in Wikipedia which works pretty well for Wikipedia. It takes a lot of energy and a lot of people's time. There's a lot of pain that goes into that and there are thousands of hours of people's time that go into that.
How you use that on a commercial platform, I don't see. It is like night and day. I can't remember who said it, but there's probably not a lot of people who are going to be volunteering their time to help Facebook make more money than they are making now. I don't know how we go about that.
But we could and should be thinking about what a nonprofit alternative would be. And how would we make that happen? Who would contribute to that?
So one of the things I really took away from the conversation today is how much we need the energy and efforts of individual users to actually make that happen. And if not that, then we are going to have to live with the Facebooks and Twitters and Instagrams that we have, and with how they make those rules.
So I take Marwa's point that we need to be cautious about turning over responsibility to users, and that there will have to be a sharing of responsibility between governments, users, and maybe platform owners. That is the challenge before us, and it is clearer now than ever before. Thanks so much for the great session.
>> ANNA MAZGAL: Thanks, Rob. Justus is our rapporteur and will prepare a report. I think it will be available, right, if you want to go back to a summary of the session? So, Justus, if you could share a few words to sum up the session, then we're done.
>> JUSTUS DREYLING: Yes, thank you to all the participants, Panelists. It has been interesting listening to your interventions.
I wanted to highlight four very quick points. Rob already provided some excellent comments related to the presentations; these are my four take-aways.
The first take-away: a mix of technological and community-based approaches can be somewhat successful, as shown by Wikipedia. At the same time, as Rob has just said, this might not be applicable to commercial platforms. However, it is an interesting approach, and we can definitely learn something from it.
The second take-away: there need to be some sort of quality standards for content moderation. I took that from Marwa's presentation and AccessNow's report on content governance. You know, we can't just let platforms fully decide how to do it. We need some sort of quality standards for that.
The third take-away for me was that we need to think about how to source the skills that promote positive interactions on the Internet. I took that point very much from Mercedes' presentation. It has been very interesting to see how those preconditions for engaging with one another in a more civic way influence our interactions on the web.
The fourth point, which I took from Mira's presentation, is that we need to consider the economic incentive structure. We need to look at the business model and the way the advertising model prioritizes certain kinds of engagement, and what effect this has on content, but also on content moderation.
So yeah, with that being said, I will conclude. Those were my take-aways from the presentations.
>> ANNA MAZGAL: Thank you, Justus.
So here we are. This is the end. Thank you so much for joining us, and thank you for all your questions. We hope this was inspiring to you and that we can continue this conversation. Now that we are doing conferences in this domesticated way, my constant regret is that we cannot go outside, get a cup of coffee, and continue talking. Hopefully we will be able to do that in other settings, either online or, hopefully before too long, offline. Thank you all, thank you to the IGF volunteers for helping us with this, have a good winter, and I hope to see you soon.