Session
Organizer 1: Sabrina Vorbau, European Schoolnet/ Insafe
Organizer 2: Joachim Kind, Landeszentrale fuer Medien und Kommunikation
Organizer 3: David Wright, UK Safer Internet Centre
Speaker 1: Hanna Gleiß, Civil Society, Western European and Others Group (WEOG)
Speaker 2: David NG, Civil Society, Asia-Pacific Group
Speaker 3: Sofia Rasgado, Government, Western European and Others Group (WEOG)
Speaker 4: Ricardo Campos, Government, Latin American and Caribbean Group (GRULAC)
Speaker 5: Sabine Frank, Private Sector, Western European and Others Group (WEOG)
High-level speakers:
- Thomas Blöink, Head of Subdivision at the Federal Ministry of Justice and Consumer Protection
- David Kaye, UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression
- Chan-jo Jun, Advocate for IT law
- Ingrid Brodnig, Journalist and author
- Sabine Frank, Google
- Dr. Marc Jan Eumann, Director Landeszentrale fuer Medien und Kommunikation (LMK) Rheinland-Pfalz
- Representative from the technical community, (e.g. ICANN/ISOC) Africa Region (tbc)
Break-out session - Table facilitators:
- Hanna Gleiss, Das Nettz
- Kathrin and Joao, Better Internet for Kids Youth Ambassadors
- Ricardo Campos, University of Frankfurt/ Lawgorithm, Sao Paulo, Brazil
- Sofia Rasgado, Safer Internet Centre Portugal
Sabrina Vorbau, Civil Society, Western European and Others Group (WEOG)
David Wright, Civil Society, Western European and Others Group (WEOG)
Joachim Kind, Intergovernmental Organization, Western European and Others Group (WEOG)
Break-out Group Discussions - Round Tables - 90 Min
- How can cooperation and collaboration on national, regional and global levels help to counteract hate speech online?
- How can children’s rights to participation, access to information, and freedom of speech be preserved and balanced with their right to be protected from violence, exploitation and abuse in the online environment?
- How can their resilience be increased by means of capacity building, media literacy, support and guidance in the digital environment?
- What role should internet platforms play in defining the standards for acceptable content in light of freedom of speech?
- How can globally accepted standards be developed?
GOAL 3: Good Health and Well-Being
GOAL 4: Quality Education
GOAL 5: Gender Equality
GOAL 9: Industry, Innovation and Infrastructure
GOAL 10: Reduced Inequalities
GOAL 16: Peace, Justice and Strong Institutions
GOAL 17: Partnerships for the Goals
Description:
The workshop will begin with a high-level panel presenting different cases, instruments and strategies to tackle hate speech online. In more detail, attorney Chan-jo Jun will begin by presenting the case that brought him to prominence, in which he supported victims of hate speech online and instigated legal proceedings against Facebook. Following this, Gerd Billen, State Secretary at the Federal Ministry of Justice and Consumer Protection in Germany, will give a short overview of the Network Enforcement Act, explaining the reasons why the German Parliament passed the law, which introduced compliance obligations for social networks when dealing with complaints about illegal content online.
Furthermore, representatives from the UN and Google as well as a journalist will act as respondents, discussing in particular which safeguards should be applied to secure freedom of speech, and the possibilities of developing internationally accepted standards for dealing with hate speech online.
Against this background, the floor will be opened to the audience, involving everyone in different group discussions led by representatives from civil society, academia and youth. During these table discussions, drawing on the policy questions listed above, participants will discuss strategies and solutions to counteract hate on the global internet and to ensure that every stakeholder group shares in the responsibility.
Expected Outcomes: The session will highlight that tackling hate speech is a shared responsibility of various stakeholders to ensure a free and safe internet for all citizens. While different opinions will remain on which instruments are the most appropriate to achieve this, it should become clearer what is understood by hateful content and which initiatives/resources are available to support more awareness and education in this area.
In terms of format, the session will be organised as a facilitated dialogue. Led by the moderator, the workshop will kick off with a 30-minute high-level introductory panel discussion including the State Secretary at the Federal Ministry of Justice and Consumer Protection in Germany, a UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, an advocate for IT law, a journalist and a representative from Google (all confirmed).
Each panelist will give a short statement outlining their role and responsibility when tackling hate speech online.
Following the introductory panel, different break-out group discussions will take place in order to pro-actively involve all participants in the debate. For 30 minutes, four different table discussions will be led by representatives from civil society, academia and youth, in order to fully realise the multi-stakeholder approach while respecting gender and geographical balance.
Table leaders will consist of a representative of the German civil society organization ‘Das Nettz’, two Better Internet for Kids Youth Ambassadors, a professor from the University of Frankfurt/ Lawgorithm, Sao Paulo, Brazil, and a representative from the Insafe-INHOPE network of Safer Internet Centres (all confirmed).
Table discussions will revolve around the policy questions mentioned above. Each table will identify strategies to counteract hate speech online, which will be shared in plenary afterwards (please see further details in the agenda below). High-level panelists will join the table discussions as well.
The workshop will conclude with final closing remarks by the high-level panel and takeaways summarized by the onsite moderator and rapporteur.
In addition, the online moderator will ensure that remote participants are able to communicate questions to the onsite moderator throughout the whole debate.
Complementary to this, a social media campaign on Twitter will help to give further visibility to the session. Live tweeting during the session will open the discussion to a wider online audience and will give remote participants the possibility to get directly involved in the debate.
Relevance to Theme: When the World Wide Web was developed in the 1990s, hopes and expectations were high that it would be a space where people around the world could communicate freely and safely. In recent years, however, it has turned out that social networks in particular are often misused to distribute hate speech and have unfortunately become places where harassment and bullying take place. Hence, people often experience the internet as a hostile space where trust is fragile.
Although social networks were at first hesitant about their accountability for harmful third-party content, governments and civil society urged them to remove harmful content from their platforms. In some cases social networks agreed to participate in codes of conduct; in others, legislators introduced a legal framework that social networks have to comply with.
In this context, and in order to complement existing initiatives to regulate, monitor or report online hate speech, a more pro-active approach is needed to counteract hate speech online, building towards a secure, safe, stable and resilient internet environment in which trust is restored and the accountability of providers is established.
Relevance to Internet Governance: Online hate speech can be identified as one of the growing threats to the global internet and its users. Hence, the need to take an evidence-based approach to preventing and remediating online hate speech has become inevitable.
At the center of the debate are the respective roles of government, the private sector and civil society when dealing with the challenge of hate speech online. Still, there are different views about who should be responsible for setting the rules that keep the internet free from harmful content.
That said, establishing a multi-stakeholder dialogue between governments, civil society organisations and industry is more important than ever to strengthen shared principles, norms, rules and decision-making processes to fight hate speech online.
Remote participation will be ensured through prior involvement of various stakeholders from across the world.
The online moderator will ensure that remote participants are able to communicate questions to the onsite moderator during and after the debate.
Proposed Additional Tools: Complementary to the online remote participation, a social media campaign on Twitter will help to give further visibility to the session before, during and after the event. In addition to the generic event hashtag, a dedicated workshop hashtag will be developed by the organizers.
Live tweeting during the session will open the discussion to a wider online audience and will give remote participants the possibility to get directly involved in the debate. In addition to the online moderator, the organizer will nominate a representative from the organization team to monitor and respond to conversations on Twitter throughout the whole workshop.
9:30-10:00 | Welcome and high-level panel discussion on online hate speech approaches
10:00-10:30 | Break-out group discussions, each led by an expert who will report back in plenary, addressing the following questions:
- How can cooperation and collaboration on national, regional and global levels help to counteract hate speech online?
- How can children’s rights to participation, access to information, and freedom of speech be preserved and balanced with their right to be protected from violence, exploitation and abuse in the online environment?
- How can their resilience be increased by means of capacity building, media literacy, support and guidance in the digital environment?
- What role should internet platforms play in defining the standards for acceptable content in light of freedom of speech?
- How can globally accepted standards be developed?
10:30-10:50 | Table leaders reporting back from break-out discussions
10:50-10:55 | Q&A
10:55-11:00 | Final closing words by high-level panel and takeaways
Report
- How can children’s rights to participation, access to information, and freedom of speech be preserved and balanced with their right to be protected from violence, exploitation and abuse in the online environment?
- What role should internet platforms play in defining the standards for acceptable content in light of freedom of speech?
- How can cooperation and collaboration on national, regional and global levels help to counteract hate speech online?
There was broad support for the view that online hate speech has to be tackled because it poses a serious threat to an open and pluralistic online discourse, while freedom of speech has to be respected at the same time. Dr. Marc Jan Eumann, Director of the State Media Authority of the German federal state of Rhineland-Palatinate, gave a broad picture of these two antagonistic principles.
Most participants wished for a multi-stakeholder approach and saw responsibilities for social networks, regulators and civil society alike. Many indicated that a legal framework should be limited to obliging social networks to remove illegal content that infringes criminal law. Chan-jo Jun, advocate for IT law, gave an example of how difficult it was, some years ago, for persons who were defamed on social networks. This was the case of a Syrian refugee who took a selfie with the German chancellor Angela Merkel. In the aftermath, when criminal acts were committed by foreigners, his picture was reposted on Facebook and he was defamed as the perpetrator of these criminal acts. He tried several times to have these posts removed, but Facebook answered that defamation was not covered by its Community Standards. Chan-jo Jun concluded that regulation was needed. He also remarked, however, that it takes too much time to have content that fully complies with the law put back online.
Some emphasized that social networks should also have the possibility to remove content that is not strictly illegal but that is highly disturbing (so-called borderline content).
Many participants underlined the important role of civil society organisations in detecting and tackling hate speech.
Another key finding was that digital literacy is important to prepare users for the risks associated with the use of social platforms.
It was recommended that the exchange of ideas on tackling hate speech should be enhanced between nations and different stakeholders. International standards should be established that set out basic principles which could be shared worldwide. Nevertheless, national and regional diversity should be respected, which could result in a certain granularity of rules. These ideas were developed in the group discussions facilitated and presented by Carolin Silbernagl, responsible for external affairs at betterplace lab, and by Ricardo Campos, University of Frankfurt and association Lawgorithm Sao Paulo.
With regard to digital literacy it was underlined that this subject should become mandatory in the school curriculum. This was one main result of the group discussion facilitated by Sofia Rasgado, Safer Internet Centre Portugal.
Media literacy programmes should address not only pupils but also their parents. More attention should be given to gaming content and influencers. An age rating system for online content was also proposed. These aspects were highlighted by Kathrin and Joao, Better Internet for Kids Youth Ambassadors, who reported from their group discussion.
The German Network Enforcement Act, which obliges social networks to operate a complaints management system, was explained. The act only concerns certain content that constitutes a criminal offence under the German Criminal Code, such as public incitement to crime, forming criminal or terrorist organisations, incitement to hatred, dissemination of depictions of violence, CSAM, insult and intentional defamation. Social networks must have reporting procedures in place. Violations of the provisions of the Enforcement Act may be sanctioned by a regulatory fine. Hence, providers of social networks must do their part to ensure effective criminal prosecution in the fight against right-wing extremism and hate speech.
Google described its concept for tackling hate speech online, which is based on the principles remove, raise, reduce and reward. The flagging system of the video platform YouTube and the importance of trusted flaggers were explained. Machine learning to detect content that infringes the law or Community Standards is evolving. The technology works reliably on spam, CSAM and terrorist content but still has problems identifying hate speech. 78% of the removed videos had been detected by machine learning, and 81% of those did not require additional human review. Only 23% of the content reported by users was removed.
Progress on these issues could be made in forums that already exist at the international, regional or national level, or that should be established. Some mentioned that social networks should discuss and work on their community guidelines on hate speech with different parts of society. There was also the idea that parliamentarians from different countries should exchange views on creating legal frameworks on hate speech.
There were about 80 to 90 onsite participants. The number of online participants is not known.
It was emphasized that women are disproportionately affected and intimidated by hate speech. This makes it more likely that they avoid speaking about certain topics or withdraw completely from online discussions. Brodnig quoted a study by Amnesty International finding that one third of women were more reluctant to express themselves on social networks after they had been insulted online. She gave the example of a female Austrian journalist who received death threats and wanted to know who was behind them. She found an ordinary Austrian man who believed a fake story about a rape committed by refugees.
There were no special session outputs. The video of the session can be watched on YouTube: https://www.youtube.com/watch?v=DlsXNz0XAeU