
Germany tightens the fight against hate speech on the Internet

News · Freedom of Expression · 07.25.2017 · by Dennys Antonialli and Thiago Oliva
Photo of a black and white collage made on a wall that forms the image of a screaming child's face, with eyes closed and mouth open
Image: alice.d via Visualhunt.com / CC BY-NC-ND

On June 30, 2017, Germany passed a more aggressive law against hate speech on the Internet. It establishes steep fines for noncompliance by platforms, which must remove "clearly unlawful" content within 24 hours.

The bill that was passed into law points to the development of a more "aggressive, harmful and hateful" debate culture, driven by massive changes in the public debate held on social media. According to the bill, this context has enabled the dissemination of criminal content such as hate speech, which endangers the "peaceful coexistence [of people] in a free, open and democratic society".

Under the German Telemedia Act, already in effect, platforms were liable for unlawful content they store if it is not removed (or blocked) as soon as they become aware of its existence (through an extrajudicial notice, for instance). The new law enforces this obligation, creating tools for the State to compel platforms to comply with it.

One of the new duties imposed by the law is the development of systems for managing content removal requests. The obligation involves a wide range of tasks: from building simple reporting tools for allegedly unlawful content to maintaining open communication channels through which users are informed about the outcome of removal requests they have submitted. It also imposes a duty of transparency, which includes the publication of reports every three months, in German, describing in detail, with data, how the companies are handling hate speech and other unlawful content.

According to the German Federal Government, the law is necessary because the initiatives taken voluntarily by the platforms failed to produce results. In addition, the existing laws on hate speech remained poorly enforced, which made the adoption of compliance rules necessary.

The European context

In the European Union, where the discussion is more advanced (at least in terms of the consolidation of guidelines on the matter by supranational bodies), the concern with hate speech has extended to the digital environment. In May 2016, the European Commission, together with Facebook, Microsoft, Twitter and YouTube, signed a code of conduct to counter hate speech. Among the many clauses of the document, the platforms committed to removing content perceived as "unlawful hate speech" within 24 hours of a notice requesting its removal. In addition, they are expected to clarify to their users which kinds of content are not allowed and to promote counter-discursive initiatives.

A year after the code of conduct was announced, the European Commission released a statement celebrating the anniversary of the initiative and presenting numbers said to show a more proactive approach by the platforms in dealing with hate speech.

The side of the platforms

In a recent statement, Facebook acknowledged difficulties in turning the platform into a hate speech "free zone". The company said it will add 3,000 people to its team of content moderators, on top of the 4,500 it has today. Content moderators are people specifically hired and trained to analyze posts, images, videos, and comments and to decide whether they should remain on the platform. Beyond Facebook, other platforms such as Twitter and YouTube have announced that they are developing global rules and tools to minimize the effects of hate speech and make their services safer spaces. Still, the challenge of operating across many jurisdictions while fostering communication between people located in different countries is huge.

Indeed, the concept of "hate speech" varies from one country to another, and many countries lack specific laws addressing the issue. This exposes platforms to many potentially applicable laws that impose different, and sometimes conflicting, courses of action. Furthermore, contextual factors related to culture, politics, and language must be taken into account by content moderators, which makes the task even more complex.

Laws that make platforms more easily accountable for third-party content increase the risk of overblocking: they give platforms incentives to censor more content than necessary, which may threaten free speech by leading them to take down lawful online content. In Germany, this argument wasn't enough; Parliament considered it more urgent to tackle the growing number of xenophobic messages circulating in the country.

Implementation challenges

Defining rules and criteria for distinguishing between legitimate and hateful content is such a delicate task that it can lead to distortions. In an article published on June 28th, for example, ProPublica released documents containing some of the internal guidelines that Facebook uses to train its content moderators.

The training slides obtained by ProPublica explain which categories are considered protected (gender, gender identity, race, religious affiliation, ethnicity, national origin, sexual orientation, serious disability or disease) and which are not (social class, profession, continental origin, political ideology, appearance, religions, age and countries). This means that the category "child" (age) is not protected while the category "gay" (sexual orientation) is, for instance. When categories are combined into groups, the presence of a non-protected category causes the group as a whole to be treated as not worthy of protection. In those cases, the training material states that content considered offensive should not be removed. That means, for example, that hate speech directed at the group "black children" or the group "female drivers" would not be considered hate speech by the platform, while speech directed at "white men" would be.
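A minimal sketch of the combination rule described above, assuming a simple set-based representation; the category names come from the ProPublica report, but the function and data structures are illustrative, not Facebook's actual implementation:

```python
# Illustrative sketch of the combination rule reported by ProPublica:
# a group counts as protected only if every category applied to it is protected.
# This function and these set names are hypothetical, for explanation only.

PROTECTED = {
    "gender", "gender identity", "race", "religious affiliation",
    "ethnicity", "national origin", "sexual orientation",
    "serious disability or disease",
}

NOT_PROTECTED = {
    "social class", "profession", "continental origin", "political ideology",
    "appearance", "religions", "age", "countries",
}

def group_is_protected(categories: set[str]) -> bool:
    """Return True only if all categories describing the group are protected."""
    return bool(categories) and categories <= PROTECTED

# Examples discussed in the article:
print(group_is_protected({"race", "age"}))          # "black children" -> False
print(group_is_protected({"gender", "profession"})) # "female drivers" -> False
print(group_is_protected({"race", "gender"}))       # "white men" -> True
```

Under this rule, any single non-protected attribute is enough to strip protection from an otherwise protected group, which is exactly the distortion the article goes on to criticize.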

The example also suggests that even though hate speech should be tackled in different ways, including through content removal, transferring to the platforms the responsibility of deciding what should or should not count as hate speech may also cause problems. Moreover, the analysis criteria designed by the platforms may backfire on the very vulnerable groups they aim to protect, as they can hinder counter-discursive strategies. That was the case, for instance, of a post by Didi Delgado (a race activist) that was removed by Facebook. As ProPublica also reports, the post said: "all white people are racist. Start from this reference point, or you've already failed".

Another problem with this transfer of responsibility to platforms is that it creates obstacles to assessing compliance with the law in countries where this kind of speech is regulated. When analyzing a specific post or image against the platform's terms of use, content moderators will remove what they consider inappropriate, but they will probably not report a possible violation of the law to public authorities, which also raises transparency issues.

And what about Brazil?

If we take a step back in this discussion, we see that hate speech, although a widely recognized problem in many countries, remains controversial in several respects, ranging from its definition and the criteria for identifying it in concrete cases to the way it should be addressed, especially from a legal standpoint. Most of these challenges arise from the contextual nature of hate speech: both the groups considered vulnerable and the way the intimidating message that characterizes this form of speech is expressed may vary from country to country (or even across regions within the same country).

Thus, aside from the debate over which groups should be protected from hate speech (in Brazil, as we know, there is no law criminalizing hate speech against LGBT people, although there are laws of this kind curbing racism, for instance), there are discussions about whether criminal law is the appropriate tool for this purpose, and even about how to identify, in practice, the occurrence of hate speech. The list of questions is long and the debate remains open.

Team responsible for the content: Thiago Dias Oliva (thiago.oliva@internetlab.org.br) and Dennys Antonialli (dennys@internetlab.org.br).

Translation: Ana Luiza Araujo
