[#1][special week] What is the right to be forgotten?

News Privacy and Surveillance 01.30.2017 by Clarice Tambelli


When people search for your name on the Internet, what do they find?

In addition to relevant or useful information, there may be “embarrassing” or “undesired” facts about you on the Internet. It may be a photo, a piece of news or a website with fake or outdated information, for example. Under these circumstances, have you ever wanted the Internet to “forget” something? This week, InternetLab will explore the topic of “forgetting” on the Internet, sometimes understood as a “new right”, sometimes criticized for its potential consequences for freedom of speech and access to information.

The first subject of our special week is “what is the right to be forgotten?”, covering everything from the way search engines work (they organize most of the information we access on the web) to the legal challenges they have been facing.

Understanding the right to be forgotten: the library metaphor

To understand what the “right to be forgotten” is, we can think of the Internet as an enormous library. On its shelves, we find thousands of websites and platforms that give us access to an endless collection of content. Just as books in traditional libraries are catalogued and organized by their registration numbers, websites on the Internet also need to be arranged in some way to be found (and accessed).

With over a billion websites, searching the Internet has become complex. The library has become gigantic, and finding information in it could be hard work. Search engines, like Google and Bing, simplify the process of finding a book (or a piece of information on the Internet) by sparing us the “shelf by shelf” (or website by website) search. Equipped with automated programs that index pages (crawlers), these services set out to catalogue, according to their own criteria, all the content arranged on these “virtual shelves”. Crawlers work as tiny “robots” that sweep the web to do this cataloguing, as if they were responsible for “labeling” new books. They do that, for example, based on keywords found on websites.
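To make the “labeling” idea concrete, here is a minimal sketch of a crawler building an inverted index. It assumes a tiny in-memory “web” (the PAGES dictionary, with made-up URLs and texts) instead of real HTTP fetching and HTML parsing, but the core loop is the same: visit a page, record its keywords, follow its links.

```python
# Toy crawler/indexer sketch. PAGES stands in for the live web; a real crawler
# would fetch each URL over HTTP and extract links from its HTML.
from collections import defaultdict

PAGES = {  # hypothetical pages: URL -> (text, outgoing links)
    "site-a.example/news": ("mayor opens new library downtown", ["site-b.example/blog"]),
    "site-b.example/blog": ("my photos from the library opening", ["site-a.example/news"]),
    "site-c.example/wiki": ("history of public libraries", []),
}

def crawl(seed_urls):
    """Visit pages starting from the seeds and build an inverted index:
    keyword -> set of URLs where it appears (the 'label on the book')."""
    index = defaultdict(set)
    to_visit, seen = list(seed_urls), set()
    while to_visit:
        url = to_visit.pop()
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        text, links = PAGES[url]
        for word in text.split():      # crude keyword extraction
            index[word].add(url)
        to_visit.extend(links)         # follow links to new "shelves"
    return index

index = crawl(["site-a.example/news"])
print(sorted(index["library"]))  # URLs catalogued under the keyword "library"
```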

Once this cataloguing is done, it is possible to present a list of results based on the search terms typed in by the user. The results shown, their order and their relevance are determined by algorithms developed by the search engines, which use different criteria in this assessment. Over time, these algorithms have become more and more sophisticated and, with that, search results have become more “personalized”, taking into account data such as the user’s location or attributes of each website (like its ranking, also determined by an algorithm). In Google’s case, for example, over 200 factors influence the list of search results.
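The sketch below illustrates the ranking idea only: a handful of signals (keyword matches, a popularity score, a location boost) combined with invented weights. Real engines such as Google weigh hundreds of signals that are not public, so nothing here reflects any actual algorithm.

```python
# Illustrative-only ranking: combine a few signals with arbitrary weights.
def score(page, query_terms, user_country):
    text, meta = page["text"], page["meta"]
    keyword_hits = sum(text.split().count(t) for t in query_terms)
    popularity = meta["link_score"]              # stand-in for a PageRank-style value
    local_boost = 1.5 if meta["country"] == user_country else 1.0
    return (2.0 * keyword_hits + 1.0 * popularity) * local_boost

pages = [
    {"url": "site-a.example/news", "text": "mayor opens new library downtown",
     "meta": {"link_score": 3.0, "country": "BR"}},
    {"url": "site-c.example/wiki", "text": "history of public libraries and a library index",
     "meta": {"link_score": 8.0, "country": "US"}},
]

query = ["library"]
ranked = sorted(pages, key=lambda p: score(p, query, user_country="BR"), reverse=True)
print([p["url"] for p in ranked])  # most "relevant" page first, per these toy weights
```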

But there is still an important difference between libraries and the Internet: the virtual shelves are invisible to the user’s eyes. This means that, more than cataloguing, these search engines have a determining influence on what will be found — and what will not. This puts them in a privileged position, since they act as intermediaries of any search for information on the web.

Can the Internet “forget” unwanted information?

What if, among these shelves, there are books (or pages) that contain untrue or offensive information about someone? What may happen if that photo taken in a laid-back moment appears when someone (a potential employer, for example) searches for your name? Would it be right to demand that this photo be removed from the Internet? Should search engines be prevented from showing it among the results displayed to users? Could they be held responsible for the access to these materials when all they do is indicate their location?

It is based on this idea that the concept of a “right to be forgotten” was developed in Europe: a prerogative that allows people, in certain situations, to request the correction, removal or deindexation (that is, the removal from search results) of personal information that shows up when their names are searched on the Internet.
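As a rough illustration of what “deindexation” usually means in practice, the sketch below hides a page from results returned for searches on a person’s name while leaving it reachable through other queries and still online at its source. The names, URLs and the delisting table are all invented; real delisting decisions involve legal review, not a simple lookup.

```python
# Deindexation sketch: the page stays online, but it is suppressed from
# results for searches made on the person's name (all data here is made up).
DELISTED = {
    # person's name -> URLs that must not appear when that name is searched
    "joao da silva": {"old-news.example/1998-case"},
}

def search(query, all_results):
    """Return search results, dropping URLs delisted for this name query."""
    blocked = DELISTED.get(query.lower(), set())
    return [url for url in all_results if url not in blocked]

results = ["old-news.example/1998-case", "profile.example/joao"]
print(search("Joao da Silva", results))      # the old article is hidden for the name query
print(search("1998 robbery case", results))  # but still appears for other queries
```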

As the term became popular around the globe, similar claims involving “forgetting”, such as the rights to rectification and elimination of personal data in the context of consumer relations, and the application of certain aspects of the rehabilitation principle in criminal law, ended up entering the conceptual umbrella of the “right to be forgotten”.

Right to be forgotten: different “rights” under the same label?

This plurality of situations placed under the same idea ends up causing some conceptual confusion. In order to better organize the debates around the topic, Julia Powles, a University of Cambridge researcher, suggests that we think about different types of “right to be forgotten”. She explored this subject in a speech presented at the VII Seminar on Privacy and Personal Data Protection, promoted by the Brazilian Internet Steering Committee (CGI.br), last August.

Julia Powles is a legal researcher at the University of Cambridge, where she holds appointments in the Faculty of Law and the Computer Laboratory. Her research focuses on the interface of law and technology, with expertise in data protection, privacy, intellectual property, internet governance, regulation and business law. She is currently working on projects on cybercrime data sharing, the European implementation of the right to be forgotten, encryption and public policy, artificial intelligence and healthcare, and technology and power.

Among these categories, Powles identifies the right to be forgotten in its strict sense, which originates from judicial precedents involving the constitutional rights to private life, intimacy, honor and image. This right consists of the prerogative of not having personal information constantly brought back into the public space whenever this violates the constitutional rights of the person exposed and there is no other competing interest at stake. As examples, Powles points to cases involving terrible crimes that happened decades ago and later returned to the news for different reasons, mentioning the names of the people involved. In her view, this guarantee is related to the rehabilitation and social reintegration of convicts, grounded in provisions of the Criminal Code, the Criminal Procedure Code and the Criminal Enforcement Law.

In Brazil, the best examples are two cases involving Rede Globo (a television network) that reached the Superior Court of Justice (STJ): the Aída Curi case and the Candelária massacre case. Both were judged in 2013 and raised different discussions on the subject. In the Aída Curi case, her family members filed a claim for moral damages, arguing that the broadcast of a television documentary about Aída’s homicide made them relive pains of the past. The Candelária massacre case involved an individual acquitted of the criminal charges related to the slaughter who, years later, had his name mentioned in a television program about the crimes.

Another category identified by Powles concerns rights granted by infra-constitutional legislation, such as the rights to rectification and elimination of personal data in the context of contracting services from different sectors, like credit and health. These rights originate from specific data protection laws in several countries. In Brazil, despite the lack of a specific law on the subject, there are provisions in the Code of Consumer Protection, in the Brazilian Internet Civil Rights Framework (Marco Civil da Internet) and in the Decree that regulates it which can be used to guarantee access to personal data and to rectify or erase them under certain circumstances.

The third category of rights the researcher mentions corresponds to “deindexation rights”, which we mentioned at the beginning of this text. This right, enforceable against search engines, is aimed at solving the problem of “eternity” on the Internet, that is, the difficulty of “leaving behind” things that happened in the past. It is about removing search results that contain outdated, irrelevant or inaccurate information about somebody, when this information is not of public interest. Powles proposes breaking this “deindexation right” down into clearer and more specific categories, with the purpose of facilitating the harmonization of the criteria adopted by the courts to separate the cases in which information should be removed or deindexed from those in which it should not.

Team responsible for the content: Thiago Dias Oliva (thiago.oliva@internetlab.org.br), Jacqueline Abreu (jacqueline@internetlab.org.br), Dennys Antonialli (dennys@internetlab.org.br), Francisco Brito Cruz (francisco@internetlab.org.br).

Translation: Ana Luiza Araujo (analuiza.araujo@internetlab.org.br)
