Black Mirror: “Hated in the Nation” (S03 E06), robotics and liability

Opinion 11.21.2017 by Ana Luiza Araujo

What should be the liability model for damages caused by robots?

Screengrab of the “Hated in the Nation” episode

By Kleicy Alves Braga and Victor Pavarin Tavares

Rapid technological progress poses a great challenge for the development of regulatory frameworks by the law, which evolves more slowly and cautiously. Extortion, defamation, fraud on the web, and other crimes conceived for an offline reality put traditional, established doctrines at stake. The situation becomes even more complex when we consider the physical harm that new technologies can cause. When we talk about artificial intelligence or robotics and their intrinsic potential to cause damage, many doubts arise: should the manufacturer of a robot, for instance, be liable for damage caused by the machine? What if the person who bought a robot used it improperly, for a purpose it was not designed for? Or what if a robot has the capacity to learn and begins to act autonomously?

The “Black Mirror” episode Hated in the Nation is thought-provoking because it prompts reflection on some of these questions. In the episode, robot bees programmed by a company to fulfill an important role in the ecosystem (setting aside their potential use for state surveillance) have their control system hacked and are reprogrammed to kill people.

The victims are people who did something that displeased internet users, who are at once victims and, indirectly, agents of the crime; for this, they are targeted with the #DeathTo hashtag on a social network. The person whose name is linked to the hashtag the most times is brutally killed by the bees at the end of the day. At the end of the episode, the situation is inverted: all the internet users who spread the hashtag suffer the same punishment. Who should be liable for the deadly use of the bees?

This largely unexplored field between robotics and the law calls for a careful weighing of interests. On the one hand, companies drive important technological progress (in the episode, the robot bees fulfill a fundamental role in an environment where natural bees are extinct); on the other, the possibility of machine-caused harm fuels the debate over liability models for the companies that manufacture them.

The fatal accident last year involving a semi-autonomous car, the Tesla Model S, raised questions that reflect this debate. Even though the vehicle is not fully autonomous, being equipped with a limited “autopilot” that requires precautions (such as the driver keeping their hands on the wheel), the accident reignited the debate over barriers to technological development and over liability.

Keeping in mind the importance of developing new technologies, some authors argue that companies’ liability should be minimal. In the episode, the person responsible for controlling the bees says that it is “almost impossible” to hack their system. Tesla stated that the limited “autopilot” technology is still being tested and that consumers are warned accordingly. Is “almost impossible” enough to escape any responsibility for the deaths? And what about the warning to drivers that the car is still being tested?

In this vein, some argue that a company should only develop a technology when there is no risk of it being hacked; if risks exist, the company is liable. At this point we reach an impasse: legislation that holds companies liable more strictly discourages investment in new products, given the potential losses from court-ordered damages, which would stall technological development in robotics.

The debate over the limits of liability runs into other factors that make the discussion even more complex, among them open sourcing. Open source makes technology more open to innovation, allowing different programmers to improve the code and develop new functions, often making robots multifunctional. However, it also allows those same developers to divert the machines from the use for which they were programmed, facilitating the use of robots for unlawful purposes, despite the transparency and the great potential for innovation that open source offers. The question here is whether liability would fall on the company or on the developer who modified the code, or on both.

Closed source, on the other hand, loses points for its lack of transparency, and because security flaws can take longer to detect when only a small number of people can access the system and look for them; in theory, the result is code that is less tested and, consequently, less secure. For instance, we do not know whether WhatsApp is actually secure. We believe it is, but we cannot know. If WhatsApp were hacked and all conversations made public, could its closed source be blamed?

Apple’s operating system, iOS, is another example of closed source: we do not know its vulnerabilities, but we believe it to be safe. It is common to hear that iOS is much more trustworthy than Android, which is open source and more open to innovation. Should the fact that code is closed or open source result in different liability?

It is also important to highlight that companies are not the only developers of artificial intelligence. This kind of technology is often conceived in a military context, usually for other purposes, which does not prevent its appropriation by other actors. GPS, for example, was developed for military purposes and only later became a tool for civilians.

It is clear that establishing a liability model of this kind is not a simple task. At the beginning of the year, the European Parliament addressed the need to create a legal personality for robots, the so-called “e-personality”. Many questions about the topic remain unanswered, making the issue highly complex and demanding study and in-depth research to avoid hasty solutions.

The problems raised here will become increasingly common as technology progresses, especially with the development of robotics and its somewhat unpredictable consequences. It falls to the law, in this scenario, to find ways to balance technological progress against the possible collateral damage to society.
