Artificial Intelligence: Should Robots Have Rights?

October 2020

By Karina Parikh

In 1950, Alan Turing proposed a test of a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human being. Over the 70 years since, artificial intelligence (AI) has become more and more sophisticated, and there have already been claims of computers passing the Turing Test. As AI applications, and especially AI-enabled robots, continue to evolve, at what point will humans begin to perceive them as living beings? And as this perception takes hold, will humans begin to feel obligated to grant them certain rights?

The time to address these issues is now, before the robots start doing so.

Researchers and scientists are now pondering the ethics of how robots interact with human society. As some of these robots become capable of engaging directly with people, concerns about their effects on humanity naturally follow.

For example, people have varying perspectives on the effects of robots in the workplace. Some see them as beneficial: they can perform tedious or dangerous jobs, leaving humans to do more interesting work and stay out of harm’s way. Others see them as harmful, taking jobs away from people and driving up unemployment.

Some of the struggles facing the public right now pertain to robots that possess the ability to think on their own. The issue divides people because of the fear associated with autonomous robots, and the overarching question is whether it is ethical to integrate these robots into our society. Programmed with specific algorithms and then trained on enormous amounts of real-world data, these robots can appear to think on their own, generating predictions and novel ideas. When robots reach this stage and start to act like humans, it will become more difficult to think of them as machines and tempting to think of them as having a moral compass.

Arguments Supporting Rights for Robots

Although the role of robots and their “rights” may become an issue in society generally, it is easier to see these issues by focusing on one aspect of society: the workplace. As robots working alongside humans become smarter and smarter, the humans working with them will naturally come to think of them as co-workers. And as these AI-enabled robots become more and more autonomous, they may develop a desire to be treated the same way as their human co-workers.

Autonomous robots embody a very different type of artificial intelligence from those that simply run statistical information through algorithms to make predictions. As Turing’s test anticipated, autonomous robots may ultimately become indistinguishable from humans in their behavior. If, in fact, robots do develop a moral compass, they may—on their own—begin to push to be treated the same as humans.

Some will argue that, even if robot behavior is indistinguishable from human behavior, robots nevertheless are not living creatures and should not receive the same treatment as humans. However, this claim can be countered by pointing to examples of how close humans and robots can become. In some parts of the world, for example, robots are providing companionship to the elderly who would otherwise be isolated. Robots are becoming capable of displaying a sense of humor or appearing to show empathy. Some are even designed to look human. And as such robots also exhibit independent thinking and even self-awareness, their human companions or co-workers may come to see them as deserving equal rights—or the robots themselves may begin to seek such rights.

Another argument in favor of giving rights to robots is simply that they deserve them. AI-enabled robots have the potential to greatly increase human productivity, either by replacing human effort or by supplementing it. Robots can work in places, and perform tasks, that are too dangerous for humans or that humans do not want to do. In short, robots make life better for the human race. If, at the same time, robots develop some level of self-awareness or consciousness, it is only right that we grant them some rights, even if those rights are difficult to define at this time. Just as we treat animals humanely, so we should also treat robots with respect and dignity.

Counterpoint: The Argument Against Rights for Robots

Although some may advocate for giving human-like robots equal rights, others see an even more pressing issue: that robots may overpower humans. Many fear that artificial intelligence may replace humans in the future. The worry is that robots will become so intelligent that they will be able to make humans work for them. Thus, humans would be controlled by their own creations.

It is also important to consider that expanding robots’ rights could infringe on the existing rights of humans, such as the right to a safe workplace. Today, one of the benefits of robots is that they can work under conditions that are unsafe or dangerous for humans—think of the robots used today to disable bombs. If robots are given the same rights as humans, it may become unethical to place them in harmful situations where they face a greater chance of injury or destruction. If so, we would be giving robots greater rights than we give animals today; police dogs, for example, are sent into situations too dangerous for an officer to enter.

A more immediate argument against giving rights to robots is that robots already have an advantage over humans in the workplace, and giving them rights would only increase that advantage. Robots can be designed to work faster and without the need to take breaks, giving them an unfair advantage in competing with a human for a job. Robots have already begun to perform human jobs, such as delivering food to hotel rooms. Even in this simple task they have advantages: they do not get distracted as humans do and can remain focused for longer periods. It also helps that the employer does not pay payroll taxes on the robot’s work. This has driven fears that robots will come to dominate human jobs and that the resulting unemployment would negate their benefit to the economy. Even though humans may not be opposed to robots carrying out simple tasks, opposition may grow as robots start to fill more complex roles, including many white-collar jobs.

When it comes to the impact of robots in the workplace, then, there are varying perspectives. The argument in favor of granting robots rights is ultimately that they are coming to have the same capacity for intellectual reasoning and emotional intelligence as humans. As noted earlier, these supporters argue that robots and other forms of artificial intelligence should receive the same treatment as humans because some of them even have a moral compass. On the other side, those who argue against giving rights to robots deny that robots have a moral compass and thus do not deserve to be treated the same as humans. In essence, even if they pass the Turing Test, they are still machines. They are not living beings and therefore should not receive any rights, even if they are smart enough to demand them. Granting robots this kind of power would enable them to overtake humans, given their ability to work more efficiently.

Whether or not robots and other forms of AI should have rights, these technologies have the potential to greatly benefit humans or to greatly harm us. As the technologies grow and mature, there may be a need for regulation to ensure that the risks are mitigated and that humans ultimately maintain control over them.

In 1942, science fiction writer Isaac Asimov formulated his Three Laws of Robotics:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These three laws predate the development of artificial intelligence, but when it comes to principles to guide regulation, they might just be a good starting point.
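
Notably, the three laws impose a strict priority ordering: the First Law overrides the Second, which overrides the Third. As a purely hypothetical sketch of what that ordering looks like when written down explicitly (the Action fields and the permitted() function below are illustrative assumptions, not from this article or any real system), a rule checker might evaluate a proposed action against each law in turn:

```python
# Hypothetical sketch: checking a proposed robot action against the strict
# priority ordering of Asimov's Three Laws. Field names and logic are
# illustrative assumptions only, not a real robotics API.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool          # would the action injure a human?
    allows_human_harm: bool    # would carrying it out let a human come to harm?
    ordered_by_human: bool     # was the action ordered by a human?
    endangers_robot: bool      # would the action damage or destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: never injure a human or allow a human to come to harm.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders that do not conflict with the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, protect the robot's own existence.
    return not action.endangers_robot

if __name__ == "__main__":
    # A human orders the robot into a dangerous situation: the Second Law
    # outranks the Third, so the action is permitted.
    print(permitted(Action(False, False, True, True)))   # True
    # The same action would harm a human: the First Law outranks everything.
    print(permitted(Action(True, False, True, True)))    # False
```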


For more on intelligent automation and other robotics-related technology, including free Research Bytes, see our RadarView market assessments.

For information on future technology trends, including free samples and Research Bytes, see our annual study on Worldwide Technology Trends.