Published on 9 December 2016
Are the laws of robotics reversible, such that, as humans, we should treat machines in the same way we would like robots to treat us? Should we develop new laws to make our relationship with robots as healthy and productive as possible?
According to Isaac Asimov, robots should act on three basic principles:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
If we reverse the laws, substituting “human being” for “robot” and vice versa, they become:
- A human being may not injure a robot or, through inaction, allow a robot to come to harm.
- A human being must obey orders given it by robots except where such orders would conflict with the First Law.
- A human being must protect its own existence as long as such protection does not conflict with the First or Second Law.
Isaac Asimov. Source: Wikipedia
This seems to make perfect sense, even the Second Law, which might appear the most unsettling. It is less so if we consider that we are in fact already applying it: do we not trust our GPS more than our own intuition when it comes to reaching our destination by the quickest route?
However, perhaps we should also consider regulating the interaction of robots amongst themselves. Because, let’s be honest, we tend to focus overwhelmingly on relations between machines and humans, worrying about robots taking over our jobs (or not), and whether they will make our lives easier or become a threat.
But what about the interactions between robots themselves? Should they be protected from each other? Should they not be prevented from injuring other machines? Are we not obliged, as humans, to programme machines so that they are “sensitive” and do not commit injustices on one another?
Universal declaration on the rights of machines
Some philosophers refer to this as the rights of machines. There are those who consider that it would be unreasonable for a robot to uphold human rights and yet ignore the rights of other machines.
If we have legislated to protect animals and are appalled by animal fights set up for human entertainment, the same could be applied to machines.
In other words, we should also consider the interaction between robots and artificial intelligence (or AIonAI). Some experts argue that an international charter on artificial intelligence, equivalent to the United Nations’ Universal Declaration of Human Rights, should be drawn up to guide research and development on morally appropriate robotics and artificial intelligence engineering.
Such regulation would anticipate what relationships between different machines might look like in the future, especially their interactions with one another. Some go even further and argue that the law should recognise the inherent dignity and inalienable rights of artificial intelligence. This would help prevent the exploitation and abuse of rational and sentient beings, and would also reflect our own moral code of ethics and humanity.
Ethics for programming
And so, for fear that developments in artificial intelligence could be used for purposes that go against the laws of robotics, many argue that this ethical aspect should be enshrined in law.
The aim is to develop safe robots and programmes, based on education, research, and increased philosophical awareness. Some even argue that an annual AIonAI award should be created so that artificial intelligence is developed in a more altruistic manner.
Thus, we are talking not so much about the future as about something that has to be introduced now, in the present. Machines should be duly respected for what they are, and should be made to be respectful and tolerant. We should apply to humans the same principles and laws as we established for robots, such that artificial intelligence and machines will have respect for themselves and for each other. And in this way we will have come full circle.