Isaac Asimov’s Laws of
Robotics aim to protect humankind, but this notion implies that robots will
have an innate desire to harm. This idea
reveals how scared and pessimistic we are: we are concerned with robots’ potential to “take over the world,” or rather, what we currently know as the human world.
First and foremost, “they,” as in machines, have already taken over: to borrow Dr. J’s example, most people use their phone more than their knees (see my example of “phubbing” in a previous blog post). I understand that Asimov’s rules are precautions (if robots do turn out to be evil, then thanks), but we currently have no reason to believe they would hurt us. Our human flaws lead us to
rash decisions; would a strictly logical being fall victim to mood swings and
vengeance?
Asimov’s rules suggest that
human beings are entitled to service.
Law #2 could be misused if we eventually consider robots to be moral
patients. Does being man-made make them inferior, destined for a life of servitude? Perhaps, but if robots can be developed to the point of becoming moral agents, would we grant them the status of moral patients? In that case, saying “how high?” whenever a human says “jump” does not seem ethical. Next, these laws cause us to consider human moral standing. Humans are flawed because life experience shapes us with various unintentional biases; robots, as logical beings with higher cognitive functions, would not succumb to these biases. Preconceived notions and stereotypes without a kernel of truth create schisms throughout the world. Overall, robots’ levelheadedness and ability to see past predispositions would be of great benefit to society. We need more neutrality and more promotion of truth.
These laws are complex:
overall, they benefit humans because the role of robots is completely subservient. If a robot must obey orders given to it by humans, it is limited in its existence. This eliminates the idea of free will and desire because the robot’s fate is determined by human demands. Consider the possibility that robots are capable of making better decisions than we are: should they still listen to us, or should we listen to them? The idea that robots must protect humans has merit, but when we consider robots as public officials or soldiers, the line is blurred; hence the Zeroth Law, which states that a robot may not harm humanity, or, by inaction, allow humanity to come to harm. In other words, if a human is harming humanity, that human is susceptible to harm from the robot.
Currently there are robots
specifically designed to kill humans: drones. This raises the question: who is calling the shots? Chapter 24 discusses roboethics and the complexities of liability. The designer of the robot takes responsibility, which is logical until the robot’s intelligence surpasses its creator’s. In the case of drones, who is at fault: the designer, the public official, or the person who guides the drone into its target? Robots will be cruel only if their creators design them that way, in which case the creators should be held responsible for the damage. My only worry about the existence of robots stems from their human programmers.