Artificial intelligence is defined as any computer system that can perform a function that would typically require some form of human intelligence, such as speech recognition, visual perception, or decision making. By that definition, I would consider Ava and Ash 2.0 examples of artificial intelligence. They are both very advanced forms of it because they are literal emulations of human beings.

Earlier in the semester we debated whether our everyday electronics were moral agents, and whether we held any moral obligations to them. Because that discussion centered on smartphones, laptops, and other common gadgets, it was easy for us to reach a consensus that we had no moral obligation to them, and vice versa. However,
when that same artificial intelligence takes human form, it somehow enters an ethical grey area for most people. Ava and Ash 2.0 in particular seemed to land in that grey area because of the context in which they were created.

During the modern-day Turing test that Ava was a part of, she was supposed to convince Caleb, and us as the audience, that she was as humanlike as possible. During this process she flirted with him and formed a connection that even he was not prepared for. I cannot say that Caleb held any moral obligation to her simply as the humanoid robot she was. However, his moral obligation to her began when he felt that he had a genuine, humanlike connection to her. I think that is what ultimately made her pass the Turing Test. Ava was programmed to form relationships, and that alone will make another human feel some form of moral obligation toward her. This is comparable to the relationships we share with our pets: they seem to create a bond with us that makes us feel morally obligated to them, and dogs, for example, almost seem to share some sort of moral obligation to their owners as well.

However, when we speak of other humans, I think
that we have the moral obligation to help each other when we are in need. We
also have the moral obligation to make sure that humans receive what some would
call basic human rights: health, shelter, and freedom from captivity should be what we owe one another morally. With "inorganic" humanoids, though, I do not think that we have the same innate moral obligation. We have to think about why they were invented. Humanoids typically are not built
just to live the common, everyday life of a human. They are usually built to carry out some sort of task, whether it be to teach, to provide care, or to serve sexual purposes. They are becoming more humanlike only so that they can carry out more humanlike tasks realistically. So the moral obligation arises when the actual human decides that they have some form of relationship with the humanoid. Furthermore, I do not think that the robots will genuinely feel anything in return, but they will be programmed to exhibit the feelings or emotions necessary to carry out the task they were built to do, just as Ava did.