Saturday, October 17, 2015
Midterm Question #6 - Artificial Intelligence and Robotics
In Brian Christian's book The Most Human Human, he discusses the Turing Test and its effect on the way we view artificial intelligence. The Turing Test is essentially a text chat: a human judge converses with both another human and a computer and must determine which is which. If the computer successfully imitates a human, tricking the judge into believing it is human, then it has passed the test. Passing the test is taken to mean that we have created artificial intelligence. The test presupposes that all artificial intelligence requires is conversational ability, such as picking a side on a controversial question when asked. The one problem I see with this test is that it is only a chat; no actions can be taken.

For example, in the Black Mirror episode "Be Right Back," Martha, grieving her late partner Ash, takes great comfort in being able to talk to Ash 2.0 on the phone. He is able to make her laugh and act like himself, and she can barely tell a difference between him and the real Ash, until she touches on shared memories that the two of them never posted online or recorded, memories Ash 2.0 cannot have. The problem deepens when Ash 2.0 is put into a body that looks like the late Ash. Martha realizes that Ash 2.0 cannot be the real Ash because he has no free will: he doesn't get mad, he doesn't fight with her, he doesn't have any of the emotions the real Ash would have.

The Turing Test, which could be compared to the phone calls with Ash 2.0, is passable by robots that lack genuine artificial intelligence, because no emotions can be detected through a chat. But when the robots are made tangible, the lack of emotion is abundantly clear. They will not pass the tangible Turing Test, and being unable to produce emotions, they are therefore not artificial intelligence. Ava, the robot in Ex Machina, by contrast, does have emotions; she picks sides and makes decisions that are questionable in nature. She would therefore pass the Turing Test both online and in the tangible world she resides in.

The Turing Test may work online, but it cannot account for emotions. Emotions cannot be directly observed in the human body, so they are difficult to reproduce, let alone program into a robot. Until emotion can be programmed, we will not have a robot that can pass the tangible Turing Test. Do we even want a robot to be emotional? Would we be able to control those emotions? It also matters who controls them: the programmer's own opinions could be programmed into the robot, so who's to say what the robot's end goal is? I think we need to seriously consider these questions before we further our understanding of artificial intelligence.