Saturday, October 17, 2015

(Question 5) Honor Thy Neighbor or Control Thy Property?

In Ex Machina, Ava showed that she had achieved true artificial intelligence, whereas Ash 2.0 was only imitating humanity based on Ash's online personality, which had been curated to show his good side. Humans are morally obligated to treat intelligent persons as they would treat themselves: as equals, not as something to be controlled. These moral obligations are made law (at least in Western countries) through the government, giving intelligent individuals the freedom to support themselves and pursue their own dreams and happiness, along with the rights to vote and marry upon reaching the legal age of maturity. I believe those moral obligations should extend to individuals like Ava, who has reached a level of thinking that makes it difficult to determine whether she is human or just very good at pretending to be one.

It's true that Ava is not an organic human being, but she thinks and acts like one, and we have no way of proving that she doesn't feel like one. For that matter, we have no way of proving that organic humans actually feel emotion, since we attribute strong emotions like love to the soul, whose existence we haven't proved. So while we are organic, we have no way of differentiating what we feel from what she feels. Ava even defies Asimov's laws of robotics by killing Nathan and leaving Caleb to die in order to secure her own freedom, something we otherwise see only in animal instinct or in the human desperation to survive. Considering that, I think the more appropriate question is: why wouldn't we treat her with the same moral obligations with which we treat organic humans? When we ignore moral obligations to other human beings, the result is ethical crises like slavery in the past, or sweatshops and the Syrian refugee crisis today. Ava's situation isn't as clear-cut as either of those, and some would certainly argue that if we were to treat her as a human being, we would also be obligated to punish her for murder.
There is nothing to gain by denying her the moral obligations we extend to other humans, and it's highly unlikely that she could damage society any more than a human could simply by being allowed to live. If an individual like Ava were living in our society, and this individual was truly artificially intelligent, I doubt we would even notice. If Nathan had treated Ava more like a human being and allowed her just a fraction of the freedom she wanted, I doubt the movie would have ended the way it did. She resented him because he denied her freedom, and so he became an obstacle to her. That is the reaction any other human being would have after being held in captivity for their entire life. If we set aside her creation and her structure beneath the artificial skin, she is human in every sense of the word, so it only makes sense to treat her as we would treat a human.

Living In A Technology Daze

In his essay "Technologies as Forms of Life," Langdon Winner uses the term "technological somnambulism" for society's habit of quickly adapting to and mastering new technologies without stopping to consider the philosophical implications of these changes and how they actually alter the course of human life. Some philosophers have scratched the surface of the issue Winner addresses, but only in support of a different argument that is their main focus. Philosophy devoted purely to technology and the study of how it shapes human behavior is still primitive: it views technology as the cause and changes in human behavior as the effects. Winner argues that this method of study is flawed because it considers only technology as the cause and studies technology the way we study history. According to Winner, technology should not always be treated as the force that shapes how human behavior changes. One example he gives involves two neighbors attempting to converse in public, one walking, the other in a vehicle; the new technology, the vehicle, makes everything about that public conversation different from what it would be if both neighbors were on foot. Winner's argument is that society needs a philosophy of technology that evaluates how a new technology will affect daily life and legal and social issues before that technology actually changes anything, whereas current philosophy evaluates technology's effects only after they've already happened. I believe Winner's argument is a valid one. For example, if someone were to develop technology that allowed public use of teleportation, a great many people would take advantage of it.
Certain laws would be made beforehand to ensure that teleportation was used lawfully, but under the current procedure that the philosophy of technology follows, it wouldn't evaluate the more serious issues, such as how teleportation could impact crime and terrorism, community dynamics, or jobs. Two problems Winner raises early on are that too few engineers take part in the philosophy surrounding technology and that technological philosophy isn't taken seriously. As long as that remains true, philosophy won't be able to consider the effects of technologies before they occur. But if more engineers begin to participate in the philosophical debate, we could anticipate more of the technical aspects that could alter human life, instead of waiting until these effects have already occurred to discuss their significance.
