One thing I believe about science fiction focused on artificial intelligence is that at least some part of it, no matter how small, is most likely possible, even if the technology takes centuries to develop. One example is the recent show Humans (itself a remake of a Swedish series). The debate in this show is essentially the one we discussed in class, but on a larger scale. In its future, robots already exist that are impossible to distinguish from humans by appearance alone, save for their green eyes. These robots are called "synths" for short, and a small group of them was programmed with the very essence of what makes humans human. These special synths have their own opinions and genuine emotions; they form familial and romantic attachments just as humans do, and they are capable of significant emotional acts, such as sacrificing their "lives" to save someone they care about, even though nothing in their programming requires it.

I understand that the dilemma we discussed centers on whether we should recognize artificially intelligent technology as moral agents, our equals. But I think the bigger question is this: if we have artificial beings that think, feel, have morality, and possess a true sense of self-awareness, why shouldn't we recognize them as human? If they can think like us, feel like us, and go through life as we do, and they pose no greater threat to us than any other human being, what do we gain by denying them their humanity? I understand that artificial intelligence was created (or is being created) primarily as a tool for humans, but there would have to be some way to use that tool without oppressing a potential new race of beings. And if we created artificially intelligent beings that gained self-awareness, and then denied them their own freedoms and rights, wouldn't that create a bigger problem for us than the question of whether to recognize them as moral agents?
P.S. I've included a link to the show's trailer because I feel it really tries to address the issues with artificial intelligence that we've discussed, as well as more everyday concerns, such as how these self-aware robots affect the people around them. (And it's just a great show.)
I think the main issue is that if we cannot recognize something as a moral agent, it is very difficult to care that its rights are being infringed upon. If it isn't seen as a moral agent deserving of our consideration, it certainly won't be seen as something deserving of freedoms and rights. Besides, when people build AIs to be used as tools, they aren't particularly concerned with whether that tool is being oppressed. It is still, essentially, a tool.