A large portion of humanity struggles with the dilemma of judging artificial intelligence as having a moral consciousness or as just being a machine. At what point do we feel we have a moral responsibility to them? Is it when they begin looking and acting like us? Most likely not, because most robots will simply be programmed to act as human as possible, much like Siri. So we must determine when robots gain consciousness; but how can we determine this stage? According to the problem of other minds, we cannot truly understand robotic consciousness. How can we comprehend the consciousness of robots if we cannot experience the world as they do, just as we cannot experience life in exactly the same way a dog does? Another problem with judging consciousness is that we do not know whether robotic consciousness operates in the same way as ours. Should we be worried that we may create technology with a consciousness we do not understand, and thus unintentionally act immorally as a result? I believe this could very easily happen; we need to learn more about consciousness and understand the inner workings of our robotic systems before we take the next step of creating artificial intelligence.

But what about souls? Do robots have actual souls? Assuming that I have a soul, I do not believe that artificially intelligent robots do. Humanity does not have the capability of creating an inanimate object (which a robot is at first) and then gifting it with a soul; we are not powerful enough to engineer by hand living beings who stand on the same moral ground as ourselves. But then again, how do we decide who and what has a soul? Do bugs and plants have souls? What about dogs and cats? What gets me the most is that, obviously, viruses do not have souls, so at what level of creation do things stop having them? And what about the people who do not believe that souls even exist?
They are still able to determine who and what they owe a moral responsibility to. Maybe robotic consciousness is possible without any consideration of souls. But then we are still left with the question of what determines consciousness; we are not even sure which non-human animals are conscious.

Okay, so what about robotic interaction with humanity, regardless of whether or not robots have a consciousness? Should robots be able to work as nannies? This is a very important topic for me because I am a nanny, and from my personal experience I do not think that robonannies should be implemented. For little kids, robonannies might not be so bad, because small children do not need much beyond being kept safe, fed, and happy. But once kids reach elementary school age, robonannies are not such a great idea. Kids of this age need more than a small child does; they require imagination, advice, good role models, and sometimes instinctive decisions. I know I have been in situations that required me to act so quickly that I did not have time to think about what I was doing; I just acted instinctively. Could robots do that? I do not think so, because they follow the code that is programmed into them. All the information they receive must be processed before an educated decision is made, so they may not react quickly enough to prevent something that a human reaction could have prevented. I also know that a robot would have unlimited access to information via the internet, but I still do not believe it could be imaginative enough to entertain and educate a child. When I nanny, I have to be very creative when we play games and when I teach her.