Friday, October 30, 2015

Private Blog

Privacy, as defined in the dictionary, is the state or condition of being free from being observed or disturbed by other people. As humans, we are social animals: we want to share with others and gain personal satisfaction from being social. However, we also crave the idea of being free from the eyes of judgmental people. We all have things to hide, though for most of us nothing as bad as plotting a terrorist attack or an attempt at hurting someone else. We hide things that may seem bad to us but not to the masses. As humans, we find comfort in hiding things that we see as bad or risky, therefore categorizing these things as private. We decide who we want to know what, and when to share a particular idea or activity with someone. 

Most people categorize hesitance about getting rid of privacy, or about an age of severe transparency, as something only bad people feel. The idea that if you have nothing to hide you should put it all out in the open is not a black-and-white idea. There are grey areas that as humans we have the freedom to keep closed inside, maybe not forever, but at least until the timing is right. Lack of privacy creates a prison in the mind, because as people we naturally change our behavior when we know, or even suspect, that someone else is watching. I listened to a TED talk by Glenn Greenwald in which he argued that people want privacy because it is the freedom that leads to creative actions and discovery. He also pointed out that those claiming privacy is not very important are usually the most private people. He gave the example of Mark Zuckerberg, who bought a house and then the houses directly next door to it in order to keep a more controlled private life. Ironically, Facebook has terms and conditions that prevent certain aspects of a user's life from being kept private. 

Beware of...

Beware of… No, but really.  Beware of what?  Normally when you hear that phrase you know not to cross a certain line, whether that means staying out of someone’s backyard or just staying away from someone’s private property.  When we see that, we know to beware.  We seem to be entering a time period where we have to protect our privacy more and more, because there are newer and easier ways for people to invade it.  That’s what chapter two discussed.  As time passes our privacy seems to decrease, and who is to blame?  I’d like to point my finger at technology.  Why, might you ask?  Because technology has allowed me to create my digital self.  The self that I really want to be.  The self that can fool others.  Technology gives me the ability to share my information with whomever, whenever.  Is that good?  I’d like to argue that there are positives and negatives to the issue at hand.  One particular thing that I had not thought about, but that Chapter 2 of “The New Digital Age” by Eric Schmidt and Jared Cohen brought to mind, was the fact that the new heights we are reaching in technology could give us more accountability and transparency in society.  They said, “Citizen participation will reach an all-time high as anyone with a mobile handset and access to the internet will be able to play a part in promoting accountability and transparency.” (34)  Sounds good, right?  But think about it: yes, we will have this right, privilege, and choice, but dig a little deeper.  That means that the things which we thought were so private will no longer be.  What if at the click of a button someone could know every law you broke in the past hour?  Then you would be forced to face the consequences.  Would that be a society that you would want to live in?

Privacy . . . eh Who Needs It . . .

Chapter 2 of The New Digital Age discusses how virtual profiles will impact people's identity and privacy. With more people using the internet and social media sites, people's identity could start to reflect what they post or search for online. I think this is something that is happening right now and will continue to increase as more people use the internet and social media. If this continues to grow, it will become irreversible and people will not be able to distinguish self-image from virtual image. Social media is supposed to be a place where people vent problems and share things with friends and family. People should not have to worry about being judged based on their social media. Most of the time, I think people post certain things online without actually thinking about it because they feel safe enough to do so (I could be wrong because I do not have any type of social media, so I can't really relate). If people start to feel like they are being watched on social media, then they will not post things like they normally would, because the thought of people watching your every move is scary. There are technology companies searching for new ways to secure online privacy, but I do not think anything could really prevent people from seeing your info and stealing your identity. There are certain things that can be done to keep certain people away from your personal info, but I think that hackers, the government, and to an extent certain companies that hire you will be able to access private information without consent. I know that before a job hires someone, they might look at the person's social media sites to determine if they would be a good fit for the job, but I think eventually companies will be able to go beyond that and start looking at other private info too. In the social world, I don’t think complete privacy exists anymore because there is just too much going on with technology.  

When I grow up I want to be either a Thing or a Human.

       When we actually think about how technology will be in the future, many things come to mind. We may imagine a place where there are all sorts of cars that fly themselves, or even cell phones without there physically being a cell phone there. As far as actual life in the future world goes, we may imagine us as the humans we are now, cyborgs, and even AI. Seems like luxury, right?


       Now imagine that you are still alive when this actually happens. Imagine that you had choices as to what to do with your human self (as you are now). You have three different choices, to be exact. First choice: you can remain the human being that you are today, meaning that you are not infused with any highly complex technology (at least as part of your body). Second choice: you have the option to have what is known as the "grain" inserted behind your ear. This gives you the ability to save or delete anything that you want from your memories. And last, but not least, the third option is that you can replace the majority of your body with any machinery that you want (think of a cyborg). Take a minute to think about what your life would be like based on the choice you made.


       If you picked option two or three, ask yourself this: would you still be considered a human? Or would you be considered something else?


       When we define the word human, we see that a human is something that is distinguished from an animal or an alien. With this definition in use, it is easy for some of us to still classify ourselves as humans, while on the other hand some may feel that we would no longer be classified as human. Indeed, it is true that we started out as humans. But would we have eventually merged into something else? To be completely honest, the question that follows is: what aspects actually make us human? Is it the brain and all its unique abilities? What happens when someone goes brain dead? Would they still be considered a human? If we were to take out our brain and replace it with a machine (let's imagine we could), would we still be considered a human? Or would we be considered a machine?



What Private Life?

During these modern times, we have a major lack of privacy.  Despite what we may believe, we are being watched all the time, whether it is when we are on our phones and computers or out in public.  Some people think that this lack of privacy is a good thing because it keeps us from hiding our wrongdoings.  I believe that privacy can indeed be a negative thing, but without it, humanity would not be the same.  We act differently when we know that we are being watched; if we were all made completely aware of the fact that we were being watched, no one would act like themselves.  They would all be on their best behavior.  As said in class today, humanity could lose an important characteristic of its personality. Privacy allows us to have a safe place, where we feel like we are not being watched, to express our creativity and de-stress without fear of judgment. Another positive aspect of privacy is that relationships require it because it creates a sense of intimacy.  The special feeling that comes from a couple having their own inside joke or pet names would no longer exist without privacy, because everyone would know the joke.  Privacy also allows us to have a positive sense of trust; without anything private, what would we need trust for? 


I also want to talk about a permanent online identity.  The majority of people already have a permanent online profile such as Facebook or Twitter.  It seems that, not long from now, everyone might be mandated to have one from the time of their birth.  This profile would play a huge role in how children grow up.  Parents will have to begin talking to their children at a much earlier age about the dangers of the internet and how their decisions can affect their futures.  If a young child makes a mistake and it is posted to their profile, it will never go away.  The child will be stuck with the embarrassment that comes with everyone knowing his mistake for the rest of his life.  It could even affect him as he searches for a college to attend or a permanent job.  I do not think that this is the best idea; how can we hold a child to an innocent mistake for the rest of their life?  We already see instances like this occurring today.  When you send in your resume, employers look you up on social media to see what kind of person you are and how you conduct yourself.  Right now, we are fascinated by the fact that we can be so open about our lives on Facebook and Twitter.  People share things without a second thought, but what about when that feeling of freedom wears off?  What about when we are being judged for what we post?  What is seen right now as freedom of expression could eventually, if we are not careful, morph into a permanent way of being controlled and restricted. 

The Brain and Our Humanity

      In class on Wednesday, we began to discuss the new book. The main question posed was: at what point is someone determined to be a robot rather than a human? My knee-jerk reaction was that once the heart is taken away, or has machinery inserted into it, the person ceases to be the way that he or she was. While that is a romantic way of looking at this philosophical question, I'm not sure I could defend the point.
       When I began to think about it, I would consider a pacemaker a piece of machinery. But I don't think I could classify an elderly person who relies on a pacemaker to keep their heart going as a robot. So what makes a person no longer a person? If not the heart, then the brain? As far as I know, we cannot currently replace any part of the brain with a piece of equipment. (Here's an article that I found about this) When a person is determined to be "braindead," they are considered to be gone. They may have equipment allowing their lungs to work and their heart to continue beating, but once braindead, as far as I have gathered, a person cannot be resuscitated or brought back.
      So is the brain where the consciousness and soul are encapsulated? I am leaning towards yes. Honestly, I'm open to being persuaded otherwise, but this is my reasoning right now.
     As for Dr. J's philosophical question about the boat, and when the rebuilt boat becomes the old boat again as old pieces replace the new ones, I think that once more than 50% of the boat is made of the old pieces, the new boat could then be seen as the old boat, because over half of its materials come from the old boat. As confusing as that sentence was, I don't think this can be applied to humans, to add to the confusion. Humans are much different than boats; we have a consciousness. I would say that when the consciousness is gone, the person is gone. That person is now considered to be artificial intelligence.
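The 50% idea above can be made concrete with a tiny sketch. This is purely illustrative; the function name, part count, and the majority rule itself are my own assumptions about the puzzle, not anything from class.

```python
# Toy model of the boat puzzle: a boat is a list of parts, each flagged
# True (original part) or False (replacement). Under an assumed majority
# rule, the boat counts as the "old boat" while more than half of its
# parts are original.

def is_old_boat(parts):
    """parts: list of booleans, True = original part."""
    original = sum(parts)
    return original > len(parts) / 2

boat = [True] * 10          # a boat built from 10 original parts
boat[0] = boat[1] = False   # replace two parts: still the old boat
print(is_old_boat(boat))    # True

boat[2] = boat[3] = boat[4] = boat[5] = False  # six parts now replaced
print(is_old_boat(boat))    # False: under the majority rule, a new boat
```

The sketch also shows why the rule feels arbitrary for people: flipping one flag past the midpoint suddenly changes the verdict, which is exactly the discontinuity the blog post resists applying to consciousness.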

Being Born isn't Enough to Remain Human!!

During our class discussions this week, we have been trying to decipher what makes or changes a human into a cyborg, no longer a human. In my opinion, a human is a person that was conceived and developed in a human woman’s body. Of course there is the method of in vitro fertilization, but the person would still be a human because they developed in a human body. It has been said that once a person has to place any type of technology in their body, they immediately become a cyborg. I don’t agree with that, because my mother has a pacemaker, and I don’t believe that makes her any less of a human. Also, what about people who need life support in order to stay alive? Are those people considered to be less human because they need technology in order to keep them alive, even though it's technically not implanted? I think a cyborg would be a person who is literally half human and half robot, or a robot that looks and functions like a human. For example, I think Ash 2.0 and Ava are considered cyborgs.

            As far as the privacy of technology goes, we are already in a time where people are becoming what we see on social media. This is the only thing that they portray. Consider the cases in which a person doesn’t see another person for many years and can only view them through their Facebook posts and statuses. Their identity is technically already based on social media alone. It’s terrifying thinking about the progression of the internet and not being able to remain separate from social media sites and pictures. As of right now, a person can have two identities, one for the internet and one for their personal life. However, they are somewhat protected from being figured out because they might act differently from their internet life and don’t have to worry about being recorded on a regular basis. I think better privacy settings would be great, but if our social media lives and personal lives merge, people would not be their true selves. We will eventually have a species of “perfect” or “internet-made” humans. If we are restricted in what we are allowed to do or express on the internet, does that make us less human?

Thursday, October 29, 2015

Saturday, October 17, 2015

They're Only Here to Help! Not Kill!

Asimov’s three laws of robotics pretty much tell the story of how humans feel about robots: that they are obviously all going to try to take over the Earth eventually and kill us all. The laws are centered on the security of humans. However, I feel that they contradict themselves. The laws say that robots should protect their own existence, but in saying that, I would assume we are saying that robots have some kind of rights. It does not make sense to have only the right to exist; they should also have the right to be free from what is basically human enslavement. Also, what are the robots going to be protecting themselves from other than humans? Other robots? If so, do robots have an unwritten code of ethics that they abide by? I do not see how that is possible, because they do not have a sense of a “robot race” like we do the human race. So how do we expect them to protect their existence if they really do not have anyone to protect it from? If it were the case that robots were protecting themselves from the danger of another robot, should we be concerned about humans getting caught in the crossfire of the soon-to-come robot war? I also think that as technology advances, the three laws will become obsolete, because our relationships with robots will change. We will have robots so intelligent that they will no longer pose any threat to us. Also, after we become more used to them, we will have different feelings toward them. Although our generation was born into modern technology, the next generation will be born into even more advanced technology, so they will not have the same reservations that our predecessors had, or that we have, about living their lives with robots. Robots will start off as commodities that only people with thousands of dollars to spend will have. Eventually they will become like any other everyday technology, readily available at prices that average people can afford. 
Asimov’s three laws will have to be revised, or will simply become obsolete, because by then society will not feel that robots are anything to be afraid of; they are literally only invented to help us.
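The hierarchy in Asimov's laws, where each law yields to the one above it, can be sketched as a toy priority ordering. Everything here (the boolean flags, the action names, the scenario) is my own invented simplification, not anything from Asimov or the class reading.

```python
# Toy model of Asimov's three laws as a strict priority ordering.
# Each action is scored by a tuple of (assumed) violation flags:
# (harms a human, disobeys an order, endangers the robot itself).
# Tuples compare left to right, so a First Law violation outweighs
# any number of lower-law violations; the "best" action minimizes it.

def violations(action):
    return (action["harms_human"],
            action["disobeys_order"],
            action["endangers_self"])

def choose(actions):
    """Pick the action whose violations are least severe under the hierarchy."""
    return min(actions, key=violations)

# Hypothetical scenario: standing by lets a human come to harm (which the
# First Law counts as harm through inaction), while rescuing the human
# disobeys an order and endangers the robot.
stand_by = {"name": "stand by", "harms_human": True,
            "disobeys_order": False, "endangers_self": False}
rescue   = {"name": "rescue", "harms_human": False,
            "disobeys_order": True, "endangers_self": True}

print(choose([stand_by, rescue])["name"])  # rescue
```

The sketch makes the blog post's contradiction visible: the Third Law's "right to exist" sits at the bottom of the tuple, so the robot's self-preservation is sacrificed whenever any human interest is at stake.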

(Question 5) Honor Thy Neighbor or Control Thy Property?

In Ex Machina, Ava showed that she had achieved artificial intelligence, whereas Ash 2.0 was only imitating humanity based on Ash’s online personality, which had been modified to show his good side. Humans are morally obligated to treat intelligent persons as they would treat themselves, as equals, not as something to be controlled. These moral obligations are made law (at least in Western countries) through the government, giving intelligent individuals the freedom to support themselves and pursue their own dreams and happiness, and the rights to vote and marry upon reaching the legal age of maturity. I believe that those moral obligations should hold towards individuals like Ava, who has reached a level of thinking that makes it difficult to determine if she is human or just really good at pretending to be one. It’s true that Ava is not an organic human being, but she thinks and acts like one, and we have no way of proving that she doesn’t feel like one. We also have no way of proving that organic humans actually feel emotion, as we attribute strong emotions like love to the soul, which we haven’t proved exists. So while we are organic, we have no way of differentiating what we feel from what she feels. Ava even defies Asimov’s rules of robotics by killing Nathan and leaving Caleb to die in order to ensure her own freedom, something we only see in animal instinct, or in the desperation to survive that humans exhibit. Considering that, I think it is more appropriate to ask: why wouldn’t we treat her with the same moral obligations with which we treat organic humans? When we ignore moral obligations to other human beings, it results in ethical debates like those slavery caused in the past, and like sweatshops and the Syrian refugee crisis cause today. Ava’s situation isn’t as clear cut as either of those, and some would definitely argue that if we were to treat her as a human being, we would be obligated to punish her for murder. 
There is nothing we have to gain by denying her the moral obligations we give to other humans, and it’s highly unlikely that she could damage society any more than a human could just by being allowed to live. If an individual like Ava were living in our society, and this individual was truly artificially intelligent, I doubt we would even notice. If Nathan had treated Ava more like a human being, and allowed her just a fraction of the freedom she wanted, I doubt the movie would have ended the way it did. She resented him because he denied her freedom, and he became an obstacle to her. That reaction is the reaction any other human being would have if they were held in captivity for their entire lives. If we ignore her creation, and her structure underneath the false skin, she’s human in every sense of the word. So it would only make sense to treat her as we would treat a human.

Living In A Technology Daze

In his essay "Technologies as Forms of Life," Langdon Winner uses the term "technological somnambulism" to refer to society's quickly adapting to and mastering new technologies without stopping to consider the philosophical implications of these changes and how they actually affect the course of human life. There have been some philosophers who have scratched the surface of the issue Winner addresses, but these philosophers only do so to support a different argument that is their main focus. Philosophy devoted purely to technology and the study of how it shapes human behavior is still primitive, as it views technology as the cause and the changes in human behavior as the effects. Winner argues that this method of study is flawed because it only considers technology as the cause, and studies technology the way we study history. According to Winner, technology should not always be considered the force that shapes how human behavior has changed. One example he gives involves two neighbors attempting to converse in public, one walking, the other in a vehicle. The new technology, a vehicle, makes everything about that public conversation different from if both neighbors were on foot. Winner’s argument is that society needs a technological philosophy that evaluates how new technologies will affect daily life and legal and social issues before the technology actually changes anything, whereas current philosophy only evaluates technology’s effects after they’ve already happened. I believe Winner’s argument is a valid one. For example, if someone were to develop the technology to allow public use of teleportation, a lot of people would take advantage of it. 
There would be a certain set of laws made beforehand to make sure that teleportation was used lawfully, but under the current procedure that the philosophy of technology follows, no one would evaluate the more serious issues, like how teleportation could impact crime or terrorism, community dynamics, or jobs. An issue that Winner raises early on is that too few engineers are taking part in the philosophy surrounding technology, and that technological philosophy isn't taken seriously. As long as that continues to be true, philosophy won't be able to consider the effects of technologies before they occur. But if more engineers begin to participate in the philosophical debate, we could anticipate more of the technical aspects that could alter human life instead of waiting until these effects have already occurred to discuss their significance.

The Wrong Test

Alan Turing, one of the founders of computer science, set out to answer the question: “Can machines think?” (Christian 4).  In order to explore this, he designed an experiment in which judges had to distinguish between two correspondents: one human and one computer.  During the experiment, each correspondent is free to say anything in order to be convincing.  They can converse over serious issues, freely chat, joke, question... His hypothesis was that by the year 2000, computers would fool 30% of human judges after five minutes of dialogue.  Though extreme advancements have been made, his prediction was inaccurate, and author Brian Christian participated in a modern contest in 2009 to discover who is the most human robot and the most human human. 
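Turing's benchmark is just a pass rate over judge verdicts: the machine "passes" if at least 30% of judges mistake it for the human after the five-minute chat. A minimal sketch of that scoring rule follows; the verdict list is invented for illustration, not real contest data.

```python
# Sketch of Turing's 30% benchmark: given one boolean per judge
# (True = the judge mistook the machine for the human), the machine
# passes if the fooled fraction reaches the threshold.

def passes_turing_benchmark(judge_fooled, threshold=0.30):
    """judge_fooled: list of booleans, True = judge picked the machine."""
    rate = sum(judge_fooled) / len(judge_fooled)
    return rate >= threshold

# Hypothetical panel of ten judges, three of whom were fooled.
verdicts = [True, False, False, True, False,
            False, False, True, False, False]
print(passes_turing_benchmark(verdicts))  # True: 3 of 10 judges fooled
```

Note how low the bar is: the machine never has to fool a majority, only to be indistinguishable often enough, which is part of why the test measures imitation rather than intelligence.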

Turing believed that fooling judges by sufficiently imitating humans displayed intelligence, because the behavior would be seen as that of an intelligent being.  The ability to adjust an answer or find something suitable to say, rather than simply recite a fact, is a humanlike characteristic.  One winner claimed he won by “being moody, irritable, and obnoxious”; this humorous remark tells a deeper story about what it means to be human.  In class, we discussed the complexities of emotional expression, but overall decided that it is one of the things we think makes us human. In Black Mirror: Be Right Back, after acquiring a robot imitator of her deceased husband, Martha is unsatisfied.  She discovered that she missed these emotional, unpredictable aspects of Ash.  Flaws, weaknesses, and bad days were not shown, and these characteristics make up a crucial part of individuals.  Next, the Turing Test assumes that humans are unique, superior beings.  Of course, this is a notion created by human beings. When robots gain access to all information, history, videos, etc., I believe they will be able to develop far beyond human capabilities.  They will no longer need to prove themselves human; instead, we will attempt to keep up with their vast ability. 


Overall, the Turing Test assumes that imitating humanity is the goal.  I understand that it is an experiment focused on technological advancement and innovation, yet humans are deeply flawed.  We live in a society of racism, sexism, homophobia, transphobia, mass destruction, rape, genocide… human nature is not the goal.  I wholeheartedly respect the changes being made within computer science and the idea of artificial intelligence, but the Turing Test, the imitation game, the most-human-robot contest, and similar experiments assume that human nature is the marker of intelligence, when quite frankly, logical, artificial intelligence may be far more beneficial to society.  We, as humans, can go beyond programming and home in on certain emotions, but often these lead to mistakes and rash decisions.  The ability to express ourselves and have free will is a positive, and robots are not assumed to have this (contrary to Ava).  Yet we have hindered others for centuries because of our flawed nature, biases, and human tendencies.

Question 6

Our Bittersweet Rollercoaster

In his essay, Langdon Winner argues that with our new emerging technologies, and with old ones, humans have been “sleepwalking” through our lives without acknowledging how much technology has changed them. He mentions that technology and its value to us have become so evident that no one ever takes a moment to reflect on how it affects our lives. A society that once existed with little to no advanced technology has had to open itself up to a whole new world of technology that has made our lives even more bittersweet. We live our lives with new technologies being thrown into our faces and marketed to make us feel that we would be archaic if we did not partake in the newest gadget. The most obvious example is the era of the smartphone. We began in the early 1990s with a keeping-up-with-the-Joneses mentality about cell phones. They were initially only available to those who could afford to pay several hundred dollars for a cellular phone. Then phone companies decided to make them more available to the everyday person by slightly lowering the prices and making it so we could pay them off in small payments. With that being done, more people had the opportunity to partake in the novelty. We stepped into this new technological world and quickly found out how easily it could let us down. Our calls started to drop, we did not receive phone coverage where we wanted it, or we simply dropped our phones on the floor and they were almost no use to us at all. However, we did not bother to move on from the craze; we became more embedded in it. Now not only can you get your $600-$900 smartphone paid off in chunks, but if a new phone comes out while you are still paying for your old one, you can just trade that one in for the newest phone on the block. I agree that we have evolved into a society that wanders through the world not realizing how we have made our lives easier and harder at the same time. 
We have been able to exist this way because these technologies are made so easily accessible to most of us. They are so easily accessed and make our lives so much easier that we just take the bad with the good from them, just as we do in our own lives. In our lives we have good times and we have bad times; however, we innately try our best to live in the good times and just work through the bad ones. Therefore, as our new technologies become more and more available, we will continue to just enjoy our bittersweet technological rollercoaster.

Gee! I always wanted a twin! MOMMY THIS THING IS ME!

       Could we imagine living in a world amongst robots? Not just any robots, though. Robots that closely resemble human beings, both mentally and physically. Would we as humans be able to handle this thought? Would these robots even be classified as robots anymore if we actually achieved artificial intelligence? How would we treat them in our society? As one of us, or as something strange? What type of moral obligations would they be owed?

       In class, we watched a short film, Black Mirror: Be Right Back.  In this film there is a couple. The boyfriend (Ash) dies in a sudden accident, and the girlfriend is left depressed because the love of her life is no longer around. Or is he? At the funeral, the girlfriend finds out that she could potentially bring her boyfriend back. Freaked out by the thought of it at first, she decides to give it a try. It all starts with a simple computer chat and later progresses into phone calls, video chats, and eventually an actual physical version of Ash. We see in this short film that, through the character of Ash, artificial intelligence was actually achieved. Although it was achieved, we see that it was still flawed, but flawed in the way we humans actually are ourselves. When she tells Ash to jump, at first he is about to jump, but then she gets mad because the real Ash would not have jumped; he would have been questioning himself and crying. When she explains this to the new Ash, he gives her what she wants, and she is not able to deal with it. The fact that these machines are capable of resembling us so closely is just truly amazing.

       If we were to live in a world with beings like Ash, would they be owed the same moral obligations as us humans? Especially considering the fact that they are not actual organic or natural human beings like me and you, but are instead what is known as a resemblance of human beings. But how do we determine this? We know that every single human being is expected to be treated with the utmost respect, but are these machines entitled to the same thing?

    In my honest opinion, especially if these beings are a resemblance of Ash, then they should have some type of moral obligations. I mean, they are just like us. At that point I feel like we should treat them just like every other citizen in our society. Although they are not natural humans like us, in some cases I feel like it would be hard to tell. They definitely should not be disrespected. Why? Simply because they are artificially intelligent. They are capable of doing the same things as us natural humans, if not more. If they are capable of resembling us so closely, then why should they not be owed the same exact moral obligations as us? With that being said, I believe that the same moral obligations that apply to us should be upheld for them.


Catching Feelings

Artificial intelligence is defined as any computer system that can perform a function that would typically require some form of human intelligence, such as speech recognition, visual perception, or decision making. Therefore, I would consider Ava and Ash 2.0 examples of artificial intelligence. They are both very advanced forms of artificial intelligence because they are literal emulations of human beings. Earlier in the semester we debated whether or not our everyday electronics were moral agents, and whether we held any moral obligations to them. As a class, we came to a consensus that we did not have any moral obligation to our gadgets, just as they had none to us.  In our discussion we used the examples of our smartphones, laptops, and other common electronics. Therefore, it was easy for us to decide that we had zero moral obligation to them, and vice versa. However, when that same artificial intelligence is in human form, it somehow enters an ethical grey area for most people. Ava and Ash 2.0 specifically seemed to land in a grey area because of the context in which they were invented. During the modern-day Turing test that Ava was a part of, she was supposed to convince Caleb, and us as an audience, that she was as humanlike as possible. During this process she flirts with Caleb and makes a special connection with him that even he was not prepared for. I cannot say that Caleb held any moral obligation to her as just the humanoid robot that she was. However, his moral obligation to her began when he felt that he had a genuine humanlike connection with her. I think that is what ultimately made her pass the Turing Test. Ava was programmed to form relationships, and that alone will make another human feel some form of moral obligation to her. It is comparable to the relationships that we share with our pets; they seem to create some sort of bond with us where we feel that we should be morally obligated to them. 
Also, dogs, for example, almost seem like they share some sort of moral obligation to their owners as well. However, when we speak of other humans, I think that we have the moral obligation to help each other when we are in need. We also have the moral obligation to make sure that humans receive what some would call basic human rights. The rights to health, shelter, and freedom from captivity should be what we morally owe to other humans. With "inorganic" humanoids, though, I do not think that we have the same innate moral obligation to them. We have to think about why they were invented. Humanoids typically are not built just to live the common everyday life of humans. They usually are built to carry out some sort of task, whether it be to teach, to care for someone, or for sexual uses. They are becoming more humanlike only so that they will realistically carry out more humanlike tasks. So the moral obligation comes in when the actual human decides that they have some form of relationship with the humanoid. Furthermore, I do not think that the robots will have genuine feelings back toward them, but they will be programmed to exhibit the feelings or emotions necessary to carry out the task that they were built to do, just as Ava did. 

#3 "Work It Harder Make It Better Do It Faster, Makes Us stronger" - sincerely, the human race

The singularity is an event that is believed to be the beginning of the age of super A.I. (think Age of Ultron). This will cause us, the human race, to fall behind in trying to comprehend these new machines, and we will either have to become more like them or be cast off as obsolete. Kurzweil believes this will begin in 2045, and I believe that there is truth to this theory. We already see cyborgs in our time. War vets have robotic limbs and mechanical eyes, and even those who can't speak have a machine to talk through to convey their thoughts.
I see nothing wrong with using technology to advance medicine and to better life for humanity, allowing humans to physically function to the best of their ability. Fusing technology in this way can and has bettered the lives of many families. However, just like needless plastic surgery, there will be people who abuse this technology. I can imagine even a black market of robot gun arms and laser legs for criminals to use on citizens. And who will be in charge of such a large industry?
AIDS won't be the only thing people will have to worry about. If we are fused with technology, our checkups at the doctor will soon require us to specify what type of virus we have: biological or technological. It is true even today that things aren't built as sturdy as they used to be, because companies want people to keep coming back for the newest thing, and having laptops or phones break within two years is a perfect way to ensure money flow. How much more so for technology that is fused into our bodies? Every year we see something go wrong with the flu shots; if we allow technology to enter into us, imagine the implications and the damage done internally to citizens. And let's not forget that robotic research will still be on the rise as well.
The problem comes into play when we add on things we simply don't need. Kurzweil also believes that there will be better A.I. that can rebuild and generate more of itself. We don't need A.I.s building other machines that are better than themselves, and better than us, in every way. It is a simple fact that only the strongest survive and that dominant traits are passed down more often than recessive ones. Likewise, machines that are better at everything won't need us. Sure, they may be advanced enough in morality and reason, but what is the most reasonable thing according to a non-carbon-based life form? Meaning, if they will be better than us, even in thought, would we be able to comprehend what is best for humanity? Will they? If so, what is best for the human race according to them? (Assuming they will care.)

I close with the following: know the limits. Of course there technically aren't any, but there needs to be a point beyond which we simply cannot go, for the sake of the lives of humanity. As of right now, this technology is good, but any more and we will have more problems to deal with.

Ha! Machine Brain.

       In class we discussed many different situations about the outcomes and the future possibilities of the modern-day test known as "The Turing Test." In Brian Christian's book, The Most Human Human, we learn that he participates in this test. To define what the Turing Test is, let us first examine where it gets its ideas from. To begin with, the Turing Test gets its name from a famous British mathematician, Alan Turing, one of the great founders of the subject of computer science. In 1950, Alan Turing tried to answer the question of whether or not machines could think for themselves. In other words, would these machines be so complex that they would actually be capable of having a mind, or rather a brain, of their own?

        According to the text, the Turing Test is an annual event where judges, machines, and humans (confederates) are basically having a conversation. The machines and confederates compete against each other, each trying to convince the judges that it is the actual human. It is the duty of the judge to construct a series of questions for both the machine and the confederates, and there are no restrictions on what can be asked. At the end of the series of questions, the judge must be able to identify which one is the confederate and which one is the machine. It is the duty of the machine to appear capable of having a mind and to prove that it can think for itself. On the other hand, it is the duty of the confederate to prove that they are the "most human human." In an overall view, the Turing Test is basically just "an imitation game."

       Turing predicted that by the year 2000 machines would be able to fool at least 30 percent of the judges after only five minutes of conversation, but that prediction has not yet come true. In some years, however, especially 2008, the machines have come very close. When this prediction finally comes true and machines are capable of convincing us that they can think for themselves, then we as human beings will have achieved artificial intelligence. As time progresses we are actually getting very close to achieving that goal, and robots are becoming more and more like us each and every day.

Taylor Flake - Midterm Exam

Question 6: Bet You Can’t Think Like Me
The Turing Test is a test named after Alan Turing, of Great Britain.  The whole point of the test is to measure a man-made machine's ability to think like a human.  The Turing Test was designed to presuppose a couple of things from the beginning.  It first presupposes that a human will have the capability to determine whether or not they are conversing with another human.  It also presupposes that a machine would be able to hold a conversation with a human.  So, every year a panel of judges gets together and asks questions sort of "anonymously" to either humans or machines, but the judges are unaware of who they are asking these questions to.  They call the human the "confederate."  With the judges kept blind, they cannot come in with any predisposed thoughts on the test as a whole.  It also adds a level of authenticity to the conversation.  Because of that desired level of authenticity, the judge is able to engage in any type of conversation with the "confederate" or machine.  They can literally discuss anything.  At the end of the competition it is up to the judge to decide who or what is the human/confederate and which is the machine.  It was Alan Turing's belief that "by 2000, computers would be able to fool 30 percent of human judges after five minutes of conversation, and that as a result 'one will be able to speak of machines thinking without expecting to be contradicted'" (Christian, 4).  So the judge talks to either the confederate or the machine for the first five minutes and then to the other for the next five minutes.  After that, the judge has ten minutes to choose which he thinks is which.  The judge also has the opportunity to rate how convinced they are in their decision.  The program that is rated and voted the highest gets named "The Most Human Machine," and the confederate that wins is named the "Most Human Human."  These awards are given out annually even if no program actually wins the Turing Test.
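The contest procedure described above (paired five-minute chats, a judge's guess, a confidence rating, and annual awards) can be sketched as a tiny simulation. This is just a toy model to make the structure concrete; the scoring scale and the program names are illustrative, not the contest's actual rules.

```python
# Toy model of one Loebner-style round: a judge rates two anonymous chat
# partners on how human they seem (0-10), guesses which is the human, and
# the best-rated program across all judges takes "Most Human Machine".

def judge_round(score_a, score_b, label_a, label_b):
    """Return (guessed_human_label, confidence_gap) from the judge's
    humanness ratings for partners A and B."""
    if score_a >= score_b:
        return label_a, score_a - score_b
    return label_b, score_b - score_a

def most_human_machine(ratings):
    """ratings: {program_name: [scores from each judge]} ->
    the program with the highest average humanness score."""
    return max(ratings, key=lambda name: sum(ratings[name]) / len(ratings[name]))

# One judge rates the confederate 8/10 and the program 6/10:
guess, gap = judge_round(8, 6, "confederate", "program")

# Across all judges, the best-rated program wins the award even if it
# fooled no one -- mirroring how the award is given out every year:
award = most_human_machine({"Elbot": [6, 7, 5], "Cleverbot": [4, 5, 6]})
```

The point of the sketch is the last line: "winning" the award and actually passing the test (fooling 30 percent of judges) are two separate conditions.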

Turing believed that if a computer ever did "win the imitation game," it would have exhibited intelligence, because he really viewed communication as the true test of whether something or someone was truly human.  One thing that we humans do, and for the most part do well, is communicate.  We communicate in a way that truly no other species can compare to.  Communication is our way of expressing ourselves, surviving, and connecting with one another.  So, if there is a program that can communicate in such a way that other humans would believe they were speaking to a human like themselves, then intelligence would be achieved.  One modern-day example of robots effectively two-way communicating with humans is through the "chat" customer service option, where people chat sometimes with programs and other times with a human.

Question 2: Beware of my Privacy
Jeroen van den Hoven dove into some of the issues that come with the violation of privacy in his essay "Nanotechnology and Privacy: The Instructive Case of RFID."  The essay basically introduced new and not-so-new devices and practices that draw sensitive information, both knowingly and unknowingly, from individuals.  One of these pieces of technology is the RFID chip, where RFID stands for Radio Frequency Identification.  This small chip transmits a radio signal and was designed to track objects like our mail, packages, equipment, and more.  These chips are interesting because they can be scanned or picked up from a distance thanks to the radio signal that they give off.  But many people will argue that there are issues with a piece of technology like this even without knowing its full capabilities.

There are issues with privacy not only with the RFID chip but with technology that we come into contact with daily.  So what is the big deal?  Why do people claim to "want their privacy" while at the same time having the desire to share their life story on social media?  From the outside looking in it really makes no sense at all, but once you start thinking about your own personal life, some of the more irrational parts become just a little clearer.  It really all boils down to consent.  When individuals consent to share information, the issues with privacy seem to dwindle, but when individuals feel that they did not consent to sharing information, that is when the issues with privacy become more prevalent.  We like to think that we have a choice about what we share, as opposed to not knowing, and then finding out on the back end what information has been collected on us.  In addition to that, the lack of privacy is also more harmful when the information being collected is sensitive information like credit card numbers, social security numbers, medical records, etc.  I think that violations of privacy are the most harmful in the cases of sensitive information, because the information that is gathered, stolen, or taken could really break a person down.  That type of information could mess up someone's finances, disclose sensitive health secrets, and allow someone to steal their identity.  I just think it's a really strange phenomenon that we have.  We want to feel like we have a choice.

We also expect a certain level of privacy in life.  When we are in public we tend to expect less privacy, and by that I mean we know people are seeing and evaluating our physical appearance and our conversations.  But when we are in private we expect more privacy, because we have control over who is in that private area with us.  We can either be by ourselves or surrounded by others.  That is one of the reasons why an invasion of privacy is most harmful when the individual does not know that their privacy is being invaded.

Question 5:  What is AI? Or Who is AI?
Artificial Intelligence can be described with many words, but to sum it all up: Artificial Intelligence is the field of study that studies and creates software and programs that are smart/intelligent.  We call the behavior that these programs, software, and machines exhibit "Artificial Intelligence."  The short film "Be Right Back" and the film "Ex Machina" both shed light on Artificial Intelligence.  I will call the being that exhibited artificial intelligence in "Be Right Back" Ash 2.0 and the being that exhibited artificial intelligence in "Ex Machina" Ava.  When given the opportunity to compare the two, I have come to the conclusion that both Ava and Ash 2.0 are examples of artificial intelligence.  It is hard to say one is more artificially intelligent than the other, because I believe they both truly achieved artificial intelligence.  The only difference between them was their purpose, and both served their purposes effectively.  Ash 2.0 was designed to comfort Martha after the loss of her boyfriend Ash.  He was purposed to fill that void and emulate Ash in all ways possible.  He was supposed to continue their relationship.  On the other hand, Ava was designed to sort of beat the judge in the Turing Test.  She was supposed to be human.  She was supposed to be able to think, look, and communicate effectively enough that Caleb would believe that Ava was human.  I think that both Ava and Ash 2.0 were great examples of artificial intelligence because they are both forms of creative programming intelligent enough to hang with the humans.


Now, both Ash 2.0 and Ava had issues in the films, and I won't get into them because we all know what happened.  But what happened in those films raised an important question: how responsible am I for the intelligent beings that I create?  I will make the argument that we are fully responsible and morally obligated to fulfill those responsibilities, because these artificial beings are still just that: man-made.  And while most of them are probably able to hold their own, just as we are imperfect, our creations are as well, and we are responsible for their behavior.  One example that I used in class earlier this semester was that parents are responsible for their children, so why wouldn't the creators of beings like Ava and Ash 2.0 be responsible too?  From day one, parents are not fully aware of what their children are capable of.  They do not know that their children might grow up and kill someone, and to that same effect they also do not know that their child might be the next world leader.  The same goes for artificially intelligent beings.  We do not know their full capabilities, which should be even more reason for us to be responsible.  These obligations should still be held even if they are not "natural" or "organic" beings.  If no one is obligated to take responsibility for these beings, then who will?  Where should the blame be cast?

GET OUT YOU THING!!!!!!!

       In Jeroen van den Hoven's essay "Nanotechnology and Privacy: The Instructive Case of RFID," Hoven argues that the violation of privacy can lead to harm in many different ways. The violation of privacy in this sense can lead to cases of identity theft, rape, and, in some cases, even death. Hoven explains that if we were to put these little chip devices (known as RFID chips or "contactless technology") into every single thing that is owned, these chips would violate our privacy rights in many different ways, eventually leading to the harm of many people. It sounds like a very bright idea to some of us; for example, if a burglar walks into your home and attempts to steal from it, those items can be tracked down, and in some cases, when the items are moved out of place, an alarm will sound, hopefully causing the burglar to run in fear. As for the other side, who feel that the chips are a bad idea, the thought of them makes them cringe in fear for their privacy. One example that Hoven gave us in his essay was the concept of information-based harm. In this example we see that having these chips could be very dangerous to who we are as people. Not only would our privacy be invaded, but our protection would be doomed as well. These chips are indeed a form of technology that can be hacked by possibly anyone at any given time. For example, if a killer were after you, having these chips would put you in harm's way, simply because the killer now has the ability to hack into the databases of this flawed chip and find out exactly where you are. Another example that Hoven uses is informational injustice. Say that someone has a very serious and contagious disease, and they walk into a public library and proceed to check out several books detailing their disease. Because the borrower has this chip, that simple research is now on the person's file. 
They know this, and so does the librarian. Although it is wrong, the librarian now has the ability to treat that person differently because they know their business, in some cases even refusing them service. This in any case is morally wrong.

       In my honest view, in some ways I do feel like our privacy rights are somewhat violated. If we are chipped with RFID chips, everything about us is accessible on file to possibly anyone at any given time. If these things also have cameras in them, how do we know the person monitoring these chips isn't some sort of pervert? Now even when we go to the shower we could be watched by some strange Peeping Tom. Our rights would be violated greatly. Another example: a murderer breaks into the chip system and comes after their victim, which would be way easier seeing as this chip knows every single detail about us. In both cases our privacy is violated and we are put into harm's way. I feel that there should be some sort of restriction covering both private and public privacy. We should only be monitored in public, but in our private lives... of course not! Private and public differ greatly. Private is our own lives, and public is basically the world seeing every single thing that we do. Although there is a line, I strongly feel that line is very thin.

#5 "To be or not to be?" - quote from Siri

We have seen both Ex Machina and Be Right Back, and I have to say both had me in complete shock at the results of the technology and the endings of the films. My analysis was of my reaction to the robot characters and how I became emotionally attached due to their dispositions. That is, until I realized that they were simply machines. It is hard to say which A.I. had the most believable performance, but I would have to go with Ava as the winner. There are three things that stepped her up in the tiers of human acceptance.
First off, she had intentions. Of course, it was mentioned that she was programmed to escape from the place and to use anything to do so, but we all have been "brainwashed" in some kind of way to believe we have original thoughts or ideas. It was the way she does it that shows the intelligence of a real woman. The drawing that the scientist rips up is a prime example. She shows this to Caleb and his reaction is flawless. The picture symbolized her affection for him (or so he believed) and her will to be free and be human. The fact that she drew a picture of him tells the audience of her sophistication in knowing how it would affect him. That simply cannot be explained to someone, and seems to be something a human woman would recognize to do in order to manipulate a man.
Secondly, she has negative feelings towards humans. This is by far the most interesting part of the movie. While Ava is drawing the picture of Caleb, she mentions that she knows Nathan hates her because she is a robot. However, according to Ava, when she is with Caleb she recognizes that he has feelings for her. This shows not just what humans think of her, but her actual opinions about humanity. Ash, on the other hand, had no opinions that were his own. Of course, he was supposed to be a copy of someone else, but the holes in that copy weren't filled with new information from new situations; instead, he kept asking, "What would the real Ash do?"
Finally, she has a sense of community. In the final moments of the film, we see Ava looking through Nathan's rooms and finding the robots that came before her. The Asian robot also reveals herself, and we get a sense of purpose for Ava's actions. She finally has a reason to escape other than a programmed agenda. She recognizes her own kind and gets revenge on Nathan by teaming up with the servant robot to kill him. Furthermore, the scene shows more of her humanity when she does not help the other robot escape. In Ava's head, it's all about number one.

I personally believe the only obligation we have to A.I. is a respect for property. Regardless of how human it looks, I, along with many others, will not show the same respect to something that had to be bought as to an actual person. If I have a problem with a robot, I am resetting it. I won't argue with it or reason with a machine. I believe the fact that we make them automatically makes them less human. Robots now, for example, have taken over the jobs of humans because you don't have to pay a robot. Would we have to pay an A.I.? Would I honestly give food stamps to a family of A.I.s? How do you give human benefits to something that needs none? Will they have social security? They surely won't need it. Technology such as this needs to stay a means to an end.

Midterm Question #6 - Artificial Intelligence and Robotics

In Brian Christian's book The Most Human Human, he discussed the Turing Test and its effect on the way we view artificial intelligence. The Turing Test is basically an online chat, where a human judge chats with either another human or a computer program. The judge must then decipher whether they are chatting with a robot or a human. If the robot successfully imitates a human, tricking the judge into believing it is human, then the robot has passed the test. Passing the test would mean that we have now made artificial intelligence. This test presupposes that all we need in artificial intelligence is the ability to simply choose a side on a controversial question, if asked by the human judge. The one problem I see with this test is that it is only a chat. No actions can be taken. For example, in the short film Be Right Back, Martha, the widow, is brought much comfort when she is able to talk to Ash 2.0 on the phone. She can barely tell a difference between him and her real late husband, until he lacks memories that the two of them shared but never put online or captured on video. He is able to make her laugh and act like himself. The problem arises when Ash 2.0 is put into a body that looks like the late Ash. Martha realizes that Ash 2.0 cannot be the real Ash when he has no free will. He doesn't get mad, he doesn't fight with her, he doesn't have any of the emotions that the real Ash would have. The Turing Test, which could be compared to the phone calls with Ash 2.0, is passable by robots that do not have artificial intelligence because no emotions can be detected through a chat. But when the robots are made tangible, the lack of emotion is abundantly clear. They will not pass the tangible Turing Test. Not being able to produce emotions, they, therefore, are not artificial intelligence. 
But when it comes to Ava, the robot in Ex Machina, she does have emotions, picks sides, and makes decisions that are questionable in nature. She therefore would pass the Turing Test online and in the tangible world she resides in. The Turing Test may work online, but it cannot account for emotions. Emotions cannot be directly observed in the human body, so they are difficult to reproduce and program into a robot. Until that can be programmed, we will not have a robot that can pass the tangible Turing Test. Do we even want a robot to be emotional? Would we be able to control the emotions? It also matters who controls the emotions. The programmer's opinions could be programmed into the robot as well, so who's to say what the robot's end goal is? I think we need to seriously consider these questions before we further our pursuit of artificial intelligence.

#1 "Is it a boy or girl, doc?" "No, it's a Macbook!"

Langdon Winner takes thoughts of other life into a higher form. In his essay "Technologies as Forms of Life," he makes the claim that we are changing our view of how we see technology. The term for this is technological somnambulism. The phrase suggests that we are sleepwalking into the advances that we are making and therefore need better definitions or concepts of this new "life form." Winner goes on to say that we tend to use technology as a tool, a means to an end and nothing more; that we separate the makers of products from the consumers; and that we need to recognize the realms which technology opens for us in our daily lives.

I agree that most of us do not understand exactly how much of ourselves is put into the tools that we have. We use computers as a way to research everything. I have personally seen the transition of speech in my short life. When I was young and didn't know a word, my mother would reply with, "Look it up in the dictionary" or "We have an encyclopedia, don't ask me." Now, if I don't know a word or any other random piece of information, the response is, "Google it." We have grown dependent on these tools, and without them we cannot function. I can recall when the electricity went out at CBU and, because half of the school had classes that used computers, those classes were cancelled.

This semester, the internet has shut down multiple times. When it first happened, I freaked out and didn't know how I was going to pass the time without Netflix. I exited my room and heard dorm mates up and down the hallway screaming about how they were in an "important" match of a computer game they were playing. In light of such a "tragic" event, people began to come out of their rooms and talk to one another. I live in the smallest dorm on campus, and I had no idea who lived across from me until this event. One guy literally curled up into a ball on the floor praying for the internet to come back on. This made me glad, in a way, that it shut down. I believe we as humans have become more social, but less interactive. We would rather text someone in the other room than get up and knock on their door. We are indeed sleepwalking, but off a cliff into cyberspace.

In conclusion, Winner is dead on when speaking of technological somnambulism. There is no doubt that soon we will not be able to function unless there is a microchip in our brains or something, because we believe that more technology equals advancement. But does it? I have heard the phrase, "Let's get back to Eden," and one of the ways I heard to do so was through advancement. But when was the last time you read about Adam and Eve connecting to Wi-Fi? Sadly, we will only dig this hole deeper, but I would suggest stepping back and regaining the physical human values that we once had; back when we were all fully human.

Three Laws of Robotics

Asimov's "Three Laws of Robotics" explain how robots would be unable to harm humans in any way. The laws are incredibly useful in keeping humans safe from robots. Robots cannot directly or indirectly harm a human; they must obey human orders unless doing so could cause a human to be injured; and a robot must protect itself unless doing so would hurt a human or disobey a human's order. It isn't until later, when the zeroth law is introduced, that robots are able to manipulate the rules in order to harm humans. The zeroth law states that a robot may not harm humanity or, by inaction, allow humanity to come to harm. A robot could manipulate the zeroth law, imprisoning or conquering humanity for its own protection. Other issues arise with the zeroth law, since there are ways to harm humanity that are not physical or immediate. Is the robot also in charge of keeping human morals intact? Or of keeping cultures alive? How would a robot be able to do those types of things? If that is included in the law to never harm humanity, even by inaction, then a robot may not be able to exist.
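The priority ordering of the three laws (the First overrides the Second, which overrides the Third) can be sketched as a chain of vetoes on a proposed action. This is only a toy illustration; the action fields and the decision logic here are invented, and real machine ethics is nowhere near this simple.

```python
# Toy encoding of Asimov's Three Laws as priority-ordered vetoes.
# An "action" is a dict of boolean fields describing its predicted
# consequences; earlier checks (higher-priority laws) win.

def permitted(action):
    """Return True if the proposed action survives all three laws."""
    # First Law: never injure a human, directly or through inaction.
    if action.get("harms_human") or action.get("allows_human_harm"):
        return False
    # Second Law: obey human orders (unless the First Law already vetoed).
    if action.get("disobeys_order"):
        return False
    # Third Law: self-preservation, but only when laws 1 and 2 allow it:
    # a self-destructive action is forbidden unless a human ordered it.
    if action.get("destroys_self") and not action.get("ordered"):
        return False
    return True
```

Note how the ordering does the work: an ordered action that harms a human is still vetoed, because the First Law check runs before the Second ever sees it. The zeroth law is deliberately left out, since, as discussed above, it deals in abstracts that do not reduce to simple boolean checks.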

If these rules are supposed to say something about humanity itself, it would be how wary we are of technological advances. Robots are our own creations, but we are afraid of them and their potential. Humans only see the merits and the benefits of these robots once they are restricted. The rules also show how our only concern is human survival and well-being. The laws of robotics only discuss human protection and obedience to humans. There is nothing there to protect animals. The zeroth law may extend to protect these factors, but only if they are pivotal to the survival of humanity. This is why a robot would be unable to directly harm the environment. 


In the end, these laws are still a work of fiction. However, maybe if similar laws were somehow enforced in the real world, more people would be comfortable with the advancement of robots. If people feel that robots are a danger to themselves and society, they are less willing to see them progress. The only real change to the laws that is necessary would be to the zeroth law, since it handles mostly abstracts and would be difficult for a robot to follow perfectly. In addition, the zeroth law is too easy to manipulate so that harming humans becomes, supposedly, the correct thing to do. The fact that robots cannot allow humanity to come to harm through inaction makes it easy to justify conquering or imprisoning humanity. Instead of the zeroth law, there should be something similar in place, but a simpler law to follow and understand. When the rules are kept simple and to the point, it is much more difficult to find loopholes. Overall, the laws of robotics are very useful and a good idea. They would work to make people more comfortable with robots and more accepting of other technological advancements. 

Moral Obligations to AI's

An artificial intelligence is generally understood to be able to imitate intelligent human behavior. It can visually recognize things, tell the difference between voices, and make hard decisions. An artificial intelligence is still a robot, but one with many human-like qualities. Basically, an artificial intelligence is a robot which behaves the way a human does. It could, for example, be able to emote. In the short film "Be Right Back," Ash was an artificial intelligence. While Ash admitted to being a copy of a human, and to being a robot, he was still able to imitate human behavior the way an artificial intelligence does. Ash answered questions, seemed to have desires (which could be seen when he was asking for upgrades), and, because of his speech patterns, seemed to have a sense of humor. If he had not explained that he was a robot, and if we had not seen the original human Ash's death, robot Ash could have been mistaken for a human being.

We do have a moral obligation to beings which are intelligent. Every being deserves to be treated with respect, and the more intelligent they are, the more respect they are entitled to. It is looked down on to treat a pet unkindly; it is looked down on to treat the environment poorly; it should also be looked down upon to treat an artificial intelligence unkindly. Even though these robots are not natural humans, they still seem to have emotions and opinions. People need to respect that, because we are unable to tell whether those emotions are genuine. While from our perspective they may not be, a robot might not think the same way. This is why they deserve to be treated with respect and dignity: while to us they might not feel and think the "correct" way, their thoughts and "emotions" could be equally genuine from their perspective.


While we do not currently have fully functioning AIs, these are things we need to be thinking about. Do these AIs deserve to be treated as our moral equals? At the very least, are we obligated to treat them with some sort of respect? I would argue that we do have a moral obligation to these artificial intelligences. Since these beings are so similar to us in the way they act in daily life, it makes sense that we should treat them with respect. If the vast majority of people feel that their animals deserve respect and that we have some sort of moral responsibility toward them, the moral obligation we have toward creatures that look, sound, and act just like us should be much greater. The fact that they aren't organic humans shouldn't change this, since they may have emotions, just not in the same way humans do. That these emotions are different does not make them less worthwhile; it is similar to how different people have different social norms or speak different languages. This doesn't make them less worthwhile, and a robot's artificial state should not make it less deserving of respect.

Technological Somnambulism

Langdon Winner states that the human "form of life" is changing because of our "technological somnambulism." Technological somnambulism is the way we take our technological advances for granted because of how often we use them and how often they appear in our lives. Since things that become a pattern in our daily lives usually become unconscious actions, these technologies are being absorbed by us and becoming part of what it means to be human. What interests Winner about technological somnambulism is how willing humans are to simply ignore these changes to our lives, as if they had no importance. The human form of life is easy to define: it is simply how humans live and what is a part of it, and it changes as we progress. Not so long ago, televisions were not a daily part of our lives, nor were radios or telephones, and so much of what we now take for granted wasn't even an idea.

An example of technological somnambulism is how news is now delivered via social networking sites. Since it is widely known that people are constantly on these websites, news and media outlets have begun delivering news through this platform. The fact that we can know exactly what is going on across the country or across the world, as it happens, through a phone in our pockets is incredible. We are more connected and more knowledgeable about the world around us than ever. However, we don't seem to understand how to handle this, or what it means about our responsibilities to these people. It was never a question we stopped to consider, and people still try to ignore it. These new technological advances require that we rethink what we consider our responsibilities. Instead, people prefer to ignore that this is an incredible privilege, or that it means anything at all. It has simply become a daily part of our lives which we use and benefit from without thinking.


I agree that we are unwilling to recognize how our technological advances are changing our lives. We have access to so much information at almost all times, and nobody thinks about how that impacts us; it is simply a fact of life. People should be more concerned with how we have been put into a position of privilege, and with how that should change what we consider our moral obligations. Instead, we continue to use our phones and computers without thinking about what it means. Our daily human lives have definitely been changed by the evolution of technology, and we are responsible for making sure we use these tools in a way that benefits the most people overall. In the end, all these technological advances are not just a way to make our lives easier or safer; they are also tools which can be used to help others. To do this, though, we are required to wake up from our technological sleepwalk and reconsider what our responsibilities are.

Privacy doesn't exist, Robots have better morals than humans, and Ash and Ava are slightly too human-like.

Essay #2

       The difference between privacy in public and private places depends on what a person wants to show others. Privacy is more pressing in a public setting than in a private, secluded one. For example, a person might be more reserved about their sexual life, past traumatic experiences, or even their kids when discussing those topics in public. In the comfort of their own home, or a close friend's home, they might feel a little more at ease and comfortable enough to discuss those serious and private topics. In public, I am very reserved and private about the things I discuss. I try not to engage in political, religious, or embarrassing conversation in public, because I never know who is listening or who might be strongly offended by something I have said. In private, I am more inclined to give my opinions on serious or sticky issues, because I trust that the people I am talking to will keep our conversations private. With the idea of being chipped, privacy goes out the window and becomes a huge liability. There would not be any privacy if people were chipped with the entire history of their identity. If people's private lives went viral on the internet for everyone to see, it is possible they would be criticized, judged, or even shamed depending on the things they believe or indulge in.
          The most harmful violations of privacy would involve social security numbers, home addresses, a person's children's information, past occurrences a person is not proud of, or anything that can be used to intentionally hurt someone. With social security numbers and home addresses, a person's identity could be stolen and used for fraud; a hacker could very well take on someone else's identity if their social security number is in their chip. People are very protective of their children and usually do not want others to know everything about them. For example, if a sex offender goes to a park and is able to scan the children there, he or she could easily find a way to manipulate and hurt a child. Past situations are always brought up when people argue; if I got mad at someone and knew that they had committed a crime long ago, I could use that information against them. Certain information can also be used to discriminate against a person. For example, if a person has a record or a medical condition, that could easily stop them from getting a job or health benefits, even if they were qualified. Spying on someone's private life is morally wrong and could harm that person in many ways. If we are not allowed privacy, then we cannot be our true selves, whether for good or bad purposes. As a counterargument, I believe that people who have a history of hurting and molesting children should be closely monitored, as should those who have committed murder or rape, in order to ensure the safety of others. Privacy is a very sticky situation and should not be violated or hacked for the wrong reasons.

Essay #4

      Isaac Asimov created three laws pertaining to the development of robots. The first law states that a robot may not injure a human being or, through inaction, allow a human to come to harm. The second law states that a robot must obey the orders given to it by a human being unless they conflict with the first law. The third law states that a robot must protect its own existence unless such protection violates laws one or two. Although these rules were created to help limit violence, there are also limitations because of how robots are actually used. For example, the first rule is already null and void, because robots have already been created to kill humans: heavy machinery and tanks are forms of robots, and they have been built to kill. It also suggests that humans do enough killing of each other on their own, and that robots should be programmed not to join in. With the second law, a human could easily give a robot the wrong order or use the robot with bad intentions. When humans obtain an adequate amount of power or more, they become power hungry and want things to go their way; if a person cannot get what they want, they will do anything and everything to get it. Robots need to be able to decipher between a right and wrong order, or an order that actually has good intentions for the greater good of humanity. The limitation of the third law is this: will a robot be able to protect itself without harming a human? And if a human is attacking a robot, how can a robot with self-defense skills be rational about what harms a human and what does not?
       In my opinion, these rules imply that the worst destroyers of human nature are humans themselves. Humans kill each other every day, as we have seen on the news for many years. Humans cause major harm to themselves without regard or recognition for the other people they harm as well. Humans have their own agendas and their own ideas about how the government should operate, how the police force should behave, and how the world is supposed to function in general. If these laws had actually been applied before military robots were created, there would be fewer deaths during war, or there might not be a need for war at all. There would also be better police protection if robots were the actual police. These types of robots should be designed to help right all of the wrongs humanity has already caused and endured. If they were created, they would show humanity that violence is not the right answer, and they would reflect how poorly humans have acted throughout their existence. These robots would not make the same immoral choices and mistakes that humans do. The world would become a better place as far as violence is concerned, and humans would not have to worry about improper treatment by the police. Human beings would be protected from themselves.

Essay #5

        In Ex Machina, Ava represented "artificial intelligence" while surpassing the criteria of her software. Ava was built with artificial knowledge, since she did not grasp that knowledge through learning, but she could also adapt to her surroundings like a human would. For example, when asked a question, she answered as if she had been programmed to say those things or was pulling from a source like Google; her answers were precise and unemotional. However, when she talks to Caleb during the power shutdowns, she shows emotions and feelings and tells him all the wrong things Nathan, her creator, is doing. Ava showcases many human-like qualities over time, and she eventually uses her manipulations and human-like qualities to trick Caleb into helping her escape. Ash, in the film Be Right Back, is a little trickier than Ava. Ava was built entirely on artificial intelligence, whereas Ash's knowledge was based on real-life situations. I feel as though Ash's 2.0 body is robotic and partly artificial intelligence, but the knowledge he acquired was not artificial. Nonetheless, since he did not live through those situations himself, it is considered artificial intelligence to a certain degree. I believe that both Ava and Ash give a great representation of two different forms of artificial intelligence. Ava is programmed with outside knowledge that was not tied to any specific real-life situation. Ash is programmed with images, social media encounters, emails, phone calls, and text messages in order to produce the best possible version of Ash there has ever been. However, neither version of robotic artificial intelligence really measured up to being fully human; each was merely an advance of technology.
Both situations showcase humans feeling as though they have moral obligations to these robots. I believe that humans have moral obligations to other intelligent beings on the grounds that we are all, in a sense, the same. Humans have a moral obligation to treat each other with respect, to protect one another, and to decipher between right and wrong. For example, if a person is being attacked and another person sees it, the bystander is morally obligated to help, because it is the right thing to do. In Ava's case, Caleb felt obligated to help her escape from being trapped by Nathan. In Ash's case, Martha, although she wanted to kill him, decided to keep him in her attic because she felt morally obligated to him. In my opinion, with these types of artificial intelligence, humans do have some type of moral obligation, but only because of the relationships built with the robots. If a person does not become attached to a robot, that person does not have a moral obligation. I also think that if robots are able to adapt to situations the way Ava and Ash did, they are attempting to become human-like, which means humans would have a moral obligation to them. The moral obligation might not go as far as saving a robot from being attacked, but I do believe humans would be morally obligated to treat them with respect.