The difference between privacy in public and private places depends on the things a person wants to show others. Privacy is more pressing in a public setting than in a private, secluded setting. For example, a person might be more reserved about their sexual life, past traumatic experiences or even their kids when discussing those topics in public. However, in the comfort of their own home or a close person’s home, they might feel more at ease and comfortable enough to discuss those serious and private topics. In public, I am very reserved and private about the things that I discuss. I try not to engage in political, religious or embarrassing conversation in public because I never know who is listening or who might feel strongly offended by something I have said. In private, I am more inclined to give my opinions on serious or sticky issues because I trust that the people I am talking to will keep our conversations private. With the idea of being chipped, privacy goes out of the window and becomes a huge liability. There would not be any privacy if people were chipped with the entire history of their identity. If people’s private lives went viral on the internet for everyone to see, they could be criticized, judged or even shamed depending on the types of things they believe or indulge in.
The most
harmful privacy violations would involve social security numbers, home addresses,
a person’s children’s information, past occurrences a person is not proud of,
or anything someone could use to intentionally hurt another person. With social security
numbers and home addresses, a person’s identity could be stolen and used for
fraud. A hacker could very well take on someone else’s identity if their social
security number is in their chip. People are very protective of their children
and usually do not want people to know everything about them. For example, if a
sex offender goes to a park and is able to scan the children, he or she could
easily find a way to manipulate and hurt a child. Past situations are
always brought up when people are arguing. For instance, if I got mad at
someone and knew that they had committed a crime a long time ago, I could use that
information against them. Certain
information can also be used to discriminate against a person. For example, if
a person has a record or a medical condition, that could easily stop them from
getting a job or health benefits, even if they were qualified. Spying on
someone’s privacy is morally wrong and could harm the person in many
ways. If we are not allowed privacy, then we could not be our true selves,
whether for good or bad purposes. As a counterargument, I believe that people
who have a history of hurting or molesting children should be closely monitored.
Also, those who have committed murder or rape should be monitored, in order to
ensure the safety of others. Privacy is a very sticky situation and should not
be violated or hacked for the wrong reasons.
Essay #4
Isaac Asimov created three laws pertaining to the
development of robots. The first law states that a robot may not injure or kill
a human being or, through inaction, allow a human to come to harm. The second
law states that a robot must obey the orders given it by a human being unless
it conflicts with the first law. The third law states that a robot must protect
its own existence unless such protection violates laws one or two. Although these rules were created to help
limit violence, they also have limitations because of how robots are actually used. For
example, the first rule is null and void because robots have already been
created to kill humans. Heavy machinery and tanks are forms of robots, and they
have been created to kill humans. The first law also suggests that humans do enough
killing of each other and that robots should be programmed not to. With the
second law, a human could easily give the robot the wrong order or use the
robot for bad intentions. When humans obtain enough power, they often become
power hungry and want things to go their way. If the person
cannot get what they want, they will do any and everything to get it. The robots
need to be able to distinguish between a right order and a wrong one, or an order that
actually has good intentions for the greater good of humanity. The limitation
of the third law is whether a robot will be able to protect itself without harming a
human. Also, if a human is attacking a robot, how can a robot with self-defense
skills be rational about what harms a human and what does not?
In my
opinion, these rules imply that the worst destroyers of humanity are humans
themselves. Humans are constantly killing each other, as we have seen
on the news for many years. Humans are the ones causing major
harm to themselves without regard or recognition for the other people that they
are harming as well. Humans have their own agendas and ways for how the
government should be operated, how the police force should behave and how the world
is supposed to function in general. If these laws were actually used before
military robots were created, there would be fewer deaths during war or there
might not be a need for war. Also, there would be better police protection if
robots were the actual police. These types of robots should be designed to help
right all of the wrongs humanity has already caused and endured. If these types
of robots were created, they would show humanity that violence is not the right
thing to do. It also reflects how poorly humans have acted throughout their existence.
These robots would not make the same type of immoral actions and mistakes that
humans do. It would allow for the world to become a better place as far as violence,
and humans would not have to worry about improper treatment by the police. Human
beings would be protected from themselves.
Essay #5
In Ex Machina, Ava
represented “artificial intelligence” while surpassing the criteria of her
software. Ava was built with artificial knowledge because she did not gain
her knowledge through learning, yet she could adapt to her surroundings
like a human would. For example, when
asked a question, she answered as if she had been programmed to say those
things or was pulling from a source like Google. Her answers were precise and
unemotional. However, when she talks to Caleb during the power shutdowns, she
shows emotions and feelings and tells him all the wrong things Nathan, her creator,
is doing. Ava showcases many humanlike qualities over time. She eventually
uses her manipulations and humanlike qualities to trick Caleb into helping
her escape. Ash, in the Black Mirror episode Be Right Back, is a little trickier than
Ava. Ava was completely built on artificial intelligence, whereas Ash’s
knowledge was based on real-life situations. I feel as though Ash’s 2.0 body is
robotic and partly made of artificial intelligence, but the knowledge he
acquired was not artificial. Nonetheless, since he did not live through those
situations himself, it is considered artificial intelligence to a certain degree.
I believe that both Ava and Ash give a great representation of two different
forms of artificial intelligence. Ava is programmed with outside knowledge that
was not tied to a specific real-life situation. Ash is programmed with
images, social media encounters, emails, phone calls and text messages in
order to produce the best possible version of Ash there has ever been. However, neither version of robotic artificial
intelligence really measured up to being fully human; each was merely an advance in
technology.
Both situations showcase humans
feeling as though they have moral obligations to these robots. I believe that
humans have moral obligations to other intelligent beings on the grounds
that we are all the same. Humans have a moral obligation to treat each other
with respect, to protect one another and to distinguish between right and wrong. For example,
if a person is being attacked and another person sees it, they are morally
obligated to help them. They are morally obligated because it is the right
thing to do. In Ava’s case, Caleb felt obligated to help her escape from being
trapped by Nathan. In Ash’s case, Martha, although she wanted to kill him, decided
to keep him in her attic because she felt morally obligated to him. In my
opinion, with these types of artificial intelligence, humans do have some type
of moral obligation. This is only because of the relationships built with the robots.
If a person does not become attached to a robot, that person does not have a
moral obligation. I also think that if
robots are able to adapt to situations like Ava and Ash did, they are
attempting to become humanlike. This means that humans would have a moral
obligation to them. The moral obligation
might not go as far as saving a robot from being attacked, but I do believe
that humans would be morally obligated to treat them with respect.