Friday, October 16, 2015

Privacy is Dead, My Phone is an Appendage, & Unnaturally Natural Life: A Series of Miniature Essays by Brock Swims

#2: Privacy is Dead
Privacy seems to have taken on a negative connotation in the years since 9/11. We reacted impulsively as a nation, out of fear, and gave parts of our government too much power via the Patriot Act. This led to a massive invasion of American privacy that is still happening today. The author and I agree that a lack of privacy causes harm. The subtle violations we have been putting up with for years, like the NSA collecting the metadata of our phone conversations while we do basically nothing about it, have snowballed. They have reshaped our expectations of privacy, in public and in private spaces alike, into something too open, too willing to share.
The expectation of privacy always depends on where you are and who is around. When people are in their own homes, they naturally expect whatever they do there to remain unknown to the rest of the world. When people are out and about, that expectation mostly disappears, but it is not that simple either: sometimes we create an expectation of privacy by choosing what looks like a safe room or corner to talk in. Nowadays, though, any space that contains a smartphone is not truly private.
These smartphones, regardless of brand, version, or any other details, all perform basically the same functions, such as recording and storing pictures, audio, and video. And the majority of apps nowadays require broad access to these capabilities before they will fully work, access the user grants with a single tap, as the sketch below illustrates.
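To make that concrete, here is a minimal sketch, in Kotlin, of how a modern Android app requests camera and microphone access at runtime. The class and function names are hypothetical, invented purely for illustration; the permission calls themselves are the standard Android ones.

import android.Manifest
import android.content.pm.PackageManager
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Hypothetical activity; sketches how an app asks to record.
class RecorderActivity : AppCompatActivity() {

    private val REQUEST_CODE = 42 // arbitrary ID echoed back in the result callback

    fun ensureRecordingAccess() {
        val wanted = arrayOf(
            Manifest.permission.CAMERA,
            Manifest.permission.RECORD_AUDIO
        )
        // Keep only the permissions the user has not yet granted.
        val missing = wanted.filter { p ->
            ContextCompat.checkSelfPermission(this, p) != PackageManager.PERMISSION_GRANTED
        }
        if (missing.isNotEmpty()) {
            // The OS shows a dialog; once the user taps "Allow",
            // the app keeps that access every time it runs.
            ActivityCompat.requestPermissions(this, missing.toTypedArray(), REQUEST_CODE)
        }
    }
}

One tap of "Allow" and the app can record whenever it is running, which is exactly why it is worth imagining what a company might do with that access.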
Now put yourself in the shoes of a company operating in today's mostly capitalist society. Big data is big money, as we have seen. Companies are going to exploit every possible thing they can to make a profit, as that is the purpose of a capitalist business. If it weren't for the socialist regulations we place on businesses, blatant abuses of privacy would probably be much more common, since there would be fewer penalties (or possibly none at all) for companies that fail to use customer data appropriately and keep it safe. But even with all the legislation we have come up with as a nation over the years to better protect our data, we still cannot prevent violations of privacy, partly because we are not sure who exactly the enemy of privacy is.
The legislation currently in existence (to my knowledge) is only concerned with keeping customer data safe while it is in a company's hands; basically, it defends only against hackers. That is very good, of course, as there are plenty of hostile entities (criminal hackers, nation states, etc.) constantly trying to gain access to this data in order to exploit it. But companies are also violating American privacy in broad daylight, with no penalty, all the time. They buy data about people from brokers whose sole purpose is to collect information on Americans from other companies. They access data about their customers that those customers never knew existed, let alone needed protecting. These are the violations I find most harmful, since they play off of ignorance; the companies whose apps we use are learning far more about us than we ever intended or could have imagined. Together, US businesses and government agencies commit an ungodly and unjust number of privacy violations every day, through programs like the NSA's bulk collection and through products that find their way into the hands of many Americans, namely smartphones.

#3: My Phone is an Appendage
From my understanding, "The Singularity" is the idea that humanity will eventually merge with its technology, its machines and robots, in some way that leads to an age of superintelligence and probably to android humans. Like the rest of humanity, I have no way of knowing the future, but from what I have experienced in my own lifetime, I would say this merging of humanity and machinery is already taking place. One reason I say this is my own experience with smartphones.
I have had a smartphone ever since I was fourteen years old. The potential these devices held was unseen at the time, but now, in hindsight, part of that potential has made itself clear to me. Smartphones have ingrained themselves so deeply in our society within just the past decade that it would almost be silly of me NOT to expect some sort of merging of human and machine. Their hardware and software keep improving and growing more complex year by year; new devices and new functionality pop up all the time. We keep bringing these devices closer and closer to us, in an abstract sense AND physically.
For example, for the past seven years I have had a smartphone on or near me almost every single minute of every single day. Any time I am without my phone for even a short while, I start to physically feel bad: my anxiety rises rapidly and my emotions become heightened. My smartphone is so valuable to me that I treat it like an appendage, like any other part of my body. Given how today's society is built around humans and their technology, this smartphone is the only thing that allows me to truly keep up with everyone around me. This leads me to ask: if this one technology can change things so vastly, why couldn't another, more advanced technology have a different but amplified effect?
Whether or not "The Singularity" actually happens, its potential effect on humanity could be great enough to destroy us or possibly to save us. For all we know, we might create artificial intelligence that turns malicious and destroys humanity. We might also create AI that never goes bad and ends up helping or even saving humanity (especially given the environmental damage we have caused over the past century). But we will never truly know until we get there. For now, I think it is a neutral concept, neither good nor bad. We simply do not know enough about the technologies that will exist by then, and we are not close enough to that future to really know. Whether the merging of humanity and technology turns out to be good or bad depends entirely on results that none of us will likely live to see.

#4: Unnaturally Natural Life
If artificial intelligence exists whenever a machine can successfully trick humans into thinking they are talking to another human, then Ash is absolutely an example of AI. Personally, I would define AI as a robot or computer that becomes self-aware: a special kind of intelligent being separate from humans and other animals, since it would be made of inorganic material. But under either definition, I am still inclined to think Ash qualifies. At first, I saw Ash purely as imitation software, but the robot Ash seemed to become a new version of the old Ash, especially once he started imitating emotions toward the end of the film.
When it comes to morality and how we should treat other intelligent beings, I think we should remain consistent with what I believe the overwhelming majority of humanity has done for most of history: treat other intelligent beings, regardless of differences in intellect, with respect, and improve their quality of life along with ours as much as possible. This sort of moral obligation ought to hold throughout the universe, whatever form life takes. Since we obviously do not currently know whether there is life beyond Earth, the obligation is relevant only to life here; but I think it definitely applies to future forms of intelligent life as well, which I expect to arrive within the next hundred years or so, if not sooner.
The fact that an artificially intelligent machine is inorganic and not "natural" should have nothing to do with our moral obligations toward it, as it is still a form of intelligent life. An inorganic, robotic form of life may not have gone through what organic life went through to get here, namely evolution and natural selection, but that does not mean life cannot arise some other way. Maybe this is evolution itself evolving: if humans can create a more durable form of intelligent life, then deliberately building beings as intelligent as ourselves, or more so, may be the next step up the evolutionary ladder. Regardless, artificially intelligent machines, if they truly are intelligent, self-aware beings, should be treated with the same respect and concern we show other forms of life.
Another moral obligation toward a form of life we create might make the human race something like a parallel to the concept of a creator or god. Humans have looked to religion for an explanation of our origins; imagine if another intelligent being had made us. We would want some explanation from them of why we are here. We would want to know our purpose and who we are. We would have about a thousand questions. So I feel we would be morally obligated to guide and care for any artificially intelligent machines we create, provided they are similar enough to us. If they are not human in the way we are (meaning they do not want to know everything there is to know about the universe and our purpose in it), then we have no such obligation; otherwise, they should be treated almost like our children.

