Moral Questions I Have Yet to Resolve

Talk about a change in perspective… Before reading the second half of Philip K. Dick’s “Do Androids Dream of Electric Sheep?” I felt no sympathy whatsoever for any of the androids in the novel. Anything lacking true life I deemed artificial and undeserving of my respect. Animate or inanimate, intelligent or not, machines would simply never earn my respect. However, after reading the second half of the novel, I began to contemplate the definition of life: a definition that goes beyond scientific explanation. I am writing this blog still trying to decipher the qualities of these fictional androids and the individual rights those qualities entail. Is retiring an android worse than breaking a television set? Is blasting an android in the head better than shooting a human in the brain? Just as “general laws are not inviolable truths” (Newman 224; Abstractions Website 1), the distinctions between android, man, and animal are not as clear as we perceive them to be.

[Image: a broken television]

What makes breaking this television better than terminating an android? At what point do we assign inalienable rights?

I stubbornly read through the novel determined to prioritize human life over that of the androids, and even the animals. As the novel’s main character, Rick Deckard, journeys on his bounty hunting expedition, his thoughts on androids begin to change. His “disposition to treat human beings and animals with consideration and compassion” (Anthology 274) stretches into the “lives” of androids. I put the word lives in quotation marks because I noticed a clever change in the author’s descriptions of the androids as the novel progresses. When Rick begins to see androids as more than expendable machinery, Dick begins to describe androids like Luba Luft as “-at least briefly-alive” (Do Androids Dream of Electric Sheep 131). I question myself as I read the book: “Why do we deserve more rights than androids?” “Is it pain, origin of creation, or empathy that distinguishes us from them?” “Is it none of the above?” Not only do I fail to reach answers to these questions, but I become even more confused as the novel continues and my moral grounding begins to crumble.

[Image: a happy robot]

Does a robot sense enough, feel enough, think enough to have the right to exist? Can a robot really be "happy" like this one?

The differences between human and android become extremely questionable and complex when Deckard is taken to the justice department building on Memorial. In this situation the androids had not only created a perfect replica of a societal function normally run by humans, but they had also hired a human, Phil Resch, as a bounty hunter of their own kind. Furthermore, many of these androids were desperate to be “humane.” Even before Luba Luft’s assassination, she tells Deckard and Resch that her “life consisted of imitating the human, doing what she would do, acting as if I had the thoughts and impulses a human would have. Imitating, as far as I’m concerned, a superior life form” (Do Androids Dream of Electric Sheep 132). Even after Luba reveals her thoughts on mankind, I still remained uncertain about mankind’s “superiority.” The deterioration of human emotion and the hypocritical killings humans perform on androids made that superiority hard to accept. Even Deckard sees the cruelty in his job, and during this revelation he uncovers new truths and perspectives.

While this is not particularly my taste in music, this song talks about robots that “will do anything for you.” The song is titled “Robots Have Feelings Too.”

He considers quitting his job, questioning his ability to sympathize with the android Luft and his inability to empathize with the human Resch. During this self-questioning, he briefly notes that Luba Luft had done nothing wrong, asking himself how “a talent like that can be a liability to our society” (Do Androids Dream of Electric Sheep 135). Maybe this is the question of morality I have been trying to answer:

“Who should we be able to destroy/kill, and when is it acceptable?” Perhaps it is possible to judge a machine, animal, human, or any other thing, for that matter, solely on the actions it is responsible for. On the other hand, one could argue that taking the life of anything is always wrong. But do androids have life? What is it that guarantees something or someone’s right to exist? I can’t help but get trapped in this circular reasoning. I finish this blog frustrated and unsatisfied, knowing that my attempts at defining life and assigning value to the lives of different creatures, men, and (in this case maybe even) machines remain fruitless.

[Image: a thinker]

I find most of my moral questions unanswered. I suppose it will take more time for me to figure out what defines life and the rights associated with it.

