Monthly Archives: March 2012

Investigating Ethics (5/5)

Over the past four posts, I have examined the difference between ethics and morals, and the purpose of each. So far, I have determined that an individual cannot function practically as an ethical absolutist (one who adheres to a single ethical code), and that the function of morality is to make judgments about ethical decisions. What I have not determined is the purpose of judging ethical decisions. I have examined the argument that morality judges ethical decisions to push us toward better societal interactions and self-preservation, but have dismissed these notions on the grounds that, on the whole, morality has not changed human interaction in many significant ways. In thousands of years we have not created utopia; instead we continue to wage war, overlook genocide, and justify racism, sexism, and slavery. We even make jokes about subjects such as rape or abuse, perhaps subconsciously because we have accepted that those things will always exist and would rather learn to laugh than be stuck in perpetual bewilderment or sadness.

I have determined that the only purpose for morality that I can know to be true is that morality exists to show me my own failure. In some ways it conditions me to make better choices, though I have realized that my moral conscience will never prevent me from making all the bad decisions that I inevitably will. What I understand from this is confirmation of what my faith tells me: that I will inevitably live improperly, and that I am in need of a savior.


Investigating Ethics (4/5)

In my last post, I examined the reason for which morality judges ethical decisions, and found only that morality judges ethics to make known to us our failure in a particular decision. In it, I made the observation that perhaps one’s ethical code is not intended to be relativistic, but that our moral conscience makes a judgment when we leave a specific ethical framework. In that post, I proposed that the framework might in fact be virtue. I chose virtue out of all the ethical frameworks because I asked myself under what contexts I have experienced a moral conviction. I found that I experience moral conviction when I have personally caused some harm in a situation, or know that I will cause harm through my decision. This harm does not need to be strictly defined: I will experience regret whether I knowingly harm someone who is innocent, guilty, familiar, or unknown, or even myself. It should be noted that regret occurs when a choice is made, not when a situation is beyond one’s own control.

I believe that, if followed, virtue ethics would free an individual from regret in decision-making. However, it also appears impossible to make a virtuous decision when choosing the lesser of two evils. When one has to choose between “less wrong” and “more wrong”, the decision is ultimately not a virtuous one. Virtue ethics maintains that decisions must be made by determining and choosing the outcome that is morally “right”, but in the context of a situation where there is no moral “right”, the individual must refuse to choose, or must otherwise change the ethical framework under which he is operating. And so it would seem, again, that there is no functional ethical absolute.

Which leaves us where we were, asking, “why does morality judge ethical decisions?”

Investigating Ethics (3/5)

In my last post, I examined the difference between morality and ethics: ethics is a framework for decision-making, while morality exists seemingly only to judge the decisions that are made under that ethical framework. What I would like to examine in this post is the purpose for which morality judges ethical decision-making.

After much contemplation I have come up with the following progression.

Morality judges ethics for the purpose of:

  • Reminding us that we have failed.
  • Imploring us to learn from our mistakes, or the mistakes of the situation.
  • Pushing us to strive for an ideal in our decision-making.

But even these I am at a loss to understand. It would seem that regardless of how conscious we are of our morality, we continue to enter situations where we disregard its judgment. Of course, I will not suggest that we are walking a path of unending self-destruction by disregarding morality in all of our decisions; most of the time we appear to make morally positive choices, though it is undeniable that in many cases people choose to perpetuate their own self-destruction. For example, a man chooses to cheat on his wife. He knows this is wrong, yet he disregards that judgment and cheats once, and very likely again and again. For this reason, I must refute the latter two observations, because it appears that morality has so far been unable to condition human society into utopia.

The first observation is the most difficult, because it is true yet incomplete. It is a correct observation of the effect of moral judgment, though from it we have only arrived at the statement: “Morality judges ethics for the purpose of making one’s own failures known.” At the moment, the only further observation I have found is that perhaps my initial judgment (that ethics should be relativistic) is false, and our conscience is there to make some observation of its own; perhaps that virtue is always the better choice.

Investigating Ethics (2/5)

Over the San Francisco trip I enjoyed a conversation with one of my peers about Isaac Asimov’s book I, Robot, in which robots are programmed with an ethical code, the Three Laws of Robotics:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Ultimately, the robots decide that the only way to operate optimally under this system of ethics is to take control of human society, and they revolt, killing many people. The idea behind these actions is that humans are inherently self-destructive, and fewer people would be killed if the robots eliminated the humans most destructive to the greater population. An interesting idea on its own, but it made me wonder whether one day there will be a computer that can comprehend morality.
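The Three Laws form a strict priority ordering: harm to humans outweighs disobedience, which outweighs harm to the robot itself. That ordering can be sketched as a lexicographic preference over candidate actions. This is only a minimal illustration of the idea; the action structure and names are my own invention, not Asimov’s.

```python
# Hypothetical sketch: the Three Laws as a lexicographic preference over
# candidate actions. Lower tuples are better; harm to humans dominates
# disobedience, which dominates harm to the robot itself.

def choose_action(candidates):
    """Pick the action whose law violations are smallest in priority order."""
    return min(
        candidates,
        key=lambda a: (a["harms_human"], a["disobeys_order"], a["harms_self"]),
    )

# A human orders the robot to do something harmful: the Second Law
# (obedience) yields to the First (no harm to humans).
candidates = [
    {"name": "obey order",   "harms_human": True,  "disobeys_order": False, "harms_self": False},
    {"name": "refuse order", "harms_human": False, "disobeys_order": True,  "harms_self": False},
]
print(choose_action(candidates)["name"])  # refuse order
```

Note that this kind of ordering resolves conflicts between the laws mechanically, which is exactly what makes the robots’ conclusion in the story possible: no judgment is involved, only ranking.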

In my last post, I discussed the difference between morality and ethics, and it can be said that any computer program has an ethical framework under which it operates. It is not called that, but essentially the program is written to respond in a certain way when a specific input is given: =IF(X=Y, "yes", "no") at the most simple level. I imagine that this code could be extrapolated to the point where a computer could be pre-programmed to answer any number of expected situations with the appropriate response, even when they begin to conflict with each other. You could even write a piece of code so long that the machine would cycle through thousands of responses to find the appropriate one. But the difference comes with the unexpected. You see, we may all expect to operate as ethical absolutists until the point where we realize that the only way to preserve our humanity is through virtue. At that point, our entire framework shifts, because we have a moral conscience that instructs us that to continue operating under the old framework conflicts with either our idealism or our pragmatism, and that to do so is indescribably “wrong”.
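The idea above, scaled up from a single =IF to a table of pre-programmed responses, might look like the following sketch. The situations and responses are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch: a machine's "ethics" as a lookup table mapping
# expected situations to pre-programmed responses.

RESPONSES = {
    "greeting": "say hello",
    "threat": "retreat",
    "request": "comply",
}

def respond(situation):
    # The machine can cycle through any number of expected cases...
    if situation in RESPONSES:
        return RESPONSES[situation]
    # ...but an unexpected situation matches no rule. There is no framework
    # shift and no conscience here, only the absence of an answer.
    return None

print(respond("threat"))         # retreat
print(respond("moral dilemma"))  # None
```

However long the table grows, the unexpected case always lands in the final branch, which is the difference I am pointing at: the machine has no mechanism for abandoning its framework, only for running out of rules.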

In a way, it may seem as though morality is simply another ethical position, though at the same time it is something entirely different. Morality can be overlooked, but it cannot be denied, just as actions can be justified and abhorred at the same time. Morality does not decide action; instead, it judges action. But the real question is for what reason does it judge?

Investigating Ethics (1/5)

If you’ve ever wondered what the world might look like during the zombie apocalypse, I invite you to watch an episode of the new AMC series “The Walking Dead”. My roommates and I have formed our own cult this year on Sunday evenings at nine, gathering to examine the places where the human mind will travel when humanity is lost. In this post, I would like to examine the complexity of normative ethics.

This season’s finale ends with the hero of the story murdering one of the main characters. After continual threats from this man, the hero finds himself led into a field at gunpoint to be killed, a situation he has been in time and time again with the same man, and one in which he had always succeeded in talking him down. This time, though, the outcome would be different. The hero, who had maintained a virtuous ethical position throughout the story, now found himself unable to deny the danger of allowing this man to go on living. Ultimately, he forgoes virtue ethics in exchange for utilitarian ethics, choosing to kill the man.

This poses an interesting question: is there no such thing as a functional absolute ethical stance? If one permanently operates under a utilitarian code of ethics, individual people will be done injustice, though if virtue is employed, at times greater harm may be done to the larger group.

This is where ethics and morality differ. While normative ethics, being the framework under which decisions must be made, may be exchanged to produce the best outcome, morality simply is itself. One can justify the death of three to save ninety-seven, but one cannot reasonably argue that the death of those three was a good thing. War may at times be the most practical solution, though it can never be a desirable one. We all know the glory of the U.S. involvement in WWII; it was a great thing that we moved to end the conquest of the Third Reich, though we seldom consider the lives of the millions of German men who were drafted into the Nazi army to die for their country.

I will continue this discussion in my next post.