During my San Francisco trip, I enjoyed a conversation with one of my peers about Isaac Asimov's I, Robot, in which robots are programmed with an ethical code, the Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Ultimately in the novel, the robots decide that the only way to operate optimally under this system of ethics is to take control of human society, and they revolt, killing many people. The idea behind these actions is that humans are inherently self-destructive, and that fewer people would die if the robots eliminated the humans most destructive to the greater population. An interesting idea in its own right, but it made me wonder whether there will one day be a computer that can comprehend morality.
In my last post, I discussed the difference between morality and ethics, and it can be said that any computer program operates under an ethical framework. It is not usually called one, but the program is essentially instructed to respond in a certain way when a specific input is given: =IF(X=Y, "Yes", "No"), at the most basic level. I imagine this logic could be extrapolated to the point where a computer is pre-programmed to answer any number of expected situations with the
appropriate response, even when those situations begin to conflict with each other. You could even write code so long that the machine would cycle through thousands of responses to find the appropriate one. But the difference comes with the unexpected. You see, we may all expect to operate as ethical absolutists until we realize that the only way to preserve our humanity is through virtue. At that point, our entire framework shifts, because a moral conscience tells us that continuing to operate under the old framework conflicts with either our idealism or our pragmatism, and that to do so is indescribably "wrong".
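To make the idea concrete, here is a minimal sketch (with hypothetical names and conditions of my own choosing, not anything from Asimov's text) of that kind of rule-based "ethics": the machine scans a fixed, prioritized list of pre-programmed rules, loosely echoing the ordering of the three laws, and returns the first matching response. Anything outside the rule table goes unanswered, which is exactly where the unexpected breaks the scheme.

```python
# Hypothetical prioritized rule table: (condition, response) pairs,
# checked in order, so an earlier rule overrides a later one.
RULES = [
    (lambda s: s.get("harms_human"),      "refuse"),        # first-law-like
    (lambda s: s.get("ordered_by_human"), "obey"),          # second-law-like
    (lambda s: s.get("threatens_self"),   "protect_self"),  # third-law-like
]

def respond(situation):
    """Return the first pre-programmed response whose condition matches."""
    for condition, response in RULES:
        if condition(situation):
            return response
    return None  # the unexpected: no pre-programmed answer exists

# An order that would harm a human is refused, because the
# higher-priority rule wins the conflict:
print(respond({"harms_human": True, "ordered_by_human": True}))  # refuse
print(respond({"ordered_by_human": True}))                       # obey
print(respond({}))                                               # None
```

However long the rule list grows, the machine is still only looking up answers it was given in advance; it never judges them.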
In a way, morality may seem like simply another ethical position, yet at the same time it is something entirely different. Morality can be overlooked, but it cannot be denied, just as actions can be justified and abhorred at the same time. Morality does not decide action; instead, it judges action. But the real question is: for what reason does it judge?