We’ve been talking extensively about robots so far, so let’s switch gears today and take a look at computers. I was originally going to cover a bevy of espionage films to build my ideas around, but why bother? Why bother when a shining example has been provided by one of the greatest films of all time, in espionage, sci-fi, or any genre: Jean-Luc Godard’s Alphaville.
Alphaville Poster, Art by Armstrong Sabian
Brief recap: In the futuristic titular city, journalist and secret agent Lemmy Caution arrives on a secret mission from the outlands: to capture Professor Von Braun, creator of the supercomputer Alpha 60, and to use his knowledge to take down the dictatorial machine. In Alphaville, Caution encounters automaton after automaton, people ruled by the cold logic of a computer that has outlawed love and poetry. Logic is order, and those who act illogically pay the price with their lives. Caution falls in love with Natasha, Von Braun’s daughter, and his ability to feel, to act illogically, serves as a monkey wrench in the orderly machine that is Alphaville.
If you haven’t seen it, stop reading now, and do yourself a favor. It’s one of a number of full-length movies recently uploaded to Google Video, so go watch it.
There exist myriad films about amoral computers driving out the human experience with logical function; within the espionage genre, I’d also thought of discussing The Billion Dollar Brain and The Prisoner episode “The General.” Perhaps the best known of these computers-gone-bad is HAL 9000 from the Kubrick/Clarke film 2001: A Space Odyssey, with his oft-quoted line, “I’m sorry, Dave. I’m afraid I can’t do that.”
But as with our previous discussions on robots, I question whether the actual evil might lie with the creators of HAL.
Luciano Floridi and J.W. Sanders addressed the idea of computers perpetrating evil deeds in their 2001 essay “Artificial Evil and the Foundation of Computer Ethics,” creating a new nomenclature for … well, evil. They start by defining the nebulous term with help from the philosopher John Kekes, for whom evil is an action that “causes serious and morally unjustified harm,” and identify two traditionally acknowledged forms of evil: Moral Evil (ME), that which results from human autonomy and responsibility, and Natural Evil (NE), which comes from the natural world (e.g., earthquakes, tsunamis, and other natural disasters). These terms, they offer, are not enough to describe modern occurrences of evil:
More and more often, especially in advanced societies, people are confronted by visible and salient evils that are neither simply natural nor immediately moral: an innocent dies because the ambulance was delayed by the traffic; a computer-based monitor ‘reboots’ in the middle of surgery because its software is not fully compatible with other programs also in use, with the result that the patient is at increased risk during the reboot period. The examples could easily be multiplied. What kind of evils are these? ‘Bad luck’ and ‘technical incident’ are simply admissions of ignorance.
To this end, Floridi and Sanders offer a new term: Artificial Evil (AE). They also address the question raised above — are the evil actions of a man-made system not simply the fault of the men who made it?
…This leads precisely to the main objection against the presence of AE, namely that any AE is really just ME under a different name. Human creators are morally accountable for whatever evil may be caused by their artificial agents, as mere means or intermediaries of human activities (indirect responsibility)…. In the same way as a divine creator can be blamed for NE, so a human creator can be blamed for AE.
Some technologies, they argue, exist as artificial and autonomous agents (remember, this was written in 2001): webbots, expert systems, software viruses, robots. These agents are nomologically independent of their human creators, and therefore their ability to initiate evil actions is likewise independent of those creators.
1. Do you think there is truth to Floridi and Sanders’ claims?
2. If so, what can be done?
3. Do we see these autonomous agents, capable of enacting artificial evil, in current society, even if not on the scale of a city-running, dictatorial super-computer?
This post first appeared on the Mister 8 website, 20 June 2009