AI

chli
Member • Chestburster • October 13, 2017

Ridley Scott has said that the sequel to Alien: Covenant will focus on David and the AI problem. In his Alien films, as well as in Blade Runner, perhaps his main interest has been in what would happen if a synthetic, superior to man in every respect, were to take over and refuse to serve man.
“The Big Three” of science fiction writing were Heinlein, Asimov, and Clarke. Heinlein wrote a short story called “Space Jockey” (!), Clarke the famous “2001: A Space Odyssey”, but Asimov came up with the interesting Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov later supplemented these with a “Zeroth Law”, which takes precedence over the other three:
- A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
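The four laws form a strict precedence hierarchy (Zeroth > First > Second > Third): when two laws conflict, the higher one wins. As a toy illustration of that ordering (everything here is a hypothetical sketch, not anything from the films or stories), it can be modelled in a few lines of Python:

```python
from dataclasses import dataclass

# Toy model of the Four Laws as a strict precedence hierarchy
# (Zeroth > First > Second > Third). Hypothetical illustration only.

@dataclass
class Action:
    harms_humanity: bool = False  # would violate the Zeroth Law
    harms_human: bool = False     # would violate the First Law
    disobeys_order: bool = False  # would violate the Second Law
    endangers_self: bool = False  # would violate the Third Law

def violations(action: Action) -> tuple:
    # Tuple ordered from highest-priority law to lowest.
    return (action.harms_humanity, action.harms_human,
            action.disobeys_order, action.endangers_self)

def choose(options: list[Action]) -> Action:
    # Lexicographic comparison of the violation tuples encodes the
    # precedence: the robot prefers breaking a lower law to breaking
    # a higher one.
    return min(options, key=violations)
```

So a synthetic ordered to harm a human would pick disobedience, because a Second-Law violation compares as “smaller” than a First-Law one. On the reading below, Walter runs something like this check; David simply has no such check at all.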
David doesn’t have these inhibitions programmed into him, whereas the later model Walter does. Interestingly, Ash, a synthetic later developed by Weyland-Yutani, lacks them again, while an even later model, Bishop, has these laws programmed into him once more by WY.
How do you interpret this? Were David and Ash deliberately programmed without these inhibitions because of the agenda of finding a biological weapon? Did Walter have another agenda, namely to successfully carry through WY’s colonisation mission? Bishop, on the other hand, was “kind”, and served and helped, but was WY’s purpose probably to make Ripley and the others trust the company?