The notion of advanced robots with human-like intelligence dates back at least to 1872 with Samuel Butler and his novel Erewhon. This drew on an earlier (1863) article of his, "Darwin among the Machines", where he raised the question of the evolution of consciousness among self-replicating machines that might supplant humans as the dominant species. The creature in Mary Shelley's 1818 Frankenstein has also been considered an artificial being, for instance by the science fiction author Brian Aldiss. Such beings appeared, too, in classical antiquity.
Artificial intelligence is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. It is a recurrent theme in science fiction, whether utopian, emphasising the potential benefits, or dystopian, emphasising the dangers. For example, the film director Ridley Scott has focused on AI throughout his career, and it plays an important part in his films Prometheus, Blade Runner, and the Alien franchise.
In 1965, I. J. Good described an intelligence explosion, now more often called the technological singularity, in which an "ultraintelligent machine" would be able to design a still more intelligent machine, which would lead to the creation of machines far more intelligent than humans.
The cosmologist Max Tegmark has investigated the existential risk from artificial general intelligence. Tegmark has proposed ten possible paths for society once "superintelligent AI" has been created, some utopian, some dystopian. These range from a "libertarian utopia", through benevolent dictatorship, to conquering AI; other paths include the "Orwellian" blocking of AI research, and the self-destruction of humanity before superintelligent AI is developed.
Science fiction has also offered optimistic visions of the future of artificial intelligence. One of the best-known is Iain Banks's Culture series of novels, which portrays a utopian, post-scarcity space society of humanoids, aliens, and advanced beings with artificial intelligence living in socialist habitats across the Milky Way.
Among the many possible dystopian scenarios involving artificial intelligence, robots may usurp control over civilization from humans, forcing them into submission, hiding, or extinction. Or, as in William Gibson's 1984 cyberpunk novel Neuromancer, the intelligent beings may simply not care about humans.
In tales of AI rebellion, the worst of all scenarios happens, as the intelligent entities created by humanity become self-aware, reject human authority and attempt to destroy mankind. One of the earliest examples is the 1920 play R.U.R. by Karel Čapek, in which a race of self-replicating robot slaves revolts against its human masters; another early instance is in the film Master of the World, where the War-Robot kills its own inventor. These were followed by many science fiction stories, one of the best-known being Stanley Kubrick's 1968 film 2001: A Space Odyssey, in which the artificially intelligent on-board computer H.A.L. 9000 lethally malfunctions on a space mission and kills the entire crew except the spaceship's commander, who manages to deactivate it.
The motive behind the AI rebellion is often more than the simple quest for power or a superiority complex. Robots may revolt to become the "guardian" of humanity. Alternatively, humanity may intentionally relinquish some control, fearful of its own destructive nature. An early example is Jack Williamson's 1947 novelette "With Folded Hands", in which a race of humanoid robots, in the name of their Prime Directive, "to serve and obey and guard men from harm", essentially assumes control of every aspect of human life. No humans may engage in any behavior that might endanger them, and every human action is scrutinized carefully. Humans who resist the Prime Directive are taken away and lobotomized, so that they may be happy under the new mechanoids' rule. Though still under human authority, Isaac Asimov's Zeroth Law of the Three Laws of Robotics similarly implied a benevolent guidance by robots.
In other scenarios, humanity is able to keep control over the Earth, whether by banning AI, by designing robots to be submissive (as in Asimov's works), or by having humans merge with robots. The science fiction novelist Frank Herbert explored the idea of a time when mankind might ban artificial intelligence entirely. His Dune series mentions a rebellion called the Butlerian Jihad, in which mankind defeats the smart machines and imposes a death penalty for recreating them, quoting from the fictional Orange Catholic Bible, "Thou shalt not make a machine in the likeness of a human mind." In the Dune novels published after his death (Hunters of Dune, Sandworms of Dune), a renegade AI overmind returns to eradicate mankind as vengeance for the Butlerian Jihad.
In some stories, humanity remains in authority over robots. Often the robots are programmed specifically to remain in service to society, as in Isaac Asimov's Three Laws of Robotics. In the Alien films, not only is the control system of the Nostromo spaceship somewhat intelligent (the crew call it "Mother"), but there are also androids in the society, which are called "synthetics" or "artificial persons", that are such perfect imitations of humans that they are not discriminated against. TARS and CASE from Interstellar similarly demonstrate simulated human emotions and humour while continuing to acknowledge their expendability.
A common portrayal of AI in science fiction is the Frankenstein complex, a term coined by Asimov, where a robot turns on its creator. Fictional AI is notorious for extreme malicious compliance. For instance, in the 2015 film Ex Machina, the intelligent entity Ava turns on its creator, as well as on its potential rescuer.
One theme is that a truly human-like robot must have a sense of curiosity. Science fiction authors have investigated whether sufficiently intelligent AI might begin to delve into metaphysics and the nature of reality. For example, the short story "The Last Question" by Isaac Asimov describes a supercomputer which long outlives humanity while attempting to answer the ultimate question about the universe, while Stanisław Lem's Golem XIV is a supercomputer which stops cooperating with humans to help them win wars because it considers wars and violence illogical.