© 2024 WFIT
Public Radio for the Space Coast
'Open the pod bay door, HAL' — here's how AI became a movie villain

Malevolent robot stories used to be more about brawn than brain — so it was a genuine shock for audiences in 1968 when the sentient HAL-9000 computer calmly said, "I'm sorry, Dave, I'm afraid I can't do that." Above, Gary Lockwood and Keir Dullea in 2001: A Space Odyssey.
Metro-Goldwyn-Mayer / Getty Images

This article was written by a human.

That's worth mentioning because it's no longer something you can just assume. Artificial intelligence that can mimic conversation, whether written or spoken, has been in the news a lot this year, delighting some members of the public while worrying educators, politicians, the World Health Organization, and even some of the people developing AI technology.

Misuse of AI is part of what actors and writers are striking about in Hollywood, and the threat of AI is something Hollywood was imagining long before it was real.

In 1968, for instance, the year before humans first set foot on the moon — and a time when astronauts still used pencils and slide rules to calculate re-entry trajectories because their space capsules had less computing power than a digital watch has today — Stanley Kubrick introduced movie audiences to a sentient HAL-9000 computer in 2001: A Space Odyssey.

HAL (for Heuristically Programmed Algorithmic Computer) introduced itself early in the film by saying, "No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error."

'Open the pod bay door, HAL'

So why was HAL acting so strangely? He (it?) was responsible for maintaining all aspects of a months-long space flight, ferrying astronauts to the moons of Jupiter. Programmed to run the mission flawlessly, the computer's behavior had become alarming, and two of the astronauts had decided to shut down some of its functions. Their plan was short-circuited when HAL, lip-reading a conversation they'd managed to keep him from hearing, cast one of them adrift while he was outside the ship repairing an antenna and refused to let the other back on board.

"Open the pod bay door, HAL" became one of the most quoted film lines of the decade when the computer responded, "I'm sorry, Dave, I'm afraid I can't do that. This mission is too important for me to allow you to jeopardize it."

It's hard to articulate what a genuine shock this was for 1960s movie audiences. There'd been films with, say, robots causing havoc, but they were generally robots doing someone else's bidding. Movie robots, at that point, were about brawn, not brain.

And anyway, malevolent robot stories were precisely the sort of B-movie silliness Kubrick was trying to avoid. So his intelligent machine simply observed (with an unblinking red eye) and, when addressed directly, spoke with a calm, modulated voice, not unlike the one that would be adopted four decades later by Siri and Alexa.

Darwin Among the Machines

Earlier literary notions of "artificial" intelligence — and there were not a lot of them at that point — hadn't really caught the public's imagination. Samuel Butler's 1863 article "Darwin Among the Machines" is generally thought to be the origin of this species of writing, and it mostly just notes that while humankind invented machines to assist us — and remember, a really sophisticated machine in 1863 was the steam locomotive — we were increasingly assisting them: tending, fueling, repairing.

Over tens of thousands of years, Butler wondered, might humans not evolve in much the same way Darwin's study of natural selection had just established the rest of the plant and animal kingdoms do, to the point that we would become dependent on our devices?

But even when he incorporated that idea a decade later into a satirical novel called Erewhon, expounding for several chapters on self-replicating machines, Butler barely touched on the notion that those machines would develop consciousness. And neither did the influential 19th-century science fiction writers who followed him. H.G. Wells and Jules Verne invented plenty of unorthodox devices as they sent characters to the center of the Earth, and into space and the recesses of time, without ever considering that those devices might want to do things on their own.

The term "artificial intelligence" wasn't even coined (by American computer scientist John McCarthy) until about a dozen years before Kubrick made his Space Odyssey. But HAL made an impression on the public where scientists had not. Within just a couple of years, movie computers didn't just want spaceship domination; in Colossus: The Forbin Project (1970), they wanted to take over the world.

Malignant machines gone viral

And then this notion of technology-run-wild, ran wild. A high school student played by Matthew Broderick nearly started World War III in WarGames (1983) when he thought he was hacking a computer company but accidentally challenged the Pentagon's defense network to a quick game of "global thermonuclear war." The problem, it soon became clear, was that no one told the defense network they were just "playing."

Elsewhere, mechanical men stopped being all-brawn and got a new dispensation to think for themselves, something fiction had granted them before Hollywood got around to it.

In the 1940s, sci-fi novelist Isaac Asimov came up with "Three Laws of Robotics" that would theoretically keep "independent" machines in line. When Asimov's story I, Robot was turned into a film a half-century or so later, those laws should have reassured Will Smith as he stared down thousands of bots. But he had good reason to be skeptical; he was fighting a robot rebellion.

The Terminator movies effectively put all these themes on steroids — cyborgs in the service of a computerized, sentient, civil-defense network called Skynet, designed to function without any human input. A "Nuclear Fire" and three billion human deaths later, what was left of humanity was engaged in a war against the machines that has so far consumed six films, a TV series, a pair of web series, and innumerable games.

And nuclear blasts weren't necessary to make machine intelligence alarming, a fact cyberpunk-noir established definitively in Blade Runner with its "replicants," and in a Matrix series that reduced all of humanity to a mere power source for machines.

Hollywood's still fighting that vision. Who knows what "The Entity" wants in Mission: Impossible — Dead Reckoning (presumably we'll find out next year in Part Two), but whatever it is, it won't bode well for humanity.

It seems not to have occurred to Tinseltown that AI might do the things it's actually doing — make social media dangerous, or make undergrad writing courses unteachable, or screw up relationships by auto-completing incorrectly. None of those are terribly cinematic, so Hollywood concentrates on exploiting our fears — in the late 20th century, we worried about ceding control to technology. In the 21st century, we worry about losing control of technology.

Bring on the droids

Have there also been friendlier film visions of AI? Sure. George Lucas came up with lovable droids R2-D2 and C-3PO for Star Wars, and Pixar gave us Wall-E, a bot who was pluckily determined to clean up an entire planet we'd despoiled.

Spike Jonze's drama Her imagined a sentient, Siri-like personal assistant as a digital girlfriend. Star Trek's Data was not just a Next Generation android version of Mr. Spock, but also a sort of emotion-challenged Pinocchio.

And another Pinocchio — this one fashioned to stand the test of time — would have been Stanley Kubrick's own answer to the question he'd posed with HAL in 1968.

Kubrick labored for decades to hone the script for A.I. Artificial Intelligence, then, just two years before he died, handed the project off to Steven Spielberg — the story of David, a robot child who has been programmed to love, and who ends up going beyond that programming.

"Until you were born," William Hurt's Professor Hobby told the bionic child he'd modeled on his own son, "robots didn't dream, robots didn't desire unless we told them what to want." The miracle, he went on, was that though David was engineered rather than born, he shared with humans "the ability to chase down our dreams...something no machine has ever done, until you."

That may not have been enough to make David a real boy, but it put a gentle face on what is perhaps our greatest fear about AI – that we are mortal, and it is not.

In the film, David outlives all of humanity, never growing up, never changing. And perhaps because he was played by Haley Joel Osment, or perhaps because Spielberg was calling the shots, or perhaps because the music swelled ... just so — it didn't feel the least bit threatening.

Copyright 2023 NPR. To see more, visit https://www.npr.org.

Corrected: July 31, 2023 at 12:00 AM EDT
Previous audio and web versions of this story incorrectly said that Matthew Broderick's character was trying to hack the computer company's website in WarGames. In fact, it was a phone hack. Websites didn't exist in 1983.
Bob Mondello, who jokes that he was a jinx at the beginning of his critical career — hired to write for every small paper that ever folded in Washington, just as it was about to collapse — saw that jinx broken in 1984 when he came to NPR.