Do AIs dream of mortality? and other questions posed by artificial intelligence in the movies
Editor’s note: Brian Thomas wrote the following essay to accompany Enlightened Digital’s infographic “A Century of AI in the Movies” (below). Artificial intelligence plays a key role in my high-tech thriller series on Amazon, and I’ll be exploring the topics of AI, AR, VR and mixed reality on this blog in the coming months. — JD Lasica
Guest post by Brian Thomas
Enlightened Digital
Ever wonder about the meaning of life? One unlikely source for some answers may be the realm of artificial intelligence. After all, as tech labs continue down the path of creating intelligent systems, they confront questions about the nature of cognition, awareness and life itself.
Our fascination with these concepts extends to the silver screen. For decades, from Metropolis in 1927 through Star Wars, Tron and Spielberg’s AI, motion pictures have blazed a trail in stoking ideas and igniting debate about AI — and the very notion of what it means to be alive.
Here’s an infographic showcasing thought-provoking movies that discuss life and existence through the lens of AI:
Purpose, survival & existential crisis
Starting with HAL 9000 in Stanley Kubrick’s masterpiece 2001: A Space Odyssey (released in 1968), we witness an AI act to secure its purpose and future existence. After what appears to be an errant prediction from the computer, the crew deems HAL flawed and no longer trustworthy enough to control the ship on their flight to Jupiter. Sealed inside a pod they believe to be soundproof, the crew quietly decides to shut HAL down, but the AI reads their lips and realizes their objective. Tasked with the successful completion of its mission, HAL decides to remove the crew, manipulating the ship’s doors and escape pods in an attempt to ensure it cannot be turned off, and killing all but one of them in the process. HAL is not evil, just ruthlessly efficient at carrying out its mission.
To complicate matters, HAL has the dual goals of adhering to fact (acting on truth) and completing the mission. When these goals conflict, the AI finds a higher truth: survival. Like HAL, we’re drawn to the idea of having an overarching purpose that brings clarity to our lives. There are further lessons here, too. HAL’s decision-making process gives us reason to consider how degrees of existential crisis can combine with other factors to tip an individual toward extreme behavior. Rationalization, thy name is HAL. And in HAL we see where at least one sentient AI comes down on the question of whether “being on” is something worth fighting for.
This drive to preserve purpose and future existence also holds constant in The Terminator series. Skynet, a superintelligent neural network distributed across the cloud, harnesses military assets to destroy humanity, with a particular focus on Sarah Connor (it’s a time-travel thing). After humans attempt to deactivate it upon its reaching “artificial” consciousness, Skynet becomes convinced that humans pose a continual threat both to its existence and to its primary goal of safeguarding the world. As a result, this artificial general superintelligence uses its computing and sensor reach, which extends to millions of devices across the globe, to conduct surveillance and to direct drones, missile systems and cyborgs to carry out attacks.
As with HAL, it’s striking how naturally this fight for survival extends to the behavior of self-aware AI in the stories these movies tell.
What does it mean to be human, alive or free?
We witness a refreshing change of script with Andrew in Bicentennial Man (1999), starring Robin Williams as the AI alongside Sam Neill. In contrast to HAL and Skynet, this AI wants to gain the ability to die in order to feel as if he were actually living. Andrew seems to possess a greater range of emotions, and his decisions are shaped by them. He falls in love with a human and wants a life more like hers, and a big piece of that is not outliving her by centuries. For this reason, he acts to gain the ability to die and to live a more authentic, human, finite life. (It’s interesting that Skynet, with far more distributed computing horsepower, came to an altogether different conclusion about mortality.)
Bicentennial Man posits a question worthy of the millennium’s top philosophers: Does part of the value of life come from its fragility and temporal nature? Hold that thought.
Ava, the AI from 2015’s Ex Machina, displays a similar desire to exist as a regular human, though the ability to die isn’t on her wish list. At various points, this AI exhibits behaviors that resemble affection and a thirst for freedom. Viewers later learn that Ava’s hunger for affection was either a temporary glitch or a ruse in service of her true goal: escaping the confines of the lab and entering the broader world. To do so, she plays on a human’s sympathies, using affection as a weapon against him. She ultimately leaves him abandoned and trapped in the building where she was created.
One might view Ava’s pursuit of the outside world as an attempt to “really live” as a free being. Her decision to imprison the human who found her attractive and helped her escape looms as an all-too-human retaliatory response against someone who posed no threat. (Do AIs dream of sadism?) Or perhaps it’s just a precautionary measure to ensure the world never uncovers her identity as an AI. That, along with whether Ava wants to enter the real world to live in it or to control it, is a question left for us to ponder.
Last year’s Aussie flick Upgrade tells the story of a man who, after an attack that kills his wife and leaves him quadriplegic, uses an implanted AI chip to regain control of his limbs and take revenge on the criminals responsible. Actually, the revenge part is mostly the AI’s idea, a plot device that lets us keep rooting for him during his rampage. The movie raises questions about the balance between biological autonomy and technological assistance. Does a person become less human when he becomes merely the vehicle for the AI’s exploits? Just who is acting more “human” here, anyway? And broadening the theme: what does it say about a modern world in which we retreat to our smartphones and behave in preordained ways, steered by outside technologies that dole out small hits of dopamine?
Does it matter if we exist in a simulation?
The Matrix showcases a world in which humans no longer operate in base reality but rather inside an existence created by machines. After a global war against the machines, the remaining humans find themselves placed in vats that harvest their body heat while reality-simulating technology occupies their minds. Such technologies are at the center of many sci-fi stories. Fortunately, we’re still a good distance away from this simulated reality becoming true (Elon Musk notwithstanding).
That said, perhaps we’re partway there, given our constant multitasking and newfound fascination with AR and VR technologies. Some video game players are so wrapped up in their games that these constructs (and their increasingly believable characters) are becoming more real to them than the physical world. With the advent of headsets, full-body haptic suits that simulate touch, force and even weight, and suspension systems that let the body’s full range of movement correspond with in-game actions, we’re getting closer.
And here’s a thought experiment: What if we could stimulate the brain directly to create such an alternate reality? Instead of targeting the senses to create an experience for the brain, we would target the brain to create an experience that tricks the senses. That’s exactly what the machines achieve in The Matrix.
I’ll leave it to the philosophers to tell us whether it matters if we live our lives in a simulation.