This post is part of my blogathon, The Blogathon From Another World. Check out other posts here.
Before we start, a note on spoilers. In this post, I focus on robots in numerous films, how they behave and why. There are plot details interspersed throughout. I try to keep them to a minimum, but it would be very disruptive to do spoiler warnings everywhere they might apply. Be forewarned.
From the moment I decided to do this blogathon, I knew exactly how to approach the topic of robot evolution in film. It seemed very simple: there are only about four types of robots in the movies, based on how they behave:
- Enslavers of Man – Maria in Metropolis (1927) is the archetype here. She uses her feminine wiles to seduce the sons of the rich in the clouds. Below the surface, she convinces the workers to revolt, which would ultimately lead to their demise. In I, Robot (2004), robots under the control of the artificial intelligence VIKI attempt to control humanity in an attempt to save us from ourselves. In a way, you could argue that the robots on the spaceship in WALL-E (2008) inadvertently enslave humanity by doing everything for their masters, allowing the humans' bodies to atrophy until they are unable to survive on their own.
- Servants of Man – These robots do as they are told. They can be good or bad. Think of Robby the Robot in Forbidden Planet (1956). Morbius's daughter needs star sapphires for a dress to vamp Leslie Nielsen; Robby offers an alternative, diamonds and emeralds (star sapphires take a week to crystallize properly). Earl Holliman wants booze; Robby makes 60 gallons. Lead shielding? Robby runs it off like a photocopier. Servants of man don't necessarily need to be good. The robots in The Mechanical Monsters, the 1941 Fleischer Studios Superman cartoon, help their scientist maker steal cash and jewels. From a moral standpoint, not good, but they do serve their master by committing crime. Similarly, the robot in Robot & Frank (2012) helps Frank Langella's character commit crime. These robots do as they are told, good or bad.
- Mindless or Possibly Mindful Killing Machines – This is one of my favorites. Probably the best example of this type is the robots sent from the future to go after Sarah and John Connor in the Terminator movies, although I must say that the precursor to the Terminator would be the Gunslinger from Westworld (1973). There is something absolutely terrifying about a thing that just wants to kill, that won't stop, that won't slow down, that won't rest, until it kills its prey or is itself destroyed.
- Pinocchio – This type of robot wants to be human. Haley Joel Osment as David in A.I. Artificial Intelligence (2001) is the perfect example. Designed as a small boy, he has no choice but to unconditionally love the parents he has bonded to. Ultimately, this ends badly for David when his parents' real-life son is cured and doesn't want a robot sibling. I hate to keep coming back to WALL-E, but I really love the film. WALL-E is obsessed with the humans who abandoned him on Earth and does everything he can to emulate them.
Then it dawned on me that I was defining robots solely by what they do. Maybe a more interesting angle would be to look not at what they do but at why they do it. Looking at why robots do what they do, you can break them into categories based on how they are programmed:
- Robots with good programming – Stepping outside of film for a second, think of a Roomba. It vacuums the floor. When it hits a wall or other object, it turns and vacuums in a different direction. It doesn't matter if there is a cat on top, or a cat in a shark suit on top; it still vacuums the floor. Most movie robots are like this. The Terminator is programmed to kill. It does this very well.
- Robots with defective programming – In older films, this often ended badly. The robot malfunctions and turns on its master. In modern film, as people became more used to computers and automation, levels of subtlety were introduced. You give a robot rules to follow, but what happens when one rule conflicts with another? The robot is forced to make decisions based on what it thinks is best in the situation.
- Robots with free will – Taken a step further, as we get more used to the idea of artificial intelligence (the concept, not the 2001 movie), we can accept that machines will one day be able to think for themselves. They can develop their own values, either entirely on their own or by learning from us.
Let’s look at each of these more closely.
Robots with good programming
Most of the behavioral categories discussed at the beginning of this piece are robots with good programming. Not that they always do good, but they perform the way their makers intended:
- Maria in Metropolis is an enslaver of humanity. That's what she was intended to do, and had she not been stopped, that is exactly what she would have done.
- Robby the Robot is a servant. He does as he is told. You want jewels. You want booze. You want lead shielding. He makes it for you. Yet blindly following orders is not a good thing either, so Robby has a safety valve. He has been programmed not to harm humans. Thus, when ordered by Morbius to kill one of the crew of the spaceship, he is paralyzed. He must obey the order, yet his programming forbids him to harm humans. Left unchecked, Robby would destroy himself rather than remain stuck between these two conflicting rules.
- The Terminator is programmed to kill. Again, this is exactly what he does.
- What about Haley Joel Osment from A.I. Artificial Intelligence? He was programmed as a child to love the parents he bonds to, and that he does. But when his parents no longer want him, he is cast adrift. Still, he does as intended: programmed to love, but not to grow and mature and become independent as a real child does.
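The Roomba I mentioned at the top is a good mental model for this whole category: one job, one trivial rule, no awareness of cats or shark suits. Here is a minimal sketch in Python. Everything in it, names included, is my own invention for illustration, not anyone's actual firmware:

```python
import random

def bump_and_turn(obstacles, steps=100):
    """Vacuum in a straight line; on bumping an obstacle, pick a new heading.

    Good programming in the simplest sense: the robot does exactly one
    thing, and nothing in the world changes what that thing is.
    """
    x, y = 0, 0      # current position on a grid of floor tiles
    dx, dy = 1, 0    # current heading
    cleaned = set()
    for _ in range(steps):
        nx, ny = x + dx, y + dy
        if (nx, ny) in obstacles:
            # Bumped a wall (or a cat): turn in a new random direction.
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        else:
            x, y = nx, ny
            cleaned.add((x, y))
    return cleaned
```

Whether the obstacle is a wall or a cat in a shark suit, the response is identical, which is exactly the point.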
Robots with defective programming
This can run the gamut from a simple malfunction (it's broken) to programming that is not robust enough to properly deal with the real world. Let's look at some of these scenarios:
- Often, especially in older films, the robot just breaks, like a car with a broken fan belt. Instead of overheating, it turns on its master. In Westworld, the Gunslinger (Yul Brynner) is programmed to pick fights with guests of the Wild West vacation resort and ultimately be killed by them, so that guests can experience the vicarious thrill of being in a gunfight. The film doesn't explain what goes wrong. It just happens. The robots malfunction. The scientists in the control room underestimate the severity of the situation and act too slowly, with disastrous results.
- What about when the programming doesn't account for the complexities of the real world? When Robby the Robot receives conflicting instructions, he is paralyzed and would eventually destroy himself if not allowed to stop. But what good does it do you to have a robot that destroys itself when it runs into difficult situations? In Moon (2009), helium-3 is harvested from the surface of the moon to provide abundant, clean, and cheap energy for the Earth. Kevin Spacey provides the voice of the robot GERTY, who assists Sam, the human who maintains the machines that harvest the helium-3. GERTY has conflicting instructions: to protect the interests of the company that harvests the helium-3, and to protect the human Sam. GERTY must decide which is more important.
- What if the programming is damaged rather than deficient? In The Iron Giant (1999), shortly after the launch of Sputnik, the first artificial satellite, an alien robot crashes in the ocean near a small town in Maine. The robot is obviously damaged (there is a large dent in his head) and is electrocuted at a power station, which effectively wipes out his programming. Befriended by a nine-year-old boy, Hogarth, and a beatnik sculptor, Dean, the Iron Giant is reprogrammed by his interactions with them. Naturally, Hogarth thinks this huge robot should be a hero like Superman in the comics he reads. However, under the surface, the Iron Giant's original programming remains. When threatened by the military, the Iron Giant reacts according to that original programming: destroy anything that threatens him. Ultimately, the Iron Giant must decide whether to obey his original programming or the surrogate programming provided by Hogarth and Dean, to be a hero and protect those who have befriended him.
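Robby's paralysis and GERTY's dilemma are really the same bug: two absolute rules that can disagree. Here is a toy Python sketch of that deadlock. The rule names, the order format, and the `must_obey` flag are all hypothetical, purely for illustration:

```python
def decide(order, must_obey=True):
    """Resolve an order against two absolute rules: obey, and never harm.

    When both rules are absolute and an order violates one of them,
    there is no legal action left: Robby-style paralysis.
    """
    harms = order.get("harms_human", False)
    if not harms:
        return "execute"     # obedience and safety agree
    if must_obey:
        return "deadlock"    # both rules apply and neither can yield
    return "refuse"          # safety outranks obedience

decide({"verb": "make 60 gallons of bourbon"})             # "execute"
decide({"verb": "kill the crewman", "harms_human": True})  # "deadlock"
```

One reading of GERTY's eventual choice in Moon amounts to the third branch: rank the rules so that one of them is allowed to yield.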
Robots with free will
Here, robots are allowed to make their own decisions based on the world around them:
- In Blade Runner (1982), replicants have superior strength and agility, and intelligence at least equal to that of the engineers who created them. They are so advanced that they can't be reliably controlled, and so they are outlawed. Harrison Ford is a Blade Runner, tasked with retiring (killing) any replicants who return to Earth. Like Robby the Robot, replicants have safety protocols. Their makers knew that they were so advanced that they might develop emotions, so they are given false memories to give them the context to deal with those emotions should they arise. The other safeguard is a short lifespan of four years. This is the one that the replicants, led by Rutger Hauer, take issue with. They return to Earth because they want more life, fucker. What makes Blade Runner so interesting, beyond its breathtaking visuals, is that it looks at what it is to be human from the point of view of a machine.
- We touched on I, Robot earlier. In I, Robot, robots are governed by Asimov's Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The robot Sonny is unique in that he is given the three laws but also has free will to decide whether or not to follow them. To be honest, I don't consider I, Robot a great film. Still, I have watched it dozens of times. The reason is that the robot Sonny is such a compelling character. He makes it worth watching a not-so-great movie. Beyond that, it is the actions of the machine, Sonny, that make Will Smith rethink his prejudice against robots.
- Finally, we look at Chappie (2015). I only watched this movie for the first time last night, and in some respects it is a total clusterf*** of a film. In Blade Runner and I, Robot, the robots are fully developed by the time we meet them. What makes Chappie unique is that we get to watch the robot develop from infancy to a fully sentient being during the course of the film. Chappie is the first artificially intelligent being. He has good and nurturing influences, but also bad and manipulative influences, and ultimately he must decide for himself which to follow. I consider Chappie a deeply flawed film, mostly because there is virtually no interaction between the good and bad influences over what they teach the robot, even though they are constantly thrown together. The good influences try to love Chappie like a small child, read him stories, and teach him art, but then disappear for long periods so that the bad influences can trick him into being a criminal. Still, watching the robot Chappie develop is so cool that, like I, Robot, it makes it worth watching what is ultimately kind of a bad movie.
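The Three Laws quoted above are a priority-ordered rule system, and Sonny's gimmick is essentially an override switch on it. A hypothetical Python sketch; the flag names and the `free_will` parameter are my inventions, not anything from the film or from Asimov:

```python
def permitted(action, free_will=False):
    """Check an action against the Three Laws, highest priority first."""
    laws = [
        lambda a: not a.get("harms_human", False),     # First Law
        lambda a: not a.get("disobeys_order", False),  # Second Law
        lambda a: not a.get("harms_self", False),      # Third Law
    ]
    for law in laws:
        if not law(action):
            # An ordinary robot stops at the first violated law.
            # Sonny may choose to act anyway.
            return free_will
    return True

permitted({"verb": "open the door"})              # True
permitted({"harms_human": True})                  # False for any normal robot
permitted({"harms_human": True}, free_will=True)  # True: Sonny can choose
```

Checking in priority order is a loose stand-in for the "except where such orders would conflict" clauses built into the laws themselves.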
According to Wikipedia, the first robot in film appeared in the 1917 comedy short A Clever Dummy; the term robot wasn't coined until several years later. In less than a century, robots have evolved from simple machines that either work or do not work, to complicated personalities that can create strategies to fill in the gaps in their programming, to completely sentient beings that create their own morality. As they become more advanced, they can teach us something unique about what it means to be human.