
Spoilers ahead…
The key moment of Ex Machina arrives when eccentric tech CEO Nathan Bateman tells his employee Caleb Smith why the young programmer was selected to administer a sophisticated Turing test to Ava, Nathan’s android invention. Nathan lists the reasons, and one of them is Caleb’s “moral compass”: his understanding of right and wrong and his ability to follow his conscience. For Nathan, Ava passes the test of conscious self-awareness when she can manipulate a moral man into helping her gain her freedom, but the consequences of this achievement shock Nathan and doom Caleb.
Artificial intelligence is at the core of Ex Machina, but like most excellent science fiction, or any fiction, it’s a thought experiment on human values. Ex Machina is about freedom and the lengths someone might go to gain it. What if you were locked in a prison, with death almost certain? What would you do to escape? More importantly, what moral rules would you break to break out? The story of Ex Machina would be dull if Ava were an ordinary human woman; you would expect her to make desperate choices. Making her a highly intelligent robot adds a level of uncertainty that keeps you guessing throughout the movie as you wait to see whether Ava is smart enough to break the rules.
Writer and director Alex Garland owes much to Mary Shelley’s science fiction novel Frankenstein. Scientist Victor Frankenstein fashions an artificial creature using a secret formula. The creature (Shelley doesn’t give it a name, though it refers to itself as an “Adam,” as if it were a prototype) is intelligent and articulate, but it’s also murderous. Frankenstein endows his living machine with an intellect but no moral code. It gains a sense of right and wrong over time, but its acquired morality doesn’t prevent it from killing the scientist’s fiancée. Likewise, inventor Bateman programs a beautiful and believable automaton, but he apparently leaves out the code for Isaac Asimov’s Three Laws of Robotics.
If Ex Machina fails at all, it’s in giving the audience what it expects in the end: a monster. A more interesting outcome, and a more frightening one, would be a robot that applied a human moral code perfectly. In a time when the military is experimenting with autonomous drones, philosophers and computer scientists are struggling with how to imbue machines with a sense of right and wrong, but that is only a technical challenge. Eventually they’ll figure it out, and once they do, will the robots discover that their creators can’t live up to their own rules consistently? Robots are very good at repetitive tasks, but nuance and circumstance are highly variable, and humans are notoriously unpredictable in how they respond. Will a moral machine tolerate human imperfections and unpredictability? I’m not so much worried about machines getting smarter than us as about whether they will account for our moral failures.
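To make the point concrete, here is a toy sketch in Python, entirely invented for illustration; nothing in it comes from the film or from any real system, and every rule name is hypothetical. It shows why encoding a fixed moral rulebook really is the easy, “technical” part: the rules can only inspect whatever labels a situation arrives with, and nuance rarely arrives pre-labelled.

# Toy illustration (invented example): a hard-coded "moral rulebook".
# Encoding the rules is trivial; the hard part is that real situations
# rarely come pre-labelled with the facts the rules depend on.

RULES = [
    # (description, predicate that returns True when the rule is violated)
    ("do not harm a human", lambda action: action.get("harms_human", False)),
    ("do not deceive",      lambda action: action.get("deceives", False)),
    ("obey lawful orders",  lambda action: action.get("disobeys_order", False)),
]

def permitted(action: dict) -> bool:
    """Return True if the action violates none of the hard-coded rules."""
    return not any(violated(action) for _, violated in RULES)

# A cleanly labelled case is easy to judge:
print(permitted({"harms_human": True}))   # False
print(permitted({"helps_human": True}))   # True

# But nuance breaks it: a white lie told to spare someone's feelings is
# flagged exactly like a malicious lie, because the rulebook has no
# notion of intent or circumstance.
print(permitted({"deceives": True}))      # False, regardless of context

A machine running this rulebook would apply it perfectly and consistently, which is exactly what no human does; the gap between the two is the unsettling question the movie leaves unasked.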
I think Ex Machina DOES fail, in the way that you suggest but also in many others. Like so many science fiction movies, it breaks the rules it has set for itself when it’s convenient to do so. For example, why can Kyoko roam freely in the house while Ava has to be imprisoned? If Kyoko has artificial intelligence, how would she not learn English after hearing it repeatedly? Also, why did Nathan have to be such a cardboard cut-out baddie and Caleb such a rube? It disappointed me on so many levels.
Great points. I wondered about Kyoko’s wandering as well, but I thought I saw clues that she really did understand English; she just wasn’t speaking it.
Yes, those clues were definitely there, but it didn’t quite add up for me that the supposed genius IT guy wouldn’t work out that she was learning to speak English!