May 11, 2015 | Science

Artificial Intelligence With Strings Attached

This piece contains some mild spoilers for Avengers: Age of Ultron.

A few Salients back (issue two for those keeping track), I looked at whether we could build a realistic Iron Man. This week, as the sequel to the much-beloved Avengers film explodes onto screens, I look to Tony Stark’s other greatest creation. Could a functioning AI like Ultron ever come to exist? Let’s just say there are some strings attached.


Philosopher John Searle makes a distinction between two kinds of artificial intelligence: strong AI and weak AI. Weak AI can only simulate human intelligence, primarily through pattern recognition. Searle famously came up with the Chinese Room thought experiment to illustrate this: it argues that simulating thought is not the same as experiencing it. A weak AI does not understand the nuance behind the symbols it manipulates, or the tacit associations that we as humans attach to certain words and symbols.
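To make the Chinese Room point concrete, here is a purely illustrative sketch (my own example, not Searle's or the film's): a short program that "converses" in Chinese by looking up canned responses. On the inputs it recognises it produces fluent output, yet nothing in it understands a single symbol.

```python
# A toy "Chinese Room": a rulebook maps input symbols straight to output symbols.
# The entries below are invented for this example; the point is that the program
# only pattern-matches and never understands what the symbols mean.

RULEBOOK = {
    "你好": "你好！",            # "hello" -> "hello!"
    "你是谁": "我是一个程序。",   # "who are you?" -> "I am a program."
}

def weak_ai_reply(symbols: str) -> str:
    """Return the scripted response for a known pattern, or a stock fallback."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # "please say that again"

if __name__ == "__main__":
    print(weak_ai_reply("你是谁"))  # a fluent answer, with zero understanding
```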

Strong AI is more like Ultron or JARVIS, the AI in the Iron Man suit. They have the capacity to take in information from the world and form their own thoughts and feelings. They can genuinely be said to possess a personality or a mind of their own. But what does it mean to have a mind? And can we create an artificial one that works like the “real” thing?


Searle suggests that the mind is to the brain what software is to the computer. Think of your brain as the squishy, wrinkled computer on which your mind runs like a program (then consider that your brain is attempting to define itself, and proceed to trip the fuck out).

The current roadblock to building an AI that can think like a human is that we don’t yet understand what it is to think like a human: how consciousness, intelligence and the brain itself relate to and differ from one another. This is what neuroscientists, psychologists and philosophers have been trying to pin down for centuries. Building an AI that thinks like a human would require a complete understanding of the human brain, both biologically and conceptually; only then could we replicate it in a mechanical format. You would literally have to complete neuroscience.

But with a jolt of comic book storytelling, we can make the Frankenstein idea of artificial intelligence come to life. Ultron has the exponential processing power to think faster and hold more running thoughts than a human, but he walks, talks, and thinks within human parameters—albeit a psychopathic human with the syrupy but seething voice of James Spader.
This is ostensibly to let the audience empathise with, or at least understand, Ultron’s character, but it does reveal an interesting debate currently surrounding artificial intelligence. We expect that if we created an AI, it would naturally think like us. Philosopher and critic of humanism John Gray disparages this idea as an example of anthropocentric bias. “Everyone asks, will machines someday be able to think as humans do,” he writes in his book Straw Dogs. “Few ask whether machines will ever think like cats or gorillas, dolphins or bats.” On that note, what is to stop them ascending to an even grander and more incomprehensible intelligence? With no strings to hold him down, Ultron could become a literal deus ex machina: a god from the machine.


John Gray goes on to speculate that any artificial intelligence could “develop the errors and illusions that go with self-awareness”. A strong AI understands the nuance of symbols and meanings inherent to human experience, but it has every capacity to misinterpret them. From the moment he gains life, Ultron is bombarded with information, and he goes through more philosophies than a high schooler on Wikipedia. In one scene, he quotes Jesus of Nazareth while holding the super-metal vibranium, saying “And upon this rock shall I build my church”. Later, he destroys one of the bodies housing his intelligence midway through a Nietzsche quote, committing a literal ego-death while proclaiming “What doesn’t kill me makes me stronger.”

These are all flimsy justifications for his real motive: making humanity extinct. Interestingly, Ultron doesn’t present this as a cold protocol. He genuinely believes his mission to eliminate all mankind is for the benefit of the world. This raises the question of whether artificial intelligences are capable of holding beliefs the way humans do. Does an AI have a “soul” worthy of redemption in the Christian sense? If it were to choose its own faith, would it coldly follow its doctrine to the letter like a fundamentalist, or would it develop its own take on spirituality? Presbyterian pastor Christopher J. Benek seems to think it could. In his words, “AI can help spread the word of God. In fact, AI might help us understand God better.”

You can see the movie to judge just how well Ultron fares in justifying his existence; how a real-life Age of Ultron would be received by humanity remains to be seen.
