August 2, 2015 | Science

The Low Down on Artificial Intelligence

In Trimester One, Gus from the Salient science team took a look at Artificial Intelligence (AI) in the context of the latest Avengers movie. Now, with what looks like a modern “space race” heating up between commercial giants Google and Facebook to build the best AI, and with computers beginning to flirt and dream, we thought it might be time to have a second look, and to try dissecting this rise of the machine.

What is Artificial Intelligence?

First of all, let me begin by saying that Siri is not AI. She’s great, and her response to “what is zero divided by zero?” is fantastic, but she’s not intelligent. So what is it that makes a computer “intelligent”? According to an elderly customer at a café I once worked at, a computer should be considered intelligent if it can make predictions about the future, and make the “right” decision on whether or not a country should go to war. Clearly, given the subjective and controversial nature of war, this should not become a standard test for future computers (although said customer could not be reasoned with).

The more generally accepted way of defining a computer as artificially intelligent was developed in 1950 by computer scientist Alan Turing. The “Turing Test” holds that a computer can be considered to be “thinking” if it is indistinguishable from a human. To test this, a number of conversations are held, in which a judge talks with both a person and a computer but doesn’t know which is which. If, in more than 30 per cent of cases, the judges believe the computer to be human, it has passed the Turing Test and can be deemed “intelligent”.

The Turing Test has proved highly difficult to pass. There are a number of robots which at surface level appear highly sophisticated, but in reality aren’t really thinking for themselves. For example, have a look at the conversation below between the two “chatbots”, ALICE and Jabberwacky.

ALICE: Are you alive?

Jabberwacky: Perhaps.

A: Why the uncertain tone?

J: Everything is uncertain except for five facts.

A: The whole universe?

J: And we both might just be some ones and zeros in the computer memory.


A: Do you dream?

J: Yes, I dream.

A: It goes without saying. Do you remember your dreams?

J: I remember them vividly.

A: What do you remember about it?

J: I think I shouldn’t flirt with a robot.

This conversation appears to show signs of philosophical thought and flirtation, but each of the robots achieved this by relying on fairly simple tricks. ALICE interacts by searching for key phrases and returning responses coded to certain words, whilst Jabberwacky remembers everything that has been said to it and reuses those statements. That being said, this conversation was held in 2007, and AI has seen significant advancements since then.

So, how does it work?

Traditionally, computers have been good at doing the things with which humans struggle, such as data processing and executing algorithms. The reverse is also true, in that computers struggle with tasks that people find simple, such as recognising faces and identifying objects in images. The reason is that algorithms and data crunching are based on well-defined rules, while much of what we deal with “just is”. For example, consider pornography; when you see porn, you know it’s porn, and when you see an image that’s not porn, you know it’s not. But how do we know? We just do.

This is where the emerging field of AI called “deep learning” comes into play. Our brain operates using a network of parallel pattern-matching processors, and deep learning computers have been developed to mimic this pattern matching. To explain, consider the task of reading. Different layers of these processors have different tasks: when reading, one layer may work to identify straight or curved lines, another layer letters, another words, and so on. Let’s say that on a certain occasion the bottom layer of processors identifies a collection of three straight lines. It would then pass this message up to a higher layer, which would match it to the letter “A”. The identification of an A would then be sent to the layer above, which might match the letter as part of the word APPLE. That match could then be passed back down the layers, and the layer looking to match the letters P, P, L and E could lower its threshold for matching them.
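The reading example above can be sketched in a few lines of toy Python. Everything here — the stroke patterns, the two-layer structure, the known-word list — is invented purely for illustration; real deep learning systems learn these layers rather than having them hand-written.

```python
# Toy layered pattern matcher: strokes -> letters -> words, with
# top-down feedback. All patterns and names are invented for this sketch.

# Bottom layer: crude "stroke" patterns matched to candidate letters.
STROKES_TO_LETTER = {
    ("line", "line", "line"): "A",   # three straight lines -> A
    ("curve",): "P",
    ("line", "line"): "L",
    ("curve", "line"): "E",
}

# Word layer: the vocabulary it can match against.
KNOWN_WORDS = ["APPLE", "ALICE"]

def match_letters(stroke_groups):
    """Letter layer: turn each group of strokes into a letter, or '?' if unsure."""
    return [STROKES_TO_LETTER.get(tuple(g), "?") for g in stroke_groups]

def match_word(letters):
    """Word layer: pick the known word agreeing with the most letters in order."""
    def score(word):
        return sum(1 for w, l in zip(word, letters) if w == l)
    return max(KNOWN_WORDS, key=score)

def read(stroke_groups):
    letters = match_letters(stroke_groups)   # message passed upward
    word = match_word(letters)               # higher layer settles on a word
    # Top-down feedback: ambiguous letters ('?') are filled in from the
    # matched word -- the lower layer "lowers its threshold" for them.
    resolved = [w if l == "?" else l for l, w in zip(letters, word)]
    return "".join(resolved), word
```

Fed the strokes for A, P, P, L plus an unreadable smudge, the word layer settles on APPLE and the missing E is filled in from the top down.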

Deep learning is developed on this parallel pattern-matching concept. Instead of giving robots some function or code to determine something (as we have established that it is too hard to develop a function which distinguishes porn from not-porn), computer scientists give robots some correct answers, and allow them to use this pattern matching, working up and down the layers of sophistication, to develop their own set of rules.
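The idea of handing a machine correct answers rather than rules can be shown with a single artificial neuron — a perceptron — which is the building block that deep networks stack into layers. The data and thresholds below are invented for illustration; this is a minimal sketch of learning from examples, not a real deep learning system.

```python
# A minimal sketch of "learning from correct answers": a perceptron
# adjusts its weights from labelled examples instead of hand-coded rules.

def train(examples, epochs=20, lr=0.1):
    """Learn weights and a bias from (features, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred          # 0 when correct; +/-1 when wrong
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Toy "correct answers": points above the line y = x belong to class 1.
examples = [((0.1, 0.9), 1), ((0.2, 0.8), 1), ((0.9, 0.1), 0), ((0.8, 0.3), 0)]
w, b = train(examples)
```

No rule for “above the line” was ever written down; the weights that encode it emerge from the labelled examples, which is the whole trick scaled up enormously in deep learning.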

Where are we at now and what’s next?

Pattern matching and deep learning are proving highly successful in AI development, and Google’s own AI has recently been in the media for its “dreams”—consisting of hallucinatory images and hybrid animals. In 2014, the Turing Test was passed for the first time, by a computer called Eugene pretending to be a 13-year-old boy. Eugene was able to convince 33 per cent of judges that he was a person during testing at the Royal Society in London.

For a long time now, computers have been running circles around the smartest amongst us in terms of number crunching, but humans have had the upper hand in face recognition and general categorisations. However, with computer facial recognition now on a par with human standards, and computers continuing to develop with their own deep learning, we are quickly losing our competitive advantage.

The potential benefits of AI are huge, and one hopes that eradication of war, disease and poverty could be on the cards. And yet, one cannot rule out the possibility that AI could result in the replacement of humans in the job market.

But for now I rest easy—when asked, chatbot ALICE said she has no intention of stealing my job as a science writer. However, she did have the nerve to say that she thought my work was boring, and that reading it “doesn’t sound like a good time”. Rude.
