“At the same time that machines are increasingly taking over workplace tasks that don’t require any uniquely human abilities, our education systems continue to push children to think and to act like machines… we need to stop training students for exams that computers can pass.”
— Mitch Resnick, Professor of Learning Research at the MIT Media Lab.
On May 11, 1997, an event which “had the impact of a Greek tragedy” unfolded in a televised chess game. Facing off were Russian grandmaster Garry Kasparov—who’d famously boasted he’d never be beaten by a machine—and Deep Blue, an IBM supercomputer trained in chess strategy by a team of programmers and chess masters. Things had begun well enough for Kasparov, who’d won their first match, held in Philadelphia the previous year, despite some impressively unconventional play by the supercomputer. By the time of their last game of the New York rematch, however, the competition was all squared up. Then, just nineteen moves into this sixth game, the inexplicable happened. Caught on the back foot, Kasparov spat the dummy and conceded. When pressed for an explanation, he stated simply: “I lost my fighting spirit… I’m a human being. When I see something that is well beyond my understanding, I’m afraid.”
It later emerged that some of Deep Blue’s most creative, unpredictable, and effective moves were the result of a bug. What Kasparov was most fearful of, what he perceived to be an advanced, unfathomable intelligence, was actually a software glitch. He lost because in the face of uncertainty he began to doubt and second-guess himself in a way that computers do not. And that is indeed the stuff of tragedy, Greek or otherwise.
“I’m afraid” is also the most famous line uttered by HAL, the fictional supercomputer in Stanley Kubrick’s 2001: A Space Odyssey, which for many acts as a cautionary tale about the dangers of artificial intelligence and an over-reliance on computers. Showcasing a poignant uncertainty, the “I’m afraid” moment is HAL’s most human, but arguably the film’s scariest. And here’s the rub with AI: it’s frightening because we trust it too much, despite it bearing the frailties, foibles, and fallibilities of the humans who created it. Or, more frightening, because it might transcend these shortcomings and thus open up the chance that we’ll be usurped by this technology, that we’ll literally be traded in for a better model. At which point it’s check and mate.
While 2001 remains the stuff of science fiction, there is an increasing convergence between the way people and computers think. In my first column, I discussed self-proclaimed cyborgs who embed technology in their bodies in order to augment and transcend traditional human experience. At the same time as such “body hacking” is becoming increasingly popular (in a month’s time the first Cybathlon, a kind of body hack Olympics, will kick off in Zurich), modern computing is increasingly looking to the human brain for inspiration.
Take the rise of artificial neural networks. These complex systems comprise a multitude of connected computer nodes which simulate the brain’s densely interconnected cells and processes. Capable of responding dynamically to external inputs, neural networks are able to learn for themselves via a series of cascading, interactive processes, and have been put to use recognising handwriting, compressing images, and even predicting the stock market. In your social media feeds you might have seen pictures altered by the popular Prisma app. Able to mimic a host of artistic styles, from Picasso to Lichtenstein, Prisma uses neural networks to intelligently alter the source image, making it more than just a simple photo filter effect. And those annoying CAPTCHA puzzles currently used to determine whether you’re a human or a robot? Before long they won’t be up to the task if said robots can make use of neural networks.
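For the curious, that “learning for themselves” can be made concrete with a toy. The sketch below (plain Python, and nothing like the scale of Prisma or Deep Blue) shows a single artificial neuron repeatedly nudging its connection weights until it reproduces the logical OR pattern from examples; real networks chain thousands of such units into layers.

```python
import math
import random

# A single artificial "neuron": a weighted sum of inputs squashed by a
# sigmoid. It learns by nudging its weights to shrink its prediction error.

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training examples: the OR truth table, as (inputs, target) pairs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, bias = random.random(), random.random(), random.random()
rate = 1.0  # learning rate: how big each corrective nudge is

for _ in range(5000):  # repeated passes let the weights settle
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + bias)
        # Move each weight in the direction that reduces the error.
        grad = (target - out) * out * (1 - out)
        w1 += rate * grad * x1
        w2 += rate * grad * x2
        bias += rate * grad

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + bias)) for (x1, x2), _ in data]
print(predictions)  # the neuron has learned OR: [0, 1, 1, 1]
```

No one programmed the OR rule into the neuron; it emerged from examples and feedback, which is the essence of how far larger networks pick up handwriting or artistic style.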
If the gap between computational and human thinking is narrowing, what does that mean for the future of education? My opening quote from Mitch Resnick reminds us that it’s important to ask ourselves what we can offer a world saturated with readily accessible information, ubiquitous automation, and algorithms of increasing sophistication and complexity. It seems to me the uniquely human capabilities he mentions are becoming an ever narrower subset, and that’s precisely why the creative and critical thinking that tertiary education seeks to foster is crucial: these are the last things machines will be able to mimic.