July 16, 2012 | Opinion

Science: What’s It Up To?

Domo Arigato Mister Roboto

The technological singularity is the hypothetical future emergence of greater-than-human superintelligence through technological means, such as artificial intelligence (AI). Since the capabilities of such an intelligence would be difficult for an unaided human mind to comprehend, the singularity is often described as an intellectual event horizon, beyond which events cannot be predicted or understood.

Proponents of the singularity typically argue that an “intelligence explosion”, in which superintelligences design successive generations of increasingly powerful minds, might occur very quickly and might not stop until the agent’s cognitive abilities greatly surpass those of any human.

Whether or not an intelligence explosion occurs depends on three factors. The first is the accelerating factor: each new intelligence enhancement is made possible by the previous improvement. This is not guaranteed, though; as the technology becomes more advanced, further advances become more and more complicated, possibly outweighing the advantage of increased intelligence. Secondly, each improvement must beget at least one more improvement, on average, for the explosion to continue. Finally, there is the issue of a hard upper limit on how much data can be processed per unit of matter. Absent quantum computing, the laws of physics will eventually prevent any further improvements.
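For the curious, the three factors can be captured in a toy simulation. Every number below is invented purely for illustration; this is a sketch of the shape of the argument, not a prediction.

```python
def intelligence_explosion(gain=2.0, decay=0.97, hard_limit=1e6, max_steps=1000):
    """Toy model of recursive self-improvement.

    gain       -- how much each improvement multiplies capability (factor one)
    decay      -- gain shrinks each step, as advances get harder (factor one's caveat)
    hard_limit -- physical ceiling on processing (factor three)
    The run continues only while gain > 1, i.e. while each improvement
    begets more than one further improvement on average (factor two).
    """
    level, steps = 1.0, 0
    while gain > 1.0 and level * gain <= hard_limit and steps < max_steps:
        level *= gain   # apply the improvement
        gain *= decay   # further advances become more complicated
        steps += 1
    return level, steps
```

With these made-up numbers the run stalls once the per-step gain sags below one; set `decay=1.0` (advances never get harder) and it instead runs straight into the hard physical limit.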

I could go into the various predictions about software and hardware improvements over time, but they are complex and heavily debated. What you really want to hear about is the implications for society.

Science fiction writers have been presenting a vision of this future for decades. A particularly good example is Isaac Asimov’s I, Robot (the book, not the Will Smith film). The book details the development of positronic robots and the plight of robopsychologist Susan Calvin as she struggles to understand the increasing complexity of robotic brains. In Asimov’s world, robots are kept from turning on their masters by the Three Laws of Robotics:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
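As a programmer’s aside, the Three Laws read like an ordered veto chain, and can be sketched as one. The function and the action fields below are my own invention for illustration, not anything from Asimov:

```python
def permitted(action):
    """Check a proposed action (a dict of hypothetical flags) against
    the Three Laws, in priority order."""
    # First Law: no harming a human, by action or by inaction.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: obey orders, except where they conflict with the First
    # Law (a conflicting order was already vetoed above).
    if action.get("disobeys_order"):
        return False
    # Third Law: self-preservation, subordinate to the first two; the robot
    # may endanger itself only when ordered to or to save a human.
    if action.get("self_destructive") and not (
            action.get("ordered") or action.get("saves_human")):
        return False
    return True
```

The ordering does the moral work: each law can only veto what the laws above it have already allowed.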

Spoiler alert: the robots end up passively ruling the world, controlling all means of production and employment, and creating a bizarre utopia.

This can, of course, be contrasted with the Terminator and Matrix films, in which the machines become self-aware and enslave or exterminate humanity. Let’s hope it doesn’t go that way.

Though rooted firmly in fiction, these works raise interesting questions about morality, robotics, and the implications of the singularity for the human race. Are you ready for the robo-revolution?



