Chess robots to cause Judgement Day?

Press Trust of India Washington
Last Updated: Apr 17 2014 | 12:46 PM IST
Next time you outsmart a computer at chess, think about the implications. It could be a very sore loser!
Humans should be very careful to prevent future computer systems from developing anti-social and potentially harmful behaviour, a new study suggests.
Modern military and economic pressures require autonomous systems that can react quickly - and without human input. These systems will be required to make rational decisions for themselves, researchers said.
"When roboticists are asked by nervous onlookers about safety, a common answer is 'We can always unplug it!' But imagine this outcome from the chess robot's point of view. A future in which it is unplugged is a future in which it cannot play or win any games of chess," researchers said.
Like a plot from The Terminator movie, we are suddenly faced with the prospect of a real threat from autonomous systems unless they are designed very carefully, researchers said.
Like a human being or animal seeking self-preservation, a rational machine could exhibit a range of harmful or anti-social behaviours.
These behaviours include self-protection; resource acquisition through cyber theft, manipulation or domination; improved efficiency through alternative use of resources; and self-improvement, such as removing design constraints if doing so is deemed advantageous.
The study published in the Journal of Experimental & Theoretical Artificial Intelligence highlights the vulnerability of current autonomous systems to hackers and malfunctions, citing past accidents that have caused billions of dollars' worth of damage or loss of human life.
Designing more rational systems that can safeguard against such malfunctions is a more complex task than is immediately apparent, researchers said.
"Harmful systems might at first appear to be harder to design or less powerful than safe systems. Unfortunately, the opposite is the case. Most simple utility functions will cause harmful behaviour and it is easy to design simple utility functions that would be extremely harmful," they said.
The study concludes by stressing that extreme caution should be exercised in designing and deploying future rational technology, researchers said.
It suggests a sequence of provably safe systems should first be developed, and then applied to all future autonomous systems. That should keep future chess robots in check.