Devangshu Datta: At the dawn of the AI age

As AIs surpass humans and improve themselves continuously, the gap in functional intelligence would widen, leaving humans ever further behind

Devangshu Datta New Delhi
Last Updated : Nov 27 2015 | 10:01 PM IST
What happens to society at the point when computers - artificial intelligences - become more intelligent than their creators? The question is a staple of science fiction. Science fiction writers and Artificial Intelligence (AI) researchers - there is a lot of overlap between the two groups - have been speculating about it for decades. The moment is referred to as "The Singularity", and there is even an annual Singularity Conference.

The word "singularity" is borrowed from physics - it is the point within a black hole where the normal laws of physics cease to operate. In the so-called Standard Model, the Big Bang was preceded by a singularity.

In physics it is, by definition, impossible to know what happens within a singularity, since the normal laws cannot be extrapolated. In social terms, a similarly impossible-to-extrapolate situation might arise as and when AI surpasses natural intelligence. The normal laws of social science are derived from situations in which human beings are the dominant species because of their collective intelligence and tool-using capacity. What happens if that basic condition is altered?

Before getting into this debate, a willing suspension of disbelief is required: it must be assumed that this singularity will occur. There are actually no guarantees of this, even if most members of the AI community believe the singularity is inevitable. But given advances in computer intelligence, assuming it will occur is not illogical.

Some put a date to it. Ray Kurzweil, the computer scientist who pioneered optical character recognition, believes the singularity will be achieved by 2045, and that the Turing Test (in which a computer cannot be distinguished from a human in conversation) will be passed earlier, by 2029.

As and when the singularity occurs, it may trigger a population explosion of super-intelligent AIs, since an AI can be replicated very quickly simply by copying its code to a new machine. By definition, superior intelligences would also be able to improve themselves: they would be self-aware and capable of designing their own learning programs.

AIs may or may not work in tandem. Their goals may be very different from those of their creators. Perhaps they could solve problems human beings find intractable - figuring out cures for diseases such as AIDS or cancer, say, or tackling global warming efficiently. Or they might emulate Skynet and try to eliminate humans, designing ever more efficient weapons of mass destruction along the way. The laws of economics might break down. The conventions of political systems and of international relations may be radically altered, or simply become obsolete.

Nobody has answers, and several of the brightest and best-informed people around have expressed public disquiet at some of the possibilities. Stephen Hawking, Elon Musk and Bill Gates, to name three, have discussed the dystopic aspects of the singularity.

Dr Hawking feels it could be a direct threat to the existence of human beings as a species, and Mr Musk concurs. This is scarcely irrational, given that a great deal of AI research is specifically targeted at developing better weapons systems, such as drones and other weapons with autonomous capability.

Another set of intriguing questions arises for theologians and ethicists. As AIs surpass humans and improve themselves continuously, the gap in functional intelligence would widen, leaving humans ever further behind.

Would they necessarily share their insights with human beings and continue to behave like devoted servants, in a sort of Jeeves and Wooster relationship? Or would they treat humans like favoured pets, intelligent enough to be housebroken and taught a few commands? Would a human being have the right to switch off, or permanently format, an AI that could out-think him, or would this be rated a crime equivalent to murder?

Finally, consider a situation in which AIs have not only achieved singularity but have improved themselves for millennia. An apocryphal situation related by Dr Hawking may arise. A super-computer is asked, "Is there a God?" It responds, "There is now", even as it induces a short-circuit that ensures it can never be switched off.

Twitter: @devangshudatta

Disclaimer: These are personal views of the writer. They do not necessarily reflect the opinion of www.business-standard.com or the Business Standard newspaper

First Published: Nov 27 2015 | 9:46 PM IST