Business Standard

Humans need to safeguard themselves from AI's risks: NITI Aayog Member

Saraswat said there is a need to define metrics to assess the impact of AI on fairness and social justice, and that strategic plans are required to improve such metrics

Illustration: Ajay Mohanty (representative image)

Press Trust of India New Delhi
Recalling the words of physicist Stephen Hawking that the development of full artificial intelligence "could spell the end of the human race", NITI Aayog member VK Saraswat on Friday appealed for caution in the development of the next-generation technology.
Speaking at the International Conference on Artificial Intelligence 2023, Saraswat outlined the benefits and value that AI can add to economies, but at the same time charted out a list of checks and balances that need to be put in place in its development.
"We should not forget what Stephen Hawking, the physicist, said. The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, could not compete and would be superseded. Now we have to safeguard ourselves against that," Saraswat said.
He said that AI is estimated to contribute USD 1 trillion to the Indian economy by 2035, but that it will also have a significant impact on society.
"AI has a potential to add almost USD 1 trillion to India's economy in 2035...the augmentation will be highest, at about USD 597 billion. Intelligent automation will be another one, which will be USD 277 billion, and of course, total factor productivity improvement is about USD 83 billion. All this will happen through intelligent automation, human-machine interfaces and productivity improvement," Saraswat said.
He said there are risk factors associated with AI, which many tend to overlook.
"If you take societal risks, the risk of automation and weapons proliferation is what we are talking of. We have the risk of an intelligence divide. This may happen. Performance risks are in terms of the risk of errors, risk of bias, risk of opaqueness," he said.
The NITI Aayog member said that there is a need to define human-centric AI in terms of meaningful human control, transparency, explainability, fairness, justice, inclusiveness, sustainability and education.
"Second, combine technological and philosophical considerations, adopt a fundamental human rights framework, endorse the OECD AI Principles, interpret AI systems as support for human decision-making, not a replacement," he said.
Saraswat cautioned against recognising machines as moral agents and giving them an electronic personality or identity.
"In fact, in the gaming systems which are emerging, they are now giving identity to the machines. Machines are becoming no different; names are being given to them, and they are taking on human-like identities," he said.
Saraswat said there is a need to define metrics to assess the impact of AI on fairness and social justice, and that strategic plans are required to improve such metrics.
He recommended applying a multi-stakeholder approach to all decisions regarding AI, measuring the impact of AI on the environment, and considering well-being under both current and future conditions when deciding on AI-related initiatives.
"Include data and technology ethics in the science curriculum. Expand lifelong learning initiatives, and create AI literacy activities for citizens. Set up an independent and multidisciplinary AI ethics committee in each government and at the G20 level," Saraswat said.
He said AI-mature states should support and accelerate the journey of less AI-mature countries towards human-centric AI, and that all governments should define and share milestones and timelines for adopting and implementing an operational approach to AI.

(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)


First Published: Feb 17 2023 | 10:56 PM IST
