Organisations are still not confident about implementing industrial artificial intelligence (AI) solutions which can be embedded in the workflow of applications and help in the day-to-day decision-making process, according to a recent survey conducted by global consulting major PwC.
An overwhelming majority of decision-makers worldwide admitted that they may not have robust tools or processes for ensuring the reliability of their AI solutions. Notably, only 10 per cent of Indian respondents were confident about the reliability of their AI applications, according to the findings of the survey, which covered over 1,000 business decision-makers from India and other regions.
Nearly 62 per cent of Indian corporate chiefs say their organisations have implemented AI in some form. The worrying part, however, is that 53 per cent of Indian respondents admit they have no formal approach to identifying AI risks, significantly more than the 36 per cent of their global counterparts who say the same.
In addition, 29 per cent of the respondents from India feel that they have no tools to assess security flaws in their AI systems.
"This suggests that the enthusiasm to implement AI projects is very likely to run into headwinds unless organisations adopt a robust framework for using AI responsibly," concludes the PwC report titled 'With AI's great power comes great responsibility'. The comprehensive study was conducted between May and September.
The respondents spanned industries such as technology, media and telecom, financial services, professional services, health, industrial products, consumer markets, government and utilities. The survey also covered various business functions, including IT, finance, operations, marketing, customer service, sales, human resources, legal, and risk and compliance.
AI has the potential to solve complex problems effectively at scale. However, badly designed AI can cause more harm than good. The report highlights the need to invest in building AI systems that are responsible, understandable and ethical, thereby ensuring customer trust.
"It is encouraging to see Indian organisations adopt or willing to adopt AI significantly in the coming few years," said Deepankar Sanwalka, Leader of Advisory at PwC India.
"However, to scale AI initiatives, organisations will have to ensure these solutions are ethically sound, compliant with all regulations and backed by a robust governance framework."
Sudipta Ghosh, Leader of Data and Analytics at PwC India, said that merely adopting AI will not yield the desired results. AI must be supported by strong performance pillars that address bias and fairness, interpretability and explainability, and robustness and security.
"Or else, the enthusiasm to implement AI projects is very likely to run into headwinds. Benefits of AI may be realised when an appropriate governance framework and dimensions are in place, and humans and machines can collaborate effectively. We need to ensure that AI acts in the interests of society at each stage of development," he said.
PwC advocates the implementation of its 'Responsible AI' framework, which can help organisations assess potential threats and mitigate foreseen or unforeseen risks. The study reiterates the need for a comprehensive Responsible AI (RAI) framework and toolkit to enable the technology's widespread adoption.
It also highlights how AI's potential can be unlocked and maximised if a structured approach is taken to addressing the associated risks. Even as they adopt AI, businesses need to ascertain what benefits it will offer while remaining aware that their operations may be vulnerable to disruption.
(This story has not been edited by Business Standard staff and is auto-generated from a syndicated feed.)