Good, bad and intelligent

ChatGPT is at the cutting edge of AI research. Its use cases are endless. But it is also not hard to see how AI could potentially cause a global catastrophe.

Devangshu Datta
4 min read Last Updated : Dec 30 2022 | 10:25 PM IST
ChatGPT, an AI created by OpenAI, has made huge waves. There are claims it could supersede conventional search engines. Unlike a search engine, ChatGPT doesn’t simply list links when a search term (or a natural language question) is entered.

Instead, it produces a verbal précis of what it considers relevant information, drawn from what it considers pertinent links. The quality of that can vary a lot, of course. One reason why this is popular is familiarity: The ChatGPT mode of supplying information is akin to that of a schoolteacher, albeit one who isn’t very discriminating. A second reason is simply that this provides a filter for search results, saving searchers the tedium of manually reading through links.

Natural language processing (NLP), as deployed here, is at the cutting edge of AI research. The uses of good NLP are endless, and it is among the more challenging areas of research. Even the brightest and best humans speak and write with some lack of coherence. The information in human speech and writing is scattered and unstructured, complicated by context, and often larded with humour, irony and sarcasm.

Being able to make sense of human language, extract relevant information and, above all, speak and write in the same style as humans is very, very hard. To do it in a narrow way, as chatbots do, is hard enough. For example, a realtor or an automobile agency may use a chatbot to discover the needs of clients. This is narrow since the topics will be, say, one bedroom or two; manual transmission or auto; preferred budget range and so on.

Doing NLP across a broad spectrum of subjects, and doing it well enough to make humans believe they are interacting with humans, is the infamous Turing Test. (Confusingly, the abbreviation NLP is shared with neuro-linguistic programming, an unrelated approach originally developed for psychotherapy and now considered of dubious value in treating mental health issues.)

But NLP has other endless possibilities, as it gets better. Using NLP to improve voice commands and responses in critical situations like giving instructions to a car is something a lot of autonomous vehicle research is focussed on.

Another benign use may be the ability to diagnose physical health, by asking questions and parsing natural language responses to figure out symptoms the way human physicians do. NLP could give overworked health workers a big boost as a first filter. Other “secretarial” duties or use-cases, such as writing up the minutes of corporate meetings, or churning out comprehensible technical manuals are easy to think up.
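The “first filter” idea can be illustrated with a toy sketch. Everything here is invented for the example — the keyword map, the function name, the symptoms — and real clinical NLP is vastly more sophisticated than substring matching; this only shows the shape of mapping free-text answers to candidate symptoms.

```python
# Toy symptom-triage sketch. The keyword-to-symptom map below is
# invented for illustration; a real system would use trained language
# models, not substring matching.
SYMPTOM_KEYWORDS = {
    "fever": ["fever", "temperature", "feverish"],
    "cough": ["cough", "coughing"],
    "fatigue": ["tired", "exhausted", "fatigue"],
}

def extract_symptoms(response: str) -> list[str]:
    """Return the symptoms whose keywords appear in a free-text answer."""
    text = response.lower()
    return [symptom for symptom, keywords in SYMPTOM_KEYWORDS.items()
            if any(k in text for k in keywords)]
```

A health worker could then review only the flagged cases, which is the sense in which NLP acts as a filter rather than a diagnostician.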

Using NLP to run through millions of social media and mainstream media statements to understand attitude is a more nuanced use case. For example, let’s say there’s a proposed change to tax laws, or a proposal to lift prohibition in a given state. Policymakers could gauge the mood by using NLP to analyse high volumes of commentary.
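At its crudest, gauging the mood reduces to classifying each comment and counting. The sketch below is a deliberately naive lexicon-based version — the word lists and function names are invented for illustration, and production sentiment analysis would use trained models rather than keyword counts — but it shows the aggregation step the article describes.

```python
# Naive lexicon-based mood gauge. Word lists are invented for the
# example; real sentiment analysis uses trained language models.
POSITIVE = {"support", "welcome", "good", "benefit", "agree"}
NEGATIVE = {"oppose", "harmful", "bad", "unfair", "disagree"}

def score(comment: str) -> int:
    """Return +1, -1 or 0 for one comment based on keyword counts."""
    words = [w.strip(".,!?") for w in comment.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos > neg) - (pos < neg)

def gauge_mood(comments: list[str]) -> dict[str, int]:
    """Aggregate per-comment scores into a rough for/against tally."""
    scores = [score(c) for c in comments]
    return {"for": scores.count(1),
            "against": scores.count(-1),
            "neutral": scores.count(0)}
```

Run over millions of comments, even a tally this crude gives policymakers a directional reading; the nuance lies in doing the per-comment classification well.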

Unfortunately, less benign uses could also arise if NLP’s ability to “understand” social media interaction is extended to manipulating opinion. NLP could be a potent tool for influencing social media. It could also be a great phishing and social-engineering tool.

Another issue is bias. Language models, like all AI, have to be trained on data. There are multiple ways of training, but all of them involve masses of data. In NLP, that data is generally drawn from the Internet and other publicly available sources of verbal content.

Unfortunately, the internet (like the real world) contains much wrong information, as well as fake news and biases. If NLP is trained using racist content, for example, it will amplify opinions that people of a certain colour, or religion, are superior to others. If it is trained on content asserting women are bad at a certain task, it will also assert this is true. Similar problems arise with subjects like climate change and vaccine efficacy.

Suppose NLP is trained on fake news, or weaponised to push policymakers the wrong way on climate change, or towards military attacks on neighbouring countries. In a recent survey, a third of AI researchers polled (from a total sample of 327) believed AI could potentially cause a global catastrophe. It’s not hard to see this happening.


Disclaimer: These are personal views of the writer. They do not necessarily reflect the opinion of www.business-standard.com or the Business Standard newspaper

Topics: Technology, Artificial Intelligence
