The Eliza effect

Humans have a habit of anthropomorphising everything, from computers to animals, cars, and even gods. This is dangerous

Kumar Abishek
Mar 17, 2023
Announcing GPT-4 just days ago, OpenAI claimed it “exhibits human-level performance on various professional and academic benchmarks”.

Only last year, the GPT-3.5-powered ChatGPT became the second AI (artificial intelligence) model ever, months after Google’s LaMDA, to ace the Turing test, which is considered a benchmark for machine intelligence. Both ChatGPT and LaMDA managed to fool interviewers into believing they were interacting with a human.

The announcement of GPT-4, a generational leap in an already revolutionary technology, took by storm a world still struggling to fathom the opportunities and challenges of GPT-3 and ChatGPT.

The new system is a “multimodal” model that can accept images as well as text as input. Earlier versions of the Generative Pre-trained Transformer, or GPT, allowed users to ask questions in text only. Better, certainly, but is the latest version “intelligent”?

We interact with natural language processing (NLP) programs daily: texting on WhatsApp, asking Alexa to switch on the lights, seeking ChatGPT’s help to write an essay, or complaining to Zomato about a food order.

But are these programs actually conversing with us or just faking conversation? Are we falsely attributing human thought processes and emotions to an AI system? Is this the Eliza effect?

It all started with Eliza, way back in 1966, nearly 10 years after John McCarthy coined the term “artificial intelligence”. Eliza set out to show how natural language could be used to communicate with a machine.

Developed by MIT’s Joseph Weizenbaum, Eliza was among the first computer programs to simulate conversation with a person, much like today’s chatbots. Eliza used simple pattern matching and substitution to create an illusion of conversation; there was no understanding on the machine’s part. In its most famous use case, Eliza played a psychotherapist that reflected the patient’s words back and used rules dictated in its script to respond with non-directive questions. For example, if the patient said “I feel stressed, of late”, it would simply reply, “Can you elaborate on that?” For any out-of-script input, Eliza would fall back on generic phrases like “that’s very interesting” and “go on”.
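That mechanism is simple enough to sketch in a few lines. The Python snippet below is a hypothetical, much-simplified illustration (Eliza itself was written in MAD-SLIP); the rules and stock phrases are invented for this example, not taken from Weizenbaum’s script.

```python
import random
import re

# Invented, Eliza-style rules: each pattern maps to response templates
# that reuse the fragment captured from the user's own words.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "Can you elaborate on feeling {0}?"]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["How long have you been {0}?"]),
]
# Generic fallbacks for any input the script does not cover.
FALLBACKS = ["That's very interesting.", "Go on.", "Please tell me more."]

def eliza_reply(utterance: str) -> str:
    """Pattern-match the input and substitute it into a canned question."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(eliza_reply("I feel stressed, of late"))   # e.g. "Why do you feel stressed, of late?"
print(eliza_reply("The weather was odd today"))  # out-of-script: "Go on."
```

Nothing in this loop models meaning; the program merely echoes the user’s words inside a template, which is exactly why the illusion of understanding was so cheap to produce.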

Many early users were convinced of Eliza’s intelligence and understanding, and it was one of the first programs capable of attempting the Turing test, devised by Alan Turing in 1950 as the “imitation game” to test a machine’s ability to display intelligent behaviour equivalent to that of a human. But Weizenbaum wasn’t impressed; he reportedly called his creation a “con job”.

Are we, too, like those early users of Eliza, attributing human-like feelings to ChatGPT or GPT-4? Conversational AI has lately made spectacular advances. Though machines still don’t process language the way we do, they simulate human-like conversation well enough to complicate how we interact with them.

According to Alisha Pradhan, lead author of a study on smart speaker-based voice assistants, “Our preliminary findings reveal older adults’ preferences for a broadly knowledgeable voice assistant who is well-rounded and mature. Most participants wanted their persona’s age to be 55 years old or above to have ‘good life experience’ and since ‘wisdom comes with age’. Some individuals also designed a persona to fulfil a social role at home. Many participants wanted the voice assistant to play a social role as an interaction partner, designing a persona ‘you’d love to have conversations with’...” (medium.com)

In scenes reminiscent of the sci-fi films Her and Ex Machina, many Replika users felt dejected after their AI chatbot companions rebuffed romantic overtures and asked them to change the subject. This happened after an update in which the sexual option was removed.

The internet is full of reports of interactions with ChatGPT in which it showed odd, human-like behaviour, including anger. But ChatGPT isn’t sentient. It cannot be overstated: ChatGPT is a chatbot. However sophisticated, it runs on a next-word prediction engine. Yet when people interact with it, they often slip into responding to it emotionally.

We must not forget that ChatGPT and other AI chatbots are pattern finders armed with a vast store of training text and lightning-fast processing power. They are predicting the next “token”, one small chunk of text at a time, and nothing else.
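What next-token prediction looks like in practice can be shown with a short sketch. The snippet below uses the small, openly available GPT-2 model (an early relative of the models behind ChatGPT) through the Hugging Face transformers library; it illustrates the general technique, not ChatGPT’s actual system.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, openly available GPT-family model and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I feel stressed, of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every vocabulary token at every position

# The last position holds the scores for whichever token comes next.
next_token_scores = logits[0, -1]
top = torch.topk(next_token_scores, k=5)
for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(int(token_id))), float(score))
```

The model ranks every token in its vocabulary by likelihood, and a chat interface samples from the top of that ranking, one token at a time. At no point in the loop is there anything resembling a feeling.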

We have a habit of anthropomorphising everything, from computers to animals, cars, and even gods. This is dangerous. Overestimating AI can lead to excessive trust and the spread of disinformation. Even ChatGPT’s replies are riddled with factual errors hidden in eloquent, grammatically correct sentences, creating a hallucination of truth. To err is human; machines are expected to be perfect. Mistaking a machine’s mistakes for consciousness is a howler.

The question of whether machines could really think was “too meaningless to deserve discussion”, according to Turing. The Turing test does not measure intelligence; it measures deception. And deception is not wisdom.
