GPT-3, the popular AI-powered tool, reasons about as well as college undergraduate students, scientists have found.
The artificial intelligence large language model (LLM) was asked to solve reasoning problems that were typical of intelligence tests and standardised tests such as the SAT, used by colleges and universities in the US and other countries to make admissions decisions.
The researchers from the University of California - Los Angeles (UCLA), US, asked GPT-3 to predict the next shape in a complicated arrangement of shapes. They also asked the AI to answer SAT analogy questions, all the while ensuring that the AI had never encountered these questions before.
They also asked 40 UCLA undergraduate students to solve the same problems.
In the shape prediction test, GPT-3 solved 80 per cent of the problems correctly, placing it between the humans' average score of just below 60 per cent and their highest scores.
"Surprisingly, not only did GPT-3 do about as well as humans but it made similar mistakes as well," said UCLA psychology professor Hongjing Lu, senior author of the study published in the journal Nature Human Behaviour.
In solving SAT analogies, the AI tool was found to perform better than the humans' average score. Analogical reasoning is solving never-encountered problems by comparing them to familiar ones and extending those solutions to the new ones.
The questions asked test-takers to select pairs of words that share the same type of relationships. For example, in the problem "'Love' is to 'hate' as 'rich' is to which word?," the solution would be "poor".
However, in solving analogies based on short stories, the AI did less well than students. These problems involved reading one passage and then identifying a different story that conveyed the same meaning.
"Language learning models are just trying to do word prediction so we're surprised they can do reasoning," Lu said. "Over the past two years, the technology has taken a big jump from its previous incarnations."
Without access to GPT-3's inner workings, which are guarded by its creator, OpenAI, the researchers said they were not sure how its reasoning abilities work, and whether LLMs are actually beginning to "think" like humans or are doing something entirely different that merely mimics human thought.
This is a question they said they hope to explore.
"GPT-3 might be kind of thinking like a human. But on the other hand, people did not learn by ingesting the entire internet, so the training method is completely different.
"We'd like to know if it's really doing it the way people do, or if it's something brand new - a real artificial intelligence - which would be amazing in its own right," said UCLA psychology professor Keith Holyoak, a co-author of the study.
(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)