Friday, December 05, 2025 | 10:28 AM IST
Business Standard

AI experts divided over Apple's research on large reasoning model accuracy

A token budget for large language models (LLMs) refers to the practice of setting a limit on the number of tokens an LLM can use for a specific task
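In practical terms, a token budget is simply a cap on how many tokens a model may consume while working through a task. The sketch below is a hypothetical illustration, not any vendor's actual API: `generate_with_budget`, `reason_step`, and `token_budget` are names invented here to show the mechanic of cutting off generation once the cap is reached.

```python
def generate_with_budget(prompt_tokens, reason_step, token_budget=256):
    """Simulate an LLM consuming a fixed token budget while reasoning.

    reason_step: a callable returning the next chunk of output tokens
    (an empty list means the model has finished on its own).
    Generation stops once the budget is exhausted. This is a
    hypothetical sketch, not any specific provider's interface.
    """
    used = len(prompt_tokens)  # the prompt itself counts against the budget
    output = []
    while used < token_budget:
        chunk = reason_step()
        if not chunk:
            break
        # Truncate the chunk if it would overshoot the remaining budget
        remaining = token_budget - used
        output.extend(chunk[:remaining])
        used += min(len(chunk), remaining)
    return output

# Usage: each "reasoning step" emits 10 tokens; a 50-token budget
# minus a 5-token prompt leaves room for 45 tokens of output.
steps = iter([["t"] * 10] * 100)
out = generate_with_budget(
    prompt_tokens=["p"] * 5,
    reason_step=lambda: next(steps, []),
    token_budget=50,
)
print(len(out))  # 45
```

The cap forces a trade-off: a tighter budget saves cost and latency, but, as the research discussed below suggests, may leave too little room for the extended reasoning that harder tasks demand.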


Apple’s observations in the paper may explain why the iPhone maker has been slow to embed AI across its products and operating systems

Avik Das Bengaluru


A recent study by tech giant Apple claiming that the accuracy of frontier large reasoning models (LRMs) declines as task complexity increases, and eventually collapses altogether, has led to differing views among experts in the artificial intelligence (AI) world.
 
The paper, titled ‘The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity’, was published by Apple last week.
 
Apple said in the paper that it conducted experiments across diverse puzzles, which show that such LRMs face a complete accuracy collapse beyond certain complexity thresholds. While their reasoning efforts increase with the complexity