OpenAI is rolling out its new artificial intelligence (AI) models GPT-4.1 and GPT-4.1 mini in ChatGPT. The company launched these models via its Application Programming Interface (API) alongside GPT-4.1 nano last month, and two of the three are now being integrated into ChatGPT. OpenAI is also removing GPT-4o mini from ChatGPT to make way for the new model.
These AI models are aimed at software engineers and IT businesses. As successors to the GPT-4o series, the 4.1-series models bring improvements in coding, instruction following, and long-context comprehension.
Highlighting these improvements in a blog post, OpenAI wrote: “These models outperform GPT-4o and GPT-4o mini across the board, with major gains in coding and instruction following. They also have larger context windows—supporting up to 1 million tokens of context—and are able to better use that context with improved long-context comprehension. They feature a refreshed knowledge cut-off of June 2024.”
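To make the 1-million-token figure concrete, here is a minimal sketch of checking whether a piece of text fits in such a context window. The roughly-4-characters-per-token ratio is a common rule of thumb for English text, not an official OpenAI figure; accurate counts would come from a real tokenizer such as the `tiktoken` library.

```python
CONTEXT_WINDOW_TOKENS = 1_000_000  # context window cited for the GPT-4.1 family
CHARS_PER_TOKEN = 4  # rough heuristic for English text (assumption)

def estimate_tokens(text: str) -> int:
    """Crudely estimate the token count of a piece of text."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(text: str, budget: int = CONTEXT_WINDOW_TOKENS) -> bool:
    """Return True if the estimated token count fits within the window."""
    return estimate_tokens(text) <= budget

# Even a few megabytes of source code would fit comfortably under this estimate.
print(fits_in_context("def add(a, b):\n    return a + b\n"))
```

Under this heuristic, a 1-million-token window corresponds to roughly four million characters of text, which is why OpenAI emphasises long-context comprehension alongside the raw window size.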
OpenAI GPT 4.1 model family’s availability
In a post on X, OpenAI confirmed that GPT-4.1 is now accessible to users on paid ChatGPT subscriptions, including Plus, Pro, and Team, via the model picker menu.
Free-tier users will not have access, while Enterprise and Edu accounts are expected to receive the update in the next few weeks.
How the OpenAI GPT-4.1 model family improves on its predecessors
OpenAI has detailed major improvements in its GPT-4.1 model family, noting that certain variants outperform GPT-4o in several critical benchmarks. According to the company’s blog, “GPT-4.1 is significantly better than GPT-4o at a variety of coding tasks, including agentically solving coding tasks, frontend coding, making fewer extraneous edits, following diff formats reliably, ensuring consistent tool usage, and more.”
The smaller version of the model, GPT-4.1 mini, also shows promising results. OpenAI stated that “GPT-4.1 mini is a significant leap in small model performance, even beating GPT-4o in many benchmarks. It matches or exceeds GPT-4o in intelligence evals while reducing latency by nearly half and reducing cost by 83 per cent.”
For use cases that require minimal delay, the company is positioning GPT-4.1 nano (available only in the API) as its most efficient option. OpenAI described it as its “fastest and cheapest model available,” highlighting its compact size and extended context window of 1 million tokens. The model is claimed to score 80.1 per cent on the MMLU benchmark, 50.3 per cent on GPQA, and 9.8 per cent on Aider polyglot coding—figures that surpass even GPT-4o mini. OpenAI noted that GPT-4.1 nano is “ideal for tasks like classification or autocompletion.”
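The trade-offs above suggest a simple routing rule for developers choosing between the three variants. The sketch below is a hypothetical helper, not anything OpenAI ships; the lowercase model identifiers (`gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`) follow OpenAI’s usual API naming convention and should be verified against the current model list before use.

```python
def pick_model(task: str, latency_sensitive: bool = False) -> str:
    """Pick a GPT-4.1-family model name based on the trade-offs OpenAI describes."""
    if task in ("classification", "autocompletion"):
        # OpenAI positions GPT-4.1 nano (API only) for these low-latency tasks.
        return "gpt-4.1-nano"
    if latency_sensitive:
        # GPT-4.1 mini roughly halves latency and cuts cost by 83 per cent vs GPT-4o.
        return "gpt-4.1-mini"
    # Full GPT-4.1 for coding and other demanding, long-context work.
    return "gpt-4.1"

print(pick_model("coding"))          # → gpt-4.1
print(pick_model("autocompletion"))  # → gpt-4.1-nano
```

In practice the chosen name would be passed as the `model` parameter of an API request; the routing logic itself is just an illustration of the positioning described in the article.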