Apple promised a major overhaul of its digital assistant Siri at its Worldwide Developers Conference (WWDC) 2024 under the Apple Intelligence banner. The new Siri was meant to be more personal, more aware of what’s on the screen, and capable of taking actions inside apps. Nearly two years later, most of those features are still missing.
While an official timeline has not been announced, Apple confirmed earlier this month that it will use Google’s Gemini as the underlying technology for its foundation models, which will power the next generation of Siri. The change is expected to roll out later this year. On that note, let’s recap the delayed Siri features and what could be coming with Gemini’s help.
Delayed Siri features
At WWDC 2024, Apple said Siri would enter “a new era” with Apple Intelligence. The company outlined several major upgrades.
Personal context understanding
Apple said Siri would be able to use data from emails, messages, photos, calendar events, and files stored on the device to give more context-aware responses. For example, if a user needed their driving licence number while filling in a form, Siri could pull it from a saved photo.
App actions for Siri
The redesigned Siri was meant to perform tasks inside apps without users having to open them. For example, users could ask Siri to find a specific photo, edit it, and save it to a folder using only voice commands.
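For developers, the existing route for exposing actions like this to Siri is Apple’s App Intents framework. The sketch below is illustrative only, not Apple’s internal implementation: the intent, its parameters, and the dialog are invented for this example, though the framework types are real.

```swift
import AppIntents

// Illustrative sketch: apps expose actions to Siri through Apple's
// App Intents framework. The intent and parameter names here are
// hypothetical; only the framework types are real.
struct SaveEditedPhotoIntent: AppIntent {
    static var title: LocalizedStringResource = "Save Edited Photo"

    @Parameter(title: "Photo Name")
    var photoName: String

    @Parameter(title: "Destination Folder")
    var folderName: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would locate the photo, apply the edit,
        // and write the result to the requested folder here.
        return .result(dialog: "Saved \(photoName) to \(folderName).")
    }
}
```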
On-screen awareness for Siri
Siri was also supposed to understand what was currently visible on the screen. If someone sent a new address in a message, users could say “Save this address,” and Siri would update the contact automatically.
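On the app side, this kind of awareness depends on apps describing what is currently on screen in a form the system can read. A minimal, hypothetical sketch using Apple’s App Intents entity types might look like the following; the address entity and its query are invented for illustration.

```swift
import AppIntents

// Hypothetical sketch of an app describing on-screen content to the
// system. AppEntity and EntityQuery are real App Intents types; the
// address entity and query are invented for illustration.
struct AddressEntity: AppEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Address"
    static var defaultQuery = AddressQuery()

    var id: UUID
    var text: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(text)")
    }
}

struct AddressQuery: EntityQuery {
    func entities(for identifiers: [UUID]) async throws -> [AddressEntity] {
        // A real app would look up the addresses currently shown on screen.
        return []
    }
}
```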
Why the AI-powered Siri features were delayed
The revamped Siri was expected to arrive during the iOS 18 cycle, likely around iOS 18.4 or iOS 18.5, but those updates shipped without it. Apple later said the system needed deeper changes to meet its quality standards, pushing the release to 2026.
Apple’s software chief, Craig Federighi, said the new Siri did not reach the level of reliability the company wanted within the timeframe it had expected. Apple decided not to ship features with high error rates, even though early versions were already working internally.
How Google Gemini could enable these features
Earlier this month, Apple and Google confirmed a multi-year partnership. Under this deal, Apple’s next generation of foundation models will be based on Google’s Gemini models and cloud technology. This includes the personalised version of Siri and the delayed Apple Intelligence features.
According to Bloomberg, the new Siri will be built around three layers: a query planning system, a knowledge retrieval engine, and a summarisation system. Reports suggest Gemini will handle planning and summarising, helping Siri break down user requests and generate clearer responses.
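To make that three-layer description concrete, here is a purely illustrative sketch of how such a pipeline could fit together. None of these types or names come from Apple or Google; the real system would be far more involved.

```swift
// Purely illustrative sketch of the three-layer design Bloomberg
// describes; every type and method name here is hypothetical.
protocol QueryPlanner {
    // Break a user request into discrete steps (e.g. "find the
    // address in Messages", then "update the contact").
    func plan(_ request: String) async -> [String]
}

protocol KnowledgeRetriever {
    // Fetch facts or on-device data relevant to one step.
    func retrieve(_ step: String) async -> [String]
}

protocol Summariser {
    // Condense the retrieved material into a spoken response.
    func summarise(_ findings: [String]) async -> String
}

// Per the reports, Gemini-backed models would handle planning and
// summarising, with knowledge retrieval as a possible third role.
struct AssistantPipeline {
    let planner: any QueryPlanner
    let retriever: any KnowledgeRetriever
    let summariser: any Summariser

    func respond(to request: String) async -> String {
        var findings: [String] = []
        for step in await planner.plan(request) {
            findings += await retriever.retrieve(step)
        }
        return await summariser.summarise(findings)
    }
}
```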
The same reports said Gemini could also be used for knowledge search. If that happens, Siri’s knowledge base could expand far beyond its current setup, where complex questions are often handed off to ChatGPT only with user permission. Instead of being an optional add-on, Gemini would become part of Siri’s core system, making the features Apple previewed in 2024 more realistic to deliver.