Apple is working on a fundamental upgrade to its AI capabilities. At the heart of this is the decision to use Google's Gemini language model as its technical foundation. The goal is to make Siri more powerful without compromising Apple's principles of privacy, control, and user experience. A recent report now provides significantly more details about the specifics of this collaboration and Apple's expectations for the new Siri.
A report by The Information offers a detailed look at the partnership between Apple and Google. While the joint announcement was deliberately vague on technical details, the report reveals how much control Apple retains over Gemini. At the same time, it clarifies why Apple is taking this step and which past problems it aims to solve.
Apple uses Gemini, but retains full control
The official announcement stated only that Gemini will form the basis of new Apple AI features and that these will continue to run on Apple devices and via Private Cloud Compute. This made clear that Google will not have access to user data. Further details were initially unavailable.
According to The Information, Apple can request adjustments to the model from Google, but handles the actual fine-tuning itself. Apple customizes Gemini so that responses and behavior match its own preferences. Gemini is therefore not integrated as an external product, but rather used as an internal tool.
No Google or Gemini branding in Siri
One of the most discussed questions was how visible Google would be within the new Siri. The report provides a clear answer. In current prototypes, the AI responses contain no references to Google or Gemini whatsoever. The entire experience appears to be a system developed purely by Apple.
This aligns with earlier assessments by Mark Gurman, who stated at the end of last year that this partnership would likely never be openly marketed. Siri is not intended to be overloaded with Google services or familiar Gemini features. Instead, a powerful engine works in the background, bringing Siri to the level users now expect, embedded within a typical Apple interface.
Better answers to general knowledge questions
Apple expects the Gemini-based Siri to respond significantly better to general knowledge questions. In the past, Siri often only provided search results or links. In the future, Siri should actually answer questions, for example with concrete figures, facts, or scientific information.
The focus is on providing answers directly, rather than referring users to external content. This brings Siri closer to modern AI assistants without losing its role as a system assistant.
Improvements in emotional support
Another point from the report concerns emotional conversations. Siri has had significant weaknesses in this area so far, for example when users expressed loneliness or discouragement. According to The Information, the Gemini-based Siri will provide more comprehensive and dialogue-oriented responses in the future, similar to ChatGPT or Gemini.
At the same time, this goal is considered sensitive. There are many documented cases in which chatbots have misjudged emotional situations. Instead of providing safety advice or referring users to real help, they have hallucinated or given problematic answers, sometimes with serious consequences. How Apple specifically addresses these risks is not yet known.
Two technical systems under one surface
Back in August of last year, Craig Federighi spoke openly about the problems with the previous Siri overhaul. The approach of combining classic voice commands and generative AI in a hybrid system had not delivered the desired quality.
According to The Information, the technical separation will remain in place. Simple tasks like setting timers, creating reminders, or sending messages to contacts will continue to be handled by locally stored Apple technology. The Gemini-based AI will only come into play when requests are unclear or complex.
An example from the report illustrates the interplay between the two systems. If Siri is asked to send a message to the user's mother or sister, but those contacts are not labeled as such in the contacts list, the AI can analyze previous messages to deduce which contact is meant. Apple is thus attempting to combine deterministic tasks and open-ended generative language processing into a unified experience.
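The division of labor described above can be pictured as a simple routing layer. The following is a purely illustrative sketch, not Apple's actual implementation (which is not public): known deterministic commands are matched locally, and anything unmatched is escalated to the generative model. The intent phrases and subsystem names are invented for the example.

```python
# Illustrative only: a toy router for a hybrid assistant.
# Simple, deterministic commands stay local; everything else
# falls through to the large language model.

SIMPLE_INTENTS = {
    "set a timer": "timers",
    "create a reminder": "reminders",
    "send a message": "messages",
}

def route(request: str) -> str:
    """Return which subsystem would handle the request."""
    text = request.lower()
    for phrase, subsystem in SIMPLE_INTENTS.items():
        if phrase in text:
            return f"local:{subsystem}"
    # Ambiguous or open-ended requests go to the generative model,
    # which can use context (e.g. message history) to resolve them.
    return "llm:gemini-based"

print(route("Set a timer for ten minutes"))        # handled locally
print(route("Message the contact I call Mom"))     # escalated to the model
```

In practice the hard part is exactly the boundary this toy version glosses over: deciding reliably when a request is truly simple, which is why the article notes that even Google and Amazon have struggled with it.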
Technically demanding, even for large providers
This type of combination seems simple at first glance, but has proven difficult even for Google and Amazon. Balancing reliability, contextual understanding, and natural language is technically complex. Whether Apple manages this balancing act better will only become clear with widespread adoption.
Timeline for the introduction of the new AI features
The report also confirms the planned timeframe. Some Gemini-based features are expected to be introduced as early as spring. Further features will follow later.
These include Siri's ability to remember previous conversations, as well as proactive suggestions. One example would be a reminder to leave early enough to beat traffic ahead of an airport pickup scheduled in Apple Calendar. These features are expected to be unveiled at Apple's annual developer conference in June.
Siri, Gemini, and the claim to Apple quality
Apple is taking a controlled and incremental approach to AI. Gemini provides the technical foundation, but fine-tuning, data privacy, and user experience remain firmly in Apple's hands. The absence of visible Google branding, the decision not to share user data with Google, and the clear separation between simple and complex tasks demonstrate Apple's consistent approach.
Whether the new Siri will become more intelligent, helpful, and secure in the long run depends on its implementation. However, the report makes it clear that Apple has learned from past mistakes and is focusing on integration and control rather than quick fixes when it comes to AI. (Image: Shutterstock / Thrive Studios ID)
- Apple Glasses: How the niche market finally becomes a mass market
- Apple and AI: Partnership with Google as a transitional solution
- Apple is preparing new AI server chips and data centers
- Apple, AI and TSMC: Why the balance of power is changing
- iPhone 2027: Why the anniversary model will be less radical
- iPhone Fold paves the way for a thinner display in the iPhone Air 2
- iPhone with 200 megapixels? Leaker tempers expectations
- Apple is exploring multispectral imaging for iPhone cameras
- iPhone 18: Apple breaks with its annual release schedule
- iPhone 17e: Mass production is expected to start soon
- Apple remains calm on AI and could ultimately win
- iPhone Fold: This is what Apple's first foldable is supposed to look like, according to leaks
- iPhone 18: This is how expensive Apple's new A20 chip will be
- iPhone Air 2: New evidence contradicts previous rumors
- The iPhone 18 will be the first to feature camera sensors manufactured in the US
- iPhone Fold: Apple continues testing a fold-free display
- iPhone 18: Test production is expected to start as early as February
- iPhone Fold could be in short supply due to production problems
- Apple is working on a 24-inch iMac with an OLED display
- iPhone Air 2: Apple plans comeback with a better concept
- Apple is considering chip assembly for iPhones in India for the first time
- HomePod mini 2: New clues dampen hopes for N1
- iPhone Fold: iPad aspect ratio in smartphone format
- The iPhone 17e brings back MagSafe and is significantly improved
- Apple is developing eight iPhone models for the years 2026–2027