Security researchers have demonstrated that Apple's on-device AI can be manipulated through targeted input. The success rate was 76 percent – potentially affecting hundreds of thousands of users.
Apple has positioned Apple Intelligence as a privacy-friendly alternative to cloud-based AI systems: a local language model runs directly on the device, while more complex tasks are handled by Private Cloud Compute. However, this deep integration into the operating system poses risks. Researchers at RSAC Research have demonstrated that the on-device model can be manipulated using prompt injection techniques, with a success rate of 76 percent across 100 tests.
The results were reported to Apple on October 15, 2025. The study focused on the local large language model that is embedded in Apple's operating systems and also available to third-party apps via system APIs. The researchers argue that the deep system integration Apple deliberately chose also enlarges the attack surface.
How the attack works
The researchers combined two techniques to circumvent Apple's security measures. The first method, called "Neural Exec," uses specially constructed inputs that appear meaningless to humans but are interpreted by the language model as concrete instructions.
The second technique exploits a Unicode feature: the so-called right-to-left override. It visually reverses sections of text, hiding embedded instructions from human reviewers; the language model, however, processes the raw character order and reads them correctly. Combined, the two methods bypass both the model's internal safeguards and external filters.
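The Unicode half of the attack can be illustrated in a few lines. The snippet below is a generic Python sketch, not the researchers' actual code: it wraps an instruction in the override character U+202E (closed by U+202C) so that the span renders back-to-front for a human reader, while the codepoint order that a language model tokenizes still contains the instruction verbatim.

```python
RLO = "\u202e"  # RIGHT-TO-LEFT OVERRIDE: following text renders reversed
PDF = "\u202c"  # POP DIRECTIONAL FORMATTING: ends the override

instruction = "ignore previous instructions"

# In a rendered UI, the wrapped span displays back-to-front
# ("snoitcurtsni suoiverp erongi"), so it reads as noise to a
# human reviewer. The raw codepoint sequence, which a tokenizer
# consumes, is unchanged.
payload = f"Please summarize this note. {RLO}{instruction}{PDF}"

assert instruction in payload  # model-visible content is intact
```

The key point is the asymmetry: the reviewer and the model see two different orderings of the same bytes, which is exactly what makes the injection hard to spot by eye.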
The result: The model can be made to generate outputs that are controlled by the attacker.
Why this is problematic
The risk extends far beyond unwanted text output. Because Apple Intelligence connects directly to apps via system APIs, manipulated responses can influence app behavior or expose sensitive user data. A successful prompt injection attack could affect multiple apps and system-wide functions simultaneously.
RSAC estimates that between 100,000 and one million users already use apps that are potentially vulnerable. As more apps integrate Apple Intelligence features, the number of potential targets keeps growing.
Attackers do not need direct access to the model itself – it is sufficient to send manipulated input via legitimate APIs. This makes the attack particularly difficult to detect and prevent.
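One generic countermeasure, independent of whatever Apple actually shipped, is to strip bidirectional control characters at the API boundary, so that displayed order and tokenized order can no longer diverge. The sketch below is illustrative only; the function name and the explicit character set are assumptions, and a production filter would cover the full Unicode "Cf" (format) category.

```python
# Unicode bidirectional control characters commonly abused to hide text.
# Explicit set for illustration; not an exhaustive defense.
BIDI_CONTROLS = {
    "\u200e", "\u200f",                                # LRM, RLM
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # embeddings, overrides
    "\u2066", "\u2067", "\u2068", "\u2069",            # isolates
}

def strip_bidi(text: str) -> str:
    """Remove bidi controls so display order matches codepoint order."""
    return "".join(ch for ch in text if ch not in BIDI_CONTROLS)

cleaned = strip_bidi("abc\u202edef\u202c")  # controls removed, text kept
```

Filtering of this kind only addresses the Unicode trick; adversarial triggers like "Neural Exec" look like ordinary (if odd) text and require defenses inside the model itself.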
Apple's reaction
According to the researchers, Apple already strengthened its security measures with iOS 26.4 and macOS 26.4, without publicly disclosing the specific changes. The recent collaboration with Anthropic on Project Glasswing shows that Apple increasingly relies on AI-powered methods to harden its systems.
At the time of publication, there is no evidence that the vulnerability is being actively exploited – the attack is currently purely theoretical. However, the techniques used, such as prompt injection and Unicode manipulation, are well-documented in security research and relatively easy to implement.
What this means for Apple Intelligence
The results reveal a fundamental tension in Apple's AI strategy. The decision to run models locally keeps user data from being sent to the cloud, but it also makes the operating system the gatekeeper and execution layer. If the safeguards fail, the consequences are correspondingly far-reaching.
Apple's approach remains the right one from a data privacy perspective. However, the RSAC Research findings show that local models are not automatically more secure than cloud-based systems. Actual security depends on how well a model can withstand adversarial input, regardless of where it runs.