Apple has published new research demonstrating how image understanding and image generation can be effectively combined in a single AI model. The model, named Manzano, addresses a key problem of modern multimodal systems: they can usually either understand images well or generate good images, but rarely both simultaneously. Manzano tackles this challenge and, according to the researchers, delivers significantly better results than many previous approaches.
Multimodal AI is no longer a future technology. Models that process text and images together form the basis for image generators, visual assistants, and complex analysis tools. Nevertheless, fundamental architectural challenges remain. The balancing act between semantic image understanding and precise image generation is particularly problematic.
In the Manzano paper, Apple describes why many current models fall short here and why existing workarounds often create new problems. Manzano is meant to demonstrate that the two objectives are not mutually exclusive.
Why current multimodal models are reaching their limits
The core of the problem lies in how images are represented in AI models. Image generation works best in autoregressive models using discrete image tokens. Image understanding, on the other hand, benefits from continuous embeddings that contain rich semantic information.
Many existing models attempt to meet both requirements with two separate image tokenizers. A semantic encoder generates continuous features for understanding, while a quantized tokenizer like VQ-VAE is responsible for image generation. This forces the language model to process two very different visual representations. One originates from a high-level semantic space, the other from a lower-level, more spatially oriented space. This conflict leads to performance degradation, especially when both tasks need to be performed simultaneously.
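To make the conflict concrete, here is a minimal, self-contained PyTorch sketch of such a two-tokenizer setup. The class names, dimensions, and the simple patchify layers are illustrative stand-ins, not code from any of the cited models: the semantic encoder emits continuous float features, while the VQ tokenizer emits discrete integer ids from an unrelated codebook space.

```python
import torch
import torch.nn as nn

class SemanticEncoder(nn.Module):
    """Stand-in for a CLIP-style encoder: image -> continuous patch features."""
    def __init__(self, dim=768):
        super().__init__()
        self.patchify = nn.Conv2d(3, dim, kernel_size=16, stride=16)

    def forward(self, img):                       # img: (B, 3, 256, 256)
        z = self.patchify(img)                    # (B, dim, 16, 16)
        return z.flatten(2).transpose(1, 2)       # (B, 256, dim), float

class VQTokenizer(nn.Module):
    """Stand-in for a VQ-VAE encoder: image -> discrete codebook indices."""
    def __init__(self, dim=256, codebook_size=8192):
        super().__init__()
        self.patchify = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, img):
        z = self.patchify(img).flatten(2).transpose(1, 2)          # (B, 256, dim)
        dists = torch.cdist(z, self.codebook.weight.unsqueeze(0))  # to each code
        return dists.argmin(-1)                                    # (B, 256), int ids

img = torch.randn(1, 3, 256, 256)
print(SemanticEncoder()(img).shape)  # torch.Size([1, 256, 768])  continuous
print(VQTokenizer()(img).shape)      # torch.Size([1, 256])       discrete ids
```

The language model then has to consume both kinds of output, float features on one side and embeddings looked up for integer ids on the other, even though the two spaces were trained with no relation to each other.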
Some architectures use separate processing paths, such as mixtures of transformers. These can mitigate the conflict, but are inefficient in parameter usage and often incompatible with modern mixture-of-experts approaches. Other solutions couple a frozen multimodal language model to a diffusion decoder. This preserves image understanding, but decouples image generation from the language model. Mutual learning effects are lost, and scaling the language model offers only limited benefits for generation.
In short: most existing multimodal architectures are structurally ill-suited to treating understanding and generation as equals.
Manzano's basic approach
Manzano follows a unified approach. The model uses an autoregressive large language model to first predict what an image should represent. These semantic predictions are then passed on to a diffusion decoder, which generates the actual image pixels from them.
This means the language model remains responsible for visual understanding, while the actual pixel synthesis happens in a distinct but tightly coupled step. Understanding and generation are not isolated from each other; the generation step builds directly on the model's semantic predictions.
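The division of labor can be summarized in a few lines. The following toy sketch uses hypothetical names and trivial stand-in components (the real parts are large neural networks); it only shows the control flow described above, in which the LLM loops autoregressively over a shared vocabulary before a diffusion-style decoder turns the predicted image tokens into pixels.

```python
import torch

class ToyLLM:
    """Stand-in for the autoregressive LLM over a shared text+image vocabulary."""
    def __init__(self, vocab_size=10_000):
        self.vocab_size = vocab_size

    def sample_next(self, tokens):
        # A real model would run a transformer forward pass over `tokens`;
        # uniform sampling keeps the sketch self-contained.
        return int(torch.randint(0, self.vocab_size, (1,)))

class ToyDiffusionDecoder:
    """Stand-in for the image decoder: starts from noise, iteratively denoises."""
    def render(self, image_tokens, steps=4):
        x = torch.randn(3, 64, 64)        # start from pure noise
        for _ in range(steps):            # each step would be conditioned on
            x = 0.5 * x                   # `image_tokens` in the real model
        return x

def generate_image(llm, decoder, prompt_tokens, num_image_tokens=16):
    tokens, image_tokens = list(prompt_tokens), []
    for _ in range(num_image_tokens):     # autoregressive prediction first...
        nxt = llm.sample_next(tokens)
        tokens.append(nxt)
        image_tokens.append(nxt)
    return decoder.render(image_tokens)   # ...then pixel synthesis

img = generate_image(ToyLLM(), ToyDiffusionDecoder(), prompt_tokens=[1, 2, 3])
print(img.shape)  # torch.Size([3, 64, 64])
```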
The three central components of the architecture
Manzano's architecture consists of three clearly defined building blocks:
- First, a hybrid vision tokenizer. It generates both continuous and discrete visual representations, bridging the requirements of understanding and generation (see the sketch after this list).
- Second, an LLM decoder. It processes text tokens and continuous image embeddings and autoregressively predicts the next text or image token from a shared vocabulary.
- Third, an image decoder. It renders the final image pixels from the predicted image tokens, using a diffusion process that gradually removes noise until a coherent image emerges.
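As a rough illustration of the first building block, here is a minimal sketch of the hybrid tokenizer idea, assuming a shared backbone whose features feed both a continuous head for understanding and a quantizing head for generation. Names, layer choices, and dimensions are illustrative, not Apple's implementation.

```python
import torch
import torch.nn as nn

class HybridVisionTokenizer(nn.Module):
    """One shared backbone, two views: continuous features and discrete ids."""
    def __init__(self, dim=512, codebook_size=4096):
        super().__init__()
        self.backbone = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # shared
        self.continuous_head = nn.Linear(dim, dim)      # -> LLM embeddings
        self.codebook = nn.Embedding(codebook_size, dim)  # -> discrete ids

    def forward(self, img):                                 # img: (B, 3, 256, 256)
        z = self.backbone(img).flatten(2).transpose(1, 2)   # (B, 256, dim)
        continuous = self.continuous_head(z)                # for understanding
        dists = torch.cdist(z, self.codebook.weight.unsqueeze(0))
        discrete = dists.argmin(-1)                         # for generation
        return continuous, discrete

tok = HybridVisionTokenizer()
cont, disc = tok(torch.randn(1, 3, 256, 256))
print(cont.shape, disc.shape)  # torch.Size([1, 256, 512]) torch.Size([1, 256])
```

In contrast to the two-tokenizer sketch earlier, both outputs here derive from the same backbone features, which is the property the paper credits with defusing the conflict between the two representation spaces.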
This combination allows Manzano to handle even unusual or physically impossible scenarios in a meaningful way. The researchers explicitly cite prompts such as "The bird flies under the elephant" and compare the model's handling of such cases with well-known top models like GPT-4o or Nano Banana.
Model sizes, scaling and benchmarks
Apple trained Manzano in several sizes. The smallest variant has around 300 million parameters, the largest around 30 billion. The goal was to investigate how unified multimodal performance evolves with increasing model size.
The results show that larger Manzano models benefit significantly. In several benchmarks, the variants with 3B and 30B parameters achieve superior or at least competitive performance compared to other current unified multimodal models.
Even in direct comparison with other state-of-the-art systems, including models from Google and OpenAI, Manzano performs well. According to the study, the approach holds up not just on paper but in measured head-to-head comparisons.
Strong results in image editing tasks
In addition to classic image generation, Manzano was also tested on image editing tasks. These include instruction-driven image editing, style transfer, inpainting and outpainting, and depth estimation.
In all these areas, the model delivers compelling results and demonstrates that the unified approach is not limited to a single task. The combination of semantic understanding and precise image manipulation, in particular, sets Manzano apart from many previous models.
Apple's focus on sound AI architecture rather than quick wins
With Manzano, Apple presents a comprehensive and technically sound solution to a long-standing problem in multimodal AI. The hybrid vision tokenizer and the tight integration of the language model and diffusion decoder reduce conflicting objectives that were previously considered almost unavoidable.
Even though Manzano isn't currently used in Apple products, the research clearly points to future applications. Along with other projects like UniGen, it shows that Apple is working deliberately to raise image understanding and generation to a new level of quality. The study makes it clear that the focus is less on spectacular promises and more on sound architectural decisions. (Image: agsandrew / DepositPhotos.com)