Apple has unveiled a new open-source model that can generate a realistic 3D view from a single 2D photograph. Called SHARP, the model renders a photorealistic 3D scene in less than a second, once again demonstrating Apple's strength in artificial intelligence and computer graphics beyond its traditional product announcements.
Apple regularly publishes research papers that reveal its internal work. SHARP is a good example of this. The model isn't intended as a finished product, but rather as a technological foundation. Nevertheless, its practical benefits are immediately apparent. The combination of high speed, realistic rendering, and open-source availability makes SHARP particularly interesting for research, development, and future applications in the fields of 3D, AR, and visual AI.
Apple's new SHARP model at a glance
SHARP stands for Sharp Monocular View Synthesis and was presented by Apple in the study "Sharp Monocular View Synthesis in Less Than a Second." The model can reconstruct a complete 3D scene from a single photograph. It requires only a single pass through a neural network and runs on a standard GPU.
Unlike many previous methods, SHARP requires no slow per-scene optimization. The computation completes in less than a second, and the result is a high-resolution, photorealistic rendering that can be displayed from slightly different viewing angles.
How SHARP creates 3D scenes
At its core, SHARP predicts a so-called 3D Gaussian representation of the scene. A 3D Gaussian can be understood as a small, blurry point of color and light located at a specific position in space. When millions of such points are combined, a complete 3D scene is created that appears realistic from the original viewpoint.
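To make the idea concrete, the following minimal Python sketch shows how one such primitive might be represented in code. The class name, fields, and types are assumptions chosen for illustration; they describe Gaussian splatting in general, not Apple's actual SHARP implementation.

# Hypothetical sketch of a single 3D Gaussian primitive, as used by
# Gaussian-splatting renderers in general (not Apple's SHARP code).
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian3D:
    position: np.ndarray  # (3,) center of the Gaussian in 3D space
    scale: np.ndarray     # (3,) extent along each local axis (the "blur")
    rotation: np.ndarray  # (4,) unit quaternion orienting the Gaussian
    color: np.ndarray     # (3,) RGB color of the splat
    opacity: float        # how strongly the splat contributes when rendered

# A scene is simply a large collection of such primitives; rendering
# projects and blends ("splats") them onto the image plane.
scene: list[Gaussian3D] = []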
Many existing Gaussian splatting approaches require numerous images of the same scene from different perspectives. Apple takes a different approach with SHARP. The model generates the complete 3D Gaussian scene representation from just a single image, without additional captures or iterative calculation steps.
Training with synthetic and real data
To make this work, Apple trained SHARP with a large amount of synthetic and real-world data. The goal was to teach the model typical patterns of depth, geometry, and spatial structure. This allows SHARP to make plausible depth estimates even for new, unknown images.
When a new photo is processed, the model first estimates the depth of the scene. This estimate is then refined using the priors the model learned during training. Afterward, in a single pass, SHARP predicts the position, size, and appearance of millions of 3D Gaussian points.
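The following self-contained sketch mirrors that three-step flow. All function bodies are placeholder stand-ins invented for illustration; they are not SHARP's network or API, only the shape of a single-pass pipeline.

# Hypothetical single-pass pipeline; placeholder bodies stand in for the
# learned components described above (not Apple's actual SHARP model).
import numpy as np

def estimate_depth(image: np.ndarray) -> np.ndarray:
    # Stand-in for a learned monocular depth estimator: one depth value per pixel.
    return np.ones(image.shape[:2], dtype=np.float32)

def refine_depth(image: np.ndarray, depth: np.ndarray) -> np.ndarray:
    # Stand-in for refinement with learned priors; here it simply passes the depth through.
    return depth

def predict_gaussians(image: np.ndarray, depth: np.ndarray) -> np.ndarray:
    # Stand-in for the prediction head: one Gaussian per pixel, each with
    # position (3) + scale (3) + rotation (4) + color (3) + opacity (1) = 14 values.
    h, w = depth.shape
    return np.zeros((h * w, 14), dtype=np.float32)

def reconstruct_scene(image: np.ndarray) -> np.ndarray:
    depth = estimate_depth(image)            # 1. estimate scene depth
    depth = refine_depth(image, depth)       # 2. refine the estimate
    return predict_gaussians(image, depth)   # 3. single pass -> Gaussian parameters

gaussians = reconstruct_scene(np.zeros((480, 640, 3), dtype=np.float32))
print(gaussians.shape)  # (307200, 14): one Gaussian per pixel of the input photo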
Speed, quality, and metrics
A key feature of SHARP is its metric representation. The generated 3D scene has absolute scaling, meaning that distances and proportions are realistic. Furthermore, the model supports metric camera movements, which is relevant for applications in 3D graphics and augmented reality.
Apple states that SHARP sets a new standard on multiple datasets. Compared to previous best-in-class models, the quality metrics improve significantly: LPIPS and DISTS are perceptual image-quality scores in which lower values are better, and SHARP reduces LPIPS by 25 to 34 percent and DISTS by 21 to 43 percent. At the same time, synthesis time is several orders of magnitude shorter than with previous methods.
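As a quick sanity check on what such a percentage reduction means, here is the arithmetic on an invented baseline value; the absolute numbers are illustrative only, since the paper reports relative improvements.

# Illustrative arithmetic only: the baseline LPIPS value is made up.
baseline_lpips = 0.20                       # hypothetical score of a prior best model
reduction = 0.25                            # lower end of the reported 25-34 percent range
sharp_lpips = baseline_lpips * (1 - reduction)
print(round(sharp_lpips, 2))                # 0.15: lower LPIPS means closer to the reference image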
Intentional limitations of the model
SHARP is designed to realistically depict viewpoints close to the original camera position. The model does not attempt to invent parts of the scene that are completely invisible in the photo. If the virtual camera moves too far from the original viewpoint, the rendering reaches its limits.
This limitation is deliberate. Apple prioritizes stability, speed, and reliability over complete freedom of camera movement. This keeps the model fast enough to deliver results in less than a second while avoiding implausible reconstructions.
Comparison with existing methods
Compared to previous methods like Gen3C, the difference is most evident in efficiency. While other methods require complex optimization or many images, SHARP delivers immediate results. Despite this simplified approach, the model achieves higher image quality and better metrics.
The focus is clearly on practical applicability and scalability, not on maximum coverage of all conceivable viewpoints.
Open source and possible further developments
Apple has released SHARP on GitHub, allowing the model to be freely tested and further developed. Users have already conducted their own experiments and shared their results.
Some of these tests go beyond the original scope of application, such as using the generated 3D data for video animations. This shows that SHARP's underlying approach is also interesting for future work and extensions.
Apple demonstrates the next step in 3D reconstruction
With SHARP, Apple demonstrates the power of modern AI models in 3D reconstruction. A realistic 3D scene is created from a single photograph in an extremely short time, without complex capture processes or slow calculations. The open-source release makes the technology accessible and lays the foundation for further development. SHARP is not a finished product, but it clearly indicates the direction in which 3D rendering, computer graphics, and AI are evolving at Apple. (Image: pashabo / DepositPhotos.com)