Apple has released a new open-source AI model that can turn a single 2D photograph into a photorealistic 3D scene. The model, called SHARP, has been detailed in a newly published research paper and is now available publicly on GitHub.
According to the paper, titled "Sharp Monocular View Synthesis in Less Than a Second", SHARP is designed to reconstruct a realistic 3D representation of a scene from just one image, allowing the scene to be viewed from slightly different angles while preserving scale and depth consistency.
Apple’s SHARP model: How does it work?
In simple terms, SHARP analyses a single photo and predicts what the scene would look like in three dimensions. Instead of generating an entirely new image each time, the model builds a lightweight 3D representation of the scene, which can then be rendered from nearby viewpoints in real time.
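As a loose illustration of that build-once, render-many pattern, here is a minimal Python sketch. The function names, shapes and stubbed bodies are hypothetical assumptions for illustration, not Apple's actual API:

```python
import numpy as np

def build_scene(photo: np.ndarray) -> np.ndarray:
    """One-off step (stubbed): lift a single H x W x 3 photo into a
    lightweight 3D scene representation, stored as per-point parameters."""
    h, w, _ = photo.shape
    return np.zeros((h * w, 14))  # placeholder parameters, one point per pixel

def render(scene: np.ndarray, camera_offset: np.ndarray) -> np.ndarray:
    """Cheap, repeatable step (stubbed): rasterise the stored scene from a
    camera slightly offset from the original viewpoint."""
    return np.zeros((256, 256, 3))

photo = np.random.rand(256, 256, 3)
scene = build_scene(photo)               # the heavy lifting happens once
for dx in np.linspace(-0.05, 0.05, 30):  # nearby views then render in real time
    frame = render(scene, np.array([dx, 0.0, 0.0]))
```

The key point is that the expensive step runs once per photo, while rendering a new viewpoint reuses the stored scene rather than regenerating an image from scratch.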
Apple’s researchers achieve this using what they describe as a 3D Gaussian representation. Each “Gaussian” can be thought of as a small, fuzzy point of colour and light placed in 3D space. When millions of these points are combined, they form a scene that looks realistic when viewed from slightly different angles. This approach allows SHARP to generate smooth parallax effects, similar to what you might see when shifting your head while looking at a real object.
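To make that concrete, here is a minimal Python sketch of what one such Gaussian might store. The field layout follows common 3D Gaussian splatting conventions and is an assumption, not Apple's published format:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian3D:
    """One fuzzy point of colour and light in 3D space (hypothetical layout)."""
    position: np.ndarray  # (3,) centre of the point in scene coordinates
    scale: np.ndarray     # (3,) extent along each axis, i.e. how fuzzy it is
    rotation: np.ndarray  # (4,) quaternion orienting the ellipsoid
    colour: np.ndarray    # (3,) RGB colour
    opacity: float        # how strongly it contributes when blended

# A scene is simply a large collection of these points; a renderer projects
# ("splats") each one onto the image plane and blends them front to back.
scene = [
    Gaussian3D(
        position=np.random.randn(3),
        scale=np.full(3, 0.01),
        rotation=np.array([1.0, 0.0, 0.0, 0.0]),  # identity quaternion
        colour=np.random.rand(3),
        opacity=0.8,
    )
    for _ in range(1_000)  # real scenes use millions of these points
]
```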
Apple’s SHARP model: How is it different from existing tools?
What sets SHARP apart from earlier techniques is speed and ease of use. Many existing methods require dozens or even hundreds of photos of the same scene, captured from multiple angles, and rely on slow, per-scene optimisation. SHARP, by contrast, predicts the entire 3D structure from a single image in one pass through a neural network, completing the process in less than a second on a standard GPU, according to Apple’s researchers.
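The contrast can be sketched in a few lines of Python. The network stand-in below is hypothetical and merely represents the idea of a single forward pass, as the paper describes it:

```python
import numpy as np

# Classic per-scene pipeline (sketched in comments): many photos in, then
# thousands of gradient steps tuning the scene until its renders match them.
#
#     for step in range(30_000):
#         loss = compare(render(scene), captured_photos)
#         scene = update(scene, loss)
#
# Feed-forward pipeline, per Apple's paper: one image, one pass, no loop.

def forward_pass(image: np.ndarray) -> np.ndarray:
    """Stand-in for the trained network (hypothetical): maps one image
    directly to per-point Gaussian parameters, with no optimisation loop."""
    h, w, _ = image.shape
    # 3 position + 3 scale + 4 rotation + 3 colour + 1 opacity = 14 values
    return np.zeros((h * w, 14))

gaussians = forward_pass(np.random.rand(256, 256, 3))
```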
Apple’s SHARP model: What are its limitations?
The model is designed to synthesise nearby viewpoints rather than to invent unseen parts of a scene. Users can shift the camera angle slightly and still get a believable 3D effect, but cannot move far beyond what the original photo shows.
Apple acknowledges this trade-off in the paper, noting that SHARP does not attempt to fabricate parts of a scene that were never captured in the original image.
Apple’s SHARP model: How can it be used?
While Apple has not announced any specific product plans around SHARP, the research points to potential applications in areas such as spatial photo viewing, augmented reality and virtual reality.
Apple already offers related features on newer iPhone models, such as the iPhone 15 Pro and later, which let users capture spatial photos or add depth effects to standard images. With SHARP, Apple could further refine and extend these capabilities.
By releasing SHARP as an open-source project, Apple is also enabling researchers and developers to experiment with the model and explore extensions beyond its original scope.
