Last December, Google Photos added a notable new feature: photos with an automatic 3D effect. Google calls them "cinematic photos," and they can be generated automatically by the app, appearing in the recent highlights section.
On its blog, Google has explained how it gives photos movement to produce such a striking 3D effect. As usual, the answer lies in its neural networks and computer vision expertise.
The technology behind Google's "cinematic photos"
According to Google, cinematic photos aim to rekindle in the user "the sense of immersion of the moment the photo was taken," simulating both the movement of the camera and 3D parallax. But how do you turn a 2D image into a 3D one?
Google uses neural networks trained on photographs taken with the Pixel 4 to estimate depth from a single RGB image
Google explains that, as with portrait mode or augmented reality, cinematic photos require a depth map.
From a single point of view (the image plane), the network can estimate depth using monocular cues such as the relative sizes of objects, perspective, blur, and more. To make this training data more complete, Google combined depth data captured with the Pixel 4's camera with other photos taken by the Google team using professional cameras.
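To give an intuition for one of those monocular cues, the relative size of objects, here is a minimal sketch of the pinhole-camera relation behind it: an object of known real-world size that appears smaller in the image must be farther from the camera. The function name and the numbers are illustrative, not from Google's system.

```python
def depth_from_apparent_size(real_height_m, pixel_height, focal_px):
    """Estimate depth (in meters) of an object of known real height
    from its apparent height in pixels, using the pinhole model:
        pixel_height = focal_px * real_height_m / depth
    Solving for depth gives the formula below."""
    return focal_px * real_height_m / pixel_height

# A 1.7 m tall person imaged at 170 px with a 1000 px focal length
# is roughly 10 m away; the same person at 85 px is about 20 m away.
near = depth_from_apparent_size(1.7, 170, 1000)  # 10.0
far = depth_from_apparent_size(1.7, 85, 1000)    # 20.0
```

A neural network learns cues like this implicitly from training data rather than applying the formula explicitly, which is why it also copes with objects whose real size is unknown.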
Basically, the technique is similar to the Pixel's portrait mode: the image is analyzed and segmented, and once the subject is isolated from the background, movement is simulated by shifting the background. In practice it is far more complex, since the photograph requires various corrections and analyses: a few misinterpreted pixels can ruin the final result.
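The core idea of depth-based parallax can be sketched in a few lines: given an image and its depth map, shift each pixel horizontally by a disparity inversely proportional to its depth, so nearby pixels move more than distant ones as the virtual camera slides sideways. This toy NumPy version (the function, the focal length, and the scene values are illustrative assumptions, not Google's implementation) uses a painter's-algorithm ordering so near pixels overwrite far ones:

```python
import numpy as np

def parallax_shift(image, depth, camera_offset, focal=50.0):
    """Render a frame from a slightly shifted virtual camera.
    Each pixel moves horizontally by disparity = focal * offset / depth,
    so near pixels (small depth) move more than far ones."""
    h, w = depth.shape
    out = np.zeros_like(image)
    disparity = np.round(focal * camera_offset / depth).astype(int)
    # Paint far pixels first, near pixels last (painter's algorithm),
    # so foreground correctly occludes background after the shift.
    order = np.argsort(-depth, axis=None)
    ys, xs = np.unravel_index(order, depth.shape)
    for y, x in zip(ys, xs):
        nx = x + disparity[y, x]
        if 0 <= nx < w:
            out[y, nx] = image[y, x]
    return out

# Toy scene: a bright square (near, depth 2 m) on a dim background (far, 100 m)
image = np.full((12, 12), 0.25)
depth = np.full((12, 12), 100.0)
image[3:5, 3:5] = 1.0
depth[3:5, 3:5] = 2.0

frame = parallax_shift(image, depth, camera_offset=0.2)
# Near square shifts 50*0.2/2 = 5 px; background shifts 50*0.2/100 ≈ 0 px.
```

Note that the shifted foreground leaves a hole where it used to be, since no pixel maps there: this disocclusion is one reason the real pipeline needs the extra corrections mentioned above, where a few badly handled pixels would be very visible.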
More information | Google