This is precisely how Google improved Portrait Mode on the Pixel 4

The Google Pixel 4 is the first Pixel device with more than one rear camera, a step long overdue for the company in an era when triple-camera phones are the norm.

Google has worked camera magic on its Pixel phones, especially with Portrait Mode, which traditionally requires a second rear camera. Now the company has explained on its AI blog how the Pixel 4 improves Portrait Mode.

It turns out that the Pixel 4 still uses the same single-camera portrait technology as previous Pixels. The older phones used dual-pixel autofocus technology for Portrait Mode, with each pixel of the 12 MP main camera split in half. Each half of a pixel sees a slightly different viewpoint (the difference is most apparent in the background). This enables a rough depth estimate by finding corresponding pixels between the two viewpoints.
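To make the idea concrete, here is a minimal sketch of how matching the two dual-pixel half views could yield a rough disparity map. This illustrates the general block-matching technique, not Google's actual pipeline; the patch size, search range, and function names are all assumptions.

```python
# Illustrative sketch: the two dual-pixel half views are nearly identical
# images with a tiny shift, so matching small patches between them gives a
# rough per-block disparity (and hence depth). Not Google's implementation.
import numpy as np

def dual_pixel_disparity(view_a, view_b, patch=8, max_shift=3):
    """Brute-force block matching between the two dual-pixel half views.

    view_a, view_b: 2D float arrays (the two half-pixel images).
    Returns an integer disparity map; a larger shift means a closer subject.
    """
    h, w = view_a.shape
    disparity = np.zeros((h // patch, w // patch), dtype=np.int32)
    for by in range(h // patch):
        for bx in range(w // patch):
            y, x = by * patch, bx * patch
            ref = view_a[y:y + patch, x:x + patch]
            best_cost, best_shift = np.inf, 0
            # Dual-pixel parallax on the Pixel is a small vertical shift,
            # so we only search a few pixels up and down.
            for s in range(-max_shift, max_shift + 1):
                ys = y + s
                if ys < 0 or ys + patch > h:
                    continue
                cand = view_b[ys:ys + patch, x:x + patch]
                cost = np.sum((ref - cand) ** 2)  # sum of squared differences
                if cost < best_cost:
                    best_cost, best_shift = cost, s
            disparity[by, bx] = best_shift
    return disparity
```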

However, this approach has its limits, as Google notes in the blog, because the dual pixels are confined within a single tiny camera. The difference between the viewpoints is therefore extremely small, and you really want a wider baseline for a better depth estimate.

Google discovers the telephoto camera

This is where the Pixel 4's telephoto lens comes into play. Google notes that the distance between the primary and telephoto cameras is 13 mm. Google shows the difference between what dual pixels see and what dual cameras see in the GIF below. Note that there is only a slight vertical shift with dual pixels (left), while a significant shift occurs with dual cameras.

[GIF: Single-camera parallax compared to dual-camera parallax. Source: Google AI Blog]
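A back-of-the-envelope calculation shows why the wider baseline matters: under the standard pinhole stereo model, disparity is f · B / Z, so parallax grows linearly with the baseline B. The 13 mm dual-camera baseline comes from Google's post; the focal length and the ~1 mm effective dual-pixel baseline below are assumed values purely for illustration.

```python
# Pinhole stereo model: disparity (in pixels) = f * B / Z.
FOCAL_LENGTH_PX = 3000.0         # assumed focal length in pixels
DUAL_PIXEL_BASELINE_M = 0.001    # assumed effective baseline within one lens
DUAL_CAMERA_BASELINE_M = 0.013   # 13 mm, per Google's blog post

def disparity_px(baseline_m, depth_m, f_px=FOCAL_LENGTH_PX):
    """Disparity in pixels for a point at depth_m metres."""
    return f_px * baseline_m / depth_m

for depth in (0.5, 1.0, 2.0):
    dp = disparity_px(DUAL_PIXEL_BASELINE_M, depth)
    dc = disparity_px(DUAL_CAMERA_BASELINE_M, depth)
    print(f"depth {depth} m: dual-pixel ~{dp:.1f} px, dual-camera ~{dc:.1f} px")
```

Under these assumed numbers, the dual-camera parallax is thirteen times larger than the dual-pixel parallax at every distance, which is exactly the "significant shift" visible in the GIF.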

Dual cameras pose a major challenge of their own, since pixels visible in one view may not be found at all in the other: because of the wider baseline, one camera can see background behind the subject's edges that the other cannot. That's why Google doesn't throw away the old dual-pixel approach to Portrait Mode on the Pixel 4.

"For example, the background pixels to the right of the man in the primary camera image have no corresponding pixel in the secondary camera image. Therefore, it is impossible to measure the parallax to estimate depth for these pixels when using only dual cameras," says an excerpt from the AI blog. "However, these pixels can still be seen in the dual-pixel views (left), enabling a better depth estimate in these regions."
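One common way to handle such occluded regions is a left-right consistency check: pixels whose stereo match doesn't survive the round trip between the two cameras are flagged as unreliable, and another depth cue is used there instead. The sketch below shows that general technique with a dual-pixel fallback; it is an illustration under those assumptions, not Google's actual pipeline.

```python
# Illustrative fusion: prefer dual-camera depth, but fall back to the
# dual-pixel estimate wherever the stereo match fails a left-right
# consistency test (e.g. background visible to only one camera).
import numpy as np

def fuse_depths(stereo_depth, dual_pixel_depth,
                disp_left_to_right, disp_right_to_left, tol=1.0):
    """All inputs are HxW arrays; disparities are in pixels along x."""
    h, w = disp_left_to_right.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    # Where does each left-image pixel land in the right image?
    target_x = np.clip((xs - disp_left_to_right).astype(int), 0, w - 1)
    # A consistent match points back with (roughly) the same disparity.
    round_trip = disp_right_to_left[ys, target_x]
    occluded = np.abs(disp_left_to_right - round_trip) > tol
    return np.where(occluded, dual_pixel_depth, stereo_depth)
```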

Nevertheless, dual cameras also help with the so-called aperture problem, which makes it harder to estimate the depth of lines running parallel to the stereo baseline (vertical lines, in the dual-pixel case). In addition, Google has trained its neural network so that it still produces a good depth estimate when only one cue (dual pixels or dual cameras) is available. The company states that the phone uses both depth-mapping methods whenever the subject is at least 20 cm from the camera, the minimum focus distance of the telephoto camera.
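The input-selection logic the blog describes might look something like the sketch below: both cues are used only when the telephoto camera can actually focus. The function name and structure are hypothetical; only the 20 cm threshold comes from Google's post.

```python
# Illustrative input selection for the depth network. Google's network is
# trained to cope with whichever inputs it receives, so the missing cue is
# simply passed as None here.
TELE_MIN_FOCUS_M = 0.20  # telephoto minimum focus distance, per the blog

def select_depth_inputs(subject_distance_m, dual_pixel_views, dual_camera_views):
    """Return the depth cues available at the current focus distance."""
    inputs = {"dual_pixel": dual_pixel_views}
    if subject_distance_m >= TELE_MIN_FOCUS_M:
        inputs["dual_camera"] = dual_camera_views
    else:
        inputs["dual_camera"] = None  # telephoto can't focus this close
    return inputs
```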

[Image: Bokeh from an SLR camera compared with older Pixel bokeh and the new method. Source: Google AI Blog]

Finally, Google says it has also worked to make the bokeh more prominent, so that the bokeh disks in the background of a shot are more clearly visible. Check out the pictures above to get a better idea of the finished product.
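For intuition, here is a toy sketch of how defocused highlights become visible bokeh "disks": each pixel is scattered into a disk whose radius grows with its estimated blur, so bright background points render as crisp circles. This naive scatter loop is purely illustrative and is not Google's renderer.

```python
# Toy bokeh renderer: scatter every pixel into a disk sized by its
# per-pixel defocus radius, then normalize by the accumulated weight.
import numpy as np

def render_bokeh(image, blur_radius, max_radius=8):
    """image: HxW float array; blur_radius: HxW per-pixel defocus radius."""
    h, w = image.shape
    out = np.zeros_like(image)
    weight = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            r = int(min(blur_radius[y, x], max_radius))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            disk = (yy - y) ** 2 + (xx - x) ** 2 <= r * r
            out[y0:y1, x0:x1][disk] += image[y, x]
            weight[y0:y1, x0:x1][disk] += 1.0
    return out / np.maximum(weight, 1.0)
```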

This is not the only new camera technology on Google's Pixel phones; the company has previously detailed how its astrophotography mode works. Either way, we hope Google keeps pushing the boundaries of computational photography in 2020.
