Google unlocks Pixel 2's AI-based semantic image segmentation technology for developers

With this public release, the search engine giant hopes to make it easier for other groups in academia and industry to reproduce and further improve state-of-the-art systems

Google Semantic image segmentation
Khalid Anzar New Delhi
Last Updated : Mar 15 2018 | 2:15 PM IST
Search engine and software giant Google has open-sourced its artificial intelligence-based ‘Semantic Image Segmentation’ technology – the technology used in the Pixel 2 and Pixel 2 XL portrait mode to achieve a shallow depth-of-field effect without the need for a secondary camera.

“Today, we are excited to announce the open-source release of our latest and best-performing semantic image segmentation model, DeepLab-v3+, implemented in TensorFlow. This release includes DeepLab-v3+ models built on top of a powerful convolutional neural network (CNN) backbone architecture for the most accurate results, intended for server-side deployment. As part of this release, we are additionally sharing our TensorFlow model training and evaluation code, as well as models already pre-trained on the Pascal VOC 2012 and Cityscapes benchmark semantic segmentation tasks,” stated a blog post shared by Google.

In the blog post, Google explained what the technology is and how it works. According to the post, semantic image segmentation assigns a semantic label, such as road, sky, person or dog, to every pixel in an image. These labels pinpoint the outline of objects, and thus impose much stricter localisation accuracy requirements than other visual entity recognition tasks such as image-level classification or bounding-box-level detection.
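
To make the idea concrete, here is a minimal sketch of per-pixel labelling with one of the released models. It assumes TensorFlow 1.x and a frozen DeepLab-v3+ inference graph (frozen_inference_graph.pb) exported from the DeepLab repository; the tensor names and input size below follow the project's demo notebook and should be treated as assumptions if your export differs.

```python
# A minimal sketch of per-pixel labelling with a frozen DeepLab-v3+ graph.
# Assumes TensorFlow 1.x and a frozen_inference_graph.pb from the DeepLab
# repo; tensor names follow the official demo notebook and may differ
# for other exports.
import numpy as np
import tensorflow as tf
from PIL import Image

INPUT_TENSOR = 'ImageTensor:0'           # uint8 image batch [1, H, W, 3]
OUTPUT_TENSOR = 'SemanticPredictions:0'  # int class label per pixel, [1, H, W]
INPUT_SIZE = 513                         # resize target used in the demo

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

image = Image.open('input.jpg').convert('RGB')
scale = INPUT_SIZE / max(image.size)
resized = image.resize(
    (int(image.width * scale), int(image.height * scale)), Image.LANCZOS)

with tf.Session(graph=graph) as sess:
    # Every entry of seg_map is a semantic class index (road, sky,
    # person, dog, ...) for the corresponding pixel of the input.
    seg_map = sess.run(
        OUTPUT_TENSOR,
        feed_dict={INPUT_TENSOR: np.asarray(resized)[np.newaxis, ...]})[0]

print('Class labels present:', np.unique(seg_map))
```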

At a time when most smartphone manufacturers are moving towards dual-camera setups for enhanced imaging, Google’s flagship smartphones – the Pixel 2 and Pixel 2 XL – took the radical step of sticking with a single 12.2-megapixel sensor, powered by algorithms that utilise the company’s semantic image segmentation technology. The technology allowed the Google flagships to perform on par with, or even better than, other premium dual-camera smartphones. A toy sketch of the underlying idea follows below.
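
As an illustration only (not Google's production pipeline, which combines segmentation with other depth cues), a person mask from a segmentation model can approximate a shallow depth of field: blur the whole frame, then composite the sharp subject back in. The label index here assumes the Pascal VOC 2012 label map.

```python
# A toy sketch (an illustration, not Google's production pipeline) of how
# a person segmentation mask can approximate a shallow depth-of-field
# effect: blur the whole frame, then composite the sharp subject back in.
import numpy as np
from PIL import Image, ImageFilter

PERSON_LABEL = 15  # 'person' class index in the Pascal VOC 2012 label map

def portrait_blur(image: Image.Image, seg_map: np.ndarray) -> Image.Image:
    # Assumes image and seg_map have matching width and height.
    blurred = image.filter(ImageFilter.GaussianBlur(radius=8))
    # 255 where the model predicted 'person', 0 elsewhere.
    mask = Image.fromarray(np.uint8(seg_map == PERSON_LABEL) * 255)
    # Keep original pixels inside the mask, blurred pixels outside it.
    return Image.composite(image, blurred, mask)
```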

With this public release, the search engine giant hopes to make it easier for other groups in academia and industry to reproduce and further improve upon state-of-the-art systems, train models on new datasets, and envision new applications for this technology.

