Google details UI design of Android XR-based AI glasses with display

Google explains how it redesigned its UI for display AI glasses, addressing challenges such as glare, legibility and focal depth, ahead of Google I/O 2026

Google Display AI glasses in action (Image: Google)
Aashish Kumar Shrivastava New Delhi
4 min read Last Updated : Feb 19 2026 | 12:06 PM IST
Google has detailed how it is building interfaces for display-based AI glasses, outlining the design challenges it faced and the solutions it has developed under a new system called Jetpack Compose Glimmer for Android XR. This comes after Google announced the schedule for its 2026 developer conference, Google I/O, which kicks off on May 19. The company may share more details closer to the event.

Display AI glasses: Challenges faced by Google and how it solved them

Unlike phones, AI glasses do not have a traditional screen. Instead, digital elements are layered over the real world through transparent displays. According to Google, this changes fundamental design assumptions. Colours behave differently, shadows do not work the same way, and even typography must be rethought because the interface sits within a constantly changing real-world backdrop. 
One of the key technical realities Google highlights is focal depth. The interface does not sit on the lens surface but appears at a perceived distance of about one metre, roughly arm's length. This means users must consciously shift focus from the real world to the projected interface to read content. Google says this shift is not passive but an active visual adjustment. As a result, the interface must justify that moment of attention by being clear, readable and restrained.
Another limitation comes from the display technology itself. These glasses use additive light, meaning they can only add light to the world, not block it. Black is not truly black — it appears transparent. Early attempts to adapt existing Android Material Design components did not work well, according to Google. Bright, opaque surfaces created glare, drained battery and caused halation, where light bleeds into adjacent areas and makes text hard to read.
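The effect of additive light can be pictured with a toy compositing model. This is only an illustrative sketch, not how Android XR actually renders: every pixel the glasses draw is added to the light already arriving from the real scene, so a black pixel contributes nothing and the world shows through, while a fully bright pixel pushes the result toward saturation and the glare Google describes.

```kotlin
// Toy model of an additive (see-through) display: the rendered pixel can only
// add luminance to the light coming from the real world; it cannot block it.
data class Rgb(val r: Float, val g: Float, val b: Float)

fun additiveComposite(world: Rgb, overlay: Rgb): Rgb = Rgb(
    (world.r + overlay.r).coerceAtMost(1f),
    (world.g + overlay.g).coerceAtMost(1f),
    (world.b + overlay.b).coerceAtMost(1f),
)

fun main() {
    val world = Rgb(0.6f, 0.5f, 0.4f)  // a reasonably bright real-world background
    // Black overlay: the scene is unchanged, so "black" reads as transparent.
    println(additiveComposite(world, Rgb(0f, 0f, 0f)))
    // Bright, opaque-looking overlay: channels saturate, producing glare and halation.
    println(additiveComposite(world, Rgb(1f, 1f, 1f)))
}
```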
To address this, Google redefined how “black” functions in the interface. Instead of being treated as a colour, it acts as a container that provides a clean background for content. The company also developed a new depth system that uses darker, richer shadow effects to create hierarchy and spatial separation without relying on traditional opaque layers.
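Google has not published Glimmer's component catalogue in this post, so the snippet below is a hedged sketch in plain Jetpack Compose rather than Glimmer code; the composable name and styling are hypothetical. It illustrates the "black as container" idea: on an additive display the black surface emits no light and reads as a quiet, transparent backdrop, while the bright text carries all of the contrast.

```kotlin
import androidx.compose.foundation.background
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.padding
import androidx.compose.foundation.shape.RoundedCornerShape
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.unit.dp

// Hypothetical "black as container" card: the black background contributes no
// light on an additive display, so it behaves as a calm container, while the
// bright text supplies the contrast.
@Composable
fun GlanceCard(message: String) {
    Box(
        modifier = Modifier
            .background(Color.Black, RoundedCornerShape(16.dp)) // container, not a colour
            .padding(16.dp)
    ) {
        Text(text = message, color = Color.White) // bright content on a non-emitting surface
    }
}
```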
Text readability required further adjustments. Because the interface appears at a fixed depth of around one metre, typography is measured in visual angle rather than pixels. Google says it worked with vision science teams to establish a minimum readable size of about 0.6 degrees, ensuring text remains glanceable. It also modified Google Sans Flex using optical sizing to improve letter clarity, such as enlarging counters in letters and adjusting spacing automatically. 
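Google does not spell out the geometry behind the 0.6-degree figure, but the standard visual-angle relation gives a feel for what it implies at the roughly one-metre focal plane. The helper below is an assumption, not Google's formula: under it, 0.6 degrees at one metre corresponds to roughly 10 mm of apparent glyph height.

```kotlin
import kotlin.math.tan

// Convert a visual angle (degrees) into the physical height a glyph must
// subtend at a given viewing distance: h = 2 * d * tan(theta / 2).
fun minGlyphHeightMetres(visualAngleDegrees: Double, distanceMetres: Double): Double {
    val halfAngleRadians = Math.toRadians(visualAngleDegrees / 2)
    return 2 * distanceMetres * tan(halfAngleRadians)
}

fun main() {
    // Google's stated floor: about 0.6 degrees at the ~1 m focal plane.
    val h = minGlyphHeightMetres(visualAngleDegrees = 0.6, distanceMetres = 1.0)
    println("Minimum glyph height ≈ %.1f mm".format(h * 1000)) // ≈ 10.5 mm
}
```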
Colour was another challenge. Highly saturated tones that look vibrant on phones tend to “disappear” against real-world backgrounds. Google measured perceived brightness using what it calls an additive contrast ratio, factoring in both display and environmental brightness. As a result, the Glimmer system defaults to neutral, dark surfaces with bright content to maintain contrast across varied lighting conditions. 
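Google does not publish the exact formula behind its additive contrast ratio, but a common definition for see-through displays simply adds the overlay's luminance on top of the ambient scene. The sketch below uses that assumed, simplified form to show why the same bright text holds up indoors yet washes out in daylight.

```kotlin
// Assumed, simplified additive contrast for a see-through display:
//   C = (L_display + L_world) / L_world
// Google's own metric may differ; this is for illustration only.
fun additiveContrastRatio(displayLuminanceNits: Double, worldLuminanceNits: Double): Double =
    (displayLuminanceNits + worldLuminanceNits) / worldLuminanceNits

fun main() {
    // The same 800-nit text is easy to read indoors but nearly vanishes outdoors.
    println(additiveContrastRatio(800.0, 100.0))   // ~9.0 against a dim indoor scene
    println(additiveContrastRatio(800.0, 5000.0))  // ~1.16 against bright daylight
}
```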
Motion design was also reworked. Fast animations proved too abrupt in a heads-up display. For notifications, Google shifted from short 500-millisecond transitions to slower animations lasting nearly two seconds, allowing content to enter a user’s field of view gradually. At the same time, input feedback remains immediate to ensure responsiveness. 
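Glimmer's own animation components are not shown in the post, so the example below approximates the described timing with standard Jetpack Compose animation APIs; the composable name is illustrative. It fades a notification in over about two seconds instead of a phone-style 500-millisecond transition.

```kotlin
import androidx.compose.animation.AnimatedVisibility
import androidx.compose.animation.core.tween
import androidx.compose.animation.fadeIn
import androidx.compose.animation.fadeOut
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable

// Sketch of a slow, ambient notification entrance (~2 s) using stock Compose
// animation APIs; direct input feedback elsewhere in the UI would stay immediate.
@Composable
fun AmbientNotification(visible: Boolean, message: String) {
    AnimatedVisibility(
        visible = visible,
        enter = fadeIn(animationSpec = tween(durationMillis = 2000)),
        exit = fadeOut(animationSpec = tween(durationMillis = 2000))
    ) {
        Text(text = message)
    }
}
```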
While Google has not detailed hardware specifications, its design disclosures provide insight into what consumers can expect from the upcoming display AI glasses: restrained overlays, high-contrast text, neutral visuals, slower ambient notifications and interfaces built to coexist with, rather than dominate, the real world.

Google display AI glasses: What we know so far

Google first showcased its display-based AI glasses at “The Android Show: XR Edition” on December 8 last year, noting partnerships with Warby Parker and Gentle Monster. The prototype featured a discreet in-lens display capable of showing contextual information such as navigation prompts and live translation captions. 
During the demonstration, Google showed how users could call on Gemini for quick assistance. It also previewed Gemini 2.5 Flash Image, dubbed “Nano Banana”, which can edit photos in real time, including inserting objects or people that were not originally present. The company further highlighted on-device memory features: in one demo, the glasses automatically noted snack items on a table and later recalled a specific high-protein option when prompted. Google earlier announced that these glasses will launch sometime in 2026.

Topics: Google, smart glass, gadgets

First Published: Feb 19 2026 | 12:05 PM IST
