At this point in the year, most of the big players in tech have released their latest products for 2017. Google is expected to be the last major player to release an anticipated line of hardware this year, with its Pixel 2 and Pixel 2 XL phones as the stars of the show.
In 2017, smartphones have kept us on our toes with the emergence of features like curved screens and face sensors to animate emojis. For product designers, the inventiveness of this year’s tech has changed what we’ve come to understand about interfaces and the way people interact with our designs.
Some of these changes feel useful and necessary, while others feel forced (and more like a step backward). It may seem like Google played it safe compared to its competitors by sticking with bezeled smartphones and single cameras, but Google was selective about what and where to innovate on their devices. They opted to put their own spin on 2017 trends by enhancing the smartphone experience with their Assistant tech and machine learning.
Analyzing the areas where Google has improved and innovated on their latest devices provides insight into what design and tech features are worth focusing on.
The skinny on taller, thinner screens
Like many 2017 smartphones, Google’s Pixel 2 and Pixel 2 XL have larger displays, with an emphasis on making screens longer rather than wider. The Pixel 2 XL has a 6-inch screen, a step up from the 5.5-inch original Pixel XL. Other phones this year have opted for an 18:9 aspect ratio as well, a shift from the 16:9 screens of the past few years.
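A quick back-of-envelope calculation shows how the extra diagonal on an 18:9 screen goes almost entirely into height. A short sketch (the diagonal and ratio figures come from the spec sheets above; the math is just the Pythagorean theorem):

```python
import math

def screen_dimensions(diagonal_in, ratio_h, ratio_w):
    """Return (height, width) in inches for a screen of the given
    diagonal and aspect ratio, held in portrait orientation."""
    scale = diagonal_in / math.hypot(ratio_h, ratio_w)
    return ratio_h * scale, ratio_w * scale

# Pixel 2 XL: 6-inch diagonal, 18:9 display
h_new, w_new = screen_dimensions(6.0, 18, 9)
# Original Pixel XL: 5.5-inch diagonal, 16:9 display
h_old, w_old = screen_dimensions(5.5, 16, 9)

print(f"Pixel 2 XL: {h_new:.2f}\" tall x {w_new:.2f}\" wide")
print(f"Pixel XL:   {h_old:.2f}\" tall x {w_old:.2f}\" wide")
```

Despite the half-inch bigger diagonal, the Pixel 2 XL’s screen comes out roughly the same width as the original Pixel XL’s (about 2.7 inches) while gaining well over half an inch of height.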
Phones can only be made so wide before they become uncomfortable to hold and interact with. Extending the height instead makes room for more on-screen content, which is typically arranged in long scrolling layouts. With the rise of taller devices like the Pixel 2 XL, it is important to keep in mind which interactions can be easily performed on larger screens, and to remember that, now more than ever, the most reachable areas of the phone are in the lower half of the screen.
Accommodations for taller devices have been made by moving interactions to the bottom of screens. We see Google doing this through the redesign of the Pixel Launcher on the Pixel 2. Originally located near the top of the 2016 Pixel screens, the Pixel Launcher now lives at the bottom of the home screen. Relocating this element on the Pixel 2 shows Google is conscious of how their interface must evolve as they follow the trend toward larger, taller screens, something we all need to take into consideration with interface design moving forward.
Cautiously edging towards edgeless
While neither the Pixel 2 nor the Pixel 2 XL features an edge-to-edge display like other smartphones that debuted in 2017, both sport thin bezels between the screen and the phone’s frame.
Google’s choice to not go edgeless with its Pixel 2 phones is a great reminder that while several new phones like the iPhone X and Samsung Galaxy S8 have gone with edge-to-edge screens, phones with edges aren’t a thing of the past. To achieve an edgeless screen, Apple was willing to add the controversial “Notch” for their sensor housing, leaving us to question if more screen is worth the design concession. It’s refreshing to see that Google is not willing to compromise their aesthetic just to keep up with this trend.
There’s no denying that edge-to-edge displays are an exciting development that could usher in new opportunities for sleek, full-screen designs, but we shouldn’t get ahead of ourselves. Continuing to design great experiences for bezeled devices like the Pixel 2 phones, the iPhone 8, and devices from prior years, while being conscious of adaptations for edgeless screens, is a must. Safe areas will remain our close friends for keeping content from being clipped on whatever type of phone an interface shows up on.
Early adoption of off-screen interactions
Google partnered with HTC for its original Pixel phones and brought the company back as a partner for the Pixel 2. Although Google selected LG to manufacture the Pixel 2 XL, HTC’s influence is evident in the inclusion of its unique Edge Sense feature on both devices.
A distinct feature that debuted on the HTC U11 smartphone, Edge Sense – known as Active Edge on Google’s devices – allows users to trigger actions by squeezing the frame of the phone. By default, a squeeze launches the camera on the HTC U11 and Google Assistant on the Pixel 2. The feature is customizable and can be used to open any app on the device or perform other smartphone actions.
Taking the common action of a person holding their phone and turning it into a mechanism to control the device itself gives us a glimpse into what the future could hold. With the amount of change we’ve seen with the increased size of phone displays and functionality like physical home buttons being removed to maximize screen real estate, it makes sense to move functions to underutilized areas like the sides of the phone frame, allowing us to explore more innovative, natural interactions.
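Neither Google nor HTC has published how Active Edge works under the hood, but the general mechanism, turning sustained side-pressure readings into a discrete gesture event, can be sketched. Everything here (the threshold, the sample count, the reading values) is made up for illustration:

```python
# Hypothetical sketch of squeeze detection from side-pressure sensor
# readings. Active Edge's real implementation is not public; this just
# illustrates mapping a sustained frame squeeze to a configurable action.

SQUEEZE_THRESHOLD = 0.6   # normalized pressure (invented units)
MIN_SAMPLES = 3           # consecutive readings required to count

def detect_squeezes(pressure_stream, on_squeeze):
    """Fire on_squeeze once per sustained press above the threshold."""
    above = 0
    fired = False
    for reading in pressure_stream:
        if reading >= SQUEEZE_THRESHOLD:
            above += 1
            if above >= MIN_SAMPLES and not fired:
                on_squeeze()   # e.g. launch the assistant or camera
                fired = True
        else:
            above = 0          # grip released: re-arm for the next squeeze
            fired = False

events = []
readings = [0.1, 0.2, 0.7, 0.8, 0.9, 0.9, 0.2, 0.1, 0.7, 0.8, 0.9, 0.1]
detect_squeezes(readings, lambda: events.append("launch_assistant"))
print(len(events))  # two sustained squeezes in the stream above
```

The debouncing matters: without the `fired` flag, an ordinary firm grip would trigger the action over and over, which is exactly the kind of false positive that makes an off-screen interaction feel unreliable.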
Using this feature to launch voice assistants and take photos seems to just scratch the surface of where off-screen, frame-based interaction patterns could take user experience as more phones adopt similar features.
‘Always On’ features taking off
Google has been one of the top names in AI, branding itself as an AI company, pioneering research and experiments, and building devices that prove how machines can solve problems and learn to assist humans.
AI is at the heart of making smarter, more useful devices as they learn to listen, watch, and analyze user actions. As devices gather data from users, they can help create more personalized experiences, such as allowing Netflix to recommend shows and movies it thinks users will like based on previously watched content, or helping us avoid getting stuck in traffic by learning our daily commutes.
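The recommendation idea mentioned above can be illustrated with a minimal content-based filtering sketch: build a taste profile from what a user has watched and rank the catalog by similarity to it. The titles, tags, and scoring here are entirely made up; production recommenders are far more sophisticated than this:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse tag-count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical catalog: title -> genre tags
catalog = {
    "Space Saga":   Counter(["sci-fi", "drama"]),
    "Robot Rising": Counter(["sci-fi", "action"]),
    "Baking Wars":  Counter(["reality", "food"]),
}

# Taste profile built from tags of previously watched titles
watched = [Counter(["sci-fi", "action"]), Counter(["sci-fi", "thriller"])]
profile = sum(watched, Counter())

# Rank unwatched titles by similarity to the profile
ranked = sorted(catalog, key=lambda t: cosine(profile, catalog[t]), reverse=True)
print(ranked[0])  # the sci-fi action title scores highest
```

The same shape of computation, accumulating a profile from observed behavior and scoring candidates against it, underlies everything from show recommendations to a phone learning your commute.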
Google’s latest AI-powered feature on the Pixel 2 is an “always on” music recognition feature, similar to Shazam. The phone automatically listens for tracks and displays the song title and artist on the lock screen. While this feature feels more frivolous and verges on gimmicky compared to other use cases for AI, the concept of more “always on,” AI-driven features surfacing in our devices is intriguing.
We’ll see how this feature goes over with users, given how increasingly skeptical people have become of systems listening and watching even when devices are “off” or “asleep.” This skepticism has led to phone cases designed to block the front-facing camera, as well as smart speakers with buttons to mute their microphones when not in use. Maybe a feature as harmless as “always on,” automatic song recognition will help users warm up to AI more. Maybe.
Visual search and assistant capabilities from your phone’s camera
While most companies opted for dual cameras on their latest smartphones, Google stuck to a single camera with an innovative twist. The Pixel 2 phones are the first devices to feature Google Lens, a new camera technology that incorporates machine learning. Lens brings up relevant information and actions based on what users point their phone camera at. So far we’ve seen Lens pull up reviews for a restaurant when aimed at a storefront, identify types of flowers, and save contact information from a poster. Lens provides the same functionality when users upload photos or screenshots.
Visual search and image identification features have been popping up in sites and apps over the past few years, with Pinterest helping users find visually similar posts by cropping into a section of a pin, and Facebook identifying our friends for us in photos. But never before have we seen visual search in real time through a smartphone camera. As visual search makes its way into the world around us, we’ll keep an eye on how this technology evolves.
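The systems behind Lens and Pinterest’s visual search rely on deep learning models, but the core idea of visual similarity, reducing an image to a compact fingerprint and comparing fingerprints, can be shown with a toy “average hash.” This is a classic perceptual-hashing trick, not what Lens actually does, and the 4x4 “thumbnails” below are invented:

```python
def average_hash(pixels):
    """Toy perceptual 'average hash': one bit per pixel, set when the
    pixel is brighter than the image's mean. Similar images yield
    similar bit patterns."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits; lower means more visually similar."""
    return sum(x != y for x, y in zip(a, b))

# Tiny 4x4 grayscale thumbnails (0-255); real systems downscale
# full photos to a small grid before hashing
photo     = [[200, 200, 30, 30]] * 2 + [[30, 30, 30, 30]] * 2
near_copy = [[190, 210, 25, 35]] * 2 + [[35, 25, 30, 40]] * 2
different = [[30, 30, 200, 200]] * 2 + [[200, 200, 200, 200]] * 2

h = average_hash(photo)
print(hamming(h, average_hash(near_copy)))   # small distance: a match
print(hamming(h, average_hash(different)))   # large distance: no match
```

A search index built on fingerprints like these can find near-duplicates quickly; modern visual search swaps the hand-crafted hash for an embedding learned by a neural network, which is what lets it match concepts ("this flower") rather than just pixels.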
Overall, with their latest device lineup, we’re seeing Google follow some of the same trends as others in the industry, such as shifting toward taller screens and removing the headphone jack. But unlike some of the other big players, who are focusing on edgeless displays and face identification, Google is taking a different stance.
The new Pixel devices aren’t fully edge-to-edge, but they do more with off-screen interactions, like Active Edge, than other devices (while sporting equally beautiful designs). They have only a single camera, but that camera draws on powerful technology with Google Lens. And what the Pixels lack in face sensors, their AI-driven, always-on features can hopefully make up for.
Through the physical design and interfaces of their devices, Google proves that design is as much about what you choose to include as what you choose to leave out. In our eyes, the focused feature set and innovations of Google’s Pixel 2 phones could shape the future of devices, design, and user experience, perhaps even allowing for a seamless, multimodal experience down the line, as opposed to chasing trends that will die out. We’ll have to wait and see.