
WillowTree Notes on Google I/O 2019: The Q Factor, Intelligence, and Accessibility

Google I/O took place from May 7-9 in Mountain View, CA. Spots at this conference are pretty coveted; we Android junkies like to be the first to know about new developments. Thankfully, the pre-conference raffle has better odds than winning the lottery and we were lucky enough to get in the door.

While this conference is certainly geared toward developers, there are plenty of key takeaways that show us the areas in which Google is leading the industry this year. Here are our three big takeaways from this year’s conference, what they’ll mean for your company, and what to do next:

Takeaway 1: Android Q beta 3 is here—and it’s really not beta

On the day it was announced, Android Q was available for installation on 21 different smartphones from a dozen different manufacturers, including Google’s Pixel 3a and 3a XL, two brand new phones which were also announced at I/O. Android P supported fewer devices than this when it was fully rolled out in 2018, so while Google is calling this a beta, we’re calling their bluff. Android Q is also future-ready, with native support for 5G and foldable screens, along with a number of new and/or improved features:

  • New gesture navigation, including swipe to go back
  • Dark themes for every app
  • New privacy features, such as reminders that tell you how many apps are using your location and incognito mode for maps
  • Faster security updates
  • Focus Mode—part of Digital Wellbeing—lets you disable certain apps to focus on other stuff you have to do
  • An account linking feature that lets parents monitor usage and set app limits for their children’s phones
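On the dark theme point, one low-effort way for an app to follow the new system-wide setting (a sketch assuming an AppCompat-based app; the theme name is hypothetical) is to inherit from a DayNight parent theme:

```xml
<!-- res/values/themes.xml -->
<resources>
    <!-- Inheriting from a DayNight parent switches the app between its
         light and dark palettes along with the system setting. -->
    <style name="Theme.ExampleApp" parent="Theme.AppCompat.DayNight">
        <!-- Per-mode color overrides live in res/values-night/. -->
    </style>
</resources>
```

Apps that don’t opt in can still be force-darkened by the system on Q (controllable via `android:forceDarkAllowed`), so it’s worth testing both renderings either way.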

What it means: While it’s not an official theme of this year’s announcements, there’s clearly an effort underway by Google (and Apple) to promote quality screen time at the expense of quantity. Focus Mode may be among the more heavy-handed features that aim to limit screen time, but account linking and privacy features that disable an app’s access to your essential data will also have the same effect. App developers that want to keep their wares front and center with users will have to adapt to a new reality where more usage is not always better.

What to do now: Businesses need to be prepared to fight harder for every moment of user face time, which will mean refining or redeveloping the entire ecosystem of communications and transactions that surround your app. Since the market disproportionately rewards organizations that originate great designs, it’s time to start implementing holistic Design Thinking practices to help ensure that your app is significantly better to use than anyone else’s.

On a tactical, more developer-focused level, teams must also familiarize themselves with Android Q’s new capabilities and features. For example, the new gesture navigation seems to be an improvement, but some details might trip up both developers and users, such as the fact that there’s no hamburger menu on a swipe-back screen.
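To make that concrete: with gesture navigation the old navigation bar effectively disappears, and apps are encouraged to draw edge to edge behind it. A minimal theme-level sketch of that opt-in (the theme name is hypothetical, and an app would still need to handle window insets in code):

```xml
<!-- Make the gesture navigation area transparent so content can draw
     behind it; the app must then inset its own controls accordingly. -->
<style name="Theme.ExampleApp" parent="Theme.AppCompat.DayNight">
    <item name="android:navigationBarColor">@android:color/transparent</item>
    <!-- New in Android Q: let the app, rather than the system, manage
         contrast behind the transparent bar. -->
    <item name="android:enforceNavigationBarContrast">false</item>
</style>
```

Views with their own edge swipes (drawers, carousels) will also need attention, since the system back gesture now claims the screen edges by default.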

Takeaway 2: Google Assistant is going multi-modal

While Google is taking steps to incrementally add more intelligence to everything it does, some of the outcomes are nevertheless quite remarkable. Take Google Assistant. Tell it to book a table at your favorite restaurant tomorrow night at 7, and it responds by asking: if that time isn’t available, can it book you a time between, say, 7 and 8? It then calls the restaurant host with a remarkably lifelike voice—complete with human-sounding “ums”—and conducts a conversation in natural language to settle on a mutually agreeable reservation time. As amazing as that is, it’s just the tip of the iceberg:

  • Google Duplex, the technology behind that Google Assistant restaurant booking, is coming to the web. It’s like having a personal, super-intelligent chat bot looking through the web to find you rental cars and movie tickets.
  • The new Pixel 3a and 3a XL will feature Augmented Reality navigation in Google Maps that accesses the device’s camera to superimpose walking directions over a live picture of what you’re seeing.
  • Google Lens with Google Search integration now allows users to recognize, read aloud, and translate text in 14 languages on-device. It can also look for what you want, for example, by scanning a restaurant menu and making recommendations about your favorite foods.

What it means: The idea of what an app can do is continually broadening in all kinds of exciting ways. Technologies such as conversational UI and applied AI are changing to accommodate different modalities—and it’s up to users to choose what works best for them. Brands should be thinking of ways to expand their offering, not necessarily replace what they have.

What to do now: AI can be used to do amazing things, but it’s limited by what developers ask it to do—so ask it to do more. Optimize your discovery and development process with tools such as the WillowTree Field Guide, which distills lessons from building more than 500 digital experiences into templates drawn from user research, behavioral research, and experiments. Harness your organization’s resources more effectively with advanced remote collaboration tools.

Takeaway 3: Accessibility is coming to the forefront

A number of announcements at Google I/O had wide-ranging accessibility implications. Google deserves full credit for investing substantial resources to make services more available to users with special needs:

  • Live Caption, part of Android Q, can provide live, on-device captions for any video seen on an Android device—no matter the source.
  • Live Relay enables users to conduct phone calls by text via Google Assistant: Text inputs are read aloud to the recipient and that person’s voice is transcribed back.
  • Project Euphonia is an ongoing effort to improve Google Assistant’s ability to understand speech impediments, with voice samples from a variety of volunteers being used to refine Google’s voice recognition.

What it means: This new technology offers more opportunities for organizations to forge relationships with users who previously felt left out of many aspects of the digital revolution. As large players like Google make accessibility a core aspect of their platforms, the shortcomings of less inclusive products will become more and more obvious.

What to do now: Ensure that accessibility is a core component of the design, development, and marketing process for your applications, not an afterthought. Accessibility starts with fully functional design specifications. Make sure your user lab has appropriate processes and procedures in place to evaluate the usability factors that impact special needs audiences.
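At the design-spec level, much of this starts with simple markup. As one small sketch (Android view XML; the IDs, drawables, and strings are hypothetical), here is the kind of baseline every screen should meet:

```xml
<!-- Icon-only controls need a spoken label for TalkBack users. -->
<ImageButton
    android:id="@+id/share_button"
    android:layout_width="48dp"
    android:layout_height="48dp"
    android:src="@drawable/ic_share"
    android:contentDescription="@string/share_article" />

<!-- Purely decorative elements should be hidden from screen readers. -->
<ImageView
    android:layout_width="match_parent"
    android:layout_height="1dp"
    android:src="@drawable/divider"
    android:importantForAccessibility="no" />
```

Pairing markup like this with regular TalkBack passes in the user lab catches most labeling and focus-order regressions before release.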
