
Google I/O 2017: voice UI & connected cars


At WillowTree we work hard to visualize our users’ journeys far beyond traditional mobile interfaces and interactions. Our Strategy Team conducts comprehensive ethnographic research, observing user activities well before and after the intended behaviors, to understand the ebb and flow of users’ needs and wants. With the recent leaps and bounds in development surrounding voice interactions, it’s easy to see how natural language interactions (the key word here being natural) will only improve how we engage with our devices. Consider, for example, requesting “turn up the heat” versus the more cumbersome and highly awkward, “change the temperature to…umm, 75 degrees Fahrenheit?” As Mark Spates, Google’s sharply-dressed Product Lead for Home Automation and IoT, instructed: it’s important that we create conversations, not commands.

Here at I/O, our team has taken advantage of the opportunity to explore the latest advancements in voice interactions and its varying levels of utility across digital platforms. Inside an orb-shaped tent with “IoT” inked above the entryway, we tested Google’s collaboration with Volvo using Android Auto, a take on Google Assistant while on the go. While they’ve yet to nail down how Android Auto apps will actually be acquired (the Google Play Store was mentioned as a possibility), it’ll be exciting to see the boundless opportunities that brands could choose to pursue with this new platform for voice tech. Far beyond car games, why not order Panera with the same tool you’ve used your whole life: your voice? As Daniel Padgett articulated, the beauty is that users are instant experts - there’s nothing to teach.

In a compelling talk, Jared Strawderman and Adriana Olmos Antillon reflected on the varying value and utility of voice across digital platforms. For example, long, detailed content, like your favorite chili recipe, is not suitable for voice output. They compared that kind of verbal response to an unending stock market ticker: it’s just not digestible. Instead, a visual aid is needed, or the response should surface an insight drawn from the content itself. For example, ask Google Assistant “What’s the fastest way to work?” and the answer won’t be a verbal slew of turn-by-turn directions but simply “Take Route 250.”
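To make that principle concrete, here’s a minimal sketch in TypeScript. The types and route structure are hypothetical, not any actual Google API; the point is simply that the spoken channel carries one insight while the full detail stays on a screen:

```typescript
// Hypothetical route data; the shapes here are illustrative, not a real API.
interface RouteStep {
  instruction: string;   // e.g. "Merge onto Route 250 East"
  distanceMeters: number;
}

interface Route {
  summaryName: string;   // e.g. "Route 250"
  durationMinutes: number;
  steps: RouteStep[];
}

// A voice-plus-screen response: one short spoken insight,
// with the turn-by-turn detail reserved for a visual surface.
interface AssistantResponse {
  speech: string;        // what gets read aloud
  display?: RouteStep[]; // what gets rendered on a screen, if one is present
}

function buildRouteResponse(route: Route, hasScreen: boolean): AssistantResponse {
  // Speak the insight, not the data set.
  const speech = `Take ${route.summaryName}. You'll be there in about ${route.durationMinutes} minutes.`;
  return hasScreen ? { speech, display: route.steps } : { speech };
}

// Example usage
const fastest: Route = {
  summaryName: "Route 250",
  durationMinutes: 18,
  steps: [
    { instruction: "Head north on 5th St", distanceMeters: 400 },
    { instruction: "Merge onto Route 250 East", distanceMeters: 12000 },
  ],
};

console.log(buildRouteResponse(fastest, false).speech);
// -> "Take Route 250. You'll be there in about 18 minutes."
```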

“Make sure it’s easier than the alternative. If it’s not more convenient for users to do it with voice, then you shouldn’t pursue it.” - Daniel Padgett, Conversation Design Lead for Google Assistant

Finally, an underlying theme throughout the voice discussions at I/O was the imperative to keep the context of the exchange in perspective. Is the user sitting in their car or on their couch? If they’re home, it’s important to consider factors like the time of day and even the room they’re in. Asking the same question in the living room or the kitchen could imply something entirely different. It doesn’t stop there, though; how someone asks a question is relevant too - how would you ask your spouse to dim the lights, versus your mom or a friend? To that end, Google unveiled Google Home Graph to help developers store and provide contextual data about the home and the devices within it.
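As a rough illustration of why that context matters, here’s a small TypeScript sketch of resolving a request like “dim the lights” against the room it was spoken in. The data structures are purely hypothetical, loosely inspired by the idea behind Home Graph rather than its actual API:

```typescript
// Hypothetical home model; the real Home Graph data shapes will differ.
interface Device {
  id: string;
  type: "light" | "thermostat" | "speaker";
  room: string;
}

interface HomeContext {
  devices: Device[];
}

// Resolve an ambiguous request ("dim the lights") using the room the
// request came from, rather than asking the user to name a device.
function resolveLights(home: HomeContext, requestRoom: string): Device[] {
  const inRoom = home.devices.filter(
    (d) => d.type === "light" && d.room === requestRoom
  );
  // Fall back to every light in the home if the room has none.
  return inRoom.length > 0
    ? inRoom
    : home.devices.filter((d) => d.type === "light");
}

// Example usage
const home: HomeContext = {
  devices: [
    { id: "lamp-1", type: "light", room: "living room" },
    { id: "lamp-2", type: "light", room: "kitchen" },
    { id: "thermo-1", type: "thermostat", room: "hallway" },
  ],
};

console.log(resolveLights(home, "kitchen").map((d) => d.id));
// -> ["lamp-2"]: the same words, scoped to where they were spoken.
```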

As we collect our learnings from I/O, there’s one thing we can be certain of: in the world of voice interactions, we’re only at the beginning, but we’ve got a lot to look forward to. As our CEO Tobias Dengel highlighted in a LinkedIn article, Google is doing its part to lay the framework for the road ahead; now it’s on us to take advantage of it.

“OK Google, publish this post.”
