To many observers, 2016 was at best a lackluster year for mobile innovation, save the Pokemon Go craze (which ended up not unleashing an AR revolution) and the new Google Pixel phone. Some have argued innovation in mobile has run its course and we are destined for performance-only improvements for the foreseeable future, like we’ve been seeing in the desktop market for a decade.
At a minimum, the days of lines around the block for the next device and of users showing off new features to their friends seem long gone. What is going on here? Was Steve Jobs the only person on the planet who could drive leaps in innovation?
Go back to 2006, the year the iPhone launched, and innovation seemed incremental at best. CNET picked the Telstra hiptop 2 as its next big thing: http://info.willowtreeapps.com/e/61772/ws-best-mobile-phones-of-2006-/38vdmm/277184370
Then out of nowhere came the iPhone, which revolutionized the industry. What's most interesting about the CNET article from 2006 is that every device was trying to be the "iPod killer"; no one was talking about the iPod turning into an iPhone, which seems obvious in retrospect.
So what's coming around the corner in 2017? Just like in 2006, the place to start is where the main user problems are. In 2006, the problem was a lack of innovation in the user interface. In 2016, the main problem is (once again) a lack of innovation in the user interface: nothing meaningful has changed in four to five years, and 3D Touch is not a "delight" feature (yet, anyway). What is working is the iOS/Android duopoly, so any innovation looks to come from within those platforms.
The key problem with the mobile user interface is twofold:
- The touch interface hasn’t changed in 10 years. Typing on it is difficult and navigation is not optimal. As a result, communication is harder than it should be and finding what you want is a pain. Conversely, consuming what you want once you get to it is great.
- Navigation, mediocre within a single mobile app or website, becomes truly disastrous when moving among apps, functions, widgets, and sites. Why do I need to spend 30 seconds swiping to find the apps that aren't on my first screen? Why, if I want to follow a link (say, in an email or text) into a mobile app, do I have to start from the top?
We believe 2016 was a foundational year in which the duopoly (iOS and Android) took a year off to regroup and really work on the two UX issues above. No designer should ever forget the following three metrics:
- Humans type at ~40 words per minute.
- Humans speak at ~130 words per minute.
- Humans read at ~250 words per minute.
What this means is that the next-generation interface will receive data from humans via the spoken word and respond with visual text and graphics. You will be able to open the app you want via voice and easily interact with it by speaking to it, but you will receive most information visually. Just as in 2006, what will make this possible is the confluence of multiple technologies, including speech-to-text and the artificial intelligence needed to interpret the spoken word.
In terms of navigation within the mobile ecosystem, 2016 was spent laying the foundation for deep linking, which in essence allows us to cross-link anywhere within the mobile universe (when implemented correctly).
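To make the idea concrete, here is a minimal sketch of how a deep link might be routed inside an app. The `myapp://` scheme and the route names are hypothetical; a real app would register its scheme (or use Apple Universal Links / Android App Links) with the operating system.

```python
from urllib.parse import urlparse

def route_deep_link(url):
    """Map a custom-scheme deep link to an in-app destination.

    The myapp:// scheme and route names are illustrative only.
    Returns a (screen_name, params) tuple.
    """
    parsed = urlparse(url)
    if parsed.scheme != "myapp":
        # Unrecognized links fall back to the home screen.
        return ("home", {})
    segments = [s for s in parsed.path.split("/") if s]
    if parsed.netloc == "product" and segments:
        # myapp://product/123 -> product detail for item 123
        return ("product_detail", {"product_id": segments[0]})
    if parsed.netloc == "order" and segments:
        # myapp://order/987/status -> status screen for order 987
        return ("order_status", {"order_id": segments[0]})
    return ("home", {})
```

The point of the sketch is the user experience it enables: a link in an email or text drops the user directly on the relevant screen instead of the app's front door.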
Issue number one for our clients right now is mobile engagement (not app downloads). The primary mechanism to drive engagement is personalized, outbound contact (notifications, emails, social media, etc.) that links users straight and deep into personalized content. Artificial intelligence and deep linking are combining to make this a reality.
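A hedged sketch of what that outbound contact might look like: a push payload that carries a personalized deep link, so tapping the notification lands the user on the relevant screen rather than the app's home. The field names and the `myapp://` scheme are illustrative, not any vendor's actual APNs or FCM schema.

```python
def build_push_payload(user_name, order_id):
    """Build an illustrative push-notification payload that deep-links
    straight to personalized content.

    Field names and the myapp:// scheme are hypothetical, not a real
    push-provider schema.
    """
    return {
        "title": "Your order has shipped",
        "body": f"Hi {user_name}, order {order_id} is on its way.",
        # Tapping the notification opens this screen directly,
        # skipping the app's top-level navigation.
        "deep_link": f"myapp://order/{order_id}/status",
    }
```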
2016 looks to go down as a ho-hum year for mobile innovation. But when the next big leap forward happens in ‘17 and ‘18, and we can tell our phones to “please do my holiday shopping,” we’ll know what Apple and Google were spending their time on.
TO DO NOW: While Apple and Google are busy laying their foundation, we should all be laying ours. This means doing strategy work around how a spoken / visual multimodal user interface will work for our products and services, and what data and intelligence layers we need to build to make it a reality. For example:
- What common customer service functions can we fulfill via multimodal UI or a chatbot?
- What products will users want to order via voice?
- What messenger-only experiences can we deliver to our users?
- What voice interactions can we enable to make our field employees’ lives easier?
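One way to start prototyping answers to these questions is a tiny keyword-based intent matcher that maps a transcribed spoken request to a customer-service function. The intents and keywords below are made up for illustration; a production system would use a trained natural-language model or a vendor NLU service instead.

```python
# Minimal keyword-based intent matching for transcribed speech.
# Intents and keyword sets are illustrative only.
INTENTS = {
    "track_order": {"track", "shipping", "delivery", "where"},
    "reset_password": {"password", "reset", "login"},
    "store_hours": {"hours", "open", "close"},
}

def match_intent(transcript):
    """Return the intent whose keywords best overlap the transcript,
    or None if nothing matches."""
    words = set(transcript.lower().split())
    scores = {name: len(words & kws) for name, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

Even a toy like this forces the right strategy questions: which requests your users actually voice, and which back-end data and intelligence layers each intent needs before it can be fulfilled.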