Review: Amazon Echo Show launches multi-modal experiences

(Image courtesy of Amazon.)

Today, Amazon announced the Echo Show.

This is an important and historic step toward a blended, or multi-modal, experience. In our Mobile Predictions for 2017 blog post, we talked about how voice and the screen are going to have to start working together. The key to understanding these developments is the asymmetry in the speeds of human communication:

  • ~40 words per minute when typing
  • ~130 words per minute when speaking
  • ~250 words per minute when reading

So we are naturally pulled toward transmitting to our devices via voice, and receiving information back via the written word and graphics.

The Show is a great step—use your voice to ask for what you want, and get a response via the screen. However, its biggest limitation is that the user is still trapped inside a single device within the Amazon ecosystem. The real breakthrough will be when users can easily hop among ecosystems—i.e., ask their Amazon Echo what movies are playing tonight and get the response on their iPhone.

Those experiences are possible today—simply by tying together Alexa Skills or Google Home Actions with mobile apps. Leading companies are prototyping these experiences, allowing users to hop among devices, ecosystems, and interfaces.
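As a rough illustration of the pattern, here is a minimal sketch of an Alexa Skill backend (an AWS Lambda handler) that answers a request by voice while handing the detailed result off to a companion mobile app. The intent name, the `send_mobile_push` helper, and its endpoint are hypothetical placeholders, not a real API; only the Alexa request/response JSON envelope follows the actual Skills Kit format.

```python
import json
import urllib.request

# Hypothetical companion-app push service; a real implementation might use
# a service such as Amazon SNS or Firebase Cloud Messaging instead.
PUSH_ENDPOINT = "https://api.example.com/push"


def send_mobile_push(user_id, payload):
    """Forward the answer to the user's phone. Placeholder endpoint/payload."""
    req = urllib.request.Request(
        PUSH_ENDPOINT,
        data=json.dumps({"user": user_id, "data": payload}).encode(),
        headers={"Content-Type": "application/json"},
    )
    # urllib.request.urlopen(req)  # network call disabled in this sketch


def lambda_handler(event, context):
    # Alexa delivers the spoken request as JSON; the session userId lets us
    # route the same answer to that user's mobile app.
    user_id = event["session"]["user"]["userId"]
    intent = event["request"]["intent"]["name"]

    if intent == "MovieShowtimesIntent":  # hypothetical intent name
        answer = "I've sent tonight's showtimes to your phone."
        send_mobile_push(user_id, {"type": "showtimes", "text": answer})
    else:
        answer = "Sorry, I didn't catch that."

    # Standard Alexa response envelope: speak a short confirmation aloud,
    # while the detail lands on the screen the user actually reads.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": answer},
            "shouldEndSession": True,
        },
    }
```

The design choice mirrors the speed asymmetry above: voice carries the short request and confirmation, while the richer, read-at-250-wpm content moves to the phone's screen.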

For a deeper dive, see our recent talk at the Landmark CIO Summit in May 2017.
