
How to Integrate Voice Technology Into Your Existing Business Systems

For many of us, voice technology means smartphones and speakers powered by voice assistants like Alexa and Siri. These voice-powered devices are helpful; however, they don’t represent how the mass adoption of voice technology will change our lives and business models over the next five to ten years.

In my Wall Street Journal bestseller, The Sound of the Future: The Coming Age of Voice Technology, I detail how the voice revolution will extend beyond standalone devices and experiences. Instead, it will be about integrating voice into the systems and interfaces people already know and trust.

Businesses that understand this will be best positioned to seize the opportunities offered by voice, including automating tasks, improving accuracy, and reducing costs. Getting there starts with integrating voice into your existing business, a process that happens over the following three stages.

Stage 1: Identify Your Voice Use Cases

Integrating voice technology starts with finding the routine business processes that voice UI can improve. Discovering these use cases begins with analyzing every internal and external process that involves interaction with customers, employees, or business partners.

Gather a team of representatives from each relevant department. List these processes together, then break each down by the specific interactions that occur. This initial step is also an excellent time to choose an AI firm that can bring expertise and a fresh perspective to these conversations.

Together, brainstorm which traditional keyboard-based communications could be replaced by voice. Ask questions such as:

  • What modes of communication drive each interaction? Identify what devices each interaction depends on (e.g., phone, tablet, keyboard) and how they’re used (e.g., typing, recording, searching or accessing information).
  • Where does friction occur? Look for instances in each interaction where tasks become time-consuming, awkward, error-prone, or simply annoying.
  • How could voice relieve that friction? In other words, how could voice streamline, accelerate, enhance accuracy, or otherwise smooth these moments of friction?

With this analysis done, prioritize one or two use cases. You don’t want to bite off too much too soon, and the experience you gain on these initial voice projects will help you understand how voice best serves your business, positioning you to expand voice technology into other functions and departments.
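
If it helps to make this exercise concrete, below is a minimal sketch (in Python) of how a team might record and rank the candidates that come out of the brainstorm. The field names, the 1–5 scoring scale, and the example data are illustrative assumptions, not a prescribed framework.

```python
from dataclasses import dataclass, field

@dataclass
class CandidateUseCase:
    """One interaction flagged as a possible fit for voice."""
    process: str                  # business process the interaction belongs to
    interaction: str              # what actually happens today
    current_devices: list = field(default_factory=list)
    friction: str = ""            # where the task is slow, awkward, or error-prone
    voice_opportunity: str = ""   # how voice could relieve that friction
    impact_score: int = 0         # estimated benefit, 1 (low) to 5 (high)
    effort_score: int = 0         # estimated build effort, 1 (low) to 5 (high)

def prioritize(candidates: list, top_n: int = 2) -> list:
    """Rank candidates by impact relative to effort and keep the top one or two."""
    ranked = sorted(candidates, key=lambda c: c.impact_score - c.effort_score, reverse=True)
    return ranked[:top_n]

# Two candidates surfaced during the departmental brainstorm (example data).
intake = CandidateUseCase(
    process="Patient intake",
    interaction="Nurse types medical history into a tablet form",
    current_devices=["tablet"],
    friction="Slow typing while also talking to the patient",
    voice_opportunity="Dictate answers directly into the form fields",
    impact_score=4,
    effort_score=2,
)
reporting = CandidateUseCase(
    process="Field reporting",
    interaction="Technicians file end-of-day reports from a laptop",
    current_devices=["laptop"],
    friction="Reports are filed hours after the work is done",
    voice_opportunity="Capture notes by voice on site",
    impact_score=3,
    effort_score=3,
)
print([c.process for c in prioritize([intake, reporting])])  # ['Patient intake', 'Field reporting']
```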

Six Places to Look for Voice Technology Use Cases

To help narrow your focus, here are six categories of interaction where voice has a strong potential to make an immediate, positive impact:  

  1. Conveying information to devices: Interactions that depend on extensive data entry, such as filling out forms, submitting reports, or completing surveys (for instance, a nurse or patient entering medical information during intake)
  2. Retrieving data: Interactions that require pulling detailed, specific information from vast amounts of data (such as employees pulling information from a central database)  
  3. Handling transactions: Interactions that center on making a purchase (including transferring money, subscribing to services, or paying bills)
  4. Operating hands-free and heads-up: Interactions where hands and eyes must stay on the task, known as “hands-free, heads-up” engagement (think retail floors, warehouses and factories, driving and dispatch, law enforcement and military operations, and customer service desks)
  5. Gathering ambient information: Interactions with the surrounding environment where voice technology can capture, analyze, and respond to ambient audio data (such as hospitals, government offices, and contact centers)
  6. Facilitating instant responses: Interactions that seek a direct response to a previously cumbersome experience (like locating a TV show or responding to an ad)
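
Taking the first category above (conveying information to devices) as an example, here’s a minimal sketch of voice-driven form filling. The `transcribe_audio` stub stands in for whatever speech-to-text service you adopt, and the regex patterns are purely illustrative; a production system would rely on an NLU model or the STT provider’s entity extraction.

```python
import re

def transcribe_audio(audio_path: str) -> str:
    """Stand-in for a real speech-to-text call; hard-coded so the sketch runs offline."""
    return "Patient name is Jane Doe, age 54, allergic to penicillin"

def fill_intake_form(transcript: str) -> dict:
    """Map a free-form spoken statement onto structured intake form fields."""
    form = {"name": None, "age": None, "allergies": None}
    if m := re.search(r"name is ([A-Z][a-z]+ [A-Z][a-z]+)", transcript):
        form["name"] = m.group(1)
    if m := re.search(r"age (\d{1,3})", transcript):
        form["age"] = int(m.group(1))
    if m := re.search(r"allergic to (\w+)", transcript, re.IGNORECASE):
        form["allergies"] = m.group(1)
    return form

print(fill_intake_form(transcribe_audio("intake_recording.wav")))
# {'name': 'Jane Doe', 'age': 54, 'allergies': 'penicillin'}
```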

Cathay Pacific Airways offers an inspiring example of how large companies with complex operations might approach voice. The airline started with a broad use case in customer service, then applied what it learned to a completely different part of the company.

So far, Cathay Pacific has automated 40–60% of customer phone calls, with the goal of eventually handling 90%. On the horizon, Cathay Pacific plans to deploy voice tools that simplify and automate communication between aircraft maintenance teams, a move that will enhance safety as well as efficiency.

Stage 2: Identify What Level of Voice Technology Each Use Case Needs

The best way to identify what level of voice technology you need is to analyze the business context as rigorously as possible. Let’s take conversational AI as an example. Though many business and technology leaders see open-ended, human-like dialogue as the pinnacle of voice technology, that doesn’t mean conversational AI is the right solution for every scenario.

Most voice tools today don’t use conversational AI, including many that seem to. Instead, they use technologies such as natural language processing (NLP), large language models (LLMs), and retrieval-augmented generation (RAG) systems in a bounded way. Even chatbots built on legacy intent classification with predefined words and phrases can pull off exchanges that feel conversational to users.

This bounded mode of voice interaction is easier, faster, and less expensive than conversational AI, and it achieves high success rates within specific business contexts. A big reason, as Voicebot.ai founder Bret Kinsella points out, is that most people use voice tools for simple, straightforward interactions:

  • Playing a specific song
  • Placing a food order
  • Checking account balances
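
To illustrate how far bounded interaction can go, here’s a minimal sketch of legacy-style intent classification over predefined phrases, covering the kinds of simple requests listed above. The intent names and trigger phrases are assumptions for the example; real systems typically use a trained NLU model rather than literal keyword matching, but the bounded principle is the same.

```python
# Bounded voice handling: each intent is defined by a few trigger phrases,
# and an utterance is routed to the first intent whose phrases it contains.
INTENTS = {
    "play_song": ["play", "put on"],
    "order_food": ["order", "get me"],
    "check_balance": ["balance", "how much is in"],
}

def classify_intent(utterance: str) -> str:
    text = utterance.lower()
    for intent, phrases in INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "fallback"  # hand off to an agent or ask the user to rephrase

for utterance in [
    "Play some jazz in the kitchen",
    "Order my usual pizza",
    "How much is in my checking account?",
]:
    print(utterance, "->", classify_intent(utterance))
```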

So, what kinds of use cases would conversational AI best serve? Any scenario that requires understanding and responding to complex, open-ended statements or requests is a strong candidate. For instance, conversational AI could listen to live conversations in a customer service call center and generate real-time responses informed by customer data.
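
For contrast, here’s a minimal sketch of the retrieval-augmented flow that a call-center scenario implies: look up the caller’s account data, attach it to the live transcript, and ask an LLM to draft a suggested response for the agent. The OpenAI-style client and the `lookup_customer` stub are assumptions for the example; any LLM endpoint and customer data store would slot in the same way.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK; any LLM client works similarly

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def lookup_customer(customer_id: str) -> dict:
    """Stand-in for a real CRM or core-banking lookup."""
    return {"id": customer_id, "plan": "Premier", "open_disputes": 1, "tenure_years": 7}

def suggest_agent_response(customer_id: str, live_transcript: str) -> str:
    """Ground the model in the caller's record, then draft a response the agent can adapt."""
    record = lookup_customer(customer_id)
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You assist a call-center agent. Use only the provided customer "
                        "record, be concise, and suggest one concrete next step."},
            {"role": "user",
             "content": f"Customer record: {record}\n\nLive transcript:\n{live_transcript}"},
        ],
    )
    return completion.choices[0].message.content

print(suggest_agent_response("C-1042", "Caller: I was charged twice for last month's bill."))
```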

Another example is Truist, a bank developing automated voice tools to handle complex financial issues that currently require human intervention. In time, these voice tools will make customized credit card recommendations by considering spending patterns and other customer data.

Stage 3: Choose the Right Partner to Build Your Voice Tools

Finding the right AI partner is crucial when developing and integrating voice technology. Ideally, you work together through the entire process, from identifying use cases to rolling out new voice applications.

The right partner matters because developing voice technology takes immense resources and know-how. The best AI firms constantly run experiments to find efficiencies that benefit their clients, like making LLM benchmarking faster and cheaper.

Likewise, the best AI consultants bring fresh ideas thanks to a big-picture perspective on how your industry and others are using voice.

If you’re ready to explore how voice can impact your business, check out WillowTree’s approach to conversational AI and voice technology.
