Bots have existed since the start of computing, and now, thanks to advancements in several technologies, they have become truly useful. The bot development process is very similar to the app development process: it all starts with a solid UX strategy, led by strong design and development teams.
With the rise of accessible artificial intelligence, like Cortana and Siri, and advancements in Natural Language Processing (NLP), users everywhere are increasingly comfortable interacting with text- and voice-based interfaces. Well-designed bots and virtual assistants further appeal to users by simplifying tasks that would otherwise require many steps to complete on a screen or in real life (for example, booking a hotel room or airline tickets). And with companies like Facebook and Microsoft opening up AI and NLP APIs to developers, adoption of the technology shows no sign of slowing in the coming years.
What is a bot?
A bot, interchangeably referred to today as a chatbot, is a service governed by rules or artificial intelligence that users can interact with through an existing chat interface (think Facebook, Slack, or Telegram) or through a custom messaging interface.
Chatbots have taken off recently as a result of improvements to multiple technologies:
- Natural Language Processing (NLP) allows computers to more accurately translate normal human speech patterns into usable code. This means users do not have to know what variables to enter or what syntax to use to tell a computer to do something.
- Automatic Speech Recognition (ASR), also known as Speech to Text (STT), allows users to talk to their devices, making interaction with bots easy and widely available. One of the most recognizable examples of ASR technology at work is Apple’s Siri.
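The core job of the NLP layer described above is to turn free-form text into something a program can act on. The toy sketch below illustrates the idea with simple keyword matching; real NLP engines use trained statistical models, and all function names and intent labels here are hypothetical.

```python
# Toy illustration of what an NLP layer does: free-form text in,
# structured intent and entities out. Real engines use trained models;
# this sketch substitutes a keyword heuristic to show the input/output shape.

def parse_message(text: str) -> dict:
    """Map a user's natural-language message to an intent and entities."""
    text_lower = text.lower()
    if "showtime" in text_lower or "when can i see" in text_lower:
        # Toy heuristic: treat everything after "see" as the movie title.
        title = text_lower.split("see", 1)[-1].strip(" ?")
        return {"intent": "GetShowtimes", "entities": {"movie": title}}
    if "book" in text_lower and "room" in text_lower:
        return {"intent": "BookRoom", "entities": {}}
    return {"intent": "Unknown", "entities": {}}

print(parse_message("When can I see Star Wars?"))
# {'intent': 'GetShowtimes', 'entities': {'movie': 'star wars'}}
```

The point is the contract, not the matching logic: downstream code never sees raw speech or text, only a parsed intent plus its entities.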
How Chatbots Improve User Interactions
Chatbots are an incremental improvement over apps and web pages for certain tasks using “Conversational UI.” WillowTree’s UX team designs experiences that make it quick and easy for users to navigate through a series of screens and locate the information they need in order to perform a task. In the future, for certain tasks with bounded outcomes, having a voice or text conversation with a computer will be more efficient than navigating through multiple screens. Easily understandable consumer-facing examples include asking a Regal chatbot, “when can I see Star Wars?” or telling a Wyndham chatbot to “book my hotel room.” Room for greater innovation exists in workplaces, where tasks like ordering parts or call center operations could be entirely automated with chatbots.
Where Do Bots Live?
Bots exist primarily in two places:
- Inside Messaging & Chat Services: Think Facebook Messenger, Slack, or even the text messaging app on your phone. Within any of these services, you will be able to pull up thousands of individual bots to interact with. For example, you could pull up the Regal Cinemas bot and ask it to book you movie tickets for tonight.
- Inside Apps: Bots can handle all or portions of the experience within an app. WillowTree believes this is a huge opportunity in the near term, since standalone bots are much more complex to roll out. For example, you could use the Choice Hotels app to find a list of hotels near a point on a map, see prices, and then move to a bot interface to tell the app to book you a room for tonight. We believe these types of hybrid experiences will become common very quickly.
How To Develop a Bot
WillowTree has created a development strategy to consistently build bots that provide great user experiences. The two most critical pieces of great bot development are the conversational architectures and the language understanding models.
Our strategy centers around conducting extensive user research to get a detailed understanding of exactly what users expect from a bot. Once we have a list of everything a user might try to accomplish, we begin designing the conversational architecture. For example, if a user is interfacing with a movie theater bot, they will likely want to ask about showtimes, trailers, or theaters. These queries are all examples of “intents,” and they are central to developing the language understanding models that govern the best bots. After developing a list of intents, we proceed to design the conversation in order to capture all the necessary parameters to respond to the user’s request. These intents and parameters are then input into a natural language processor, which then trains our models on real user messages.
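The intents-and-parameters catalog described above might be sketched as a simple data structure, paired with a helper that drives the conversation by asking for whatever is still missing. This is a hypothetical illustration, not WillowTree's actual tooling; the intent names and parameters are assumptions based on the movie theater example.

```python
# Hypothetical catalog of intents, their required parameters, and sample
# training phrases, of the kind fed to an NLP service before training.

MOVIE_BOT_INTENTS = {
    "GetShowtimes": {
        "parameters": ["movie", "theater", "date"],
        "training_phrases": [
            "When can I see Star Wars?",
            "What's playing tonight at the downtown theater?",
        ],
    },
    "GetTrailer": {
        "parameters": ["movie"],
        "training_phrases": ["Show me the trailer for Star Wars"],
    },
}

def missing_parameters(intent: str, collected: dict) -> list:
    """Return the parameters the conversation still needs to capture."""
    required = MOVIE_BOT_INTENTS[intent]["parameters"]
    return [p for p in required if p not in collected]

# After "When can I see Star Wars?", only the movie is known, so the
# bot would follow up to ask which theater and which date.
print(missing_parameters("GetShowtimes", {"movie": "Star Wars"}))
# ['theater', 'date']
```

Designing the conversation then amounts to writing a follow-up prompt for each parameter that can come back missing.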
Our chatbot architecture consists of three main components: the messaging integration modules, the natural language processor, and the business logic. The messaging integration modules should all conform to a shared contract with the business logic so that new modules can easily be added and the bot behaves consistently across multiple platforms. The business logic receives the raw messages via the integration modules over a REST API, sends each message to be processed by our NLP engine, then handles the parsed intent and entities appropriately.