Artificial intelligence offers immense potential to support digital customers, particularly in heavily regulated industries like healthcare and finance. However, a key challenge remains: building trustworthy AI features within these industries, so that customers find value in AI and machine learning and feel comfortable using the technology to support their wellness, financial, and other crucial decisions.
WillowTree recently had the opportunity to design a conversational AI assistant for one of our healthcare clients (note that AI-enabled assistants differ from legacy technology “chatbots”), and the project highlighted the obstacles to building trust with AI experiences.
First, the novelty of AI can breed skepticism, making it challenging to build trust. Second, building trustworthy AI requires juggling transparency, reliability, and security. In heavily regulated fields like healthcare, AI systems and underlying algorithms often have complex decision-making processes to ensure security and compliance with sensitive datasets, which can throw off the delicate balance of these trust-building criteria.
This complexity can make it harder for users to understand the "why" behind the AI's information and recommendations, leading to a lack of trust. Concerns around AI bias and hallucinations also remain. Understandably, users and other healthcare stakeholders might hesitate to rely on artificial intelligence or embrace its potential.
We strategically leveraged design tactics to combat the biggest challenges of building trust in our healthcare client's conversational AI assistant. Clear and concise messaging, interface elements that offer a glimpse into the AI's workings, and a streamlined experience that acknowledges the system's limitations are all strategies that can overcome trust hurdles with users and result in a successful and well-received AI experience.
Below, we’ll share details about our top 5 design techniques for building trustworthy AI experiences that came out of the work with our healthcare client.
NOTE: To ensure privacy and confidentiality, we’ve replaced the client's name and branding with "WT Wellness" and are referring to their conversational AI assistant as "Willow."
In the first moments of an AI experience, clearly explaining its benefits and security measures upfront is crucial. If the intentions, guidelines, and expectations of AI are unclear, it can create barriers to usage — especially for users who are skeptical given the newness and potential bias inherent in AI experiences.
For our healthcare client, we decided to ask the user for consent before offering the AI experience, including a consent capture screen ahead of their first use of the AI assistant. Presenting the user with AI agreements and capturing their consent is a necessary moment in the user experience. To make this otherwise banal screen valuable, we anticipated user questions and needs by using it to communicate the AI feature's value and security.
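In practice, this means the assistant stays behind a consent gate until agreement is recorded. The sketch below shows one way that logic might look; the names (ConsentStore, resolveEntryScreen, the agreement version) are our illustrative assumptions, not the client's implementation.

```typescript
// A minimal sketch of gating the assistant behind recorded consent.
// All names here are hypothetical, for illustration only.

const AGREEMENT_VERSION = "2024-01"; // bump to re-prompt users when the AI agreement changes

interface ConsentRecord {
  userId: string;
  agreementVersion: string;
  acceptedAt: Date;
}

class ConsentStore {
  private records = new Map<string, ConsentRecord>();

  hasCurrentConsent(userId: string): boolean {
    return this.records.get(userId)?.agreementVersion === AGREEMENT_VERSION;
  }

  recordConsent(userId: string): void {
    this.records.set(userId, {
      userId,
      agreementVersion: AGREEMENT_VERSION,
      acceptedAt: new Date(),
    });
  }
}

// First-run routing: show the consent screen until consent is captured.
function resolveEntryScreen(store: ConsentStore, userId: string): "consent" | "assistant" {
  return store.hasCurrentConsent(userId) ? "assistant" : "consent";
}
```

Versioning the agreement matters: if the terms change, users are re-prompted rather than silently held to an agreement they never saw.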
Users often distrust healthcare systems by default, fearing that any engagement with their coverage could affect their costs or premiums. Additional messaging, or an automated response, that explicitly states whether using the AI assistant will trigger any action from the provider may be worthwhile.
Simple messaging and clear actions build trust around AI from the first screen and provide users the context they need to confidently move deeper into the conversational AI experience.
As part of good conversational AI assistant UX/UI design, flagging that AI is a part of the experience from the start is essential to set user expectations. When users proactively engage with an AI platform like ChatGPT, they're expecting AI from the start. When they're engaging with a healthcare platform, they might automatically expect to interface with a human agent.
For our healthcare client, we included clear messaging, using "Powered by AI" in the eyebrow to drive home the fact that the user is embarking on an AI experience. This design approach immediately tells users they are interacting with AI, setting expectations and building trust through transparency. Along with this messaging, consistently using AI iconography, such as the "sparkle icon" that is increasingly becoming a standard indication of AI functionality, signals and reinforces the use of AI throughout the website or app and aligns user expectations to the experience.
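The key design principle is that the same indicator appears wherever AI is at work. A minimal sketch of a reusable eyebrow element follows, using plain DOM APIs; the class names and sparkle glyph are illustrative, not the client's design system.

```typescript
// A minimal sketch of a reusable "Powered by AI" eyebrow label.
// Class names and the sparkle glyph are illustrative assumptions.

function createAiEyebrow(label = "Powered by AI"): HTMLElement {
  const eyebrow = document.createElement("p");
  eyebrow.className = "eyebrow eyebrow--ai";

  // Sparkle icon: an increasingly standard visual signal of AI functionality.
  const icon = document.createElement("span");
  icon.setAttribute("aria-hidden", "true");
  icon.textContent = "✦";

  const text = document.createElement("span");
  text.textContent = ` ${label}`;

  eyebrow.append(icon, text);
  return eyebrow;
}
```

Centralizing the element in one function, rather than recreating it per screen, is what keeps the signal consistent across the product.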
Broadcasting the use of AI through verbal and visual design elements minimizes unnecessary barriers to usage. It encourages users to feel supported and comfortable interacting with the AI feature, ultimately increasing their confidence and trust.
Depending on the AI tools and features they’ve already tried, users may have differing expectations of how AI can and should support them in an experience. If the AI's capabilities aren't clearly shared with users, they may be disappointed when it doesn't live up to their expectations, and as mentioned, reliability is a cornerstone of trustworthy AI.
For our healthcare client, we included messaging at the start of the experience to reinforce the core use cases supported for initial release: finding a doctor and managing specific health needs.
Clear messaging sets expectations for the kinds of support the product can successfully provide, building user trust as the system delivers on its identified use cases.
In some cases, a user will ask a question that conflicts with regulatory frameworks in place to ensure privacy and the Hippocratic oath to “first do no harm.” While it’s theoretically possible for the AI to attempt such tasks, in highly regulated industries we’ve tempered this capability by inserting Supervisor LLM moderation to ensure safety and regulatory compliance.
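Conceptually, a supervisor LLM reviews each request against a compliance rubric before the primary assistant responds. Below is a minimal sketch of that pattern; the `llm` client, its `complete()` signature, and the policy prompt are assumptions for illustration, not WillowTree's production setup.

```typescript
// A minimal sketch of a supervisor-LLM moderation step.
// The llm client and its API shape are hypothetical stand-ins.

declare const llm: {
  complete(req: { system: string; user: string }): Promise<{ text: string }>;
};
declare function answerWithAssistant(message: string): Promise<string>;

type Verdict = { allowed: boolean; reason?: string };

// The supervisor model sees the raw request plus a compliance rubric and
// returns a structured allow/deny decision before the assistant responds.
async function superviseRequest(userMessage: string): Promise<Verdict> {
  const response = await llm.complete({
    system:
      "You are a compliance supervisor for a healthcare assistant. " +
      "Deny requests for diagnoses, medication changes, or another " +
      'member\'s data. Reply as JSON: {"allowed": boolean, "reason": string}',
    user: userMessage,
  });
  return JSON.parse(response.text) as Verdict;
}

async function handleMessage(userMessage: string): Promise<string> {
  const verdict = await superviseRequest(userMessage);
  if (!verdict.allowed) {
    // Route into the graceful-failure flow described below.
    return "I'm not able to help with that, but I can connect you with a representative.";
  }
  return answerWithAssistant(userMessage); // primary assistant pipeline
}
```

Keeping the supervisor separate from the assistant means the compliance policy can be tightened or audited without retraining or re-prompting the assistant itself.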
For requests outside of the use case the AI tool supports, we planned for “failing gracefully” — or, as one designer on the project called it, plotting “an alternate route of success for the user” — after several unsuccessful attempts to get the AI's help.
We designed the user experience flow for a seamless handoff to human customer service. When a user asks the AI assistant for something outside of its area of expertise or compliance, the assistant explains why it’s unable to help and offers to open a chat with a human customer service representative.
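One way to model "several unsuccessful attempts" is a simple per-session counter that escalates to the human handoff once a threshold is crossed. The sketch below assumes an illustrative threshold and helper names of our own.

```typescript
// A minimal sketch of the "fail gracefully" flow: after repeated
// out-of-scope requests, offer a human handoff. The threshold and
// copy are illustrative assumptions.

const MAX_UNSUPPORTED_ATTEMPTS = 2;

interface SessionState {
  unsupportedAttempts: number;
}

function respondToUnsupportedRequest(session: SessionState): string {
  session.unsupportedAttempts += 1;

  if (session.unsupportedAttempts > MAX_UNSUPPORTED_ATTEMPTS) {
    // The alternate route of success: hand the conversation to a person.
    return (
      "I can't help with that, and I don't want to hold you up. " +
      "Would you like to chat with a customer service representative?"
    );
  }
  // Otherwise, explain the limit and restate what the assistant can do.
  return (
    "That's outside what I can help with. I can help you find a doctor " +
    "or manage a specific health need."
  );
}
```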
Considering the UX of requests that fall outside the experience signals to users that the system knows its limits, fostering a sense of reliability in the experience. Approaching the AI system’s limits honestly builds trust with the user by offering a transparent view of how the system works.
In a successful AI experience, showing how the system addresses user needs is essential. Demonstrating how the AI fulfills those needs, and holding it accountable for its actions, builds confidence in the system.
Our design team applied UI elements that note observations and tasks adjacent to relevant chat messages. These indicators allow the user to see how the AI is interpreting conversational information and tracking essential details. The AI providing these reactions and feedback cues in the midst of conversation is akin to reading facial expressions and body language when talking face-to-face with another person — the equivalent of a nod or a thumbs up as a positive confirmation of what’s being discussed.
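One way to support this pattern in the data model is to attach structured "observation" and "task" cues to each assistant message so the UI can render them adjacent to the transcript. The event shapes below are our assumption, not the client's schema.

```typescript
// A minimal sketch of attaching observation/task cues to chat messages.
// The shapes and example copy are illustrative assumptions.

type AssistantCue =
  | { kind: "observation"; text: string }              // what the AI inferred
  | { kind: "task"; text: string; done: boolean };     // what the AI is doing

interface AssistantMessage {
  text: string;
  cues: AssistantCue[]; // rendered as indicators next to the chat bubble
}

function buildMessage(text: string, cues: AssistantCue[]): AssistantMessage {
  return { text, cues };
}

// Example: the assistant confirms what it heard while it works,
// the conversational equivalent of a nod or a thumbs up.
const reply = buildMessage("Here are three cardiologists near you.", [
  { kind: "observation", text: "Noted: you prefer morning appointments" },
  { kind: "task", text: "Searched in-network cardiologists", done: true },
]);
```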
Showing and telling how AI anticipates and fulfills user needs helps the user understand what the AI is doing, building trust and confidence throughout the interaction. Users can feel that their needs are being met, making the AI experience valuable.
The stakes are high when AI is helping users manage specific health needs, so backing up its information with credible sources is crucial. Sources add legitimacy, empower users to verify AI-provided information, and encourage users to trust the experience.
For our healthcare client, our design shows sources alongside AI-generated responses. Attributions appear as snippets of text within the chat that show the source of the information, with clickable links to the source content. Users can review the sources and go deeper into topics shared by the conversational AI assistant if desired.
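Structurally, this amounts to carrying attribution metadata with every answer rather than bolting links on afterward. The sketch below shows one plausible shape; the field names and `renderSources` helper are illustrative assumptions.

```typescript
// A minimal sketch of attaching source attributions to an AI response.
// Field names are illustrative, not the client's schema.

interface SourceAttribution {
  title: string;    // label shown in the chat snippet
  url: string;      // clickable link to the source content
  excerpt?: string; // short passage backing the claim
}

interface AttributedResponse {
  answer: string;
  sources: SourceAttribution[];
}

// In the real UI these render as tappable snippets beneath the message;
// a plain-text rendering keeps the sketch self-contained.
function renderSources(response: AttributedResponse): string {
  return response.sources
    .map((source, i) => `[${i + 1}] ${source.title}: ${source.url}`)
    .join("\n");
}
```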
Despite the challenges of building trustworthy AI experiences in heavily regulated industries, product owners can leverage design best practices to smooth out the rough edges created by balancing compliance and transparency in AI systems.
In summary, our experience building solutions for generative AI in healthcare and other heavily regulated industries highlights five key design techniques:

1. Explain the AI's value and security measures upfront, and capture user consent.
2. Flag the use of AI from the start to set expectations.
3. Clearly communicate the use cases the AI supports.
4. Fail gracefully, with a seamless handoff to human support.
5. Show the AI's work through in-conversation cues and credible source attributions.
By implementing these techniques, WillowTree designers have built AI experiences that earn trust even in the most heavily regulated industries. Reach out to learn how our AI Strategy & Governance leaders help clients deploy artificial intelligence ethically and safely.