Like many of us working in the tech space, I’ve been playing with ChatGPT quite a bit lately and collaborating with my colleagues as they do the same. We’ve discovered that the tool is an incredible time-saver for generating ideas, overcoming writer’s block, and expediting code creation and bug fixes. I often turn to ChatGPT to knock out repetitive writing tasks, like getting a head start on usability test instructions or spinning up a quick list of article ideas.
Many others are discovering the time-saving, game-changing nature of generative AI tools like ChatGPT. Wharton Professor Ethan Mollick experimented with free AI tools to see what he could accomplish in 30 minutes. Mollick tasked tools like Bing, ChatGPT-4, and MidJourney with creating marketing materials promoting a game he built. In that half-hour, he generated a campaign that included four emails, web page copy, code, 12 images, a video, a logo, a positioning document, and social media posts for five different platforms.
Notably, Mollick did it all in less than 20 prompts. Not too shabby.
So, yes, LLMs like ChatGPT can accelerate productivity. An equally critical yet less-recognized benefit of ChatGPT has nothing to do with minutes saved but rather how much opportunity the tool opens up for deeper thinking. In an interview with CBS MoneyWatch, Coursera CEO Jeff Maggioncalda said he uses ChatGPT as an extension of his executive team to help him “be more thoughtful in his approach to business challenges, and look at topics from vantage points that differ from his own.”
Like Maggioncalda, I often use ChatGPT for exploratory purposes (a.k.a. talking to the rubber duck). I ask it to provide arguments for or against my ideas or to consider possible models and visuals for expressing ideas.
Ironically, as I’ve sought ways to more effectively prompt ChatGPT, the AI tool has prompted me in return on how I can think more expansively about product research. Based on my explorations, here’s what I believe is more important than creating tangible outputs: deeper inquiries via AI tech almost always illuminate ideas I hadn’t considered. They’re ideas that will ultimately push my product research work forward.
When used well, generative AI tools like ChatGPT make us think much harder about product development problem spaces and customer experience possibilities.
The nudge to go deeper is one of AI’s most valuable benefits.
My fellow UX researchers and designers will recognize “cognitive load.” For readers newer to the space, cognitive load refers to the mental energy one must exert to complete a task. We usually want to minimize end users’ cognitive load as they interact with our digital products. For our work as product practitioners, the same is true. Who wants to spend their days wordsmithing emails when we have far more important things to do?
However, when it comes to those “more important things,” some additional cognitive load is a welcome luxury. Presumably, the tasks that require more mental energy drive the most innovation and value.
Let’s say we continue using ChatGPT to remove the mundane or time-consuming aspects of our jobs. What then? Thinking back to Professor Mollick’s example, he can arguably now use his extra time to improve the AI-created marketing campaign. He can focus on making these materials more impactful, targeted, and original.
Flexing our mental faculty helps WillowTree teams create more innovative, valuable, and differentiated digital products.
We must be thoughtful in asking ChatGPT to help us take on additional cognitive load. ChatGPT doesn’t care if we’re more creative or better thinkers. The tool isn’t sentient; it’s basically a (really good) word prediction engine, not magic. It won’t instantly make you more creative or your products more successful. You still need critical thinking and outside-the-box prompting to extract value and build your firm’s use cases for generative AI technologies like ChatGPT. You need experimentation.
As one group of economics and business scholars concludes in Frontiers in Psychology, “AI is of great help in dealing with uncertainty, which is the natural environment for innovation, but it does not have the capacity to find creative solutions (useful, beyond novel) that are superior (or even comparable) to those that emerge from the human mind.”
As you explore framing your inputs to spark deeper thinking, keep these prompt considerations in mind:
Be prepared to do some testing to discover which prompts deliver nuance — and which give you surface-level responses. (And remember: don’t share proprietary information about your concept!)
Once you’ve landed on solid outputs, you can focus on evaluating and maybe — just maybe — incorporating those results into your organization’s services. As you assess further, remember that how ChatGPT arrives at any particular answer is a “black box.” Take a beat to poke at the tool’s outwardly polished “answers.”
ChatGPT is prone to hallucinations and reflects the biases in its training data. Always cross-reference its outputs with other sources.
Think about what’s under the hood.
Namely, the Internet. (Unless you’ve fine-tuned the AI with other data — and yes, that’s more thinking involved.) ChatGPT outputs can be deceptively polished and confident. After all, the platform’s job is to deliver reasonable-sounding answers.
Our job as humans? In essence, to problem solve. Then, we take those solutions and iterate. We poke, prod, and refine our solutions when challenges and changes inevitably appear again.
I wanted to analyze the tickets WillowTree program directors and product architects created in Jira, our Agile project management system. I considered which elements of the Jira tickets I should focus on — descriptions, data points, and structures — to proactively find signs of tasks or projects that might be good candidates for additional user research.
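For context, here’s roughly what that first pass could look like in practice. This is a minimal sketch, assuming a Jira Cloud instance reachable with an API token; the base URL, JQL filter, and field list are placeholders for illustration, not our actual setup.

```python
import os
import requests

# Illustrative only: the base URL, JQL, and field list are placeholders,
# not an actual Jira configuration.
JIRA_BASE = "https://your-domain.atlassian.net"
AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])

def fetch_candidate_tickets(jql="created >= -30d ORDER BY priority DESC", limit=50):
    """Pull recent tickets with the metadata worth screening for research needs."""
    response = requests.get(
        f"{JIRA_BASE}/rest/api/2/search",
        params={
            "jql": jql,
            "maxResults": limit,
            "fields": "summary,description,issuetype,priority,labels",
        },
        auth=AUTH,
        timeout=30,
    )
    response.raise_for_status()
    tickets = []
    for issue in response.json().get("issues", []):
        fields = issue["fields"]
        tickets.append({
            "key": issue["key"],
            "summary": fields.get("summary"),
            "type": (fields.get("issuetype") or {}).get("name"),
            "priority": (fields.get("priority") or {}).get("name"),
            "labels": fields.get("labels", []),
        })
    return tickets

if __name__ == "__main__":
    for ticket in fetch_candidate_tickets():
        print(ticket["key"], ticket["type"], ticket["priority"], ticket["summary"])
```

Pulling the raw fields is the easy part; deciding which signals actually warrant more research is where the thinking starts.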
To expand my thinking on how to accomplish this evaluation, I turned to my AI thought partner, ChatGPT:
ChatGPT responded with a generic list of steps mimicking the design thinking process. I decided to re-prompt the tool with a much more specific focus:
In this instance, ChatGPT returned a helpful list of Jira items and their relationship to user testing needs, such as bug reports and support tickets.
After further prompting, I started getting somewhere.
I asked about Jira ticket metadata fields and how those fields might help me assess user research opportunities.
That’s where I started uncovering new possibilities. Take this response from ChatGPT, which surfaced something I hadn’t considered:
I continued questioning — asking ChatGPT to extend the list and associate metadata with data types and risk factors. To cap it off, I requested the tool put these fields in an organized table.
Ta-da!
On the one hand, ChatGPT saved me time by helping me refine my idea and organize the output into an easily actionable asset.
On the other hand, the real work had only just begun. I reviewed ChatGPT’s suggestions with my project management colleague to discuss what was helpful, what was missing, and so on. In turn, this conversation sparked a flurry of new ideas. Moreover, I had to frame the problem so other stakeholders could quickly grasp it. No AI was telling me how to do that.
Whether or not I advance this concept in my work today, I unearthed a lot in the process. I discovered new ways to assess risk, new considerations of my colleagues’ thought processes, and new ideas for strategic, collaborative user research. In part, these learnings are thanks to ChatGPT. The tool helped me better analyze my initial idea and proposed an organized structure to socialize it.
But the work of ascribing meaning to these insights and moving them forward is mine.
Outsourcing direct familiarity with data (like Jira tickets) or a domain (such as risk assessment) comes with potential trade-offs. Sure, you can automate detecting user sentiment in interview transcripts, for example. But that approach doesn’t always have the same persuasive impact as witnessing a flustered user try to navigate a problematic interface.
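To make that trade-off concrete, here’s a minimal sketch of the kind of automation I mean. It uses an off-the-shelf Hugging Face sentiment pipeline as a stand-in for whatever classifier a team actually uses, and the transcript lines are invented for illustration.

```python
from transformers import pipeline

# Illustrative only: the default model and these invented transcript lines
# stand in for a real classifier and real interview data.
sentiment = pipeline("sentiment-analysis")

transcript = [
    "I couldn't figure out where the export button was.",
    "Once I found the settings page, everything made sense.",
    "Honestly, I gave up and emailed support instead.",
]

for utterance in transcript:
    result = sentiment(utterance)[0]
    print(f"{result['label']:>8}  {result['score']:.2f}  {utterance}")
```

Tidy labels at scale, yes, but none of the visceral weight of watching the struggle firsthand.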
This conundrum leaves us with another question: how much mental work do we want to delegate? We must make these decisions purposefully, understanding what we’re giving up in return.
Once you succeed at generating boundary-pushing ideas or saving yourself some time, what then? Working through ideas — with or without AI’s help — is still work.
And how will you use all of the extra time AI creates for you? To improve the digital products and tech solutions you create? To make life better for one another?
Let’s avoid hyper-focusing on ChatGPT-enabled efficiencies and instead think deeply about how we’ll spend our newfound time making better customer experiences and products.
What are we freeing ourselves up to do, if not to make substantial progress toward beneficial product outcomes rather than churning out a large number of superficial outputs?
For products and ideas to succeed, we must understand the end of a ChatGPT session as the beginning of more conversations and exploration. An effective prompt or quick output won’t solve the cross-functional silos, communication breakdowns, and lack of vision that commonly sink products.
And yet, tackling these interpersonal challenges propels teams toward more impactful, beneficial business outcomes. We must build collaboration time into our AI workflows to realize the tech’s potential.
Let’s welcome ChatGPT as a tool for more nuanced thinking. AI can and should create space for us and our colleagues to take on weightier cognitive loads.
After all, we don’t develop our crafts by standing still. Taking on more mental challenges is a good thing.
To learn more about incorporating ChatGPT into your workflows, connect with us about our eight-week GenAI Jumpstart program and future-proof your company against asymmetric genAI tech innovation with our Fuel iX enterprise AI platform. Fuel iX was recently awarded the first global certification for Privacy by Design, ISO 31700-1, leading the way in GenAI flexibility, control, and trust.