Every few years, a new technology emerges that gets hyped up as the next big game-changer. Remember Flash? Blockchain? The metaverse? These all had their time in the spotlight, but most have since faded to the backs of our minds, displaced by the latest shiny new tech: AI!
AI has been the talk of the tech world (and beyond) for the past few years, and user research is no exception. We've all seen the flashy demos and heard the bold promises about AI’s potential. But, as many of us know, some of these AI tools failed to deliver in real research situations. Now, as we move past the "Peak of Inflated Expectations" in the AI hype cycle, it’s time for a more grounded look at what AI is truly capable of today.
The Gartner Hype Cycle illustrates how tech trends go through a rollercoaster of inflated expectations before settling into reality, and this has certainly been the case for AI in UX research. Funding has largely shifted to anything AI-related, leaving many product teams feeling pressured to integrate AI into their products and services in some way, shape, or form, even when it doesn't actually enhance the user experience. The result is AI features in some products that seem to serve little purpose other than checking the AI box.
Many early AI-driven research tools claimed to be able to do things like "automate analysis." In practice, the results often ended up as vague, overlapping themes that left researchers doing more work to salvage the mess. That's because AI, in its current state, simply isn't capable of reliably taking on the entire research process and producing meaningful results.
So, where does AI actually make a difference in UX research today?
It shines in supporting researchers by handling the repetitive tasks they've always dreaded, freeing them up to focus on discovery and analysis. The most valuable features target specific, scoped tasks that, though less flashy, are genuinely helpful, practical, and time-saving.
One of the best examples is AI-powered transcription, which, along with auto-generated bookmarks, saves researchers hours of manual effort by making raw data more structured and easier to navigate. AI-generated summaries streamline analysis by surfacing key themes from various research evidence, including entire sessions. Researchers can revisit insights in one click without scanning full transcripts or rewatching recordings.
AI also plays a role in structuring insights. Rather than taking away the researcher’s control over the tagging process, it suggests contextually relevant labels to speed up categorization. Similarly, AI assists with tasks such as automated clustering, which groups insights by sentiment or researcher-defined themes to support synthesis.
Smaller, scoped tasks like these are where AI excels right now, as they provide enough context for AI to work effectively, reducing the risk of errors and hallucinations. The results produced are based on chunks of content that are small enough to be easy to review and edit.
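To make one of these scoped tasks concrete, here is a deliberately simplified sketch of sentiment-based clustering in Python. The keyword lexicon and function names are invented for illustration only; real research tools use machine learning models rather than word lists, and a researcher would still review and correct the resulting groups.

```python
from collections import defaultdict

# Toy sentiment lexicon -- purely illustrative; real tools use ML models.
POSITIVE = {"love", "easy", "great", "intuitive"}
NEGATIVE = {"confusing", "frustrating", "slow", "broken"}

def sentiment(snippet: str) -> str:
    """Classify an insight snippet as positive, negative, or neutral."""
    words = set(snippet.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

def cluster_by_sentiment(snippets: list[str]) -> dict[str, list[str]]:
    """Group insight snippets into sentiment buckets for human review."""
    clusters = defaultdict(list)
    for s in snippets:
        clusters[sentiment(s)].append(s)
    return dict(clusters)
```

The point of the sketch is the shape of the task: the AI proposes groupings over small, reviewable chunks, and the researcher keeps final say over what each cluster means.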
So think of AI as a junior research assistant: eager, fast, but in desperate need of supervision. It can help with straightforward tasks, but should you trust it blindly? Absolutely not! AI-generated insights require human oversight to ensure accuracy and relevance, because hallucinations are an inherent part of working with AI; they're really more of a feature than a bug. Used wisely, AI can enhance research workflows and boost efficiency, but it's no substitute for real human judgment and expertise.
Contrary to some of the more inflated claims about AI tools, like AI being able to fully analyze usability testing videos on its own, the reality is that most AI tools still struggle with context and video comprehension. Trusting AI to automate research analysis risks oversimplifying complex findings and missing critical nuances.
The biggest danger of these exaggerated claims is that they can mislead teams into making executive decisions based on inaccurate insights. Leadership, in particular, may be drawn to the narrative of AI improving efficiency and cutting costs. But it's easy to forget that researchers bring expansive knowledge about user behavior, company goals, and product strategy to the table. AI? Not so much.
After all, research isn’t just about answering questions. It’s about helping businesses reduce risk. Companies are selling products and services to people, not to AI. So, blindly following AI’s direction can lead to costly mistakes and expensive course corrections if the insights turn out to be wrong.
"As long as UX work has been around, we've been dealing with people being like, 'Can you just give me the answer... Isn't design just making things look pretty?' And unfortunately, AI is just further muddying those waters, adding to that confusion, and trying to profit from it."
Moreover, overreliance on AI can erode your competitive edge. Truly innovative research helps businesses discover new ways to deliver value to customers—ways that competitors might not have thought of. In a world where everyone has access to AI tools, the questions and responses AI generates are likely to be the same for everyone. That’s a serious risk to standing out in the market.
As the AI hype train slows down, we're starting to see promising signs that the next wave of AI tools and features is scaling back from the overblown claims of previous years to focus on meeting real user needs.
AI will continue to evolve, and its use cases will grow with it. However, UX researchers and product teams must be discerning, adopting AI in line with its current capabilities. This ensures that the insights AI generates are reliable and can genuinely inform business decisions.
This article is based on a UXR Meetup where Kate Moran (VP of Research and Content at Nielsen Norman Group) and Alexander Knoll (Co-Founder and CEO at Condens) shared their insights and observations on how AI (in its current state) can best support UX researchers and where it falls short.
You can watch the full video here: