Do We Still Need Taxonomies Now That We Have AI-Powered Search?

August 26, 2025

Taxonomies keep research data organized and have played a key role in making research repositories searchable. But they do have their downsides: for example, they can require a lot of manual effort to create and maintain. This is exactly the issue that AI-powered search, with its ability to process natural language and surface data easily, seems to have resolved.

So now that AI-powered systems are transforming how we search for and organize research, the big question is: do we still need taxonomies in user research?

What Is a Taxonomy?

A taxonomy is a system of terms and structures that helps categorize and organize qualitative data. This is typically done using a set of tags (e.g., pain points, feature requests, usability issues) and meta information fields (e.g., date of an interview, job title of the participant, company size).
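As a rough sketch, such a taxonomy can be pictured as tags on individual highlights plus meta information fields on the session. The field names, tag labels, and structure below are purely illustrative, not a Condens schema:

```python
# Illustrative sketch of a taxonomy applied to one research session.
# All field names and tag labels are made up for this example.
session = {
    "meta": {
        "date": "2025-08-26",            # when the interview took place
        "job_title": "Product Manager",  # participant context
        "company_size": "50-200",
    },
    "highlights": [
        {"quote": "I couldn't find the export button",
         "tags": ["usability issue", "pain point"]},
        {"quote": "A dark mode would be great",
         "tags": ["feature request"]},
    ],
}

def by_tag(session, tag):
    """Return all quotes in a session carrying a given tag."""
    return [h["quote"] for h in session["highlights"] if tag in h["tags"]]

print(by_tag(session, "pain point"))
# → ["I couldn't find the export button"]
```

The point of the structure: raw quotes become retrievable by category (tags) and by context (meta fields), rather than only by the words they happen to contain.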

Condens Taxonomy - Global Tags
Tags are applied to raw data, such as quotes from participants or observations.

A well-maintained taxonomy supports researchers in analyzing and making sense of qualitative data. Taxonomies also help uncover hidden semantic relationships within the data that can lead to deeper insights beyond surface-level patterns. And they offer a more precise way to resurface research evidence and insights than a basic keyword search.

The Downsides of a Taxonomy

As useful as taxonomies are, there are some downsides to working with them.

Not easy to create & maintain

For starters, it can feel intimidating to get started on creating a taxonomy since deciding on the categories and tags to use may not be clear-cut. As a result, it’s pretty easy to fall into the trap of trying to create a perfect taxonomy, which frankly isn’t possible.

As an organization evolves - perhaps developing different product lines, teams, and priorities - so does its taxonomy. This can be both a blessing and a curse.

On the one hand, there’s no need to try and create a perfect taxonomy. But on the other hand, a good taxonomy can steadily become a poor one if it isn’t updated and maintained according to changing needs. This makes taxonomies a tool that requires regular maintenance and oversight.

Tags traditionally require manual application

Adding tags and fields to your data is how you make it searchable. This is a task that has traditionally been done manually and can be rather time-consuming.

Additionally, since you can only make full use of data that has already been tagged, tagging requires a good amount of foresight: you may need to tag not only what your current project needs, but also material for future projects, without knowing whether those tags will ever actually be used.

Continuous training is necessary

To effectively use a taxonomy, it’s necessary to be familiar with the tags and fields and how to use them. This means that when introducing a taxonomy to your team, you need to teach them how and when to apply the various tags in the right way so that your taxonomy stays organized and effective.

It’s also likely that teaching your team about the taxonomy will be a task that needs to be repeated as your taxonomy develops and new members join the team.

AI and Semantic Search Are Reshaping the Way a Taxonomy Is Used

How AI-powered search facilitates research

AI-powered search, or semantic search, is changing the way we interact with qualitative data by letting us locate the information we need through natural-language queries. This means we no longer need to manually tag data up front in order to use it for future topics and projects. And the likelihood of “losing” data if it isn’t properly tagged beforehand is also much lower.

Here’s an example: Your product team wants to improve a feature on your app. Before AI, you would have needed to preemptively apply tags to relevant evidence in order to resurface and use it later. Otherwise, you’d need to go back through past research and put a lot of work into finding the data you need and tagging it to make use of it.

Using a keyword search would also likely result in critical evidence and valuable insights slipping through the cracks because synonyms and variations in language would be overlooked.

But now, with the help of AI-powered search, you can run text-based searches or find related evidence based on a few concrete examples using features like Find similar Highlights.
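The contrast between keyword and semantic search can be made concrete with a toy sketch. Here a tiny hand-made synonym table stands in for what an embedding model learns; real semantic search uses learned vector representations, so everything below is illustrative only:

```python
from collections import Counter
from math import sqrt

# Hand-made synonym table standing in for a learned embedding model
# (an assumption for illustration; real systems use vector embeddings).
CONCEPTS = {
    "confusing": "unclear", "hard": "unclear", "unclear": "unclear",
    "checkout": "payment", "paying": "payment", "payment": "payment",
}

def vectorize(text):
    # Bag of words, with surface variants mapped onto shared concepts.
    return Counter(CONCEPTS.get(w, w) for w in text.lower().split())

def cosine(a, b):
    # Standard cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

quotes = [
    "The checkout was confusing",
    "Paying felt hard",
    "Love the new dashboard",
]
query = "unclear payment flow"

# Keyword search: exact substring matching misses every rephrasing.
keyword_hits = [q for q in quotes if "unclear" in q.lower()]
print(keyword_hits)  # → []

# Semantic-style search: both payment complaints rank above the
# unrelated quote, despite sharing no literal words with the query.
ranked = sorted(quotes, key=lambda q: cosine(vectorize(query), vectorize(q)),
                reverse=True)
print(ranked[-1])  # → Love the new dashboard
```

The keyword search returns nothing because no quote literally contains “unclear”, while the similarity ranking still surfaces both payment complaints first - the same effect that lets AI-powered search find evidence phrased in ways you didn’t anticipate.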

How AI-powered search supports stakeholder engagement

AI search doesn’t just benefit the people who do research. Its ease of use makes it a great tool for helping stakeholders engage more with research!

It used to be the case that stakeholders needed to learn at least the basics of how a taxonomy worked in order to find what they needed. But now with the help of AI-powered search, all they need to do is type out what they’re looking for, and AI can direct them there.

Having gone over the benefits of using AI to facilitate research and engagement, let’s revisit the big question.

If AI makes it so easy to interact with qualitative data, do we still need taxonomies at all?

In response, I’d pose a follow-up question that considers a different perspective: Can we fully rely on AI tools to help us find what we need in qualitative data?

I believe the answer is still no.

The Limitations of AI

Despite the downsides of working with taxonomies and the benefits of using AI for search and analysis, it’s worth remembering that AI still has its limitations.

For example, AI-generated search results can vary in quality. Even advanced learned relevance models, which use machine learning to improve ranking strategies, can struggle to achieve generalizable search relevance across diverse or unseen datasets.

This same limitation is also exhibited during AI’s synthesis phase, where its lack of critical context can produce surface-level results while missing deeper, less obvious connections.

AI-powered search is also liable to produce results that aren’t at all relevant to your query, while leaving out evidence that you know is relevant.

So although AI can be a big help, there is absolutely the need to:

  • Edit what it comes up with

  • Add what was missed

  • Remove the inaccuracies and misinterpretations

And for this task of refining and verifying AI-generated content, tags can be very useful.

Tags as the Interface Layer Between Humans and AI

Tags as human verified searches

Let’s say you’re a product designer working on a new feature and want to keep track of a specific topic. In this case, it would still be better to use a tag to make sure that all the information you want is accounted for and easily accessible to you.

Here’s another scenario. You get some feedback from a customer call and realize that you have a key piece of evidence that you’d like to expound on. So you ask AI to find more related evidence and it comes up with 7 more. But only 3 are actually relevant. You keep those 3 and later recall another 1-2 points of evidence that AI didn’t come up with.

In this scenario as well, a simple tag could be really helpful. By tagging the evidence that you know is important, especially the points the AI failed to recognize, you ensure that you’ll be able to find it again in the future. Without a tag, that evidence might be difficult to resurface through AI-powered search, since the AI didn’t recognize some of it the first time around.

“AI-powered search often fails to detect evidence that isn’t obvious and explicit, or when it doesn’t have all the context. Difficulty detecting irony is a good example.”

Co-Founder and CEO @ Condens

When used in this way, tags function as 'human-verified searches', signaling that a piece of evidence has undergone at least one round of human quality control.

Now, to be clear, having a human check AI’s work isn’t a guarantee for quality, but it is an additional layer of scrutiny that is much better than trusting the AI-generated content straightaway.

Tags as anchors for research evidence

Tags also address another downside of AI that you’ve probably experienced yourself. When running the same search through AI multiple times, it can produce different results, especially as the data in your repository changes.

But by tagging the evidence that you know you’ll want to revisit, you ensure that it will be easily accessible when you need it again.

These are just a few of the ways tags and taxonomies continue to add value to the research process, demonstrating that the introduction of AI-powered search hasn’t made them obsolete. Rather, AI has reshaped how they can best be used.

Now, instead of the former rigid way of applying predefined tag sets to all raw data, we seem to be moving toward a more flexible, organic approach where tags emerge as needed and can be selectively applied.

A New Framework for Tags and Taxonomies

But this raises the question: if tags are used in a more flexible and organic way, do we still need a framework to keep them organized? Or will teams simply create tags as needed, with little need for maintenance?

As with many things: it depends.

There may be teams that prefer an unstructured approach. And that’s completely fine.

But if you use tags as “human-verified searches” and would also like others to be able to use and reference them in the future, an unstructured approach could prove problematic: it becomes more and more difficult to maintain a clear overview of a growing number of tags.

In this case, having an overall framework to keep the tags organized can be helpful. In Condens, this can be done using tag groups, which typically cover broader themes like feature requests, pain points, or usability issues. So when someone creates a new tag, e.g., “Bank transfer checkout hard to use”, they can place it under the usability issues group. These groups, i.e., the taxonomy, make it easier for everyone to see which “verified searches” already exist.
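One way to picture such a framework: every tag lives under exactly one broad theme, so anyone creating a tag first sees what already exists in that group. The group and tag names below echo the example above but are otherwise illustrative, not a Condens API:

```python
# Tag groups as a lightweight taxonomy: broad themes at the top,
# specific "verified searches" underneath. All names are illustrative.
tag_groups = {
    "usability issues": {"Bank transfer checkout hard to use"},
    "pain points": set(),
    "feature requests": set(),
}

def create_tag(groups, group, tag):
    """Add a tag under an existing group; refusing ungrouped tags keeps
    the overview clear as the number of tags grows."""
    if group not in groups:
        raise KeyError(f"Unknown group: {group!r} - pick an existing theme")
    groups[group].add(tag)

create_tag(tag_groups, "feature requests", "Export to CSV")
print(sorted(tag_groups["feature requests"]))  # → ['Export to CSV']
```

The design choice here is deliberate friction: a new tag must be placed under a theme, which is what makes existing “verified searches” discoverable to the rest of the team.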

Meta Information Fields

We’ve now talked a lot about tags, but there’s also another aspect to taxonomies: Meta information fields.

Meta information fields provide context to individual research sessions (e.g., who is the participant, when the session took place, or which device was used), projects (e.g., which research method was used), and findings (e.g., which product it relates to, which personas it addresses).

Meta information is crucial for determining which data is important for a search query.

We firmly believe that meta information fields will still be required in a future with AI. Participants don’t usually mention all the relevant contextual information about themselves during interviews or usability tests, yet this meta information is crucial for deciding which data is relevant to a search query. Without it, verifying the relevance of some search results would be much more difficult - for example, which country the data applies to, or which iteration of the product it is based on.

Keep in mind that it’s important to distinguish between tags applied to the raw data to categorize an observation (e.g., “pain point”, “feature request”) and meta information fields that add context to a study (e.g., “country”, “research method”), a session (e.g., “device”), or a participant (e.g., “persona”, “company size”).

Ensuring metadata fields are filled also makes it easier for AI to search through and retrieve the requested information. In fact, combining the flexibility of AI-powered search with structured metadata enhances the overall search experience by improving its efficiency and accuracy.
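Sketched in code, metadata acts as a hard, exact filter that narrows the pool before any fuzzy matching happens. The fields and the naive word-count “relevance” below are stand-ins for illustration only, not how a real AI search scores results:

```python
# Combine structured metadata filters with a fuzzy text match.
# Metadata fields and the scoring function are illustrative only.
studies = [
    {"country": "DE", "method": "usability test",
     "finding": "Users struggled with the bank transfer step"},
    {"country": "US", "method": "interview",
     "finding": "Users want faster checkout"},
    {"country": "DE", "method": "interview",
     "finding": "Pricing page felt trustworthy"},
]

def search(studies, query_words, **filters):
    # 1) Hard filter on metadata - cheap, exact, and verifiable.
    pool = [s for s in studies
            if all(s.get(k) == v for k, v in filters.items())]
    # 2) Naive relevance: count query words in the finding text
    #    (a stand-in for AI-powered matching on the narrowed pool).
    scored = [(sum(w in s["finding"].lower() for w in query_words), s)
              for s in pool]
    return [s for score, s in sorted(scored, key=lambda x: -x[0]) if score]

hits = search(studies, ["bank", "transfer"], country="DE")
print(hits[0]["finding"])  # → Users struggled with the bank transfer step
```

Because the metadata filter runs first, the fuzzy step only ever ranks data that is already known to be in scope - which is why results become both more relevant and easier to verify.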

And to ensure that meta information stays consistent across multiple studies and teams, a taxonomy is necessary.


Pro Tip: In Condens, you can apply filters to an AI search to limit the scope of data that is considered and scanned, thereby improving the relevance and accuracy of the results. For stakeholders who may not be aware of possible filters, suggestions are shown based on the search query they used.

Conclusion

AI is changing the way we create and use taxonomies in significant ways, from reducing the time needed to define, apply, and maintain them to providing a search experience that’s easier to use.

However, AI won’t make taxonomies redundant. Especially since taxonomies help fill in the gaps for many of the issues that AI-powered search still has, like verifying output quality, making it replicable, and providing essential context. So in all likelihood, taxonomies will continue to stick around for the long haul.


About the Author
Alex Knoll

Alex is one of the Co-founders of Condens. He started his career as a product manager, and interviewed more than 160 UX Researchers from around the world to identify pain points and find opportunities for improving research processes before founding Condens. He is particularly passionate about Research Ops, creating effective Product teams, and he often speaks at UX Research events.
