Navigating Biases in User Research: A Deep Dive into Project Decisions (Part 2)

January 8, 2024

In the first part of this series, we covered how cognitive biases can influence researchers when taking notes and analyzing data. But we don't work alone: we usually work with colleagues. Biases can influence our relationships with stakeholders and other project participants at different levels, especially when it comes to making decisions. This second part covers biases that influence project decisions, such as escalation of commitment, the endowment effect, and false consensus. I'll discuss their impact on stakeholders and colleagues in project management, and effective strategies for mitigation.

Escalation of Commitment and Sunk Cost Fallacy

The escalation of commitment (also known as commitment bias) is the tendency to remain committed to, and keep investing money in, a past decision or course of action, even when it’s clearly going to fail. People would rather stick to it and fail than reassess and change course. People also tend to continue investing time, money, and effort in something even when it is no longer rational and the current costs outweigh the benefits. This is the sunk cost fallacy. It happens because we are not purely rational decision-making creatures. Instead, we are highly influenced by our emotions. We might feel regret (for investing) or guilt (for wasting money), so we rationalize this away as “we might as well see this through”.

Have you ever worked on a feature that stakeholders insisted on building, only to discover through research that users have no interest in it? But since you’ve already started working on it, you are told that “we might as well push it to production”. That’s these biases at work. Even if continuing in that direction is irrational and likely to fail, people will do it because they’ve already invested so much. I gave you the example of a feature, but this can happen to strategies, investments in products, or anything where time and money get invested.

This can also happen to users and customers: you might continue watching a boring show just because you’ve already invested time in it. You might keep a tool you are not satisfied with because of all the money and time you’ve already put into it.

The potential for poor project decisions is high here, so you might wonder: how can we mitigate this? Let me tell you, it’s tricky, since we are dealing with money and psychological investment, but also with ego. Still, there are a couple of strategies you can try:

  • Bring data to an emotion fight: since we are often in this situation because of emotions, bringing data can help. How much has this cost already? How much do we still need to invest to complete it? What is the foreseen ROI? Sometimes, people don’t realize how much they can still lose if they keep going in that direction. Bring the focus to current and future costs vs. (potential) benefits (a minimal sketch of this comparison follows this list).

  • Once you know how much they still need to invest, you can ask them, “What else could we do with that money, time, and resources that could bring more benefits to us?”

  • You can also mitigate this before the project even starts by defining exit criteria from the start. Set criteria that, if met (losses, lack of progress toward the predefined goal), allow people to reassess the course of the project or even abandon it. And do regular project checks and evaluations.

  • Seek an external perspective: if the people running the project are too biased and emotionally attached, having someone external review the strategy can help. This is often why consultants are brought in: they have the expertise, but also the outside perspective.
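To make the “bring data” advice concrete, here is a minimal sketch of the forward-looking comparison you could bring to such a meeting. All the numbers and names (remaining_cost, expected_benefit, and so on) are hypothetical placeholders, not figures from any real project; the point is simply that the money already spent should never enter the decision.

    # Minimal sketch: a forward-looking cost/benefit check that deliberately
    # ignores sunk costs. All numbers are hypothetical placeholders.

    sunk_cost = 80_000          # already spent -- gone no matter what we decide
    remaining_cost = 50_000     # what finishing the feature would still cost
    expected_benefit = 30_000   # realistic projected return if we finish it

    # What else we could do with the same remaining budget
    alternative_benefit = 70_000

    finish_value = expected_benefit - remaining_cost          # -20,000
    reallocate_value = alternative_benefit - remaining_cost   # +20,000

    print(f"Finish the feature: {finish_value:+,}")
    print(f"Reallocate instead: {reallocate_value:+,}")
    # Note that sunk_cost appears in neither calculation: the 80,000 already
    # spent is lost either way, so only future costs and benefits matter.

Framing it this way shifts the conversation from “we’ve already spent so much” to “what do the next months of investment actually buy us?”.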

The Endowment Effect

People overvalue what they possess: they place a higher value on something just because it belongs to them, regardless of its real, objective value. This is the endowment effect.

When people get overly attached to their projects, ideas, or strategies, it can lead to resistance to change, unrealistic expectations, and bad project decisions. A project owner might resist changing and adapting the strategy because they feel a sense of ownership of, and investment in, the original one they helped create.

I think we’ve all been in a situation where a stakeholder or a client wouldn’t let go of an idea, no matter how objectively bad it was. They place a higher value on their own ideas and solutions simply because those originated from them. It gets worse when the client has invested a lot of money, especially personal money, to bring their idea, their vision, to life.

Here are a couple of strategies you can put in place to mitigate this effect. They are quite close to the ones for mitigating the escalation of commitment.

  • Bring data, again, to an opinion fight. That data needs to emphasize actual user needs, to help people understand that the solution they love so much might not be the best one for users. If you are changing a tool, a design, or a solution, bring objective data about the improvement.

  • Evaluate using objective criteria. When evaluating ideas and solutions, having a set of objective criteria helps steer away from emotional attachment (a minimal sketch follows this list).

  • Bring in an independent reviewer, aka someone who doesn’t have this ownership attachment. This will help get closer to the real, objective value (of the idea, tool, or solution) and make less biased decisions.

  • Do your research (user research, market research) to gather feedback that won’t be biased by this effect.
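As a rough illustration of the “objective criteria” bullet above, here is a minimal sketch of a weighted scoring matrix. The criteria, weights, and scores are invented for the example; the value is that every idea, including the stakeholder’s favorite, gets graded against the same pre-agreed yardstick.

    # Minimal sketch: score competing ideas against shared, pre-agreed
    # criteria. Criteria, weights, and scores are hypothetical examples.
    criteria_weights = {
        "solves a validated user need": 0.4,
        "expected business impact": 0.3,
        "implementation effort (inverted)": 0.2,
        "risk / accessibility": 0.1,
    }

    # Scores from 1 (poor) to 5 (excellent), ideally averaged over reviewers.
    ideas = {
        "stakeholder's favorite": [2, 3, 3, 2],
        "alternative from research": [5, 4, 4, 4],
    }

    for name, scores in ideas.items():
        total = sum(w * s for w, s in zip(criteria_weights.values(), scores))
        print(f"{name}: {total:.2f} / 5")

Agreeing on the criteria and weights before anyone pitches a solution is what keeps the exercise honest.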

Negativity Bias

People tend to recall better and give more weight to negative information, experiences, or feedback than neutral or positive ones. Those negative events then have a more significant impact on us than the positive ones.

You can have 130 positive comments on an article, but you will still focus on and give more importance to that ONE negative comment.

When it comes to a project, after a usability testing session, stakeholders might overemphasize issues, problems, or negative feedback about the product. These can overshadow the positive findings, to the point of driving decisions. The team might spend an excessive amount of resources fixing a minor issue instead of strengthening the positive points.

There are a couple of ways you can mitigate this:

  • Reinforce the positive: when reporting on usability testing sessions, or during retros, don’t just list the negatives and the issues. Find a balance and also share what worked, the successes of the project/product.

  • Communicate results transparently with your stakeholders: providing clear explanations of why something didn’t work and why you got that negative feedback can help. It doesn’t mean you need to downplay negative findings. But if you can explain them, it brings more balance.

  • It’s also about company culture: encourage a culture where people are allowed to share what didn’t work, but are also motivated to share what did work, the positives and the successes. And a culture where, when sharing issues and problems, people don’t get shamed or blamed. Instead, use those as lessons learned to avoid future mistakes, and find solutions to improve.

False Consensus and Curse of Knowledge

People tend to project their own way of thinking and acting on other people. They assume everyone else thinks, believes, and acts like they do. This is the false consensus effect. People also incorrectly assume that everyone else around them knows as much as they know about a specific topic. It makes it hard for them to put themselves in the shoes of a less informed person and understand their perspectives on the topic. This is called the curse of knowledge.

Those two biases combined can lead to very bad project decisions. This is also why some teams think they don’t need to do user research or usability testing. The good old “I know what the users want/need” you often hear from clients or decision-makers on a project. They are so deeply convinced their ways are shared by other people that they don’t see the value of investigating any further. The Museum of Failure is full of such products and ideas.

So, how do we avoid this?

  • Find ways to bring diverse opinions and behaviors to the stakeholders, to help them understand that their behavior might not match the general population’s. This can be done by showing them actual user feedback.

  • Test your assumptions: yes, that’s right, usability testing. But even before that: does this product really have a target audience? Conduct some market research if you are still in the discovery phase. Bring data!

  • Share past, current, and future research: it’s about filling the gap between people’s assumptions about users’ needs and behaviors, and users’ actual ones. If you don’t share your results, you are keeping people in that state where they might still think everyone behaves like they do.

  • Encourage questions and honest communication, with simple language (avoid jargon). This helps bring everyone up to speed on the project, and bridge potential knowledge gaps.

Survivorship Bias

Survivorship bias (also sometimes known as survival bias) happens when the dataset only contains the “surviving” observations: people focus on successful outcomes while overlooking failures. A very famous example is the bullet holes in aircraft during World War II: the returning planes only showed where a plane could be hit and still fly home, so the armor belonged where the returning planes had no holes, because the planes hit there never made it back.

The problem: if we make decisions based only on what was successful, we might not understand what contributed to the success in the first place. How many times have you seen a client or stakeholder ask you to copy a feature from another company? “We want the same search as Amazon”, an eCommerce client (whose business has nothing to do with Amazon’s) once told me. The argument is usually “Yes, but it works for them, look, they are successful”. Well, our dataset is incomplete. We only see the companies for which this worked. We don’t see the ones for which it failed. Nor do we know why it worked for some and failed for others.

It’s the same issue when I hear things like “We don’t need to make our site accessible to blind users because we don’t have blind clients”. Our dataset is biased here: we only have data from the people who managed to buy from us. What about all the potential blind clients who tried but could not (because the site is not accessible)?
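Here is a minimal simulation sketch of that “biased dataset” argument, with entirely made-up numbers: if a barrier silently filters out part of your audience before they can become customers, the data you collect from customers can’t tell you anything about the people who were filtered out.

    import random

    random.seed(42)

    # Minimal sketch of survivorship bias, with made-up numbers:
    # 15% of simulated visitors hit an accessibility barrier and never
    # become customers, so they never show up in our feedback data.
    visitors = [{"blocked": random.random() < 0.15} for _ in range(1_000)]

    customers = [v for v in visitors if not v["blocked"]]
    blocked = [v for v in visitors if v["blocked"]]

    print(f"Visitors: {len(visitors)}")
    print(f"Customers we have data on: {len(customers)}")
    print(f"Blocked visitors invisible to that data: {len(blocked)}")
    # "We have no blind clients" is a statement about `customers`,
    # not about the visitors the barrier removed from the dataset.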

Since this bias can lead to poorly grounded decisions, it’s very important to try to mitigate it. A couple of things you can do include:

  • Be critical of the data presented and ask yourself, “What is missing? What don’t we see?” What data didn’t survive? Actively seek out the failures to get a complete picture.

  • Diversify and verify your data sources: make sure you don’t base decisions only on success stories.

  • Triangulation also helps: don’t rely on a single source and method of research; cross-reference quantitative and qualitative analysis to get a broader picture that will help you make better decisions.

Status Quo Bias

The status quo bias is the tendency to like things to stay relatively the same and be reluctant to change. The current baseline (or status quo) is taken as a reference point, and any change from that baseline is perceived as a loss.

This can affect the validity of our user feedback when we work on a redesign (or replace a tool with a new one), for example, since people might say they favor the existing, familiar thing. Always be careful about what people say versus what they actually do. This bias can also cause resistance when organizational or process changes are put in place in companies, and can even lead to the failure of adoption of new systems or practices.

I was working on a project where we redesigned a whole site. During usability testing sessions, people would tell us how much they didn’t like the new version, comparing every single action to how they used to do it in the old version. We gave them access to a beta version and scheduled a second session a couple of weeks later. A lot of the people who were at first reluctant to change praised how usable the new interface was. We ended up with verbatims like “It took me a couple of days to get used to it, but I won’t go back, the new version is way more user-friendly”.

This is a very complicated bias to overcome because humans are creatures of habit. Here are a couple of things you can do, when trying to change the current status quo, to help with change management:

  • Be transparent in your communication: help people understand the reasons for the change.

  • If this is a new tool, especially in enterprise settings, you can offer training and support to help people transition. Find some champions to promote the benefits of your changes: you can actually “use” the bandwagon effect to your advantage and find stakeholders or power users who can help push those changes and guide people.

  • Ease the transition by implementing gradual change. A big change can trigger the bias, so you might prefer small, incremental changes. For example, change a couple of pages at a time. If you introduce a new tool, let people keep both systems for a little while so they can get used to the new one (a minimal rollout sketch follows this list).

  • Acknowledge the concerns, don’t just dismiss such feedback. Acknowledging that the new version is different from the current status quo helps people feel understood and heard.
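If the change you are rolling out is a digital product, one way to implement the gradual-change bullet above is a staged, percentage-based rollout. This is a minimal sketch under assumptions of my own: the stages, the hashing scheme, and the sees_new_design function are invented for illustration, not taken from any particular feature-flag tool.

    import hashlib

    # Minimal sketch of a percentage-based gradual rollout.
    # Raise ROLLOUT_PERCENT step by step (e.g. 10 -> 25 -> 50 -> 100)
    # as users get comfortable with the new design.
    ROLLOUT_PERCENT = 10

    def sees_new_design(user_id: str) -> bool:
        """Deterministically bucket a user so they keep the same experience."""
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        return int(digest, 16) % 100 < ROLLOUT_PERCENT

    print(sees_new_design("user-1234"))  # same answer on every visit

The deterministic bucketing matters for this particular bias: flipping people back and forth between the old and new versions would trigger exactly the resistance you are trying to ease.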

Wrap-up

In this second part, we learned how biases can affect decisions in UX projects, leading to poor choices and even project failures. From sticking to failing projects because of commitment bias to overvaluing our own ideas because of the endowment effect, biases play a big role. Negativity bias, false consensus, survivorship bias, and status quo bias also impact our choices. They can lead to resistance to change, unrealistic expectations, and decisions based on incomplete or inaccurate information. To mitigate these biases, UX researchers should employ strategies such as bringing more data to the decision-making process, seeking external perspectives, reinforcing positive feedback, encouraging diverse opinions, diversifying data sources, and communicating transparently.

About the Author
Stéphanie Walter

Stéphanie is a UX Researcher & Designer based in Luxembourg. She has 12+ years of experience and specializes in enterprise UX and mobile. She teaches, speaks, and writes about design, UX research, accessibility, cognitive biases, the design-dev relationship, etc. Besides that, Stéphanie enjoys good tea, bike rides, and drawing illustrations. She says about herself: "My D&D alignment is chaotic neutral and I am better at keeping my teammates alive in video games than my plants."

Don't hesitate to reach out to her for questions about research and design!

