Navigating Biases in User Research: Focus on Note-Taking and Data Analysis (Part 1)

December 20, 2023

Understanding cognitive biases is crucial in user research. While many articles explore how biases impact researchers and users during studies, the research journey extends beyond the sessions themselves. From collecting data (taking notes) to analysis and project decision-making with stakeholders, biases can subtly influence the entire process, potentially leading to misinterpreted data and inaccurate conclusions. In this two-part series, I'll explore biases through these lenses.

Part one is dedicated to biases that impact the researcher while taking notes and analyzing the data, such as confirmation bias, recency effect, etc. How do these biases shape research outcomes, and what strategies can we employ to mitigate them? The second part covers biases influencing project decisions, such as escalation of commitment, endowment effect, false consensus, etc. I'll discuss their impact on stakeholders and colleagues in project management and effective strategies for mitigation.

By recognizing and addressing these biases, UX researchers can improve the reliability of their research outcomes and contribute to more user-centered design decisions.

Biases affecting note-taking and data analysis

Cognitive biases can impact people who take notes during research sessions, and they continue to affect the analysis of the collected data. Even if you gathered quality data during the research phase, these biases can subtly influence how you take notes and interpret information. It's important to recognize these biases and know how to mitigate their impact in order to get more accurate, higher-quality research outcomes.

Confirmation bias and the concept of cherry-picking

Confirmation bias is the tendency to search for, interpret, favor, and recall information in a way that confirms our current assumptions. It affects researchers when they're setting up and running studies, but it also plays a role in how they look at the collected data. If a researcher starts with predefined assumptions in mind, those assumptions might unconsciously influence the analysis. They might, for instance, be tempted to cherry-pick: select only the data that supports their assumption, while ignoring information that points in another direction.

After a usability test, a researcher might, for example, unknowingly give more importance to the issues they found predictable and expected. At the same time, they might downplay the significance of new issues that challenge their initial assumptions. This could result in a less effective prioritization of which issues should be addressed first.

I once collaborated with a note-taker who only recorded positive feedback in their initial notes. They completely omitted the negative ones. I had to explain that we don’t cherry-pick while taking notes. I understood their temptation to highlight only the positive feedback that confirmed our hypothesis, especially since the notes were accessible to others. However, this approach contradicts the principles of unbiased research. To help, I created a small “note-taker guide” that explains what to capture. The guide emphasizes the necessity of recording all feedback, both positive and negative. I give the guide to new note-takers before their first session, and we go through the different points together. This helps make the notes less biased.

To mitigate confirmation bias during analysis and note-taking, consider the following strategies:

  • Be aware of, acknowledge, and write down your assumptions: it’s hard to face assumptions we can’t even verbalize. Listing them helps you keep a critical mindset and question your assumptions when they creep into your analysis. Have colleagues hold you accountable for this.

  • Analyze the research as a team: this way, you mitigate individual expectations and biases. Again, this helps hold each other accountable.

  • Record the sessions and/or have someone else take notes, so the researcher can’t cherry-pick only the notes that confirm their assumptions.

Recency effect and primacy effect

People tend to favor, better remember, and give greater importance to recent events, information, or items, overlooking earlier ones. This is the recency effect.

The primacy effect is the opposite: the tendency to remember and favor the first piece of information over information presented later.

During analysis, a researcher might base their conclusions on the most recent session or information they encountered, overlooking or undervaluing earlier data. They might also base their conclusions on the first session, which left a strong impression precisely because it was the first.

I’ve seen this happen a lot, especially when note-takers (or observers) are not used to user research. After a usability testing session, the team tends to focus heavily on the usability problems found during THAT session. They rush to create tickets in the backlog to address the issues discovered in that last session as soon as possible. While it’s good that the team wants to fix usability issues, there’s a risk of ignoring equally or even more important issues discovered during other sessions.

What can we do to address these biases?

  • For note-taking: take notes during the session, not retrospective notes after. Record if you can, to avoid relying on memory.

  • Wait until you have all the data before starting the analysis. Don’t make any design, research, or product decisions, and don’t draw any conclusions based on one single session.

  • When analyzing the sessions, don’t analyze them in chronological order. Randomize the order to avoid putting more emphasis on the last ones.

  • Split the analysis between different members of the team to balance out individual biases (see the sketch after this list for one way to combine this with randomized ordering).
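
If it helps, the last two points can be operationalized in a few lines of code. Here’s a minimal Python sketch, with made-up session and analyst names for illustration, that shuffles the session order and then splits the sessions across the team round-robin:

```python
import random

# Hypothetical session recordings and team members, for illustration only.
sessions = [f"session_{i:02d}" for i in range(1, 9)]
analysts = ["analyst_a", "analyst_b", "analyst_c"]

# Shuffle so nobody reviews the sessions in chronological order,
# which would over-weight the first (primacy) and last (recency) ones.
shuffled = random.sample(sessions, k=len(sessions))

# Round-robin split: each analyst gets a mix of early, middle,
# and late sessions, which helps balance out individual biases.
assignments = {analyst: [] for analyst in analysts}
for i, session in enumerate(shuffled):
    assignments[analysts[i % len(analysts)]].append(session)

for analyst, assigned in assignments.items():
    print(f"{analyst}: {assigned}")
```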

Hindsight bias

People believe they knew something was going to happen after it has already happened. This is hindsight bias. We overestimate our ability to have predicted or foreseen an outcome once it has become known, making past events seem more predictable than they actually were.

This bias impacts researchers during the analysis: after the research is conducted, a researcher might believe the result was predictable. They might think they knew a design solution would work or fail, even if they had no data to predict this. The famous “I knew this button would be confusing” at the end of the session. This can impact how researchers interpret the data and lead them to overlook some insights.

Here are a couple of ideas to help you overcome this bias:

  • Document your predictions before the research. This way, you can compare what you expected vs. the actual outcome.

  • Check your researcher’s ego. Let’s face it: we love to be right, and sometimes, we just want to get a couple of “I told you so”s. So, yeah, it’s important to check our egos as researchers as well, and approach research with an open mind, to avoid skewing the data in a certain direction.

Illusion of validity and clustering illusion

The illusion of validity is the tendency to overestimate one’s ability to interpret data and predict an outcome when analyzing a set of data. It gets worse when the data shows a consistent pattern, aka, when the data tells a coherent story. Sometimes, random events, even frequent ones, are still random. But humans love predictability, falling into the clustering illusion: the tendency to expect random events to appear more regular and uniform than they are, because we think that clusters and sequences in data can’t be random. Plot twist: they totally can. If you want to have fun, check “Spurious Correlations”, a collection of examples of random data that match while being, well, pure coincidence. These biases can lead to skewed and inaccurate interpretations of data, especially quantitative data.

On a specific project, we checked analytics to help us decide which pages to migrate first to the new interface. We didn’t have a detailed analytics solution, only a server log. While analyzing the data, there was a clear pattern in the 10 most visited pages of the site. One of them was the documents page. This created the narrative: “The documents page is super important for our users, so we need to migrate it among the first pages.” If we had stopped there, we would have made it a top priority. Instead, we triangulated the analytics data with user interviews. It turns out our prediction and interpretation of the data didn’t reflect reality. While the documents page was interesting to users, they mostly visited it because it contained a link to another tool they needed, and it was the only page with access to that link. On the new interface, we installed Matomo and had more detailed analytics. This time, we could clearly see that most users who visited that page left the site because they clicked on the link to the other tool.
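
For readers curious how you get a “top 10 pages” list from nothing but a server log, here’s a minimal Python sketch. The file name, log format (common log format), and regex are assumptions for illustration, not our actual setup, and as the story above shows, the output is a starting point for triangulation, not a conclusion:

```python
import re
from collections import Counter

# Hypothetical access log in common log format, e.g.:
# 192.0.2.1 - - [10/Oct/2023:13:55:36 +0000] "GET /documents HTTP/1.1" 200 2326
LOG_PATH = "access.log"  # assumed path, for illustration
REQUEST_RE = re.compile(r'"GET (\S+) HTTP/[\d.]+" 200 ')

counts = Counter()
with open(LOG_PATH, encoding="utf-8") as log:
    for line in log:
        match = REQUEST_RE.search(line)
        if match:
            counts[match.group(1)] += 1  # count successful requests per path

# The 10 most visited pages: a prompt for further research,
# not a migration priority list on its own.
for path, hits in counts.most_common(10):
    print(f"{hits:6d}  {path}")
```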

How could you mitigate this for your own project?

  • My main advice is triangulation. Don’t rely on only one source of data or one method to draw conclusions.

  • Also, be skeptical and critical. If it looks too good to be true, if the data tells a story that sounds a little too predictable, maybe there’s more to it. Then, dig with a second, or even a third, method.

  • As with most biases that affect analysis, have multiple people analyze the data. Check whether they draw the same conclusions or predictions.

Wrap-up

We've explored how confirmation bias, the recency effect, hindsight bias, and the illusion of validity impact note-taking and data analysis. To mitigate these biases, researchers can be aware of their assumptions, analyze data as a team, conduct peer reviews, take detailed notes, document predictions, and triangulate data sources.

Stay tuned for Part 2, where we will focus on biases influencing project decisions.

About the Author
Stéphanie Walter

Stéphanie is a UX Researcher & Designer based in Luxembourg. She has 12+ years of experience and specializes in enterprise UX and mobile. She teaches, speaks, and writes about design, UX research, accessibility, cognitive biases, the design-dev relationship, etc. Besides that, Stéphanie enjoys good tea, bike rides, and drawing illustrations. She says about herself: "My D&D alignment is chaotic neutral and I am better at keeping my teammates alive in video games than my plants."

Don't hesitate to reach out to her for questions about research and design!

