The light flicked on. “This is it,” she said as we entered the small room. I stared incredulously at the table, the walls, and the floor. Every surface was scattered with thoughts made visual, as if the Hanging Gardens of Babylon had been reconstructed from thousands of sticky-taped clippings and Post-it notes. She deftly sidestepped a tower of manila folders and turned to face me over a parapet of cassette tapes. It was obvious how at home she was in this citadel of research. There were many things about it she could see and know that I simply could not.
That was a while ago now, but my first experience of a research professional at work in a project ‘war room’ has remained a vivid inspiration ever since. It also gave me faith that the human mind, wielded well, can achieve amazing things even when confronted with overwhelming amounts of information.
It’s been helpful because analyzing qualitative information from primary research can be a daunting prospect. Almost all researchers I know want to do more than simply report facts. We seek insightful truths that compel our teams to empathize before directing how issues might be addressed.
That’s a noble aspiration, but it begins to wither when we face a seemingly insurmountable amount of carefully collected detail. How do experienced research professionals confront such a mountain? What kind of alchemy helps them distill meaning from the madness?
If you’ve not done research analysis before, let’s plumb the depths to see what can often be involved.
The most dismaying aspect can be the sheer amount of reading that’s required to get our heads around the material. For some of us, this processing also means we will do a lot of writing. The average one-hour interview transcript might contain 10,000 words. We’ll have at least half a dozen of these, and that’s before we get to any workshop output, diaries / journals, visual documentation, or our own observation notes.
Even with a clear focus, and after using data reduction techniques, it’s possible to have more than 70,000 words that need to be digested – the length of some novels. We’re not only expected to read the text, but also fully understand and deconstruct it, all within a week or two.
Larger studies may involve three or four times that amount of information, and call for a corresponding increase in people and resources. But involving more people also adds overheads and complexity to our analysis because we then need to coordinate the synthesis.
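As a back-of-the-envelope illustration, the arithmetic above can be sketched in a few lines. The function, its figures, and its parameter names below are assumptions for this sketch, not rules; real studies vary widely.

```python
# Rough estimate of the reading load for a qualitative study.
# All figures are illustrative assumptions, not fixed rules.

WORDS_PER_HOUR_TRANSCRIPT = 10_000  # a typical one-hour interview transcript


def estimated_words(interviews, other_sources_words=10_000, scale=1.0):
    """Estimate the total word count a researcher must digest.

    interviews: number of one-hour interview transcripts
    other_sources_words: workshop output, diaries, notes, etc. (assumed)
    scale: multiplier for larger studies
    """
    return int((interviews * WORDS_PER_HOUR_TRANSCRIPT + other_sources_words) * scale)


# A small study: six interviews plus supporting material.
print(estimated_words(interviews=6))  # 70000 -- roughly novel-length

# A larger study at three times the volume.
print(estimated_words(interviews=6, scale=3.0))  # 210000
```

Even under these modest assumptions, a two-week analysis window means digesting several thousand words of raw material per day, which is why the data reduction and coordination concerns above matter.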
Analysis can be described with a relatively predictable process, but before we’re able to apply a generalized approach, we need to understand the reasons why we do analysis in the first place and the principles that guide it. In this article, we’ll also define what a good insight looks like, and explore some common challenges faced when getting our head around the data we’ve collected.
Under pressure, it might be tempting for us to go directly from doing an interview to defining a solution to fit the perceived needs. In practice, that creates some problems. If done properly, good analysis will connect specific findings from our audience to a specific approach, with clear constraints, and even a roadmap for developing particular solutions.
We can’t shortcut how we unpack and understand our audience. Attempts to merely rely on our human memories and impressions from interviews are likely to introduce bias. And even if we did keep notes, when we consume raw data directly, we’re in danger of unconsciously giving weight to certain points. From there we’ll likely form misleading opinions that lead to impulsive decision-making, and eventually, take the whole team down a path that focuses on the entirely wrong outcome.
So, the most reliable way forward should aim to assemble the material and make balanced deductions based on the evidence. From there we can use the facts to tell a compelling story. It’s this foundation that accurately communicates the right findings and determines the ultimate quality of our research.
Overall, the insights we surface should always inspire action in the people receiving the research results.
‘Insight amnesia’ is a phenomenon that most researchers with a project or two under their belt will recognize. After a ground-breaking presentation of the research results, team members and stakeholders return to their work bubbling with comments. Initially, they’re excited, but then the attention fades and momentum is lost.
The best way a researcher can impact the direction of a product is to provide insights that speak the truth in a compelling and actionable way. Michael Morgan, Senior User Experience Researcher at Bloomberg, has reflected on what this actually means, sharing a succinct description in his column on UX Matters of what makes a good research insight great. Of course, not all insights from user research will meet all six of his criteria.
It’s common to hear UX practitioners talk about their focus on defining the ‘why’ that happens behind the visuals.
‘Quantitative’ data, with its numbers and metrics, often exposes the symptoms of usability issues, or a mismatch between a product and the needs of the people using it. The strengths of quantitative measurements are in defining ‘how much’, and ‘how often’, and it’s relatively rare to get an answer about ‘why’ something is happening.
‘Qualitative data’, on the other hand, shows behavior and attitudes, and embraces the sometimes illogical nature of people to get much closer to understanding motivations and causative factors. However, it’s also not immune from missing the mark. It’s a common misstep in qualitative research to get excited that we’ve identified an issue or a particular behavior, and fail to pursue the reason why it is occurring. Not knowing the root cause, and the context around it, means we’re unable to suggest what needs to be done.
Many research repositories and libraries are structured around findings. We need to be mindful that the luxury of remixing findings, participating in a research democracy, or accessing intelligence over a longer timescale doesn’t excuse us from delivering meaningful and timely insights, which are only possible with comprehensive meta-analysis.
Without a systematic approach, we researchers can often find ourselves in a mire of mistakes. The idea is to stay organized and focused by following a reliable process.
According to Maria Rosala, a specialist in UX research with the Nielsen Norman Group, the most common challenges that researchers face when analyzing qualitative material are due to the large quantities of richly detailed and sometimes contradictory data.
Maria explains how superficial or selective analysis can be caused by skim reading.
Long transcripts and extensive field notes can be time-consuming to read; you may have a hard time seeing patterns and remembering what’s important.
Data reduction techniques and the confidence and support to launch in and proceed steadily can be helpful.
The wealth of detail itself can make it hard to separate useful and superfluous facts, and researchers become frozen with indecision.
The analysis simply becomes a regurgitation of what participants may have said or done, without any analytical thinking.
It takes discipline, collaboration, prioritization, and perhaps, counterintuitively, some additional research to clarify what you’re dealing with.
A finding that contradicts another is a common occurrence. It might even be expected when dealing with multiple participants but things can get particularly confusing when a single participant says two different things. Inevitably this increases the difficulty in objectively reaching conclusive findings, because, as Maria puts it, when:
Participant feedback is conflicting, or, worse, viewpoints that don’t fit with the researcher’s belief are ignored.
To compensate, hold the conflicting findings in balance and look beyond their binary nature for other suggestions that might better describe what is happening. To do this, you may need to seek clarification through further research.
Other challenges might not be inherent in the data, but could stem from a lack of goal setting for the analysis itself. Maria says:
The aims of the initial data collection are lost because researchers can easily become too absorbed in the detail. The analysis lacks focus and the research reports on the wrong thing.
This is an awkward position to be in, as you’re already committed to what you’ve done. If this happens, the first step is to see if there is enough crossover in your research with what you should have been exploring. If there isn’t, you may need to go back and repeat the process, so the earlier you recognize the misalignment, the better.
By its most basic definition, analysis is about sifting through research material to identify facts and frame problems. To go beyond a mere assortment of facts and break through into an awareness of what we can conclude about these facts is synthesis.
The best research involves both.
Synthesis can emerge organically during analysis—if we recognize it, and allow it to happen. It’s apparent in those moments when we first glimpse patterns across data sets, or get an intuition about a thread of truth amongst a mishmash of options.
But relying on our instincts in an environment demanding evidence will be tricky.
Intuition is sometimes seen as a magical sixth sense or as something that emerges from an obscure inner force, but it is actually a mental process triggered by our perception. We might have a fleeting impression of a visual inconsistency, a fact out of place, a facial expression, a sense of tone, or some other thing that has registered without conscious awareness.
It might be more helpful to think of intuition as ‘rapid cognition’, or as ‘condensed reasoning’ that uses shortcuts in our brain to link two disparate concepts.
For example, our brain loves to find meaning. It receives multiple simultaneous inputs from our senses, and when attempting to create connections it matches to the closest kindred pattern from amongst the vast repository of our memories and experiences. The definition of ‘closest’ is unique to each of us at any given point in time.
Intuition is therefore not merely the unconscious processing of cues, such as those used in first impressions. Neither is it the deliberate reasoning of our forebrain. The non-conscious thinking of intuition has strong links to our subjective preferences and spontaneous feelings. Experience is encoded in our brains as an intricate web of both fact and feeling, and so our understanding and recall of memories is interwoven with emotion.
Because our brains seek these kindred connections, the deeper and longer we immerse ourselves in the research data, the better. The more exposure we have to contexts, and the more ways we slice and dice the information in our subsequent attempts to analyze it, the more reliable our intuitions will be. They will surface from a richer array of collected patterns and experiences.
I often consider my researcher’s brain as a cauldron, and the experience of researching as some kind of pottage. Many ingredients can be put in, over and over again, to replenish the goodness over different seasons. Things change with time, and the longer things cook, the more balanced and nuanced the flavors.
As researchers, we’ve already spent hours with people having conversations, asking questions and listening to them, observing behavior and context, and building empathy. To a limited extent, we know them. If we therefore resonate strongly with a particular comment and can connect it to a pattern in other comments, then we’re probably onto something worth further exploration.
It’s important to emphasize the need to connect our hunches to similar comments or data points. It is wise to evaluate our intuition on balance with the evidence, and in the full light of factual data.
Without this kind of backing, our inkling may just be distracting us from reality. Without cross-checking, we’re likely to latch onto the wrong details and pull up the wrong web of associations in our brain.
This is also where the importance of objective data collection comes in. We can only rely on our data points for balancing evidence if they are untainted by our opinion and judgments to begin with. The art of intuition is only valuable if we stand on a foundation of good data science.
It is the nature of good analysis and synthesis to hold opposites in tension. We live this daily when the messiness of human behavior conflicts with our clear data recording and decision making, or when the logic of structured thinking is juxtaposed with the unpredictability of obscured nuggets. Being able to work with objective facts while cultivating intuitive insights is just another skill we must balance.
Professional researchers and qualitative analysts have learned how to bridge these divides. At their core, they’re driven to understand the needs of real users. Their hunger for both accuracy and value takes them beyond providing just facts and toward communicating meaningful insights. And they successfully harness both empathy and evidence to compel design teams into action, enabling the creation of truly useful products and services.
Ultimately, the best way to motivate action from your research is to help your team make this connection.
If we use these same principles in our approaches, maybe one day we too will find ourselves on the other side of the desk, astounding and inspiring the next generation of researchers with all the contradictions of complex data sets and human capability.
© Dovetail Research Pty. Ltd.