Have you ever spent time around a three-year-old child? They’re fairly communicative, and they’re beginning to learn how things work. But they’re not yet experienced enough to apply theories of cause and effect. So their favorite question is, “Why?”
There’s a part of user research that requires us to channel our inner three-year-old. We are deeply interested in what people do, but only because it helps us clarify why they’re acting that way. And until we understand the driving emotions behind an action, we can’t make recommendations for improving the experience.
You’ve probably heard that the two types of user research map onto those two questions: quantitative research helps us understand what, and qualitative research helps us understand why.
For example, when I’m looking at heatmapping software, I can say with certainty, “70% of users who click in the text field for the email newsletter then also click the Sign Up Now button.” I can say that because I have tags tracking how many clicks occur in the email field and how many occur on the Sign Up Now button, and I can do some simple math to understand what the drop-off rate is. This is quantitative research -- it tells me what quantity of users are (or are not) performing a desired action on the site. The data are factual, numerical, repeatable, and provable.
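The “simple math” behind that drop-off rate can be sketched in a few lines. The counts below are invented for illustration -- in practice they’d come from whatever tags your analytics tool fires on each click:

```python
# Hypothetical click counts pulled from tag tracking (illustrative numbers only)
email_field_clicks = 1000  # users who clicked into the email text field
signup_clicks = 700        # users who then clicked the Sign Up Now button

conversion_rate = signup_clicks / email_field_clicks
drop_off_rate = 1 - conversion_rate

print(f"Conversion: {conversion_rate:.0%}")  # 70% click through to Sign Up Now
print(f"Drop-off:   {drop_off_rate:.0%}")    # 30% enter an email but never commit
```

The math tells me the size of the gap with precision; it says nothing about the hesitation inside it.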
What that data doesn’t tell me is what causes 30% of people to change their minds between entering an email address and actually committing to sign up for the newsletter. I really want to know why people are hesitant to get my clever emails, so the next time I’m at a convention, I set up an iPad at my booth and observe people going through the email sign-up. When someone decides not to click Sign Up Now, I can intercept them and ask what they were feeling, what caused them to stop, and what additional information might make them comfortable completing the sign-up. I get to employ qualitative tactics to explore the why behind what my heatmapping software told me. Qualitative feedback is subjective, emotionally driven, changeable, and unique to each subject being interviewed.
For user researchers, “why” is one of our best tools. In fact, there’s an interrogative technique made popular by Lean management theory and the car company Toyota called the 5 Whys, a form of root cause analysis. It proposes that you must ask why five times to fully unravel a problem.
I love using why internally with my stakeholders and product teams, too. When an executive makes a bold statement about “our users want…” I often ask, “Why do you think that?” (Or to soften it a bit, “That’s interesting. Can you tell me more about that idea?”) When a developer walks me through a new piece of functionality, I often ask, “Why did you build it that way?” When a product manager outlines a new feature set they want to build, I ask, “Why did you choose that set?”
Why is a secret tool for getting information and insight without demanding or cajoling. Why is innocent, and most folks like to help educate others -- they’re often quite willing to explain further what they were thinking. In psychology, why is a form of reflecting -- it turns the attention back to the person making the bold claim or the decision and offers them space for more consideration of their own stance. In ancient Greek philosophy, Socrates famously answered questions with questions (which we now call the Socratic method) because it stimulated critical thinking and helped surface presuppositions or biases.
You can imagine how different the results would be if I stood up in that meeting with the executive and said, “You’re wrong! You don’t know what users want! You are not our users!” I’d be shown the door pretty promptly.
Working on a Data Science team brought up a lot of fun why puzzles. I had the privilege of working with several brilliant folks who were all trained in data analysis and critical thinking. But you can imagine -- as the qualitative representative on the team -- my input was less concrete than numbers and often sounded something like, “How do you think that will make people feel?”
One day, our data cruncher was grumbling over a set of numbers. Potential customers called our phone number, were presented with a menu of options in an IVR (interactive voice response) tree, and were then channeled to the corresponding sales representative. We had over a year of data on the old IVR message and had decided to reduce the menu options -- callers now heard only three options instead of five. We believed we were making their lives easier and reducing cognitive load. Our data scientist had pulled numbers on the new menu, and he couldn’t make sense of them. He finally asked me to look at his tables.
“It looks like there are fewer options, but it’s taking callers twice as long to get to a sales rep.” We talked through hold times, volume of calls, and any ad campaigns that might be running. Nothing out of the ordinary was going on, and by all accounts callers should have been getting to a sales rep in record time.
The whole time I was turning over in my head how a caller might feel. “Why would I spend longer in the IVR?” I asked if he could see how many menu items they were selecting. And what we discovered was this: callers were selecting a menu item, then returning to the main menu and trying a second and sometimes even a third option before getting through to a sales rep. I said, “I think we took away menu options they were used to! They’re looping through the menu, looking for something familiar, and they can’t find it!”
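The signal we found -- callers selecting an option, returning to the main menu, and trying again -- is easy to surface once you count menu selections per call. A minimal sketch, with invented log data and menu labels standing in for whatever our IVR actually recorded:

```python
# Hypothetical IVR logs: each call is the sequence of menu options the caller chose
calls = [
    ["sales"],                        # straight through on the first try
    ["billing", "support", "sales"],  # looped back to the main menu twice
    ["support", "sales"],             # looped back once
]

selections_per_call = [len(call) for call in calls]
loopers = sum(1 for n in selections_per_call if n > 1)

# More than one selection per call means the caller went back to the main menu
print(f"{loopers} of {len(calls)} callers looped before reaching a sales rep")
```

A rising share of multi-selection calls after the menu change is exactly the “searching for something familiar” pattern -- the quantitative trace of a qualitative problem.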
The team agreed to kill the new IVR and try the old one again, and call-through times got faster immediately.