Myth 1: Data Means Numbers
A lot of the data about a site or app flows in from analytics, and analytics are basically tallies of who has come to your website, how they got there, how long they stayed, and what they clicked or tapped. Other data may come from A/B tests: how many clicked A and how many clicked B. More tallies. Then there are intercepts and surveys where scored responses are counted. Still more tallies.
Numbers represent the actions of real people with complicated lives. But rolling the behavior of millions of people into a single number is not always useful, or reliable. Even the most organized sets of numbers don’t answer a lot of the questions we still have about user experience, like why people take action or why they don’t, or how they felt about it, or what expectations they bring to the experience. Interviews, ethnographic studies, and usability tests fill in the gaps left by numeric data. But because qualitative insights are not numeric, they are often not considered data.
In other fields, like the social sciences and medicine, there is no question that qualitative evidence is still data. Small numbers, or thick data, still count, whether they remain as narrative or are quantified. This makes sense for data-informed design too.
The true defining characteristic of data that can inform design is that it captures the traces left behind by people, no matter the source.
Myth 2: Data Is the Objective Truth
Quantitative data typically tallies completed actions, and usually those tallies are compiled by software rather than humans. This makes quantitative data seem like hard fact.
Even if data is big, it does not mean it is objective. Bias is inherent in any dataset. Datasets are created by humans, who interpret them and assign meaning, even if a machine runs the numbers.
Signal bias—or the bias of omission, inclusion, and emphasis—is one common bias, especially for big data. For example, social media data shows only one segment of the population—those who are on Twitter or Facebook using a certain hashtag. Likewise, the subset of people who agree to be observed or participate in a study represents only a very small subset of the general population.
Another bias is the observer effect, often likened to the Heisenberg uncertainty principle: results change just by being observed. No matter how neutral the environment and the demeanor of the moderator in a lab usability test, behavior, once observed, is different. Very few studies, even contextual or ethnographic studies, leave the experience entirely in the user's hands. This is the case with quantitative studies too, like surveys or intercepts, where we frame the experience in a certain way just by asking questions about it.
Further, when we compare online studies with no human contact to in-person studies where the user is interacting with a moderator or with another user, we see a substantial positive effect on how people perceive a site or app when another human being is involved. This extends to what they say, how they rate, and what they do or try. Researchers call it social desirability bias; call it the nice factor.
Big or small, no data is perfect. There are limitations and bias present in every type of data. Good data describes its biases, and always provides context.
Myth 3: Bigger Is Always Better
The hype around Big Data (note the CAPS) suggests it has the power to reveal all the secrets of humanity and accurately predict the future. And OK, sometimes bigger is better when it comes to data. If you are measuring something subjective, like emotional response based on self-reported ratings, more responses give you a greater confidence level in the results.
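To make the sample-size point concrete, here is a small sketch (hypothetical numbers, not from the article) using the standard margin-of-error formula for a proportion: the same 60% positive rating is far less certain with 50 responses than with 5,000.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p observed in n responses."""
    return z * math.sqrt(p * (1 - p) / n)

# More responses shrink the uncertainty around a self-reported rating.
small_sample = margin_of_error(0.6, 50)    # ~0.136, i.e. 60% +/- 13.6 points
large_sample = margin_of_error(0.6, 5000)  # ~0.014, i.e. 60% +/- 1.4 points
print(round(small_sample, 3), round(large_sample, 3))
```

Note that the gain is proportional to the square root of n, which is why "more is better" has diminishing returns: a hundred times the responses buys only ten times the precision.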
When we think bigger, we tend to think about tallies: the volume and velocity part of the big data equation. But big data is also about variety, and that means diverse sources. We can’t expect analytics to tell us everything we need to know about user behavior any more than we can expect a usability test to address every possible user behavior. We have to get our data working together in a way that isn’t all about back-end integration. It’s more about creating meaningful categories—also known as metrics—to evaluate, understand, and keep track.
Data from multiple sources creates a more nuanced picture and, in the end, an actionable outcome. Broader, not bigger, is better.
Myth 4: Data Is for Managers, Not Designers
Data is often used to pass judgment on a site or app (e.g. “The data says that conversions are down after the latest redesign”). Of course, this strikes experience design practitioners as reductionist, opportunistic, or just plain wrong. Certainly, it’s tempting to look for data that proves the wisdom of a decision, whether to solve internal battles, counter a gut-feel approach, or just prove return on investment. Proving a point is only one part of the data story though.
If you are using data to inform design, you have three ways to look at things: proving, improving, and discovering. Using data to improve often means the data-informed equivalent of iteration—tracking across time and across versions or even against competitors. Using data to discover is about looking at data in relation to other data (big or thick) to explore patterns and trends.
Part of the issue here is that data use is siloed—the business leadership is working with one type of data and the UX team another. Each team has a different frame of reference, and may not be aware of, or simply discount, the data of the other.
Data is not just about proving who is right or wrong, it is about making improvements and discovering new possibilities. It’s another way to tell the story of the real people using technology.
Myth 5: Data Kills Innovation
Data is seen as the antithesis of innovation, in all sorts of ways. Well, actually in three ways:
1. Most data, whether analytics, survey data, or customer service data, is backward-looking. Although we can discover patterns and trends, it is not easy to make predictions based on those discoveries.
2. Data is tactical rather than strategic. Think of Google’s 41 shades of blue testing. Because data-informed design is associated with A/B testing, it seems like a good way to tweak a design element, but it’s not so great for creating an amazing experience.
3. Data, especially analytics, seems to skim the surface. Seeing what people clicked, how much they scrolled, or where they lingered can work well to form a picture about how to market a product. It does not work so well for informing design, because it lacks information about motivations, expectations, perceptions, or emotions.
There is some truth to all of these observations, of course. The core problem is not in the data itself, but in how it’s being used.
For any data to inform design it has to be calibrated to embrace complexity. The user experience is not a funnel.
Myth 6: There Is a Right Way to Use Data to Inform Design
So far, there isn’t one canonical way that works for every team in every organization. There are a few guidelines to start with though.
- Use data from a variety of sources to inform your design: analytics, A/B tests, social media sentiment, customer service logs, sales data, surveys, interviews, usability tests, contextual research, and other studies.
- Include numbers and context. Whether you call them quantitative and qualitative, studies and non-studies, or big data and thick data, you need the numbers and the context to tell the real story.
- Make sure data is sensitive to the complexity of the human experience. Use averages sparingly, infer with caution, corroborate liberally.
- Use data to track changes over time, explore new patterns, and dig deeper on problems, rather than just to prove who’s right or wrong.
- Decide on meaningful categories that let you make sense of the data and tell a story about the experience.
- Develop a way to share and discuss data in your organization, and start by defining the basics together.
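The "use averages sparingly" guideline above can be sketched with hypothetical numbers: a single user who gets badly stuck drags the mean task time far from the typical experience, which the median captures better.

```python
from statistics import mean, median

# Hypothetical task-completion times in seconds; one user got badly stuck.
times = [12, 14, 15, 16, 18, 240]

print(f"mean:   {mean(times):.1f}s")    # 52.5s, pulled up by the single outlier
print(f"median: {median(times):.1f}s")  # 15.5s, closer to the typical user
```

Reporting only the 52.5-second average would misrepresent five of the six people in this made-up sample, which is exactly why a single rolled-up number needs corroboration and context.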
Designing with data has to go beyond algorithms, automation, A/B testing, and analytics. Rather, the goal is to use all the data to develop a better understanding of everyday experience.
Author: Pamela Pavliscak