Winning With UX Research: ETR’s Guide For Digital Product Success
ExpandTheRoom
Kerrin McLaughlin,
Associate Experience Designer and Researcher
When starting a digital design project, there’s one question that should be at the forefront of your mind: how will we measure success? Yet many organizations start a major design or UX project (a website redesign, a brand refresh, a new venture) without seriously considering this question. Whether they assume the outcome is obvious and needs no definition or genuinely have no idea what they’re working toward, the question goes unanswered.
It’s important to define how success will be measured at the beginning of a design project for several reasons. First, a clear definition of success lets you measure whether your design is achieving the goals you set and what still needs improvement. Second, data on design’s impact helps you convince others that design is a worthwhile investment and win stakeholder buy-in for your project. Finally, measuring the success of design lets you verify your assumptions about what your audience needs and values rather than guessing.
At ExpandTheRoom, we have a phased approach to the design process called Purpose-Driven Design, and we incorporate design measurement throughout the entire process. Our work using UX metrics to measure improvement has been featured in the Nielsen Norman Group report, UX Metrics and ROI. This article will tell you a little bit about how we measure the success of our design work and how you can do the same.
Before you are able to start measuring design, you need to define the metrics, or data points, you will be tracking. To figure out what metrics to track, you need to ask yourself,
“What problem is our design trying to solve? What is the outcome we want to see?”
To give an example of this process, we’ll use HeliNY, a helicopter tour company we partnered with. HeliNY offers award-winning sightseeing tours of the magnificent NYC skyline and charters in the NYC area, and they needed a new website. We started our process with stakeholder and user interviews to uncover the pain points of the existing website. We discovered that customers found the booking process difficult: the tour information itself caused confusion, and the booking form was hard to fill out, leading to drop-off. Additionally, HeliNY was concerned with the perception of their brand and wanted the redesign to establish it as modern and luxurious. From these pain points, we devised metrics to measure throughout the process: perceived ease of understanding of the tour information, the drop-off rate and perceived ease of use of the booking form, and brand word associations. We’ll continue to reference how this project exemplifies our measuring process throughout this article, and you can find the HeliNY case study on our website.
It is important to remember when choosing metrics to track that you shouldn't craft your design solely to see a positive outcome on your metric. As Goodhart’s law states,
"When a measure becomes a target, it ceases to be a good measure”
Use metrics as a yardstick and not a goal post. There are countless metrics you can choose to measure for a project and choosing the right ones depends entirely on your organization’s unique goals. At ETR, we work with our clients to develop these unique project metrics. Google’s HEART framework is a good place to start brainstorming what metrics make sense for you.
After establishing metrics to track, the next step is to decide how you want to measure. There are many methods for measuring design, but in this article we'll talk about analytics, surveys, tree testing, and usability testing, which are particularly helpful when comparing before and after a change. The method you choose will largely depend on the type of metric you’re collecting, and at ETR we usually use a combination of these methods.
Analytics is the easiest of these methods to dive into. As long as goal-specific metrics are properly set up in a tool like Google Analytics, Adobe Analytics, or Hotjar, you mostly just sit back and wait for traffic to come in, provided you have enough visitors to gather data representative of your audience in a reasonable amount of time. The challenge comes later, in learning how to interpret the data in a meaningful way.
Choose analytics when you want to measure how users interact with your product: where they come from (acquisition), which content leads to goal completions, and whether they use your product’s features.
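For example, a drop-off rate like the one HeliNY cared about falls straight out of event counts. A minimal sketch in Python, assuming you’ve exported per-step counts from your analytics tool (the step names and numbers here are invented for illustration):

```python
# Invented event counts exported from an analytics tool:
# how many sessions reached each step of a booking flow.
funnel = {
    "viewed_tour_page":  5_000,
    "started_booking":   1_200,
    "entered_details":     700,
    "completed_booking":   480,
}

steps = list(funnel.items())
for (step, count), (next_step, next_count) in zip(steps, steps[1:]):
    drop_off = 1 - next_count / count
    print(f"{step} -> {next_step}: {drop_off:.0%} drop-off")
```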
Analytics can only take us so far — this method tells us who is visiting, what they’re interacting with, where they came from and where they’re going. What it can’t tell us is why they’re doing what they’re doing. Knowing why can help you further understand your users, their pain points, and how to make a better design moving forward. Because of this we recommend combining analytics with other methods to measure design from different angles, a process known as triangulation. The following three methods can be used in conjunction with analytics to help you triangulate your research.
Surveys are deceptively easy. As Erika Hall states, “Surveys are the most dangerous research tool — misunderstood and misused”. But when done right, surveys can be a valuable way to collect data on the perceptions of website visitors, especially when tools like Hotjar make installing on-site microsurveys a cinch.
Choose surveys when you want to measure a user’s perceptions of a product, such as feelings toward a brand, how easy they think a task was, or whether they’re finding the information they need.
With surveys, we begin to interact with our users more directly to get at the why. Where analytics offer objective numbers, surveys capture subjective perceptions, which is exactly what makes them good at measuring how people feel. If you ask a user to rate how easy a task was, they will give you their perception of its ease, not an objective measure. Depending on what you’re after, that subjectivity can be a strength or a liability. Because what people say and what people do often don’t match up, surveys are a bad source for objective product feedback (e.g., “Would you use this feature?”). On the other hand, tracking how users feel about your brand over time, such as with an NPS or brand perception survey, can provide valuable insights. Adding an open-ended field such as “Why did you give this response?” can help you understand user motivations, but don’t expect the same detail you’d get in a live interview or usability test.
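As an illustration of the arithmetic behind one such metric: NPS is the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). A quick sketch with made-up ratings:

```python
# Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
# The ratings below are made up for illustration.
ratings = [10, 9, 7, 3, 8, 10, 6, 9, 2, 10]

promoters  = sum(r >= 9 for r in ratings)
detractors = sum(r <= 6 for r in ratings)
nps = 100 * (promoters - detractors) / len(ratings)
print(f"NPS = {nps:+.0f}")   # ranges from -100 to +100
```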
Tree testing is a method we use specifically for testing navigation. Participants are presented with a text-based version of a website or product’s navigation and asked where they would go to find certain information. Tree testing helps us determine whether the labels and structure we have set up — the information architecture — make sense to our target audience. We use Optimal Workshop for conducting our tests and analyzing the results.
Choose tree testing when you want to measure how usable your navigation is — do people know where to look for information, and how long does it take them to find it? A bloated navigation causes a bad user experience. If users can’t find what they’re looking for quickly, they’ll leave. Tree testing helps you figure out the paths your users take to find information, identify points of confusion, and work to make those paths more direct.
Just as with surveys, try to leave a space for open-ended feedback, such as with a final question, “Was anything confusing to you about this navigation?”
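Tools like Optimal Workshop report these numbers for you, but the underlying arithmetic is simple. A minimal sketch over a hypothetical export, with one record per participant per task (“direct” meaning the participant reached the answer without backtracking):

```python
from collections import defaultdict

# Hypothetical tree-test results: one record per participant per task.
results = [
    {"task": "find_tour_prices",  "success": True,  "direct": True,  "seconds": 14},
    {"task": "find_tour_prices",  "success": True,  "direct": False, "seconds": 41},
    {"task": "find_tour_prices",  "success": False, "direct": False, "seconds": 73},
    {"task": "find_charter_info", "success": True,  "direct": True,  "seconds": 9},
]

by_task = defaultdict(list)
for r in results:
    by_task[r["task"]].append(r)

for task, rows in by_task.items():
    n = len(rows)
    success = sum(r["success"] for r in rows) / n
    direct  = sum(r["direct"] for r in rows) / n
    avg_s   = sum(r["seconds"] for r in rows) / n
    print(f"{task}: {success:.0%} success, {direct:.0%} direct, {avg_s:.0f}s avg")
```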
Usability testing is the most involved of the four methods, but it can also produce the most insightful results. It involves recruiting participants from your target audience to complete tasks using your product while you observe their process. From there you can see not only whether they succeed with your designs but also the problems they run into and the unexpected ways they use the product.
Choose usability testing when you want to measure how successfully users can complete a task, how long it takes to complete a task, and what kinds of errors they make. Usability testing is the best way to understand “why” your users make the decisions they do and what’s tripping them up. In a moderated usability testing session you can observe people using your design in real time and ask questions when something unexpected happens.
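The quantitative side of a usability test (completion rate, time on task, error counts) is straightforward to tabulate. A minimal sketch over hypothetical session notes for a single task:

```python
# Hypothetical moderated-session results for one task.
sessions = [
    {"participant": "P1", "completed": True,  "seconds": 95,  "errors": 1},
    {"participant": "P2", "completed": True,  "seconds": 140, "errors": 0},
    {"participant": "P3", "completed": False, "seconds": 210, "errors": 4},
    {"participant": "P4", "completed": True,  "seconds": 80,  "errors": 2},
]

n = len(sessions)
completion_rate = sum(s["completed"] for s in sessions) / n
avg_time   = sum(s["seconds"] for s in sessions) / n
avg_errors = sum(s["errors"] for s in sessions) / n
print(f"completion {completion_rate:.0%}, avg time {avg_time:.0f}s, avg errors {avg_errors:.1f}")
```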
When measuring design with the other methods mentioned above you will often come across metrics that are hard to explain, such as “Why are our users spending so much time on this page?”. Usability testing can help to answer those questions by offering a firsthand observation of how users use your product. This is why triangulation can be a powerful way to measure your designs from different angles — different methods reveal different types of data, offering both questions and answers when paired together.
For our HeliNY project, we determined the best method for measuring our metrics would be a combination of on-page surveys to understand if users could find the information they were looking for as well as their perceptions of the brand, and analytics to monitor drop-off rates on the booking form.
After you know what metrics you’ll track and what methods to use, you can decide when to measure. Measuring at different stages of a project achieves different goals.
If you want to know what kind of impact you’ve had at the end of a design project, first you need to know where you stand today. At the beginning of a project, it’s very valuable to measure a baseline of your defined metrics. If you don’t have an existing product, you could measure your competitors instead and set their metrics as a goal to beat.
Measuring design at the beginning of a project also serves the dual purpose of helping you uncover usability issues in your existing product (or in a competitor’s) that your future designs can work to solve.
Once you have a baseline for your metrics, you will know exactly how much you need to improve to meet your goals. Often even a slight improvement can be considered a success, but it is important to have a defined target so your team knows what “success” means.
Throughout the design process there are opportunities to test and validate your design solutions. For example, at ETR we often tree test a product’s existing navigation (providing a benchmark) against our first round of navigation changes. By comparing these two while still in the design phase, we are able to see if our changes were more successful than the benchmark as well as keep iterating on our design based on the new data.
Finally, when a project is complete and your new design is out in the world, you can measure the impact you’ve had. By running a measurement test with the same questions and format as your benchmark, and with a sample size large enough to detect the change you care about, you can calculate the difference between the two and produce a data-backed report of your success.
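For a pass/fail metric like form completion, a two-proportion z-test is one standard way to check whether the before/after change is statistically significant. A minimal sketch using only the Python standard library (the counts are hypothetical):

```python
from statistics import NormalDist

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions,
    e.g. benchmark vs. post-launch form completion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)        # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical counts: 120/300 completions before, 210/300 after.
p_before, p_after, z, p = two_proportion_z_test(120, 300, 210, 300)
print(f"before {p_before:.0%}, after {p_after:.0%}, z = {z:.2f}, p = {p:.4f}")
```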
If the numbers aren’t as positive as you hoped, you didn’t fail. Having data on where you started and what your changes did is incredibly valuable — now is your chance to keep iterating, learn from what didn’t work, and keep improving. It’s better to know your product can be improved and keep working than to bury your head in the sand and ignore the data.
Try digging deeper into the data with segmenting to glean insights not found at the surface level. In what ways do users who complete a goal act differently on your site than those who don’t? Do users from a particular group make choices differently than others? How does taking a particular path affect the choices a user makes? These are just some examples of how to dive deeper into the data you collect.
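If your analytics export gives you one row per session, a library like pandas makes these segmented comparisons one-liners. A small sketch with invented data:

```python
import pandas as pd

# Invented analytics export: one row per session.
sessions = pd.DataFrame({
    "source":         ["organic", "paid", "organic", "email", "paid"],
    "completed":      [True, False, True, True, False],
    "time_on_page_s": [42, 180, 55, 38, 240],
})

# How do sessions that complete the goal differ from those that don't?
print(sessions.groupby("completed")["time_on_page_s"].median())

# Completion rate by acquisition source.
print(sessions.groupby("source")["completed"].mean())
```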
For HeliNY, we saw significant improvements in our metrics after the redesign. By measuring and reporting that improvement, everyone came away with a clearer understanding of the project’s impact. We knew we had improved the user experience of the site and achieved the project goals — and we had the numbers to prove it. Our goal is to perform this kind of measurement and analysis on every project we work on, and we always learn something unexpected and insightful from experimenting with real users. This is a core principle of our Purpose-Driven Design framework.
The metrics tracked for design success may not always be the ones that resonate with executive teams. If you’re concerned that a metric like “time on task” won’t be well received or understood in presentations, you can “translate” design-focused metrics into business outcomes.
For example, let’s say HeliNY had a booking form completion rate of 40% in the benchmark measurement, and after our UX work, the new completion rate was 70%. If we know the average booking value and the number of potential customers who start the booking flow, it will be easy to calculate the additional revenue our improved UX will be able to bring in. You can also use this method for predictive ROI calculations to gain buy-in for your project — if you can estimate the effect you believe your UX changes will have and can back that estimate with real revenue gains you have a much stronger argument for why the work should be done. Nielsen Norman Group has a great article and full course on this topic.
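Here’s that calculation spelled out. Only the 40% and 70% completion rates come from the example above; the traffic and booking-value figures are invented for illustration:

```python
# Hypothetical inputs; only the 40% -> 70% completion rates
# come from the example above.
monthly_starts  = 1_000    # visitors who begin the booking flow each month
avg_booking_usd = 250.00   # average value of a completed booking
rate_before, rate_after = 0.40, 0.70

extra_bookings = monthly_starts * (rate_after - rate_before)   # 300 more bookings
extra_revenue  = extra_bookings * avg_booking_usd              # $75,000 / month
print(f"{extra_bookings:.0f} additional bookings, ${extra_revenue:,.0f} in monthly revenue")
```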
Hopefully you’ve found this article to be a good starting point for measuring the success of design, a critical and often overlooked part of the design process. Of course, our tips just scratch the surface, and there is much more to this process: defining goals, picking the right metrics, recruiting participants, setting up measurement tests correctly, analyzing the data for actionable insights — the list goes on. If you’re looking for a partner that can provide you with the data behind design decisions and continue improving based on the results, reach out to ExpandTheRoom.