How do you know when a design fails a usability test?

March 01, 2017

A handy technique I learned from the wrong job… Years ago, I spent an awkward patch of my career as an instructional designer, creating courses for online learning. It was a bad fit and I moved on happily, but one part of that job has made me a better UX designer: learning objectives.

Learning objectives are simply what you want the student to learn by the end of the training. If there’s a test, the test questions should be based on those objectives — otherwise, what’s the point of the test? The same approach comes in handy for figuring out whether a design has passed or failed a usability test. Just remember: it’s the design that’s being tested, not the participants.

What does the test participant need to do or say for you to feel confident that the design has succeeded? Do they need to track three hours of time for a particular project? Generate an invoice to a client based on that tracked time? Send the invoice? Those are your test criteria. Of course usability testing is about observing how users complete tasks, but what will you get them to do, exactly?

The beauty of these criteria is that they steer you away from vague testing goals like “understand how time tracking works.” How will you know they’ve understood it? You get them to describe it. And once they’ve described it accurately, you can say that aspect of the design was successful.

Verbs are magical

The book that taught me about learning objectives, George Piskurich’s Rapid Instructional Design, offers a handy list of behaviours to start your success criteria. For example, the objectives for comprehension might be “describe” or “demonstrate”. Again, “understand” is no good — you need them to say (that is, describe) or do (that is, demonstrate) something that proves to you that they’ve understood. And then, at a higher degree of difficulty, a participant might “explain” or “organize”; at a higher level still, they might “create” or “evaluate”. Whatever verb you choose to start your success criteria, the point is that you can observe whether or not a user has actually said or done whatever constitutes task success.

“By the end of this session…”

So, when you’re planning your next usability test, and you’re working on tasks, start by asking, “What should a user be able to do with (or say about) this design?” Then, you might write something like this: By the end of the session, the participant should be able to:

  • track three hours of time for a particular project;
  • generate an invoice to a client based on that tracked time;
  • describe the difference between tracking time and logging time.

Now you have three success criteria and, based on those, you’ve also got a pretty clear sense of what tasks you’ll need to give the participants. One caveat: success criteria aren’t quite the same as tasks. Tasks are written to be read to the participant, and might include some context, particularly if you’re steering them to find something in your prototype. For example:

Success criterion: Generate an invoice to a client based on that tracked time.
Task: “Now that you’ve tracked three hours on the Atlas project, show me how you would invoice Acme Products for your time.”

Pretty similar, obviously, but success criteria are for you and your team; the task is for the participant in the context of the usability session.

You’ll also notice that one of the success criteria above is about describing something, rather than completing a task. It might be a follow-up question to a task. These are handy for validating whether your design’s mental model is clear to users. I’ve seen users find their way through a task, but then describe to me a mental model of the app which is at odds with how it was designed. That’s task success for one participant, but more importantly, there’s an underlying mismatch between the design and that participant’s mental model. So, start with your success criteria, then write your tasks and follow-up questions based on your criteria.
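To make the mapping concrete, here’s one way the three criteria from earlier might pair up with tasks and follow-up questions in a test plan. The invoicing task is the example above; the other task and the follow-up question are hypothetical wordings, included only to illustrate the pairing:

  • Criterion: track three hours of time for a particular project. Task (hypothetical wording): “You’ve just spent three hours working on the Atlas project. Show me how you’d record that time.”
  • Criterion: generate an invoice to a client based on that tracked time. Task: “Now that you’ve tracked three hours on the Atlas project, show me how you would invoice Acme Products for your time.”
  • Criterion: describe the difference between tracking time and logging time. Follow-up question (hypothetical wording): “In your own words, what’s the difference between tracking time and logging time?”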

Stakeholders love success criteria

Stakeholders don’t necessarily care about your process, but they really care about the results. And if your presentation of the results is vague, they will be rightfully irritated. “The user managed to track a few hours, but we weren’t sure whether she understood that tracking time isn’t the same as logging it against a client…” Well, why aren’t you sure? Isn’t it your job to figure this out? You’re wasting their time, and not giving them clear direction on how to fix the UX problems — which is also your job, right?

Success criteria help you twice over: they clarify whether your design is really successful, and they make it easier to share those results. We’ve had some success tracking success criteria in a simple table, and colour-coding the results: we whip up a colour-coded table of results (green = success, red = failure) on our wiki. In the top row, we list participants; in the left column, we list our success criteria. It’s ugly, but quick and useful. This is easy to scan, shows pretty clearly where the problems are, and grounds the results in the experiences of actual participants. (There’s a rough sketch of this kind of table at the end of this section.) We also list a bullet-point summary of results and a list of usability problems and recommendations just beneath it. We’ll zero in on those problems and iterate until we believe they’re solved.

Your process might be a little different — maybe you’re a consultant handing over a report to a client, for example — but the benefits are the same.
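For illustration, here’s a rough plain-text sketch of that kind of table, using the three criteria from earlier. The participants and results are hypothetical, and Pass/Fail stands in for the green/red colour-coding:

  Success criterion                                       P1    P2    P3
  Track three hours of time for a particular project      Pass  Pass  Pass
  Generate an invoice based on that tracked time          Pass  Fail  Pass
  Describe the difference between tracking and logging    Fail  Fail  Pass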

Jeff Kraemer

Jeff Kraemer ran his first usability test back in 2001; this was before screen-recording software, so recording the test meant pointing a VHS video camera at the screen. Since then, he’s spent time specializing in content strategy and instructional design, but he really loves being a UX generalist. Previously at Workopolis and Usability Matters, Jeff is now Principal UX Designer at FreshBooks.
