Verbs are magical

The book that taught me about learning objectives, George Piskurich’s Rapid Instructional Design, offers a handy list of behaviours with which to start your success criteria. For example, the objectives for comprehension might be “describe” or “demonstrate”. Again, “understand” is no good — you need them to say (that is, describe) or do (that is, demonstrate) something that proves to you that they’ve understood. At a higher degree of difficulty, a participant might “explain” or “organize”; at a higher level still, they might “create” or “evaluate”. Whatever verb you choose to start your success criteria, the point is that you can observe whether or not a user has actually said or done whatever constitutes task success.
“By the end of this session…”

So, when you’re planning your next usability test and you’re working on tasks, start by asking, “What should a user be able to do with (or say about) this design?” Then, you might write something like this:

By the end of the session, the participant should be able to:
- track three hours of time for a particular project;
- generate an invoice to a client based on that tracked time;
- describe the difference between tracking time and logging time.
Stakeholders love success criteria

Stakeholders don’t necessarily care about your process, but they really care about the results. And if your presentation of the results is vague, they will be rightfully irritated. “The user managed to track a few hours, but we weren’t sure whether she understood that tracking time isn’t the same as logging it against a client…” Well, why aren’t you sure? Isn’t it your job to figure this out? You’re wasting their time, and not giving them clear direction on how to fix the UX problems — which is also your job, right?

Success criteria help you twice over: they clarify whether your design is really successful, and they make it easier to share those results. We’ve had some success tracking success criteria in a simple table and colour-coding the results, like the sketch below. We whip up a colour-coded table of results (green = success, red = failure) on our wiki: in the top row, we list participants; in the left column, we list our success criteria. It’s ugly, but quick and useful. It’s easy to scan, shows pretty clearly where the problems are, and grounds the results in the experiences of actual participants. Just beneath it, we add a bullet-point summary of the results and a list of usability problems and recommendations. We’ll zero in on those problems and iterate until we believe they’re solved.

Your process might be a little different — maybe you’re a consultant handing over a report to a client, for example — but the benefits are the same.
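For illustration, a stripped-down version of that wiki table might look something like this (the participants and results here are entirely hypothetical, with ✓/✗ standing in for the green/red colour-coding):

| Success criterion | Participant 1 | Participant 2 | Participant 3 |
| --- | --- | --- | --- |
| Track three hours of time for a project | ✓ | ✓ | ✗ |
| Generate an invoice from that tracked time | ✓ | ✗ | ✗ |
| Describe the difference between tracking and logging time | ✗ | ✓ | ✓ |

Any ✗ in the grid points to a problem worth adding to the list of usability issues and recommendations beneath the table.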
Jeff Kraemer ran his first usability test back in 2001; this was before screen-recording software, so recording the test meant pointing a VHS video camera at the screen. Since then, he's spent time specializing in content strategy and instructional design, but he really loves being a UX generalist. Previously at Workopolis and Usability Matters, Jeff is now Principal UX Designer at FreshBooks.