About a year ago, the International Usability and User Experience Qualification Board (UXQB) asked me to work as an examiner for their certification programme in usability testing. I happily agreed: apart from the honor of being asked, it was a wonderful opportunity to reflect on my own methodology, and to compare it with that of competent colleagues. First and foremost lesson learned: you can learn from everyone, from every single usability test session. Some of those learnings I’d like to share here – seemingly small details, but they can have a huge impact on your testing.

Setting Up the Test Station

As usability evangelists, we’re continuously preaching that you can test in a wide range of settings. You can, but do you have to? What if you have the choice?

Many usability tests are conducted with a moderator sitting down with test participants in front of a notebook. That’s a simple and valid setup, but is it a good one? Let’s step back for a minute and consider the setting: if the moderator wants to see anything, they need to get close enough to the screen – and consequently, pretty close to the test participant, right? I have seen many female test participants lean away from male moderators who didn’t have any questionable intentions but merely wanted to see what’s going on. Those situations became even more awkward when the moderator, attempting to help, grabbed the mouse from a test participant who didn’t want to let go immediately.

This is unnecessary. Just use a notebook with a separate monitor, keyboard, and mouse. The moderator sits in front of the notebook, with full control over the system via the notebook’s keyboard and touchpad – when needed. The test participant has a proper workstation where they can sit and work comfortably. As a moderator, you can control exactly when to share the screen and when not, simply by switching the display from the notebook screen to the external monitor and back. With a privacy shade mounted on the notebook’s screen, you can set up the system exactly as needed for the next task, without granting the test participant a sneak preview. You can even time exactly when the screen becomes visible to the test participant. Just switch displays and wait the one or two seconds the screen takes to light up – and off you go.

For mobile device testing, you also need a way to observe the device screen without getting embarrassingly close to the test participant. Screen grabber apps have the great disadvantage that they cannot show the test participant’s hand movements before they tap a button (or decide not to). As of today, a good solution is a webcam connected to the moderator’s notebook, where you can comfortably watch what’s going on, and use standard observation software for recording and streaming the picture to observers. For phones, we’re using a cheap webcam with manual focus (important: you want to focus on the screen, not the test participant’s hand) that is mounted together with the phone on a little acrylic glass sled the test participant can take in their hands. For tablets, we’re using an HD webcam mounted on a microphone stand that is set up to film the tablet from above. Here too, turn off autofocus and auto brightness – you want to see the screen, not the test participant’s head or hands.


A simple acrylic sled for mobile phone testing


HD webcam on a microphone stand for tablet testing


Presenting Tasks

Task wording is important – you need to control exactly which information you give to test participants. Words that appear on the screen are taboo: test participants will search first for the exact words you give them, not the ones they would spontaneously think of on their own. If you formulate your task instructions on the fly, such words are likely to sneak into what you say to the test participant – you give them a strong cue to the task solution, which can seriously spoil the test.

This leads many moderators to read written task instructions to the test participant. But how do they know whether or not the test participant actually understood what they are supposed to do? Other moderators therefore let test participants read the task descriptions aloud. This makes sure they read the task, but did they understand it? Further, even in developed countries there are many weak readers who may be very embarrassed by having to read aloud in front of observers. This doesn’t mean they can’t read; they just can’t read aloud fluently.

The answer is simple – ask test participants to read the written task description silently, and then to describe in their own words what the task is asking them to do. If you both agree on the task, you can start – if not, you can clarify. More often than not, using this procedure myself, I was able to catch and fix unclear or ambiguous task descriptions.

Sometimes, the terms on the UI are more straightforward and common than any alternative – you should still avoid them. If the user, when paraphrasing the task, uses a UI term, you are then free to use it too – after all, you can be sure the term was on the test participant’s mind without your interference.

Defining Starting Conditions

It is very tempting to “go with the flow” in a usability test and simply start the next task where you finished the previous one. Well, think about it. Designers spend a lot of effort providing guidance along the most likely paths users might take. Those paths begin somewhere, but most likely not where another activity ended. When starting a task where the last one ended, test participants may very well miss guidance that is in the perfectly right place, but not where the participant is right now.

This doesn’t mean that you always have to start from the home page. Just pick your task starting points consciously, to make sure they are valid.

Here it pays off when you set up your test station as described above: switch the display to notebook-only, and go to the defined starting point for the next task while the test participant reads the next task description. When you agree on the next task, switch back to the participant screen and resume testing.

Thinking Aloud

Asking test participants to think aloud while working on tasks is common practice. However, what sounds simple apparently isn’t. Many moderators ask test participants to describe their actions and comment on the UI. This puts the test participant in a “meta” mode – while struggling with the UI, they have to monitor their actions, and to formulate descriptions and judgments about their experience. Quite some multitasking, actually.

Hertzum, Hansen, and Andersen (2009) demonstrated in a study that this kind of “thinking aloud” can considerably affect the test participant’s behavior. Participants took more time solving tasks and experienced higher mental effort than when no think-aloud instruction was given. When test participants were asked to simply speak out what they were thinking at the moment, the effect disappeared. So thinking aloud is simple, if you keep it as simple as that.

Reporting Issues – or What?

Usability tests are about usability issues, of course, and those you find you report. Right? Wait. What exactly do you report? What you observed, the underlying usability problem, or how to fix it?

It’s great if you can report all three – but always start from the left, and don’t skip a step. Many customers asking for a usability test actually expect design guidance, and many usability professionals happily provide just that. However, keep in mind that you do not always know all the requirements and constraints relevant to deciding on a solution – your brilliant idea might then be dead wrong. Also, designers may be well aware of a usability problem but need to trade it off against other constraints – then your root cause analysis would be redundant. The real difference you make when you do usability testing is in your observations – that’s certainly something to talk about. Then, explain them. Then, propose solutions. Only then.

  • Clare Johnson   12 months ago

    Thanks very much for the suggestion in “Presenting Tasks” to let test participants read the tasks to themselves initially. At international events, our test scripts are in English, which makes sense, but is often our participants’ second or third language. It has often felt somehow awkward and unfair to make these folks read aloud. Next time will be different.