In an agile, fast-paced development environment, getting user feedback on your design in time is a logistical challenge. If you want users to meet design in fast iteration cycles, you need two components just in time: users, and design. Suppose you have access to the former, say because you arranged early for meetings with appropriate interviewees, or because you are working on an internal project. What about the design part? When design and development proceed so fast that even an electronic prototype would take too long to build, let alone test, what can you do to make sure the UI design meets the users' needs? And if it doesn't, how can you make sure the design team learns about this in time?

There is no simple solution to this question, since several factors come into play. However, there is a common key to minimizing turnaround times in getting user experience feedback: design always materializes in artifacts, such as screen sketches, wireframes, or Photoshop images. If you want quick turnaround, make sure you can reuse as much of these artifacts as possible for getting direct user feedback. Here's an example.

We’ve recently been running a number of early design validations on the basis of PowerPoint slideshows. The design team had created presentations to visualize design ideas. Each slide contained a life-size sketch of the screen, together with annotations describing what should happen when a user clicked here or there. Such slideshows are fairly common in Design Thinking projects, where design is done along so-called user stories.

We simply took those slides and removed the annotations. Whenever there was a sequence of screens, we would remove all pointers indicating where to click. In addition, and this is the hard part of the exercise, we made sure all screens were consistent and coherent: the same button needs to have the same label and position on each screen (yes, when design moves fast, so do buttons), and all numbers on the screens need to be correct (that is, calculations, numbers of selected items, etc.). Also, as some might regret, you need to replace any funny names with realistic ones, so as not to distract interviewees.
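Checking this kind of consistency by hand gets tedious once you have more than a handful of slides. As a rough illustration (not part of the original workflow), a small script can flag buttons that move around, assuming you have extracted each screen's buttons into a simple label-to-position mapping:

```python
# Hypothetical consistency check: flag buttons whose position shifts between screens.
# Assumes you have extracted each screen's buttons as {label: (x, y)} dictionaries;
# the data format is an invented convention for this sketch.

def find_moving_buttons(screens):
    """Return {label: set of positions} for buttons that appear
    at more than one position across the given screens."""
    seen = {}
    for screen in screens:
        for label, pos in screen.items():
            seen.setdefault(label, set()).add(pos)
    return {label: positions for label, positions in seen.items()
            if len(positions) > 1}

screens = [
    {"Save": (500, 400), "Cancel": (600, 400)},
    {"Save": (500, 400), "Cancel": (610, 400)},  # Cancel drifted by 10 px
]
print(sorted(find_moving_buttons(screens)))  # → ['Cancel']
```

The same idea extends to labels and on-screen numbers: collect each value once per screen, then report anything that isn't identical everywhere.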

The “testing” procedure for such a slideshow is simple. The designer intends something to happen when the user does something specific. Consequently, you can simply ask test participants what they would do to achieve a certain result on the current screen, and have them show you where they would click. If you set up your slideshow so that it doesn’t advance on click, you can safely let interviewees use the mouse for pointing (by the way, this also works in remote phone interviews with screen sharing tools). Finally, we would write our questions into the notes section of the slides. When you run a PowerPoint presentation from your notebook on an external monitor, you can see the questions (that is, the notes) but the interviewee can’t. Ready to go and see users!

Too simple? Jeff Sauro from measuringusability.com reports that when users’ first click is down the right path, 87% eventually succeed; when they click down an incorrect path, only 46% eventually succeed. So the information about where people would click to achieve something is extremely valuable, and easy to get.
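To see why first clicks matter so much, here is a quick back-of-the-envelope calculation with Sauro's numbers. If a design leads, say, 60% of users down the right path on their first click (the 60% figure is made up for illustration), the expected completion rate follows directly:

```python
# Back-of-the-envelope task success estimate from first-click data,
# using the 87% / 46% figures reported by Jeff Sauro.
P_SUCCESS_GOOD_FIRST_CLICK = 0.87
P_SUCCESS_BAD_FIRST_CLICK = 0.46

def expected_success(p_first_click_right):
    """Expected overall task completion rate, given the share of users
    whose first click is down the right path."""
    return (p_first_click_right * P_SUCCESS_GOOD_FIRST_CLICK
            + (1 - p_first_click_right) * P_SUCCESS_BAD_FIRST_CLICK)

# Hypothetical design where 60% of first clicks are on the right path:
print(round(expected_success(0.60), 2))  # → 0.71
```

In other words, every first click you move from the wrong path to the right one buys you roughly 41 percentage points of success probability for that user.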

But this is only part of the story. The other, equally important part is to ask interviewees what they expect to happen when they click on a certain UI element. Be ready for surprises here, and take careful notes on what interviewees say before telling them anything. The biggest risk in this interview method is to switch into demo mode, tell interviewees all about this cool design, and spoil a wonderful opportunity for learning. Whenever something is unclear to interviewees, that is data: the very reason why you’re sitting there.

Another clear don’t is to ask interviewees how something should be designed. If they spontaneously tell you anyway, ask them why exactly they think their design idea would help them, and learn about the actual requirement behind it. For instance, an interviewee might suggest that something should blink. What they need is something that catches their attention; having UI elements blink is a common but very bad solution to that problem.

All user research and usability testing is worthless if your valuable insights are not communicated, in time, to the design team. Well, in this approach, we already have a PowerPoint presentation, and the questions we asked are documented for each screen. All we need to do is put callouts on the screens, for instance “6/7 clicked here” or “4/7 expected this to happen, 2/7 that, 1 had no idea”. And off you go to the results meeting.
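If you tally clicks per slide during the interviews, turning the raw counts into callout text is easy to automate. A minimal sketch, assuming you record counts per answer option (the names and the tally format are invented for illustration):

```python
# Turn per-slide click tallies into callout strings like "6/7 clicked here".
# The tally format (option label -> count) is an assumed convention.

def make_callouts(tally, total):
    """Format one callout line per option, e.g. '6/7 clicked the Save button'."""
    return [f"{count}/{total} clicked {label}" for label, count in tally.items()]

tally = {"the Save button": 6, "the menu": 1}
for line in make_callouts(tally, total=7):
    print(line)
# 6/7 clicked the Save button
# 1/7 clicked the menu
```

Paste the resulting lines into text boxes on the relevant slides, and the findings travel in the same artifact the design team already works with.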

Finally, let the design team make the decisions. It is good scientific practice to separate observation from interpretation, and you should follow this practice. It’s the design team’s responsibility to know the requirements and solution opportunities in their entirety, and to integrate your findings into the overall picture.

The good news here is that with this approach, you can catch a good deal of usability problems in the making, while the design team is still in design mode and can easily respond to user experience feedback. Bear in mind, however, that this fast-track method doesn’t replace proper usability engineering. Having a usability method that catches 75% of potential problems is certainly cool. But seriously, would you drive a car with 75% of its wheels?
