Do you need your users to work really fast with your UI? Have you reached the point where further automation is not an option? Will your users be highly trained, at the very end of the learning curve, able to operate the UI in their sleep, and still you need to squeeze the last milliseconds out of task cycle time? If so: welcome to the realm of high-speed user interfaces!

In most applications, optimizing feature discoverability and overall learnability is key, because training users typically is way more expensive than designing and building self-explanatory UIs. Not so in what I’d call high-speed UIs: when users are trained to the very end of the learning curve, they know perfectly well where controls and functions can be found. Other design goals gain priority: you want to minimize the time needed not to find controls, but simply to operate them. This post is about methods for doing just that.

The most important step in UI design is proper user research. In high-speed UIs as well, you need solid knowledge of your users’ needs, goals, and capabilities. In addition, here, you need a perfect understanding of the task. Typical tasks in high-speed UIs are intensive in data entry, clicking, and/or dragging operations. Cognitive operations may be part of the task, but typically users operate “on autopilot”.

Once you have done a proper task analysis, a fair number of design principles and tools can help you identify the best design solutions. Caution: there is a lot of science and math involved. I’ll briefly outline it – and then point you to a tool that handles and hides all this complexity for you, as it should be.

Fitts’ Law

Already in 1954, Paul Fitts published what was to become one of the most well-known and best-researched predictive models in human-computer interaction. Fitts’ law describes the time needed to select (i.e., click) a target depending on its distance and size:

MT = a + b x log2(2D/W)


  • MT is the average time to complete the movement
  • a and b are model parameters
  • D is the distance from the starting point to the center of the target, and
  • W is the width of the target measured along the axis of motion.

In design practice, this model often is paraphrased as “the larger and closer the target is, the faster it can be clicked”. While this is a truism, the work of Fitts and subsequent researchers goes way beyond – with a good quantitative prediction model, you can resolve tradeoffs between different design goals. For instance, you can decide whether cramming a smallish button close to the user’s current mouse cursor position is superior to placing a larger one where there is space for it. The decision whether to place a button in the context of usage or in a toolbar is bread and butter for every UI designer.
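The tradeoff can be resolved with a few lines of arithmetic. Here is a minimal sketch; the parameters a and b and the pixel dimensions are illustrative assumptions, not measured values (in practice you would calibrate a and b for your users and input device):

```python
import math

# Illustrative (assumed) Fitts' law parameters:
A = 0.1   # intercept in seconds
B = 0.15  # slope in seconds per bit

def fitts_mt(distance: float, width: float, a: float = A, b: float = B) -> float:
    """Predicted movement time in seconds: MT = a + b * log2(2D/W).
    distance D is measured to the target center, width W along the
    axis of motion."""
    return a + b * math.log2(2 * distance / width)

# Tradeoff: a small button near the cursor vs. a large one farther away.
near_small = fitts_mt(distance=80, width=16)    # 80 px away, 16 px wide
far_large  = fitts_mt(distance=400, width=120)  # 400 px away, 120 px wide
print(f"near/small: {near_small:.3f} s, far/large: {far_large:.3f} s")
```

With these (assumed) numbers, the large distant button actually wins over the cramped nearby one, which is exactly the kind of non-obvious result a quantitative model buys you.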

Fitts’ law also explains a common interaction design trick for increasing point-and-click efficiency. When mouse pointer movement is restricted (for instance, by the edge of the screen), moving the mouse further does not change the cursor position. When you place a button right at the edge of the area where the mouse pointer can move, the user doesn’t have to care whether or not she is moving the pointer out of the target – the target size in this direction is virtually infinite. This maximizes W in Fitts’ Law, thereby reducing movement time MT.
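You can see the edge effect directly in the formula: as the effective width grows toward 2D, the index of difficulty log2(2D/W) falls toward zero and MT approaches the intercept a. A small numeric check, again with assumed parameters:

```python
import math

a, b = 0.1, 0.15  # assumed Fitts' law parameters (seconds, seconds/bit)
distance = 400    # px to the target center

# Effective target width grows as the target is pinned against an edge.
results = {w: a + b * math.log2(2 * distance / w) for w in (20, 80, 320, 800)}
for width, mt in results.items():
    print(f"W={width:>3} px -> MT={mt:.3f} s")
```

At W = 2D the predicted movement time collapses to the intercept a alone; note that the original Fitts formulation is not meant to be extrapolated beyond that point.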

Hick’s Law

In high-speed UIs, you want to provide shortcuts, and to avoid screen changes (see also below). Does this mean that you should cram as much functionality as possible into one view? Hick’s law tells you when this helps, why, and when exactly you might be overdoing it.

Hick’s law describes the time needed to choose between alternatives of equal probability. Originally, the law was targeted at simple motor decisions, such as hitting a number key on a numerical keypad or hitting a “yes” or “no” button in a psychological experiment. In short, Hick’s law says:

T = b x log2(n+1)

where T is the average reaction time required to choose between n equally probable alternatives; b is a constant.

Somewhat oversimplified: reaction time increases with the number of choices, but only at a logarithmic rate. Choices can be buttons, commands, menu items, or items on a navigation bar. When designing a high-speed UI, you need to trade off the benefit of adding a function or shortcut against its cost, as described by Hick’s law.
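Because the cost grows only logarithmically, one flat view with many choices can beat splitting the same choices across nested levels. A minimal sketch; the constant b is an illustrative assumption:

```python
import math

b = 0.15  # assumed Hick's law constant, seconds per bit

def hick_t(n: int) -> float:
    """Average decision time T = b * log2(n+1) for n equally
    probable alternatives."""
    return b * math.log2(n + 1)

# One flat menu of 16 commands vs. two nested levels of 4 each:
flat   = hick_t(16)             # one decision among 16
nested = hick_t(4) + hick_t(4)  # two successive decisions among 4
print(f"flat: {flat:.3f} s, nested: {nested:.3f} s")
```

Under these assumptions the flat menu wins on decision time alone, and the nested variant would additionally pay the screen-change cost discussed below.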

Common Efficiency Traps

Fitts’ and Hick’s laws deal with the time needed to operate controls on a single screen. Well, there are more things to consider that may cost a disproportionate amount of time:

  • Switching the input medium, i.e. from keyboard to mouse and back, is extremely costly. Grabbing the mouse, wiggling it to find the pointer, getting your fingers on the buttons – or the other way around, placing hands on the keyboard, glancing down to find the right finger positions, glancing back up to re-locate the input focus and remember what you wanted to type in the first place – all this takes at least one second, each time, either way.
  • Screen changes require the user to re-orient, scan the new screen, and focus on what’s next. This process also takes a good second, if not more.
  • Making the user think – to collect their thoughts in the first place, recall data from memory and consider them, calculate, decide – is just about the worst cost factor in terms of time. Cognitive load theory makes an interesting distinction between intrinsic, extraneous, and germane cognitive load. Some thought processes are needed for the job (intrinsic), some are beneficial for getting insights (germane), and some are needed merely to deal with the material (extraneous). You want to minimize the extraneous part – things like searching for buttons in unexpected places, understanding unclear labels, etc. This is nitty-gritty usability work – experienced usability practitioners can help you a lot here, but there is no real alternative to usability testing with real users. In one of my next posts, I shall describe a methodology for extracting the relevant information from task completion times, so you can at least determine what proportion of task completion time is due to cognitive load.
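These fixed costs can be folded into a back-of-the-envelope time budget for a task flow. The one-second figures for medium switches and screen changes come from the points above; the task steps, keystroke time, and click time are illustrative assumptions:

```python
# Per-event costs in seconds.
MEDIUM_SWITCH = 1.0   # keyboard <-> mouse (from the text: "at least one second")
SCREEN_CHANGE = 1.0   # re-orientation after a screen change (from the text)
KEYSTROKE     = 0.2   # per key press, assumed expert typing rate

# A hypothetical order-entry flow: type 12 chars, grab the mouse,
# click 'next', new screen appears, return to keyboard, type 8 chars.
steps = [
    ("type order number",  12 * KEYSTROKE),
    ("switch to mouse",    MEDIUM_SWITCH),
    ("click 'next'",       0.6),           # assumed Fitts' law time
    ("screen change",      SCREEN_CHANGE),
    ("switch to keyboard", MEDIUM_SWITCH),
    ("type quantity",      8 * KEYSTROKE),
]
total = sum(t for _, t in steps)
overhead = 2 * MEDIUM_SWITCH + SCREEN_CHANGE
print(f"total {total:.1f} s, of which {overhead:.1f} s is pure switching overhead")
```

Even in this toy flow, roughly 40% of the cycle time is switching overhead that better keyboard support could eliminate.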

Putting it Together: CogTool

Fortunately, in order to make use of all these concepts and ideas, you don’t have to model and calculate all by yourself. CogTool is a free “cognitive crash dummy” developed at Carnegie Mellon University, and maintained by a lively community of researchers and UI design practitioners. Based on the most recent research in human-computer interaction, it calculates an estimate for the time a trained user would need to perform a specified task, on a specified UI.

CogTool modeling is surprisingly simple. First, you create a model of the UI – you can use screenshots of a real UI, or mere sketches from a design session. Relevant controls, such as input fields, dropdown menus, buttons etc., are added to the screens as “widgets”. Navigation between screens, or changes of a screen’s appearance, are modeled as “transitions”. Next, you can model the various task flows by going through the screens step by step, much like a user would do it.

CogTool takes the UI and task models and considers things like the relative positions and sizes of widgets (Fitts’ law), number of widgets and menu options (Hick’s law), input media switches, and screen changes. It makes realistic assumptions about users’ thought processes, which you can overrule with your own assessment, if you have better data. Then, taking into consideration that some processes can run in parallel (for instance, grabbing the mouse while shifting gaze towards a target), CogTool uses sophisticated algorithms to add up the time needed to go through the task.
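CogTool’s ACT-R-based engine is far more sophisticated than anything you would hand-roll, but the flavor of the summation can be sketched with its much simpler ancestor, the Keystroke-Level Model of Card, Moran and Newell (1983). The operator times below are the classic published averages; the example task is hypothetical:

```python
# Classic Keystroke-Level Model operator times (Card, Moran & Newell, 1983),
# in seconds. A drastically simplified ancestor of CogTool's engine.
KLM = {
    "K": 0.2,   # keystroke or button press (average skilled typist)
    "P": 1.1,   # point with the mouse at a target (Fitts-dependent average)
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for an action
}

def klm_time(ops: str) -> float:
    """Sum operator times for a sequence like 'M K K K H P K'."""
    return sum(KLM[op] for op in ops.split())

# Hypothetical task: think, type a 3-char code, move hand to mouse,
# point at 'Save', click.
print(f"{klm_time('M K K K H P K'):.2f} s")
```

CogTool goes well beyond this, placing mental operators automatically and overlapping parallel processes, but the basic idea of summing empirically grounded operator times is the same.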

Learning to use CogTool at a basic level takes less than a day. Modeling a UI with two or three simple tasks may take around half a day; adding a task to the model, some 15 minutes.

CogTool’s estimates can be surprisingly accurate when experienced modelers are at work. However, this is not the most important application case. The real power of CogTool is in comparing design alternatives. First, task modeling in CogTool forces you to really think about the respective task flows in detail, which is an extremely educational exercise for every interaction designer. Second, if the same person builds the models to be compared, all parameters that are at the modeler’s discretion (e.g., how long a thought process takes) will be applied consistently between alternatives. Third, once a model is built, exploring alternatives takes only a fraction of the time needed for the initial setup. So instead of lengthy design discussions, you add your ideas to the model, hit the button, and check the results.

Mind: user modeling in all its sophistication is never an excuse not to conduct proper usability tests. Every model is based on assumptions about users which may or may not hold. CogTool, however, lays open many of these assumptions, is based on solid empirical research, and produces extremely useful results in a reasonable amount of time.

Just as Kurt Lewin said: nothing is more practical than a good theory.

  • Anonym  5 years ago

    Loved the article. However, is there any research evidence that links these to more user access time or an increase in the number of user hits to websites?

    • Bernard Rummel   5 years ago

      Frankly, no.
      Optimizing websites with cognitive modeling will actually decrease access time, by reducing the time people waste on the site merely operating the controls. Makes you think of the value of access time as a web metric 😉
      Also, mind that CogTool models time-on-task for experienced users at the end of their learning curve. This is not the typical web case. If you want to increase hit and conversion rates, time would be my lowest priority to think of – when dealing with occasional users, discoverability of the main success path is the most critical factor to optimize.