For decades, automation has changed life tremendously across many industries, particularly work life. Yet we still work, don’t we? Perhaps the greatest surprise of the last waves of automation is how much they changed, rather than replaced, human work. Jobs rarely disappeared outright; they morphed into others, sometimes dramatically, and new ones appeared.
While experts expect machine learning and artificial intelligence (ML/AI) technology to replace about every other job – in some fields even more – people will certainly continue to interact with these new technologies in some way. It is well worth investigating what the experience of those people will be, and how they will adapt to the situation intelligent systems introduce. This adaptation process, and the various states along the way, will require constant, parallel changes in the interfaces users rely on to interact with technology. To build a solid user experience strategy, we need to anticipate this evolution.
So how will work change? Where we don’t know yet, how can we find out? And what does this mean for user experience design?
No system ever works always
If one lesson can be learnt about automation over the last decades, it’s that no system will always work. While operative work may be largely reduced or even eliminated, activities to set up systems and monitor their operation occupy an increasing part of the work day. On top of that, in case of any problems, someone will need to troubleshoot the system, analyze failures, fix errors, step in to maintain operations – all this often under time pressure.
Consider your IT department. Its very purpose is business process automation. What they actually do however is set up, monitor, and troubleshoot systems – all day long, and often enough on weekends. As long as systems work, they don’t require much attention. This is very different when they don’t.
Ironically, users experience automated systems mostly when they don’t work.
The main consequence of automation for workers is that some of their tasks will disappear, while an increasing share of their work shifts towards setup, monitoring, and troubleshooting. The knowledge gathered in their trade will remain relevant, as it needs to flow into smart tools and applications, into the processes that organize and operate them, and into how those tools interface with processes around and beyond them. Again, consider modern IT organizations: business “power” users increasingly take over configuration tasks so they can act directly and flexibly on changing business requirements. While they don’t personally move into IT, their job profile certainly does.
Setup, monitoring, and troubleshooting are therefore not mere edge cases, but instead form a crucial part of designing a satisfactory user experience. Let’s walk through each use case step by step:
Jill is a sales manager who wants her team to concentrate on deals with a high probability of closing, so they can focus their energy on these cases and increase their turnover. Machine learning can help her analyze deals early on and estimate a closing probability. What does Jill need to consider to set up such a system?
• Obviously, a sufficiently large database of deals is required to train the system. Where does she get access to this data?
• How can Jill make sure the data doesn’t contain too much noise or even systematic biases?
• Some success factors are outside the salesperson’s control, but others they can influence or address by changing their strategy. How can Jill make sure her team doesn’t give up too early? Which factors can inform their strategy beyond simple go/no-go decisions?
• Can Jill be sure that the factors predicting deal success are the same in the future as they were in the past? Which past cases need to be discarded because circumstances have changed and they are no longer valid?
Evidently, getting some data and training an algorithm once won’t be sufficient. What can Jill do to continuously update her case base? What tools and processes will she need?
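To make the point concrete, here is a minimal sketch of Jill’s problem in code. Everything in it is hypothetical – the `DealScorer` class, the idea of bucketing deals by industry, and the fixed-size window are illustrative assumptions, not a real product design. The rolling window is one simple answer to the staleness question above: old cases drop out automatically as circumstances change.

```python
from collections import deque

class DealScorer:
    """Hypothetical sketch: estimate closing probability from past deals.

    Deals are stored as (industry, closed) pairs. Only the most recent
    `window` cases are kept, so stale circumstances age out of the case
    base instead of distorting future predictions.
    """

    def __init__(self, window=1000):
        self.cases = deque(maxlen=window)  # oldest cases are discarded first

    def add_case(self, industry, closed):
        self.cases.append((industry, bool(closed)))

    def closing_probability(self, industry):
        relevant = [closed for (i, closed) in self.cases if i == industry]
        if not relevant:
            return None  # outside the system's experience: flag, don't guess
        return sum(relevant) / len(relevant)
```

Even this toy version surfaces Jill’s real questions: who feeds `add_case`, how large the window should be, and what the interface does when `closing_probability` returns `None` because the system has never seen a comparable deal.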
Machine learning and artificial intelligence can play a core role in reducing setup work (by mining classification rules, process pathways, decision criteria, etc.). Human users, however, will still have to set these systems up, select and maintain training data, connect them to other systems, and so on. Data stewardship – telling signal from noise, cleaning noisy data, dealing with sampling bias, etc. – plays an increasingly important role.
Obviously, many research questions emerge here: How can you organize effective and efficient data stewardship? How can new data be accessed and turned into training data? How can users contribute to machine training with proper feedback mechanisms? Which tools are appropriate, and what are the skills and knowledge required? How can you avoid systematic bias? How can you ensure good quality of data and data processing? And eventually, how can you create tools and user interfaces to make this work effective, efficient, and satisfactory?
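One of those questions – how to avoid systematic bias – can at least be partially supported by tooling. The sketch below is a hypothetical example of such a tool, not an established method: it compares how segments are represented in a training set against a known reference distribution and flags large deviations. The field name, the 10-point threshold, and the report format are all illustrative assumptions.

```python
def sampling_bias_report(records, key, reference, threshold=0.10):
    """Hypothetical bias check: compare segment shares in the data
    against a known reference distribution.

    `records` is a list of dicts, `key` names the segment field
    (e.g. "region"), and `reference` maps segment -> expected share.
    Returns segments whose observed share deviates from the expected
    share by more than `threshold` (as observed/expected pairs).
    """
    total = len(records)
    counts = {}
    for r in records:
        seg = r[key]
        counts[seg] = counts.get(seg, 0) + 1

    flagged = {}
    for seg, expected in reference.items():
        observed = counts.get(seg, 0) / total if total else 0.0
        if abs(observed - expected) > threshold:
            flagged[seg] = (observed, expected)
    return flagged
```

A report like this does not answer the harder design question – what the data steward should *do* about a flagged segment – but it shows the kind of feedback mechanism the research questions above are asking for.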
Ben is working at a local energy provider’s headquarters. Sometimes, meter readings reported to the provider look suspicious. Ben’s job is to figure out why – did something just get mixed up? Is a meter defective? Is someone trying to cheat? etc. ML can automate part of the process by identifying and evaluating possible causes from the context. Ben just needs to approve the system’s proposal. Will Ben’s job, which so far hasn’t been the most exciting in the world, now become excruciatingly boring?
Well, the answer will depend on the design of the system. Ideally, automation should focus on the part of the job that is boring, so Ben can focus on the interesting part. Humans are good at adapting to changing conditions, and quite often enjoy that. However, Ben needs to know exactly when he can rely on the system, when he cannot, and why. Note that this is a different information need from the ones he had previously.
Ben can only trust the system if he can tell whether it is within its operational range, and is operating correctly. Consequently, he will need to identify signals of impending trouble, both the ones modeled in the system, and those in the outside world. How can you efficiently communicate these signals to Ben? Ben will not always need to follow every step of the system’s reasoning, but he needs to understand whether or not he can trust it. We want to make sure Ben is neither too complacent nor too critical towards the system, so signals of reliable vs. unreliable operation need to be carefully balanced.
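The balance described above can be sketched as a simple routing rule. This is an illustrative assumption about how such a system might be structured – the function, its parameters, and the 0.9 cutoff are hypothetical – but it shows the design principle: the interface tells Ben *why* a case needs his attention, not just the system’s verdict.

```python
def route_reading(proposal, confidence, in_range):
    """Hypothetical routing sketch for Ben's review queue.

    `proposal` is the system's suggested explanation for a suspicious
    reading, `confidence` its self-reported certainty (0..1), and
    `in_range` whether the input resembles what the system was trained
    on. Returns (action, reason) so the UI can show the reason, keeping
    Ben neither too complacent nor too critical.
    """
    if not in_range:
        # Signals from outside the operational range trump any confidence score.
        return ("manual", "input outside the system's operational range")
    if confidence >= 0.9:
        return ("auto-approve", f"high-confidence proposal: {proposal}")
    return ("review", f"low-confidence proposal: {proposal} (reasoning shown)")
```

The interesting design work hides in the third argument: deciding when an input is “in range” is exactly the kind of signal that must be modeled and then communicated to Ben in a way he can calibrate his trust against.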
Trust in automated systems depends on many factors, both psychological and empirical. Experience with this and similar systems, the system vendor’s reputation, and the assumed intent of the system provider can all play a role – again, there are plenty of research topics to be addressed.
A more obvious research topic is to test whether, in a given situation, users are able to assess the system’s reliability. Note that for such a test a working system may not be necessary; often a Wizard of Oz test suffices. In Ben’s case, you might create a mock-up showing what various failure cases might look like in Ben’s display. Then, you can invite a bunch of test users and check directly whether they can correctly identify what’s wrong.
Whenever ML or AI functionality is introduced in a system, its capabilities change. When these capabilities don’t work as expected, again the humans involved must adapt to the new situation – sometimes under time pressure and high emotional stress. Supporting this process is a key task for user experience design.
Development teams rarely have anomalies in system operation on their radar before the system is actually deployed and in use. Fortunately, we can extrapolate from cases where this has already happened. Human factors research in medicine and aviation has identified common patterns in how humans respond to sudden failures or anomalies in automated systems, and many of these generalize to other domains.
Experienced surgical teams often develop specific routines to prepare for the failure or malfunction of automated systems. They go through these routines before each surgical procedure so they can react faster in case problems occur. Identifying and analyzing such procedures gives interesting insights into the task domain and shortcomings of the tools in use. Aviation crews use flight simulators for the same reason; in addition they also practice specific team procedures. Sometimes a user is not alone with their system: team communications show cooperation and communication needs that arise in spite of, in parallel to, or even because of automated systems. This shows that human users and automated systems need to be considered as a coherent functional entity, where each component assumes roles in response to the other’s capabilities.
It is instructive to consider the questions actual system users ask, and the remarks they make, when anomalies occur, for answering each of them is an explicit design task:
- “What is it doing now?”
- “What will it do next?”
- “How did I get into this mode?”
- “Why did it do this?”
- “Unless you stare at it, changes can creep in.”
- “Stop interrupting me while I am busy!”
- “How do I stop this machine from doing this?”
- “I know there is some way to get it to do what I want, but how?”
You will need to understand the contexts in which those questions and issues may arise, and what exactly a helpful answer would look like. Jill and Ben are in a very different context than a surgeon or a pilot, but they will have similar questions when their systems come up with strange classifications or even act on them in unexpected ways, creating business havoc. Taking over control in a crisis is a use case that requires careful research and design.
Note that a helpful answer to the questions above is not necessarily a conclusive one. Since troubleshooting often happens under time pressure, merely buying time, or safeguarding other, parallel processes, can be very helpful.
Is that all? Of course not!
Automation is a process. ML and AI systems become more and more capable, so the answers to the above questions will change over time (see also Vladimir Shapiro’s post about choosing the right level of automation). Also, the answers to these questions will change your understanding of the goals, scenarios and side conditions, and possible outcomes of your project, which may very well change your approach. Eventually, when solutions are found and products are being built, this will change the landscape of tools, people involved, and jobs to be done. So yes, you will have to start over, and iterate. The good news is that this process will help you build a product roadmap that is quite likely to last for years in today’s most innovative industry.