There’s a common misconception that artificial intelligence inevitably means 100% automation. Movie producers would have us believe that AI is going to control absolutely everything – a Skynet scenario à la Terminator. So, if you design and implement an AI system, should you fear this or just ignore it? Is there a roadmap for intelligent automation of a specific system?
Let’s take Paul, for example. Paul is a manufacturing engineer working on aircraft turbines. Each production order may take up to several weeks, and many changes occur along the way. There is a product engineer who constantly tunes the design for quality and cost reasons; there are dozens of workers who must be routed correctly through the production hall; and, of course, there is a Big Demanding Customer setting deadlines, threatening penalties and changing requirements.
During his day, Paul makes hundreds of micro-management decisions about these changes: apply them or ignore them? If he applies them, should they propagate downstream, upstream, or in both directions? Everything seems to be under Paul’s control, but at the end of the day Paul is exhausted. Is there a way to help him?
The Skynet shadow
Imagine what happens in the case of full automation. Paul’s company has just implemented a new software system that can “learn” from historical and environmental data and recommend a decision with a certain level of confidence.
No-go: A manufacturing engineer is notified of changes to a production order. The system is “smart” and takes over.
Wow. This is probably the typical scenario every customer we have talked to is afraid of: the system delays the production of a complex aircraft engine and rolls back all operations due to changes received overnight from the production team.
So, the system is 99.8% confident of its decision and redirects the workers to other orders; the lights are off; this evening is Paul’s regular sync with the Big Demanding Customer… Good luck, Paul!
If the customer calls you the next morning, what will you tell them? “We delayed your production order because of… 99.8%”?
We can do it better
The same morning in a parallel universe. Paul checks his system for changes.
Now Paul is confident enough to delegate trivial decisions to the system. He still wants to approve them, but as his trust in the system grows, he will probably only need to review them from time to time.
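This delegation pattern can be sketched in a few lines of code. Everything here is hypothetical and illustrative, not taken from any real product: the `Recommendation` class, the `trivial` flag, and the confidence threshold are all assumptions about how such a system might classify incoming changes.

```python
# Hypothetical sketch: routing system recommendations based on the
# user's delegation settings. Names and thresholds are illustrative.
from dataclasses import dataclass


@dataclass
class Recommendation:
    change_id: str
    action: str          # e.g. "apply_downstream"
    confidence: float    # 0.0 .. 1.0
    trivial: bool        # pre-classified as low-impact

def route(rec: Recommendation, delegate_trivial: bool,
          threshold: float = 0.95) -> str:
    """Decide whether a recommendation is auto-applied or queued for review."""
    if delegate_trivial and rec.trivial and rec.confidence >= threshold:
        return "auto-apply"          # the user has delegated this class of decisions
    return "queue-for-approval"      # everything else waits for a human
```

Note that even a 99.8%-confident but non-trivial recommendation still lands in the approval queue; only decisions the user has explicitly delegated bypass review.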
Levels of automation
What have we done here? We have introduced two different levels of automation, applied depending on the use case and the user’s trust in the system. They allowed us to introduce and grow the AI’s complexity gradually without excluding users or losing their trust.
Of course, other scenarios may call for more such levels. The number of levels, and the final level to be reached, is always defined by the company.
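One way to make such a ladder of levels concrete is to encode it as configuration. This is a minimal sketch under stated assumptions: the four level names, the confidence threshold, and the `allowed_action` mapping are all invented for illustration, since the actual ladder is, as noted above, defined by each company.

```python
# Hypothetical ladder of automation levels, from notification-only
# up to full autonomy (the "no-go" scenario described earlier).
from enum import IntEnum


class AutomationLevel(IntEnum):
    NOTIFY_ONLY = 1       # system surfaces changes; the human decides everything
    RECOMMEND = 2         # system proposes decisions; the human approves each one
    AUTO_WITH_REVIEW = 3  # system acts on trivial cases; the human reviews periodically
    FULL_AUTO = 4         # system acts autonomously

def allowed_action(level: AutomationLevel, confidence: float,
                   threshold: float = 0.95) -> str:
    """Map the configured level and model confidence to what the system may do."""
    if level >= AutomationLevel.AUTO_WITH_REVIEW and confidence >= threshold:
        return "act"
    if level >= AutomationLevel.RECOMMEND:
        return "recommend"
    return "notify"
```

Ordering the levels with `IntEnum` lets each use case start low and be promoted one step at a time as the user’s trust grows, without changing any of the surrounding logic.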
In the end, it comes down to striking the right balance between human control and the autonomy of an intelligent system. The right mix fosters efficient collaboration and builds trust. Where full automation is not feasible, we should aim for greater efficiency: by combining automation with better use of existing information, transparency, and learning effects, we can help users obtain the same result with fewer steps.
We’re just scratching the surface. So, stay tuned for more interesting topics and add your thoughts in the comments section below!