How to make AI understandable

This blog is part of a series on intelligent system design. In our previous blog, Explaining System Intelligence, we looked at why it's vital to explain the underlying models and reasoning behind AI algorithms to the user, and outlined what needs to be explained and when. Now we want to take […]
One of the guiding design principles for intelligent systems is to empower end users. If we want people to trust machines, we need to share information about the underlying models and the reasoning behind algorithmic results. This matters even more in business applications, where users are held accountable for every decision they make.
By now, it's widely accepted that intelligent systems need to come with a certain level of transparency. There's even a term for it: explainable AI. But that's just the beginning. As designers, we need to ask ourselves how explainable AI ties in with user interaction. What do we need to consider whenever we explain the results and recommendations that come from built-in intelligence? And how can we make it a seamless experience that feels natural to users?
There's a common misconception that artificial intelligence inevitably means 100% automation. Movie producers would have us believe that AI is going to control absolutely everything – a Skynet scenario à la Terminator. So, if you design and implement an AI system, should you fear this scenario or simply ignore it? Is there a roadmap for the intelligent automation of a specific system?
A properly designed intelligent SAP system extends the cognitive capabilities of its human users. As with past generations of tools, our aim should be to empower users and improve the outcome of human work. Based on our experience in recent projects, we have developed several design principles that we would like to share with you.