One of the guiding design principles for intelligent systems is to empower end users. If we want people to trust machines, we need to share information about the underlying models and the reasoning behind an algorithm's results. This matters even more in business applications, where users are held accountable for every decision they make.
Meanwhile, it's widely accepted that intelligent systems need to come with a certain level of transparency. There's even a term for it: explainable AI. But that's just the beginning. As designers, we need to ask ourselves how explainable AI ties in with user interaction. What do we need to consider whenever we explain the results and recommendations that come from built-in intelligence? And how can we make it a seamless experience that feels natural to users?