Tag: Explainable AI

UX for AI: Building Trust as a Design Challenge

How can I trust a digital assistant to perform tasks that are important to me, and what could a trusting relationship between a business user and a digital assistant look like in practice?
Vladimir Shapiro

Explainable Artificial Intelligence: It’s (Not) All About Language

How to make AI understandable. This blog is part of a series on intelligent system design. In our previous blog, Explaining System Intelligence, we looked at why it’s vital to explain the underlying models and reasoning behind AI algorithms to the user, and outlined what needs to be explained and when. Now we want to take […]
Annette Stotz

Explaining System Intelligence

One of the guiding design principles for intelligent systems is to empower end users. If we want people to trust machines, we need to share information about the underlying models and the reasoning behind the results of algorithms. This matters even more in business applications, where users are held accountable for every decision they make. By now, it's widely accepted that intelligent systems need to come with a certain level of transparency. There's even a new term for it: explainable AI. But that's just the beginning. As designers, we need to ask ourselves how explainable AI ties in with user interaction. What do we need to think about whenever we explain the results and recommendations that come from built-in intelligence? And how can we make it a seamless experience that feels natural to users?
Vladimir Shapiro