A well-designed personality is not only key to getting users to accept a digital assistant; it also has fundamental implications for our increasingly digital lives. We build relationships with other humans through emotional connection rather than mere information exchange. Research shows that we act, and remain attached, because of emotion rather than reason. Clearly, a digital assistant’s personality must be consistent across scenarios and channels. But beyond that, it must also forge an emotional bond with users and adjust to their personality and to the circumstances of the interaction.

Understanding human conversation

How we talk to machines is influenced by how we have traditionally interacted with them. But this will evolve. Just as we change how we talk to match our human counterparts, humans and machines will learn to adapt to each other’s communication styles.

Linguistic alignment is the tendency of humans to mimic their conversational partner. This is an important consideration when designing digital assistants. For example, a digital assistant can learn user-specific expressions or words and, in future interactions, understand and use that vocabulary again. Imagine a digital assistant asks a user, “Shall I create this leave request for you?” Instead of saying, “yes,” the user replies, “please.” The assistant, however, expects a yes or no answer. So, the assistant asks, “Do you mean yes or no?” If the user confirms with “yes,” then the assistant can learn that this individual uses “please” as a synonym for “yes” and in the future will not ask for clarification but immediately take “please” – in that “yes-or-no” context – as a confirmation. This is a relatively simple example.
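
To make the mechanics of this example concrete, here is a minimal sketch in Python of how per-user vocabulary learning could be modeled. The UserLexicon class, its methods, and the default word lists are illustrative assumptions, not the API of any existing assistant platform.

```python
# A minimal sketch of per-user vocabulary learning for yes/no confirmations.
# All names here are hypothetical, invented for illustration.

DEFAULT_YES = {"yes", "yeah", "sure", "ok"}
DEFAULT_NO = {"no", "nope", "cancel"}


class UserLexicon:
    """Words an individual user has been confirmed to use for yes/no."""

    def __init__(self):
        self.learned_yes = set()
        self.learned_no = set()

    def interpret(self, reply):
        """Return 'yes', 'no', or None if the reply is still ambiguous."""
        word = reply.strip().lower()
        if word in DEFAULT_YES or word in self.learned_yes:
            return "yes"
        if word in DEFAULT_NO or word in self.learned_no:
            return "no"
        return None  # None triggers a clarification question

    def learn(self, word, meaning):
        """Record the user's clarified meaning for future interactions."""
        target = self.learned_yes if meaning == "yes" else self.learned_no
        target.add(word.strip().lower())


# The "please" dialog from above:
lexicon = UserLexicon()
assert lexicon.interpret("please") is None   # assistant asks: "Do you mean yes or no?"
lexicon.learn("please", "yes")               # user clarifies with "yes"
assert lexicon.interpret("please") == "yes"  # next time, "please" counts as confirmation
```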

When we look at a dialog between two people, several rules are (subconsciously) followed. These rules can be quite complex and subtle, but let’s take turn-taking as an example. In a conversation, we speak alternately, reacting to what has just been said. There are cultural differences regarding, for instance, how turn transitions are signaled or how much overlap is accepted. Think of the differences in conversational styles between Italians and Germans: Italians are famous for interrupting each other with gusto during a conversation, whereas in Germany, doing so would be considered very impolite. Another important consideration is how we adapt to the person we are talking to. In a “real” conversation between two humans, this happens not only through the words we use, but also through body language. Mostly it happens without us noticing that we are doing it. The more we like our conversational partner, the more we mimic that person.

Taking these unwritten rules into account is so important when building conversational UIs because these phenomena are what make a conversation natural. And a conversation only feels natural if the rules – especially the subconscious ones – are followed.

People will also employ linguistic adaptation automatically when talking to a digital assistant. We humans learn quickly how to adapt our speech to get exactly what we want. But an intelligent system needs to adapt as well. Some of these adaptations can be rather “simple,” like expanding the assistant’s vocabulary to match the language of the user.

However, to be successful, the mutual adaptation needs to go further than just words. The digital assistant should ideally adapt to the user’s verbosity, tone, preferences, and context. For example, if the user always asks why and tends to be interested in lengthy explanations, the digital assistant can offer the rationale automatically when answering the user’s initial question. If the user is creating a request for a sick day, the system should be brief and to the point, “knowing” that if the user is sick, getting the task done quickly matters more than a thorough explanation.
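
One way to picture this kind of adaptation is a simple rule that weighs a learned user preference against the current intent. The following sketch is purely illustrative: the UserProfile structure, the compose_reply function, and the rule that a sick-leave request forces brevity are assumptions, not a description of any shipping assistant.

```python
from dataclasses import dataclass


@dataclass
class UserProfile:
    # Learned over time, e.g. from the user frequently asking "why?"
    prefers_explanations: bool = False


def compose_reply(answer, rationale, profile, intent):
    """Pick a verbosity level from the user's preference and the situation."""
    # Sensitive, time-critical intents override the user's usual preference:
    # when the user is sick, getting the task done quickly matters most.
    if intent == "sick_leave_request":
        return answer
    if profile.prefers_explanations:
        return answer + " " + rationale
    return answer


profile = UserProfile(prefers_explanations=True)

# A user who likes explanations gets the rationale up front ...
print(compose_reply("Your leave request was created.",
                    "It was routed to your manager for approval.",
                    profile, intent="vacation_request"))

# ... but a sick-leave request stays brief, regardless of that preference.
print(compose_reply("Your sick leave was recorded. Get well soon!",
                    "It was routed to your manager for approval.",
                    profile, intent="sick_leave_request"))
```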

Finding the right balance

Mutual adaptation, however, has its limits. A digital assistant should remain polite, regardless of how the user behaves. For example, if you say something rude to Amazon’s Alexa, she is likely to reply, “That’s not very nice to say,” in a tone more sad than angry. Apple’s Siri replies, “I’m just trying to help.” You can see that instead of applying mutual adaptation, which would call for an equally impolite answer, both assistants stay neutral.
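
One way to think about this behavior is as a politeness floor on mutual adaptation: the assistant may mirror a friendly tone, but never a hostile one. The sketch below illustrates the idea; the sentiment scale, the thresholds, and the tone labels are invented for illustration.

```python
def choose_tone(user_sentiment):
    """Map the user's sentiment (-1.0 hostile .. +1.0 friendly) to a reply tone.

    Mirroring only applies at and above neutral; rudeness is answered
    politely and neutrally, never in kind.
    """
    if user_sentiment < -0.5:
        return "neutral-deescalating"  # e.g. "That's not very nice to say."
    if user_sentiment > 0.5:
        return "warm"                  # mirror the user's friendliness
    return "neutral"


assert choose_tone(-0.9) == "neutral-deescalating"  # rude input, polite reply
assert choose_tone(0.8) == "warm"                   # friendly input is mirrored
assert choose_tone(0.0) == "neutral"
```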

Earlier versions of Siri and Alexa did not react at all when users said “thank you.” This has now changed. Thanking either assistant results in a more human response like “You’re welcome” or “Of course.” This is an important change: We might have aligned with the assistants’ behavior by speaking to them in commands for the sake of expediency, thereby sacrificing politeness, but the assistants are evolving, and part of this evolution is politeness.

Considering that digital assistants are poised to become our integral partners at work, in our homes, and on the go, this is an important development. Not “just” user adoption but also the evolution of our language – and our children’s language – is at stake.

As digital assistants become more pervasive and the relationship between humans and machines intensifies, software vendors must deeply understand the mechanisms and implications of discourse to be successful.
