Design for Multi-modality

There are a few things to keep in mind when you’re designing multi-modal interactions:

  • Know your users
  • Understand the characteristics of each modality
  • Know how to get started designing multi-modal interactions

Know Your Users

Get to know your users before you create a digital assistant and select an interaction method. As the designer, you should understand user goals, preferences, and behaviors. In a multi-modal design, a problem can have various solutions and interaction methods. Showing users their options lets them choose their preferred path in the current context.

Multi-modal Response

In this example, when the user asks about her quarterly bonus, the digital assistant gives her several options.

Multi-modal Response


Who and Where Are Your Users?

When you define a user persona, you create a user profile that includes age, job title, job responsibilities, goals, pain points, needs, and location. It’s important to know how users interact with the digital assistant on their devices. Where are they? Are they in the car, in the office, at the airport, cliff diving? Can they type, or can they use voice commands? A user’s location determines what type of interaction they have with your digital assistant.

Voice

In the Car

In the car, the user’s hands are busy driving and their eyes are watching the road, but their voice and ears are free to interact. Here, a voice interface is best (since the user is a bit busy driving!).

Conversation

In the Office

In the office, the user usually has their hands, eyes, and ears free, but their voice might be restricted, depending on whether coworkers are sitting nearby. Here, a conversational interface is the user’s best bet, depending on the context and type of information.



Interaction Modalities

Depending on the situation, one interaction modality might be better suited than another. The digital assistant can respond using various modalities, such as voice, text, and graphics. Let’s see how various scenarios call for different interaction modalities.

Voice Interface

The user verbally asks the digital assistant a question. The digital assistant responds verbally to the user. It’s a simple question and response.

User speaks: “When is the next sales kick-off?”

Digital assistant speaks: “The next sales kick-off is from March 10-18 in Las Vegas, Nevada.”

Graphical Interface

The digital assistant responds to the user in a graphical way by showing how the user can get to the information they need.

Graphical Interface


Voice Interface

User speaks: “What are the total team sales for last quarter?”

Digital assistant speaks: “The total team sales were up 15% compared to the first quarter, but down 10% compared to the same quarter last year. Here’s the breakdown…”

Graphical Interface

The digital assistant responds to the user in a graphical way by showing team sales in a chart.

Graphical Interface


User Input

Users can submit a request to a digital assistant in a variety of ways, including voice commands and conversational text. The digital assistant uses this input to match the correct intent and give the user the best answer in the best way.

Voice Interface

Users simply tell the digital assistant what they need.

User speaks: “Create a leave request for September 26.”

Digital assistant speaks: “I created a leave request for you for September 26.”

Graphical Interface

Users navigate through menus and pages to get to the information they need or to complete tasks. In this example, an employee is requesting time off.

User Navigates


Digital Assistant Output

The digital assistant can respond (output) to a user request using voice, text, or graphics. The context and type of information determine the digital assistant’s output type.

Voice Interface

Use voice interface for:

  • Small bites of information: meeting time, weather, vacation days
  • Listing up to 3 items

User speaks: “Hey, what time is my sales meeting today?”

Digital assistant speaks: “Your sales meeting is at 3:00pm today.”

Graphical Interface

Use graphical interface for:

  • Complex information: a comparison of sales figures in the past 2 years, reports, diagrams
  • Listing more than 3 items
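As a rule of thumb, the guidance above reduces to a count-and-complexity check. The sketch below is purely illustrative, not an SAP API; the function name and inputs are hypothetical, with only the 3-item threshold taken from the lists above.

```python
def choose_output_modality(items: list[str], is_complex: bool = False) -> str:
    """Pick an output modality using the rules of thumb above:
    voice for small bites of information (3 items or fewer),
    graphical for complex information or longer lists."""
    if is_complex or len(items) > 3:
        return "graphical"
    return "voice"

# A meeting time is a small bite of information, so speak it...
print(choose_output_modality(["Sales meeting at 3:00pm"]))  # voice
# ...but a multi-quarter sales comparison calls for a chart.
print(choose_output_modality(["Q1", "Q2", "Q3", "Q4"], is_complex=True))  # graphical
```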


Discoverability

This type of response can help users explore and learn by discovering new things. The digital assistant can tell users how to do something or show them graphically how to do it. Some contexts require a more visual approach.

User speaks: “How do I see my paycheck?”

Digital assistant speaks: “Go to the Fiori Launchpad, find My Paystubs, and select the tile.”

Graphical Interface

The digital assistant shows users what to do, so that they can quickly see it by visually exploring the menus, navigation, and features.



User Focus

User focus indicates whether or not the user interface requires the user’s full attention.

Voice Interface

A voice interface doesn’t require full attention (maybe the user is baking), which allows them to multitask.

User speaks: “How many ounces in a cup?”

Digital assistant speaks: “There are 8 ounces in one cup.”  

Graphical Interface

A graphical interface requires most of the user’s attention (as with a serious task).

User speaks: “How do I remove an appendix?” (don’t worry, this is a med student!)

Digital assistant speaks: “Let me show you a video of an appendectomy.”  



Get Started

When you are ready to get started creating your cool new digital assistant, keep a few things in mind.

  • Start with one digital assistant interaction type. Then introduce other interaction methods to supplement it or to offset the digital assistant’s limitations. This ensures that you are actually designing a conversational product and not just a graphical interface with voice added.
  • Conversational products can provide a multi-sensory experience, but it is not always necessary. Review the principles of calm technology to design effective, yet subtle interactions.
  • Explore new use cases and opportunities to interact with users. User research is key to successful conversational interactions.

Get moving on your new digital assistant. Ready, set, go!

Building Trust

Digital assistants can make users’ lives much easier. Users have high expectations for digital assistants like Amazon Alexa, Apple’s Siri, and Microsoft’s Cortana. Consumers ask for information, reminders, and the weather report, and they expect the digital assistant to be reliable and trustworthy.

Enterprise users also expect the digital assistant to have these same qualities so they can complete very important tasks in an enterprise environment. If users don’t trust the digital assistant, they won’t use it. If users won’t use the digital assistant, it’s possible that support costs can go up and customer satisfaction might go down.



Good conversational design gives users a positive feeling when they use the digital assistant. An effective digital assistant sets clear expectations, onboards users, and is reliable and trustworthy.

Expectations

Just as when people first meet someone new, it takes time for users to build trust with a digital assistant. Conversations usually start out as transactional: the user asks the digital assistant to do something and expects it to perform. With each successful interaction, trust between the user and the digital assistant improves, and conversation design can take users to more complex, proactive conversations.

  • Introduce the digital assistant to the user.
  • Let the user know that the digital assistant is a computer, not a live human agent.
  • Tell the user what the digital assistant’s capabilities are.
  • Set clear expectations for the digital assistant.
  • Suggest topics for the user to learn about.


Onboarding

Onboarding users when they first interact with a digital assistant is very important in building trust. As trust improves, users become more comfortable interacting with the digital assistant. As user information is collected, it’s important that the digital assistant tells users how that information will be used, so it can maintain and build trust.

  • Make the digital assistant visible and easy to access in your product or on your website.
  • Introduce the digital assistant and what types of questions it can answer.
  • Make sure the digital assistant is easy to use and intuitive. If it’s easy to use, people will use it.
  • The digital assistant should load quickly and understand what users are saying.
  • Give users a seamless experience.


Reliable

We all look for reliable friends, computers, cars, and more. Users also want reliable digital assistants and expect correct, accurate responses.

  • Make every interaction consistent, so that the user is comfortable with the digital assistant’s skills.
  • Keep a consistent tone and voice that matches your company’s brand.
  • Digital assistants should know and understand customers.
  • Make users feel successful when interacting with the digital assistant to complete their tasks.
  • Give users the information they need without sales pressure.


Trustworthy

Users need to trust that the digital assistant correctly responds to their requests.

  • Give users a satisfactory experience every time, so users want to come back.
  • Be honest with users about how you can help them (and how you can’t).
  • Start with a simple statement that helps users feel at ease, such as: “Hi, I’m SAP’s Digital Assistant. How can I help you?”
  • The digital assistant’s conversations should be natural, not robotic, as if the user were talking to a colleague or even a live agent.
  • If the digital assistant can’t answer the user’s questions, let users know how to contact customer support (live chat, email, phone).

Handoff to Human

Even after designing and testing a robust digital assistant experience, there might be times when it makes sense to hand off users to a human agent. It’s important to recognize the triggers for a handoff and how to properly set up a seamless handoff to a human.

Handoff to Human

Handoff Triggers

There are a number of reasons why a digital assistant might hand off the user to a live human agent. Here are some typical triggers for handing off users to a human agent.

Digital Assistant Limitations

Digital assistants have certain limitations and can’t handle every user request: they may lack the knowledge or training, or the request may be too complex and beyond their scope.




Unknown Issue or Error

It’s possible that a digital assistant comes across an issue or error it can’t handle, for example when it’s unable to determine the user’s intent.




User Requests Live Agent

Users sometimes prefer to talk with a human about their problem and ask the digital assistant to transfer them to a live agent.
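The three triggers above can be sketched as a single check. This is an illustrative sketch only: the `should_hand_off` helper, the phrase list, and the 0.4 confidence floor are hypothetical values invented for the example, not product behavior.

```python
# Hypothetical phrases signaling an explicit request for a human.
HANDOFF_PHRASES = {"talk to a human", "live agent", "real person"}

def should_hand_off(user_text: str, intent_confidence: float,
                    error_occurred: bool, confidence_floor: float = 0.4) -> bool:
    """Return True when any typical handoff trigger fires: the assistant
    hits its limits (low intent confidence), an unrecoverable error
    occurred, or the user explicitly asks for a human."""
    asked_for_human = any(p in user_text.lower() for p in HANDOFF_PHRASES)
    return asked_for_human or error_occurred or intent_confidence < confidence_floor

print(should_hand_off("I want to talk to a human", 0.9, False))  # True
print(should_hand_off("Show my paystub", 0.95, False))           # False
```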



Handoff Protocols

It’s always possible that a user gets handed off to a human during the digital assistant interaction. You want this experience to be seamless and pleasant for the user. Here are some best practices to use when handing the user off to a human.

Make it Easy for the User

Users should always be able to talk to a human agent quickly and easily at any time. Make it simple and seamless for users to ask for human help to avoid adding to their frustration when things aren’t going as expected with a digital assistant.




Begin the Handoff

Here are a few things to confirm before you hand off the user to a human.

  • A live human agent is available. If not, make sure the user’s needs are met by creating a support ticket or scheduling a callback.
  • The handoff request has the correct context so that it is routed to the appropriate human agent.
  • The human agent has a record of the conversation to understand the context of the handoff and better assist the user.
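The checklist above can be sketched as a small routing step. Everything in the sketch is hypothetical (the `HandoffRequest` shape, the topic-keyed agent lookup, and the ticket fallback); it only illustrates the order of the checks.

```python
from dataclasses import dataclass

@dataclass
class HandoffRequest:
    topic: str             # routing context, e.g. "payroll"
    transcript: list[str]  # conversation record passed to the agent

def begin_handoff(request: HandoffRequest, available_agents: dict[str, bool]) -> str:
    """Follow the checklist above: confirm an agent is available for the
    topic, pass the transcript along for context, and fall back to a
    support ticket when no one can take the conversation."""
    if available_agents.get(request.topic):
        # Route to the matching agent with the full conversation record.
        return f"routed:{request.topic}:{len(request.transcript)} messages"
    # No live agent: still meet the user's needs with a ticket or callback.
    return "fallback:support-ticket"

req = HandoffRequest(topic="payroll", transcript=["How do I see my paycheck?"])
print(begin_handoff(req, {"payroll": True}))   # routed:payroll:1 messages
print(begin_handoff(req, {"payroll": False}))  # fallback:support-ticket
```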



Review Handoff Triggers

Review handoff requests to see at what point in the conversation most handoffs are triggered. Analyze how you can prevent them by understanding what went wrong, when, and how. The goal is to reduce how often a handoff is triggered and improve the overall conversation experience for users.



Intro to Modality

Human-to-computer interactions are constantly evolving and require less and less hands-on support. Users can make requests through distinct interaction methods, including voice, text, and gestures.

Users can type a request (conversational interface) to the digital assistant and later switch to voice commands (voice interface) when that is more convenient. This switch should be seamless to the user.

Multi-modal Interaction

Multi-modal interaction lets users switch between interaction methods, so that they have simple, natural conversations and can freely interact with the digital assistant using voice, text, or gestures (or mind reading some day!).

Compare Interaction Modalities

There are pros and cons for each interaction type. Keep in mind the capabilities and limitations of each interaction and the user’s context. When you design conversational user experiences, integrate different interaction modalities to ensure that you give users the best experience. Compare how you can best serve your users and their request. What information do they need and in what format?



Voice Interface

In this example, the user verbally asks the digital assistant a question and the digital assistant verbally replies with some options for the user.

Conversational Interface

In this example, the user types in a question to the digital assistant. The digital assistant replies with a visual list of options and some actions the user can take.

User Speaks
Digital Assistant Speaks

Speech-to-Text

Speech-to-text refers to the flow from the user speaking to the digital assistant to that speech being transcribed into text. The digital assistant responds to spoken requests just as if they had been typed.

In this guideline, you’ll learn best practices for the whole voice-request flow, which saves users effort compared to typing alone.

Context

Speech-to-text behavior happens in SAP’s web client: the digital assistant panel, where voice input can be triggered, sits on top of the Fiori Launchpad.

Happy Path

The “happy path” defines the interaction flow when the digital assistant transcribes and understands the user’s speech input correctly.

1. Trigger speaking mode

Once the digital assistant panel is open, the user can click the microphone at the bottom to trigger speech input. A panel slides up from the bottom of the window, replacing the text input area and indicating that the digital assistant is listening.

2. Real-Time Transcription

As soon as the user starts to speak, a real-time text transcription appears, followed by a cursor indicating that transcription is ongoing.

3. Automatic Submission

After the user pauses their speech for 2 seconds, the request is automatically submitted to the digital assistant as text.
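The three happy-path steps can be modeled as a tiny state machine. This is an illustrative sketch, not the web client’s implementation: the class and method names are hypothetical, timestamps are passed in explicitly to make the flow easy to follow, and only the 2-second pause rule comes from the guideline.

```python
SILENCE_TIMEOUT = 2.0  # seconds of pause before automatic submission

class SpeechInput:
    """Minimal model of the happy path: the microphone opens listening
    mode, speech appends to a live transcription, and a 2-second pause
    submits the text automatically."""

    def __init__(self):
        self.listening = False
        self.transcript = ""
        self.last_speech_at = None

    def open_microphone(self):               # step 1: trigger speaking mode
        self.listening = True

    def hear(self, words: str, at: float):   # step 2: real-time transcription
        self.transcript += words
        self.last_speech_at = at

    def poll(self, now: float):              # step 3: automatic submission
        if (self.listening and self.last_speech_at is not None
                and now - self.last_speech_at >= SILENCE_TIMEOUT):
            self.listening = False
            return self.transcript           # submitted as a text request
        return None

mic = SpeechInput()
mic.open_microphone()
mic.hear("When is the next ", at=0.0)
mic.hear("sales kick-off?", at=1.0)
print(mic.poll(now=2.5))  # None: the user only paused for 1.5 seconds
print(mic.poll(now=3.0))  # When is the next sales kick-off?
```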

Exit speaking mode

If users want to exit speaking mode while they are speaking, they can click the “X” icon at the bottom right; the bottom panel then disappears and is replaced by the text input area.

Exit speaking mode

Error Correction

If the digital assistant can’t transcribe the user’s speech correctly and makes mistakes, the user has two options to correct the error.

Start Over

First, users can click the “start over” icon at the bottom left to restart the speech request. Once clicked, the original transcription is cleared, and the user can start voice input again.

Start over

Edit Manually

Alternatively, users can click directly on the text transcription. They then exit speaking mode and see the text in the text input area, where they can edit it manually and submit it.

Edit Manually

Application Integration

Application integration is the backbone of enterprise conversational products. Whether providing insight from an underlying database or enabling users to complete specific actions in a supporting system, it is essential to connect the bot to other applications.

There is more than the technical side to consider when it comes to integrations, to ensure the delivered experience is desirable. In this guideline, you’ll learn best practices to help you design a streamlined, coherent experience that aligns with the user’s mental model.

UX Patterns

There are two primary patterns that enable users to fulfill their needs with the aid of integrated applications.

1. Translate into conversational interactions

Users fulfill their needs through conversational interaction with the bot alone. The entire experience doesn’t require users to navigate to other applications to complete further actions.

2. Redirect to origin application

Users navigate to the integrated application to complete the task. This pattern helps users complete tasks that go beyond what pure conversational interaction can support. The trade-off is that users must switch between applications, which adds cognitive friction.

How to decide between patterns

Can the feature be translated into pure conversational interaction?

  • Is it feasible to build the features?
  • Can the bot complete the action for users?

Which pattern will fulfill the user’s needs with a better experience?

  • Which pattern enhances efficiency for users?
  • Which pattern allows users to complete the task with less friction?
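The questions above amount to a short decision rule. The sketch below is hypothetical (the function and its flags, including `long_form_task`, are invented for illustration); it simply encodes the feasibility and friction checks.

```python
def choose_ux_pattern(feasible_in_conversation: bool,
                      bot_can_complete: bool,
                      long_form_task: bool) -> str:
    """Apply the questions above: translate into conversation only when
    the feature is feasible and the bot can finish the action itself;
    long-form tasks (like writing a peer review) redirect to the origin
    application instead."""
    if feasible_in_conversation and bot_can_complete and not long_form_task:
        return "translate-into-conversation"
    return "redirect-to-origin-application"

# A short satisfaction survey fits in conversation...
print(choose_ux_pattern(True, True, long_form_task=False))
# ...while a peer review belongs in the dedicated application.
print(choose_ux_pattern(True, False, long_form_task=True))
```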

Examples

Let’s take the use cases of customer satisfaction surveys and peer reviews as examples. Although both use cases serve a similar user goal of providing feedback, there are some nuances to consider when choosing a UX pattern.

For customer satisfaction surveys, clients provide short feedback, which can be smoothly captured in a quick conversation with the bot. In comparison, users need more time to plan and write a peer review, so directing them to the dedicated application enhances the overall experience.

Translate into conversational interactions example - Customer satisfaction survey supported by Qualtrics
Redirect to origin application example - Peer review in SuccessFactors

Best practices

Design the end-to-end experience

It is crucial to consider how the bot will continue interacting with users throughout the workflow. Even when users navigate to the application to complete a specific action, they still expect the bot to provide support on the side.

Ensure consistency across applications

Use the same terminology as the integrated applications to deliver a smooth, consistent experience.

Enable users to navigate with flexibility

Users switch between the bot and the integrated application, so if the bot redirects users to a web application, open the app in a new tab to allow users to move between the bot and the application easily.

Example of designing the end-to-end experience - by actively delivering congratulations after the user signs the offer letter in the application, the bot can further support the user with advanced onboarding topics.

Use Case Request Form

Do you have an idea for a conversational experience and need more specific guidance? Feel free to reach out to the SAP CUX team or submit a use case request form.



