Building Trust with Generative AI
Empowering You to Lead the Way
Intro
With all of the exciting advances in generative AI come real ethical questions about accuracy, reliability, and the role bias plays in what these models produce. As we imagine the value that technologies like generative AI can bring to user experiences, customer outcomes, and more, it’s essential that we design and build trust into every interaction people have with our products.
Building trust in generative AI requires that we:
- Be transparent about where and when it’s being used
- Give users control over its actions
- Use explainable AI techniques so users can confidently validate and improve the results
- Avoid bias and leverage fairness and inclusion best practices
Let’s dive a little deeper into each of these areas.
Transparency is Key
People don’t trust what they don’t understand, and transparency is critical to establishing trust when generative AI is part of our product experiences. The opaque nature of foundation models makes this tricky, but there are several things we can do to visually explain how the generative AI arrived at a particular output. Techniques from Explainable AI (XAI) can make the process visible in a way that builds trust and increases people’s confidence in what the model is generating.
For starters, it should be easy for people to understand how the model generates content, including what data it was trained on and how it makes decisions. Transparency into the data sources can also help people to identify any potential biases that may lead to harmful outcomes if left unchecked. It’s also important that people have visibility into what the model (or system) can do, by exposing the goals, functions, overall capabilities, limitations, and development process.
Explainable AI (XAI) can pave the way for transparency and trust by showing people how AI systems work and why they make the decisions that they do. This is especially important when it comes to generative AI because it’s not clear what data is informing the new outputs that are being generated.
That said, here are some simple things you can do to bring people into the fold:
- Tell people what’s happening along the way
- Use progressive disclosure
- Show, don’t just tell
- Provide a confidence rating or uncertainty indicator
- Emphasize continuous learning
Tell people what’s happening along the way
One way to keep users informed is to let them interact with the input or guide the output generation process. This includes giving people the ability to adjust the generation parameters so they can explore how those changes affect what the AI generates.
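As a minimal sketch, exposing those parameters in code could look like the following. The /api/generate endpoint, the GenerationParams shape, and the individual parameter names are all hypothetical stand-ins for whatever your model service actually provides.

```typescript
// Hypothetical generation settings a user could adjust in the UI.
interface GenerationParams {
  temperature: number;                    // higher values yield more varied output
  maxTokens: number;                      // upper bound on response length
  tone: "neutral" | "formal" | "casual";  // illustrative style control
}

// Placeholder for whatever model endpoint your product actually calls.
async function generateText(prompt: string, params: GenerationParams): Promise<string> {
  const response = await fetch("/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, ...params }),
  });
  const data = await response.json();
  return data.text;
}

// Echoing the settings back alongside the result keeps the process
// transparent: users see exactly which choices shaped the output.
async function generateWithVisibleSettings(prompt: string, params: GenerationParams): Promise<void> {
  const text = await generateText(prompt, params);
  console.log(`Generated with temperature ${params.temperature}, tone "${params.tone}":`);
  console.log(text);
}
```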
Use progressive disclosure
Progressively disclose how the AI output was generated, along with the development process, ethics considerations, protocols for maintaining alignment, and confidence levels.
Show, don’t just tell
Make it visibly clear what is being produced by generative AI and continue to highlight changes based on human interaction and refinement.
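A minimal sketch of the idea: tag AI-generated content with a visible marker and update that marker once a person has refined it. The class name, attribute, and labels below are hypothetical.

```typescript
// Wrap AI-generated text in a marked element so it stays visually distinct,
// and update the label once a human has edited the content.
// Note: escape `text` before interpolating it in production code.
function renderGenerated(text: string, humanEdited: boolean): string {
  const label = humanEdited ? "AI-generated, edited by you" : "AI-generated";
  return `<span class="ai-content" data-origin="${label}" aria-label="${label}">${text}</span>`;
}

// Usage: the marker travels with the content through later refinements.
console.log(renderGenerated("Draft summary of your report.", false));
```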
Provide a confidence rating or uncertainty indicator
Being transparent about the accuracy of the AI-generated output enables users to apply their own critical thinking.
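One lightweight way to do this, assuming the model returns a numeric confidence score, is to translate that score into a plain-language label shown next to the output. The thresholds below are purely illustrative and would need tuning for a real model.

```typescript
// Map a model confidence score in [0, 1] to a user-facing label.
// Thresholds are illustrative, not prescriptive.
function confidenceLabel(score: number): string {
  if (score >= 0.8) return "High confidence";
  if (score >= 0.5) return "Medium confidence";
  return "Low confidence: please review before using";
}

console.log(confidenceLabel(0.42)); // "Low confidence: please review before using"
```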
Emphasize continuous learning
This helps people understand that the AI might not get everything right the first time, but is designed to learn and improve over time, which sets realistic expectations.
Fairness and Inclusion
People will only trust AI once it works in a way that’s equitable. That’s especially true for generative AI, because the model learns and replicates existing biases, prejudices, and false information in the novel output it generates; generative AI has even been shown to amplify these harmful patterns.
Example: given the image-generation prompt “A color photograph of a housekeeper”, the model produces only images of women of Asian/Pacific Islander descent.
We need to focus on using a wide variety of data that represents a broad mix of people, contexts, and perspectives. It’s also important to create diverse, cross-functional teams that work together throughout the end-to-end process of building AI applications. And we must be transparent about how we’re designing for fairness. To do this, we must:
- Ensure our AI doesn’t reproduce harmful stereotypes from its training data in its outputs. It’s important that we pressure-test the data sources and use methods like engineered prompts and blocklists to help avoid bias (see the sketch after this list).
- Encourage feedback and inspire users to tell you if they see something biased. This can help you continuously improve the AI.
- Strive for representation in all aspects of the generative AI’s outputs and functions, so all users feel recognized.
- Prioritize accessibility, catering to individuals with diverse abilities, needs, and preferences. This involves adhering to SAP’s established accessibility guidelines and standards.
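To make the engineered-prompt and blocklist idea from the first point concrete, here is a rough sketch of where those checks could sit in a generation pipeline. Every term, prompt, and function name is a placeholder, and real bias mitigation requires far more than string matching.

```typescript
// Placeholder blocklist; a real one would be curated and maintained by experts.
const blocklist: string[] = ["examplePlaceholderTerm1", "examplePlaceholderTerm2"];

// An "engineered prompt" prepends standing instructions that steer the model
// away from stereotyped outputs before the user's request is appended.
const systemPrompt =
  "Depict people of diverse genders, ethnicities, ages, and abilities " +
  "unless the user explicitly specifies otherwise.";

function buildPrompt(userInput: string): string {
  return `${systemPrompt}\n\n${userInput}`;
}

// Screen generated text against the blocklist before displaying it.
function passesBlocklist(output: string): boolean {
  const lowered = output.toLowerCase();
  return !blocklist.some((term) => lowered.includes(term.toLowerCase()));
}

// Route failures to regeneration or human review rather than showing them.
function screen(output: string): string {
  return passesBlocklist(output) ? output : "[Output withheld for review]";
}
```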
In a Nutshell
As you can see, when it comes to generative AI, there are many ways to build trust. Everyone contributing to a product’s AI experience has a responsibility to push for safe and trustworthy AI that helps the world run better and improves people’s lives.