AI Design Glossary

This glossary offers quick help with the AI terminology used in, or referred to by, our guidelines for ‘Designing Intelligent Systems’. You can find a set of basic terms below.


Terms are sorted in alphabetical order and grouped into five categories:

  • Artificial Intelligence
  • AI Ethics and AI User-Centered Design
  • Generative AI
  • Machine Learning (ML)
  • Natural Language Processing (NLP)
Information

In writing this glossary, we extensively researched SAP and global definitions, including those by UNESCO, OECD.AI, and other institutes.

In the process, we leveraged ChatGPT to simplify these explanations so that anyone, regardless of their previous knowledge, can build their understanding of AI.

We thoroughly reviewed, validated and edited those outputs with other human AI experts at SAP, and will continue updating and refining them.

Your SAP AI Design team

List of Terms

Artificial intelligence (AI)

AI is typically defined as a machine’s ability to perceive, reason, learn, talk, and solve problems. It’s a big field, so keep on reading!


Artificial general intelligence (AGI) 

Not to be confused with generative AI. The concept behind AGI is that AI can learn to accomplish tasks that would usually require general human cognitive abilities.


Artificial superintelligence (ASI)

The stuff of science fiction: self-aware AI, which has surpassed the abilities of the human brain.


Human-in-the-loop (HITL)

An actual human being is involved in training, testing, and optimizing an AI system. Think: A child learning about apples might mistake pears for apples. An adult would correct this, thus teaching the child the right labels.


Learning-based AI  

Systems in which humans define the problem and goal, and systems “learn” how to accomplish this goal and get better at it. 


Narrow AI (artificial narrow intelligence, ANI)

AI solutions designed to perform specific tasks within a limited domain. Also called weak AI.  


Rule-based AI (business rule AI) 

Systems that use rules made by human experts, also known as “symbolic” or “expert” systems.  



Accountability  

Taking responsibility for the actions and outcomes of AI systems.


Bias  

Systematic errors or unfairness in AI outputs that can result from biased training data or algorithmic decision-making. 


Bias mitigation 

The process of identifying and addressing biases in AI systems to promote fairness, avoid discrimination, and ensure equitable outcomes. 


Data ethics 

Ethical considerations and responsible practices related to data collection, storage, and use in generative AI systems. 


Ethical AI 

AI ethics is a set of values, principles, and techniques that employ widely accepted standards to guide moral conduct in the development, deployment, use, and sale of AI systems.


Explainability 

Enabling users to understand and interpret the decisions and outputs of AI systems, ensuring transparency and user trust.


Explicit feedback 

Clear and direct feedback provided to generative AI models to guide and influence their future output. 


Fairness

Measures embedded into an algorithm to prevent bias. This matters because a biased approach to building a system on biased data leads to biased output.


Implicit feedback 

Indirect feedback that is collected from user interactions, preferences, or behaviors, and used to refine generative AI models. 


Interpretability 

Ability to interpret and understand the inner workings and decision-making processes of AI models. 


Privacy

The right to have your personal and business data protected.


Progressive disclosure 

The technique of gradually revealing information or functionality to enhance user understanding. 


Responsible AI (RAI)

Ethical and responsible development, deployment, and use of AI systems to make sure they are fair, transparent, and respectful of people’s rights.


Robustness 

The ability of generative AI systems to perform consistently and reliably in various scenarios and handle unexpected inputs or conditions. 


Transparency

Providing information about how the AI system makes decisions so users can understand and challenge them.


User empowerment 

Designing AI experiences that empower users by giving them control, customization options, and opportunities for meaningful engagement. 


User feedback loops 

Mechanisms for users to provide feedback on generative AI outputs, enabling iterative improvements and personalized experiences. 


Augmenting 

Enhancing or enriching generative AI outputs by adding supplementary information, context, or data. 


Base prompt

A core set of instructions given to the large language model (LLM) that serves as a foundation for generating responses or completing tasks.
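
For the technically curious: in practice, a base prompt is typically prepended to whatever the user types before the combined text is sent to the model. This minimal Python sketch illustrates the idea (the wording of the base prompt and the function name are made up for illustration):

```python
# Hypothetical prompt assembly: a fixed base prompt is combined with
# the user's request to form the full instruction sent to the LLM.
BASE_PROMPT = (
    "You are a helpful business assistant. "
    "Answer concisely and cite your sources."
)

def assemble_prompt(user_input: str) -> str:
    """Prepend the base prompt to the user's natural-language request."""
    return f"{BASE_PROMPT}\n\nUser request: {user_input}"

print(assemble_prompt("Summarize last quarter's sales."))
```

The user never sees the base prompt, but it shapes every response the model generates.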


Blocklist

A list of specific words, phrases, or types of content that are filtered out to prevent the AI from producing inappropriate outputs.
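
Conceptually, a blocklist check can be as simple as screening the output text before it reaches the user. A minimal sketch in Python (the placeholder terms and function name are illustrative, not a real filter):

```python
# Minimal blocklist filter: a post-processing step that screens
# AI output before it is shown to the user.
BLOCKLIST = {"badword", "slur"}  # illustrative placeholder terms

def passes_blocklist(text: str) -> bool:
    """Return True if the text contains no blocklisted terms."""
    words = text.lower().split()
    return not any(term in words for term in BLOCKLIST)

print(passes_blocklist("a perfectly fine sentence"))  # True
print(passes_blocklist("this contains a badword here"))  # False
```

Real content filters are more sophisticated (handling variants, context, and multiple languages), but the principle is the same.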


Custom prompt

An instruction for the generative AI model that the user writes from scratch in natural language.


Embeddings

Numeric vector representations of text that capture semantic meaning. Embeddings enhance prompts by letting a system search a knowledge base for semantically similar documents, improving the ability of a large language model (LLM) to find and use relevant context.
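
To make this concrete, semantically similar texts have embedding vectors that point in similar directions, which is often measured with cosine similarity. This toy Python sketch uses made-up 3-dimensional vectors (real embeddings have hundreds of dimensions, and the document labels are invented):

```python
import math

# Toy embeddings; real models produce much higher-dimensional vectors.
docs = {
    "invoice handling": [0.9, 0.1, 0.2],
    "travel booking":   [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.25]  # imagined embedding of "process supplier invoices"

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction (very similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Rank documents by semantic similarity to the query.
best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # "invoice handling" is closest to the query
```

The highest-scoring documents are then added to the prompt as context for the LLM.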


Emergent abilities 

Unintended or unexpected patterns, behaviors, or capabilities that arise from the interaction of complex AI systems like generative AI models. 


Fine-tuning

Fine-tuning large language models (LLMs) is the resource-intensive process of customizing a pre-trained language model on specific tasks or datasets to make it more proficient and accurate in generating relevant text.


Foundation models  

Deep learning models trained on large volumes of unlabeled data using self-supervised learning. Applicable to a wide range of tasks.


Generative adversarial networks (GANs) 

Machine learning model with two neural networks that compete with each other to make better predictions. 


Generative AI 

A form of artificial intelligence that, when instructed by the user, can create novel content based on its training data, including text, images, sound, or video.


GPT (generative pre-trained transformer) 

A type of generative AI model that utilizes transformer architecture for tasks like language generation. 


Grounding 

Limiting the scope of generative AI models by connecting the generated content with specific real-world data or references to ensure the generated outputs align with the intended purpose. 


Guided prompt 

Settings and options that the user selects to create precise instructions for the generative AI model.


Hallucinations 

AI-generated outputs that sound plausible but are either false or unrelated to the given context, making them difficult for humans to detect as errors.


Hidden prompt

An instruction that guides the language model without being visible to the user, like a puppeteer behind the scenes.


Metaprompt 

Instructions given to a generative AI model to guide its behavior or output in a desired direction. 


Parameters 

The internal settings or variables of a generative AI model that control its behavior, output, and learning process. 


Probabilistic 

Situations with multiple possible outcomes, each having varying degrees of certainty. 


Prompt 

An instruction that users give to generative AI models to guide their output.


Prompt engineering 

The process of designing and refining instructions to guide the behavior and output of generative AI models. 


Quick prompt 

Presets provided by the system and expertly crafted by prompt engineers, eliminating the need for users to write their own prompts.


Response filtering 

Selecting or filtering generated responses from a generative AI model based on specific criteria or quality measures. 


Retrieval augmented generation (RAG) 

A model retrieves relevant information from a pre-existing dataset or knowledge source to generate more accurate and contextually appropriate outputs. 
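
The RAG flow can be sketched in a few lines: retrieve the most relevant snippet, then build a prompt around it. In this hypothetical Python example, the knowledge base and the naive keyword scoring are purely illustrative (real systems retrieve with embeddings):

```python
# Sketch of the retrieval-augmented generation flow.
knowledge_base = {
    "returns policy": "Items can be returned within 30 days.",
    "shipping times": "Standard shipping takes 3-5 business days.",
}

def retrieve(question: str) -> str:
    """Naive keyword retrieval; production systems use embeddings instead."""
    q = question.lower()
    best = max(knowledge_base,
               key=lambda topic: sum(word in q for word in topic.split()))
    return knowledge_base[best]

def build_prompt(question: str) -> str:
    """Combine the retrieved context with the user's question."""
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is your returns policy?"))
```

Grounding the prompt in retrieved facts is one of the most common ways to reduce hallucinations.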


Style transfer  

A technique that applies the visual style of one image to another, combining content with specific aesthetic characteristics. 


Transformer model 

A neural network architecture that learns context and relationships in sequential data. It enables generation of new content, such as text or images, based on patterns and examples provided during training. 


Deep learning 

Machine learning with three or more layers of the neural network. Most suited to processing massive quantities of data.  


Machine learning (ML) 

Subset of AI that enables systems to learn from data and improve without explicit programming. 


Machine learning model 

A computer program made of algorithms and mathematical equations. It can learn independently by recognizing patterns in data. 


Neural networks 

Computing systems inspired by the structure of the human brain. They are made of layers of algebraic equations called artificial neurons (or nodes); the first layer receives the data, and the last outputs the results. 


Reinforcement learning 

The system learns by being placed into an environment where it figures out what is possible and what isn’t through experience and reward, without humans labeling the data.


Supervised learning  

The machine learns from pairs of data (inputs and outputs labeled by humans) and then finds similarities and differences when applied to new, unseen data.


Unsupervised learning 

Data isn’t labeled in unsupervised learning. The system studies the dataset, looks for patterns, and suggests how to group things. 


ChatGPT 

An AI language model developed by OpenAI that focuses on generating conversational responses. 


Large language models (LLMs) 

A subcategory of foundation models that learn to predict the next word in a text by analyzing vast amounts of text available on the internet.


Natural language processing (NLP)

The field of AI concerned with understanding and processing human language, including tasks like speech recognition, text analysis, sentiment analysis, and natural language generation. 


Sentiment analysis 

Determining the sentiment in a text and classifying it as positive, negative, or neutral.
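
One simple way to see the idea is a lexicon-based classifier: count positive and negative words and compare. This toy Python sketch is far cruder than the ML models used in practice, and the word lists are invented for illustration:

```python
# Toy lexicon-based sentiment classifier; the word lists are illustrative.
POSITIVE = {"great", "good", "love", "excellent"}
NEGATIVE = {"bad", "poor", "hate", "terrible"}

def classify_sentiment(text: str) -> str:
    """Classify text as positive, negative, or neutral by word counts."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_sentiment("the new dashboard is great"))    # positive
print(classify_sentiment("the report export is terrible")) # negative
```

Modern sentiment models learn these associations from data rather than from hand-written word lists, but the classification task is the same.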

Building Trust with Generative AI

Intro

With all of the exciting advances being made in generative AI, there are also real ethical questions and concerns arising around things like accuracy, reliability, and the role that bias plays in what is being produced. As we start to imagine all the potential value that technologies like generative AI can bring to improving user experiences, customer outcomes, and more, it’s essential that we design and build trust into each and every interaction people have with our products.

Building trust in generative AI requires that we:

  • Be transparent about where and when it’s being used
  • Give users control over its actions
  • Use explainable AI techniques so users can confidently validate and improve the results
  • Avoid bias and leverage fairness and inclusion best practices

Let’s dive a little deeper into each of these areas.

Transparency is Key

People don’t trust what they don’t understand – and transparency is critical to establishing trust when it comes to the use of generative AI in our product experiences. The opaque nature of how foundation models work makes this a bit tricky – but there are several things we can do to visually explain how the generative AI arrived at a particular output. Leveraging techniques from Explainable AI (XAI) can make things visible in a way that builds trust and helps to increase the confidence people have in what the model is generating.

For starters, it should be easy for people to understand how the model generates content, including what data it was trained on and how it makes decisions. Transparency into the data sources can also help people to identify any potential biases that may lead to harmful outcomes if left unchecked. It’s also important that people have visibility into what the model (or system) can do, by exposing the goals, functions, overall capabilities, limitations, and development process.

Explainable AI (XAI) can pave the way for transparency and trust by showing people how AI systems work and why they make the decisions that they do. This is especially important when it comes to generative AI because it’s not clear what data is informing the new outputs that are being generated.

That said, here are some simple things you can do to bring people into the fold:

  • Tell people what’s happening along the way
  • Use progressive disclosure
  • Show, don’t just tell
  • Provide a confidence rating or uncertainty indicator
  • Emphasize continuous learning

Tell people what’s happening along the way

One way to keep users informed is to let them interact with the input or make it possible to guide the output generation process. This includes giving people the ability to define the parameters – so they can explore how those changes impact what the AI is generating.


Use progressive disclosure

Progressively disclose how the AI outcome was generated and the development process, ethics, protocols for maintaining alignment, and confidence levels.


Show, don’t just tell

Make it visibly clear what is being produced by generative AI and continue to highlight changes based on human interaction and refinement.


Provide a confidence rating or uncertainty indicator

Providing transparency to the user on the accuracy of the AI-generated output enables users to apply their own critical thinking.
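
One possible approach, assuming the model exposes a numeric confidence score, is to map that score to a user-facing label. The thresholds and wording below are hypothetical and would need tuning per use case:

```python
# Hypothetical mapping from a model confidence score (0.0-1.0)
# to a user-facing label that surfaces uncertainty in the UI.
def confidence_label(score: float) -> str:
    """Translate a raw confidence score into plain language."""
    if score >= 0.9:
        return "High confidence"
    if score >= 0.6:
        return "Medium confidence"
    return "Low confidence - please verify"

print(confidence_label(0.95))  # High confidence
print(confidence_label(0.40))  # Low confidence - please verify
```

Pairing such a label with a visual indicator gives users a quick cue about when to double-check the output.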


Emphasize continuous learning

This helps people understand that the AI might not get everything right the first time, but is designed to learn and improve over time. This can set realistic expectations.

Fairness and Inclusion

People will only trust AI once it works in a way that’s equitable. That’s especially true when it comes to generative AI, because the AI model learns and replicates existing biases, prejudices, or false information in the novel output it generates. Generative AI has even been shown to amplify harmful biases.

An example of generative AI used for generating images. Prompt is: "A color photograph of a housekeeper" and the images the AI produces are all women of Asian/Pacific Islander descent.

We need to focus on using a wide variety of data that represents a mix of inclusion criteria. It’s also important to create diverse cross-functional teams that work together throughout the end-to-end process of building AI applications. And we must be transparent about how we’re designing for fairness. To do this, we must:

  • Ensure our AI doesn’t use harmful stereotypes from its training data in its outputs. It’s important that we pressure test the data sources – and that we use methods like engineered prompts and blocklists to help avoid bias.
  • Encourage feedback and inspire users to tell you if they see something biased. This can help you continuously improve the AI.
  • Strive for representation in all aspects of the generative AI’s outputs and functions, so all users feel recognized.
  • Prioritize accessibility, catering to individuals with diverse abilities, needs, and preferences. This involves adhering to SAP’s established accessibility guidelines and standards.

In a Nutshell

As you can see, when it comes to generative AI, there are many ways to build trust. Everyone contributing to the AI experience for a product has a responsibility to push for safe and trustworthy AI experiences that help the world run better and improve peoples’ lives.


Information
SAP focuses on embedding AI that is relevant, reliable, and responsible by design.

Check out our SAP AI Ethics Guiding Principles, our external AI Ethics Advisory Panel, our SAP AI Ethics Steering Committee and our SAP AI Ethics Handbook.


References

  • Sison, A. J. G., Daza, M. T., Gozalo-Brizuela, R., & Garrido-Merchán, E. C. (2023). ChatGPT: More than a Weapon of Mass Deception, Ethical challenges and responses from the Human-Centered Artificial Intelligence (HCAI) perspective. ArXiv. https://arxiv.org/abs/2304.11215
  • Baxter, K., & Schlesinger, Y. (2023). Managing the risks of generative AI. Harvard Business Review. Retrieved from https://hbr.org/2023/06/managing-the-risks-of-generative-ai
  • Hao, S., Kumar, P., Laszlo, S., Poddar, S., Radharapu, B., & Shelby, R. (2023). Safety and Fairness for Content Moderation in Generative Models. ArXiv. https://arxiv.org/abs/2306.06135

Design Principles for Generative AI

Intro

Generative AI can transform entire industries, and it’s already changing how we live and work. Its ability to generate novel content, whether text, image, data, or video, is a game changer. This technology is rapidly changing how we design, build and interact with software. 

As we race to bring the power of generative AI to our customers, we must do so intentionally, so they have a cohesive, high-quality experience across all SAP products. The generative AI design principles aim to enable and ensure this alignment as more and more teams incorporate this new technology by expressing a shared vision for what makes great generative AI experiences for our users. 

Approach

These guidelines and principles are customized to speak to the unique aspects and design considerations for generative AI. They have been created in alignment with other key SAP guidelines including: 

  • SAP Purpose & Values: Generative AI will have a tremendous impact on individual lives, broader society, and the environment. SAP’s sustainability mission to help the world run better and improve people’s lives is a north star that should guide us in choosing valuable generative AI use cases and ensuring that humans are always in the driver’s seat.
  • SAP’s Guiding Principles for AI, SAP’s Global AI Ethics Policy and our Guiding Principles for Designing Intelligent Systems: As a subset of AI, these principles also apply to generative AI. The generative AI guidelines are intended to provide guidance around the unique opportunities and considerations required when using this specific type of technology.

How generative AI design principles are tied to SAP's overall purpose, AI policies, standards, and generic principles.

Generative AI Design Principles

We’ve defined five principles to inspire and guide you when designing generative AI features. Each principle comes with concrete actions that you can use as criteria for evaluating your designs.


Empower and Inspire

Use AI to enhance human capabilities and improve outcomes, not to replace human intelligence. Facilitate empowering and inspiring interactions with AI that will improve human lives.   

How we do this

  • We prioritize generative AI use cases that will bring the most value and differentiation to users and companies. 
  • We design interactions that foster collaboration and iteration with generative AI systems to amplify human intelligence, creativity, and curiosity. 
  • We frame generative AI results as inspiration, not fact. 
  • We proactively help users in verifying and improving generative AI results. 

Maintain Quality

Ensure high-quality input from the AI and the user to confidently derive the best possible results. Help users understand, analyze, and validate the output generated by AI. 

How we do this

  • We maintain contextual awareness of user needs and priorities. 
  • We provide education and guidance to users through embedded and interactive support. 
  • We enable users across diverse levels of literacy and technical expertise. 

Show the Work

Enable human evaluation of AI collaborations by capturing the user’s iterative journey towards the desired result.

How we do this

  • We identify AI-generated content. 
  • We cite the data sources informing generative AI results. 
  • We enable users to capture, save, edit, and share their prompt history and embed or summarize it into exported results. 

Continuously Assist

Assist users in avoiding errors. Educate them on the limitations of generative AI so they can spot and correct inaccurate results. 

How we do this

  • We clearly identify AI-generated elements of content that require human validation. 
  • We provide progressively disclosed explainability behind AI-generated outcomes. 
  • We enable the validation of results within the user experience. 

Humans Hold the Keys

Make it clear that people are in charge by identifying when and where AI is being used to generate novel content. 

How we do this

  • We explain how the AI works and the data it uses. 
  • We provide controls for customizing the AI’s parameters or turning AI enhancements off entirely. 
  • We clearly state the organization’s ethical criteria for using generative AI tools. 
  • We specify a user’s responsibility when using generative AI tools.

Designing for Generative AI

Intro

Generative AI and other emerging AI capabilities will have a significant impact on global businesses and society in the upcoming years. SAP has responded to the opportunities and risks of this huge technological innovation with a clear strategy for SAP Business AI. We’ve made it a priority to add business value by embedding relevant, reliable, and responsible AI across our entire portfolio.

In September 2023, SAP also announced Joule, SAP’s generative AI-based co-pilot. Joule will be embedded throughout SAP’s cloud enterprise portfolio to deliver proactive and contextualized insights that span our entire solution portfolio and also integrate third-party sources.

We envision this massively transforming the user experience of SAP solutions: you define the desired business outcomes, and SAP systems generate insights and optimizations by connecting the relevant business knowledge and process data. We believe that generative AI can make businesses around the world more productive, more efficient, and more resilient, and that it has the potential to delight SAP’s end users in a totally new dimension so that they can achieve more and focus on what matters most.

Furthermore, we believe that … humans continue to play a key role in the decision and reasoning processes of enterprises. Therefore, we intentionally design our AI solutions to keep humans in the loop to carefully review and approve AI-generated information.

– Rahul Lodhe, Senior Director SAP Artificial Intelligence, 07/2023 in AIM

Information
We are releasing this early version of our generative AI design guidelines to share our discoveries early on with our broader SAP and UX community. The design guidelines in this section will evolve iteratively based on research, input from the community, and feedback from stakeholders.

Coming soon:

  • Designing Safety into Generative AI
  • Designing Sustainable Generative AI Experiences
  • Designing Effective AI Prompts

Design Guidelines for Generative AI

The UI and interaction design is a crucial element of SAP’s generative AI offering. How well we guide and empower users, surface the right information at the right time, and explain our AI results will ultimately determine whether or not users trust and embrace the capabilities of generative AI.

Our Approach

The content for the generative AI design guidelines is rooted in real use cases. All of our design concepts are being co-created and continuously improved in collaboration with SAP product teams and generative AI power users.

Creating best practices for user experiences fueled by generative AI will require extensive human-centered research and design to make it valuable and enjoyable for users. Getting started is the hard part – especially for a new interaction model that’s still in its infancy. With this in mind, we’re taking the following approach to creating guidelines and patterns:

  • Talk to users to understand their needs
  • Develop an informed hypothesis
  • Learn, iterate, and improve as we go


Get Started!


Check out our first set of guidelines for generative AI.


Find explanations for key generative AI terms in our AI design glossary and check out helpful AI design resources.


Contact the SAP AI Design Team.