Updated: February 23, 2024

AI Design Glossary


To give you quick help with the AI terminology used in or referred to by our guidelines for ‘Designing Intelligent Systems’, we have compiled a set of basic terms below.

Terms are sorted in alphabetical order and grouped into five categories:

  • Artificial Intelligence
  • AI Ethics and AI User-Centered Design
  • Generative AI
  • Machine Learning (ML)
  • Natural Language Processing (NLP)

In writing this glossary, we extensively researched SAP and global definitions, including those by UNESCO, OECD.AI, and other institutions.

In the process, we leveraged ChatGPT to simplify these explanations so that anyone, regardless of their previous knowledge, can build their understanding of AI.

We thoroughly reviewed, validated, and edited those outputs together with human AI experts at SAP, and will continue updating and refining them.

Your SAP AI Design team

List of Terms

Artificial Intelligence

Artificial intelligence (AI)

AI is typically defined as a machine’s ability to perceive, reason, learn, talk, and solve problems. It’s a big field, so keep on reading!

Artificial general intelligence (AGI) 

Not to be confused with generative AI. The concept behind AGI is that AI can learn to accomplish tasks that would usually require general human cognitive abilities.

Artificial superintelligence (ASI)

The stuff of science fiction: self-aware AI, which has surpassed the abilities of the human brain.

Human-in-the-loop (HITL)

An actual human being is involved in training, testing, and optimizing an AI system. Think: A child learning about apples might mistake pears for apples. An adult would correct this, thus teaching the child the right labels.

Learning-based AI  

Systems in which humans define the problem and goal, and systems “learn” how to accomplish this goal and get better at it. 

Narrow AI (ANI) 

AI solutions designed to perform specific tasks within a limited domain. Also called weak AI.  

Rule-based AI (business rule AI) 

Systems that use rules made by human experts, also known as “symbolic” or “expert” systems.  

AI Ethics and AI User-Centered Design

Accountability

Taking responsibility for the actions and outcomes of AI systems.

Bias

Systematic errors or unfairness in AI outputs that can result from biased training data or algorithmic decision-making. 

Bias mitigation 

The process of identifying and addressing biases in AI systems to promote fairness, avoid discrimination, and ensure equitable outcomes. 

Data ethics 

Ethical considerations and responsible practices related to data collection, storage, and use in generative AI systems. 

Ethical AI 

A set of values, principles, and techniques, based on widely accepted standards, that guides moral conduct in the development, deployment, use, and sale of AI systems.

Explainability

Enabling users to understand and interpret the decisions and outputs of AI systems, ensuring transparency and user trust.

Explicit feedback 

Clear and direct feedback provided to generative AI models to guide and influence their future output. 

Fairness

Measures embedded into an algorithm to prevent bias: a biased approach to building a system with biased data leads to biased output.

Implicit feedback 

Indirect feedback that is collected from user interactions, preferences, or behaviors, and used to refine generative AI models. 

Interpretability

Ability to interpret and understand the inner workings and decision-making processes of AI models. 

Privacy

The right to have your personal and business data protected.

Progressive disclosure 

The technique of gradually revealing information or functionality to enhance user understanding. 

Responsible AI (RAI)

Ethical and responsible development, deployment, and use of AI systems to make sure they are fair, transparent, and respectful of people’s rights.

Robustness

The ability of generative AI systems to perform consistently and reliably in various scenarios and handle unexpected inputs or conditions. 

Transparency

Providing information about how the AI system makes decisions so users can understand and challenge them.

User empowerment 

Designing AI experiences that empower users by giving them control, customization options, and opportunities for meaningful engagement. 

User feedback loops 

Mechanisms for users to provide feedback on generative AI outputs, enabling iterative improvements and personalized experiences. 

Generative AI

Augmentation

Enhancing or enriching generative AI outputs by adding supplementary information, context, or data. 

Base prompt

A core set of instructions given to the large language model (LLM) that serves as a foundation for generating responses or completing tasks.

Custom prompt

An instruction for the generative AI model that the user writes from scratch in natural language.

Embeddings

Embeddings enhance prompts by searching a knowledge base for context, providing a semantic representation of relevant documents, and improving the ability of a large language model (LLM) to find semantically similar information.
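To make this concrete, here is a minimal sketch in Python. The three-number vectors and document names are invented for illustration (real embedding models produce vectors with hundreds of dimensions), but the core idea is the same: rank documents by how similar their vectors are to the query’s vector.

```python
import math

def cosine_similarity(a, b):
    """How closely two embedding vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy 3-dimensional embeddings; real models use hundreds of dimensions.
knowledge_base = {
    "invoice approval steps": [0.9, 0.1, 0.2],
    "holiday cafeteria menu": [0.1, 0.8, 0.3],
}
query = [0.8, 0.2, 0.1]  # pretend embedding of "how do I approve an invoice?"

# Retrieve the document whose embedding is most similar to the query.
best = max(knowledge_base, key=lambda doc: cosine_similarity(query, knowledge_base[doc]))
print(best)  # the invoice document wins, because its vector points the same way
```

This similarity ranking is what lets an LLM find semantically related text even when the exact words differ.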

Emergent abilities 

Unintended or unexpected patterns, behaviors, or capabilities that arise from the interaction of complex AI systems like generative AI models. 

Fine-tuning

Fine-tuning large language models (LLMs) is the resource-intensive process of customizing a pre-trained language model on specific tasks or datasets to make it more proficient and accurate in generating relevant text.

Foundation models  

Deep learning models trained on large volumes of unlabeled data using self-supervised learning. Applicable to a wide range of tasks.

Generative adversarial networks (GANs) 

Machine learning model with two neural networks that compete with each other to make better predictions. 

Generative AI 

A form of artificial intelligence that, when instructed by the user, can create novel content based on its training data, including text, images, sound, or video.

GPT (generative pre-trained transformer) 

A type of generative AI model that utilizes transformer architecture for tasks like language generation. 

Grounding

Limiting the scope of generative AI models by connecting the generated content with specific real-world data or references to ensure the generated outputs align with the intended purpose. 

Guided prompt 

Settings and options that the user selects to create precise instructions for the generative AI model.

Hallucinations

Generated AI outputs that sound plausible but are either false or unrelated to the given context, making them difficult for humans to detect as errors.

Hidden prompt

An instruction that guides the language model without being visible to the user, like a puppeteer behind the scenes.

Instructions

Instructions given to a generative AI model to guide its behavior or output in a desired direction. 

Model parameters

The internal settings or variables of a generative AI model that control its behavior, output, and learning process. 

Probabilistic scenarios

Situations with multiple possible outcomes, each having varying degrees of certainty. 

Prompt

An instruction that users give to generative AI models to guide their output.

Prompt engineering 

The process of designing and refining instructions to guide the behavior and output of generative AI models. 

Quick prompt 

Presets provided by the system and expertly crafted by prompt engineers, eliminating the need for users to write their own prompts.

Response filtering 

Selecting or filtering generated responses from a generative AI model based on specific criteria or quality measures. 

Retrieval augmented generation (RAG) 

A model retrieves relevant information from a pre-existing dataset or knowledge source to generate more accurate and contextually appropriate outputs. 
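A minimal sketch of the idea in Python, with simple word overlap standing in for a real retriever and a prompt string standing in for the generation step. The documents and question are invented for illustration:

```python
def retrieve(question, documents):
    """Pick the document sharing the most words with the question (toy retrieval step)."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question, documents):
    """Augment the question with retrieved context before sending it to the model."""
    context = retrieve(question, documents)
    return f"Context: {context}\nQuestion: {question}\nAnswer using only the context."

documents = [
    "Travel expenses must be submitted within 30 days.",
    "The cafeteria opens at 8 a.m. on weekdays.",
]
prompt = build_prompt("When must travel expenses be submitted?", documents)
print(prompt)
```

Real RAG systems retrieve by embedding similarity rather than word overlap, but the shape is the same: fetch relevant knowledge first, then let the model generate from it.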

Style transfer  

A technique that applies the visual style of one image to another, combining content with specific aesthetic characteristics. 

Transformer model 

A neural network architecture that learns context and relationships in sequential data. It enables generation of new content, such as text or images, based on patterns and examples provided during training. 

Machine Learning (ML)

Deep learning 

Machine learning that uses neural networks with three or more layers. Most suited to processing massive quantities of data.

Machine learning (ML) 

Subset of AI that enables systems to learn from data and improve without explicit programming. 

Machine learning model 

A computer program made of algorithms and mathematical equations. It can learn independently by recognizing patterns in data. 

Neural networks 

Computing systems inspired by the structure of the human brain. They are made of layers of algebraic equations called artificial neurons (or nodes); the first layer receives the data, and the last outputs the results. 
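The layered structure can be sketched in a few lines of Python. The weights and biases below are made up for illustration; a real network would learn them from data during training:

```python
def relu(x):
    """A common activation function: pass positive values through, zero out negatives."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One layer: each neuron computes a weighted sum of its inputs plus a bias."""
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny network: 2 inputs -> 2 hidden neurons -> 1 output neuron.
inputs = [1.0, 2.0]
hidden = layer(inputs, weights=[[0.5, -0.2], [0.3, 0.8]], biases=[0.1, -0.1])
output = layer(hidden, weights=[[1.0, 0.5]], biases=[0.0])
print(output)
```

The first layer receives the data, the last outputs the result; deep learning simply stacks many such layers.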

Reinforcement learning 

The system learns by being placed into an environment where it figures out what is possible and what’s not through experience and reward, without human involvement. 

Supervised learning  

The machine learns from pairs of data, inputs and outputs labeled by humans, and uses these examples to predict the correct output for new, unseen inputs.
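A toy illustration in Python, echoing the apple-and-pear example from the human-in-the-loop entry. The labeled pairs are invented, and the “learning” here is simply looking up the nearest human-labeled example (a one-nearest-neighbor classifier):

```python
def predict(value, labeled_examples):
    """Label a new input by finding the closest human-labeled example."""
    closest = min(labeled_examples, key=lambda pair: abs(pair[0] - value))
    return closest[1]

# Human-labeled pairs: (fruit weight in grams, label).
labeled_examples = [(150, "apple"), (160, "apple"), (120, "pear"), (115, "pear")]

print(predict(155, labeled_examples))  # -> apple
print(predict(118, labeled_examples))  # -> pear
```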

Unsupervised learning 

Data isn’t labeled in unsupervised learning. The system studies the dataset, looks for patterns, and suggests how to group things. 

Natural Language Processing (NLP)

ChatGPT

An AI language model developed by OpenAI that focuses on generating conversational responses. 

Large language models (LLMs) 

A subcategory of foundation models that learn to predict the next word in a text by analyzing vast amounts of text available on the internet.
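Next-word prediction can be sketched with a toy bigram counter. The corpus below is invented, and a real LLM learns from billions of words with a far richer model, but the principle is the same: predict the most likely next word based on what was seen during training.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which -- a drastically simplified 'language model'."""
    words = text.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Predict the word most frequently seen after the given word."""
    return model[word].most_common(1)[0][0]

corpus = "the order is approved the order is approved the invoice is paid"
model = train_bigrams(corpus)
print(predict_next(model, "is"))   # "approved" follows "is" most often
print(predict_next(model, "the"))  # "order" follows "the" most often
```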

Natural language processing (NLP)

The field of AI concerned with understanding and processing human language, including tasks like speech recognition, text analysis, sentiment analysis, and natural language generation. 

Sentiment analysis 

Determining the sentiment in a text and classifying it as positive, negative, or neutral.
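A minimal lexicon-based sketch of the classification step; the word lists are invented and far smaller than anything a real system would use:

```python
# Hypothetical mini word lists; real systems learn these from large labeled datasets.
POSITIVE = {"great", "fast", "helpful", "love"}
NEGATIVE = {"slow", "broken", "confusing", "hate"}

def sentiment(text):
    """Classify text by counting positive vs. negative words (toy lexicon approach)."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("the new dashboard is great and fast"))     # -> positive
print(sentiment("the export feature is slow and confusing"))  # -> negative
```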