In the first part of this two-part blog, we saw that design principles and digital ethics help to build trust in AI. A good question is: In which areas will we find those ethical challenges? A “checklist” would be helpful to identify the most relevant pitfalls. Such a list must be defined for individual AI tasks. It must distinguish between planning, implementation, operation, and the procedures needed to end an AI system. Most of the following questions apply to more than one of these phases and should be useful for forming individual, AI-specific checklists.

Let’s start with some questions about human involvement in AI-driven processes.

Human involvement

  • List the user roles involved and state their individual ethical requirements.
  • In which contexts will the users work? Define parameters that describe these contexts.
  • How is the current state of a user evaluated and used (e.g., traveling, stressed)?
  • What is the cultural environment of the user? Which ethical and social values do the users share?
  • Will there be situations in which the user needs to or wants to completely or partly turn off AI functions (“escape door”, see the sketch after this list)?
  • If explicit user feedback is requested, how will it be evaluated? How will it be used for machine learning?
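
As an illustration of such an “escape door”, the sketch below shows one possible way to let a user partly or completely switch off AI functions at runtime. It is only a minimal sketch; the AssistantConfig class and the feature names are hypothetical and not taken from any specific product.

```python
from dataclasses import dataclass, field


# Hypothetical per-user "escape door": the set of AI features the user may switch off.
@dataclass
class AssistantConfig:
    enabled_features: set = field(
        default_factory=lambda: {"recommendations", "auto_classification", "chat_assist"}
    )

    def disable(self, feature: str) -> None:
        """Partly switch off AI: remove a single feature."""
        self.enabled_features.discard(feature)

    def disable_all(self) -> None:
        """Completely switch off AI functions for this user."""
        self.enabled_features.clear()

    def is_active(self, feature: str) -> bool:
        return feature in self.enabled_features


# Usage: a stressed or traveling user turns off recommendations only.
config = AssistantConfig()
config.disable("recommendations")
print(config.is_active("recommendations"))  # False
print(config.is_active("chat_assist"))      # True
```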

One requirement for business AI is transparency: the system should be able to explain why a certain result was reached. This can be challenging, or even impossible, for some AI technologies such as neural networks. Here are some checklist items concerning technological boundaries.

Algorithms & boundary conditions

  • What will the machine learning process look like? How can “learn and forget” processes be monitored (e.g., in case of outdated data or legal deletion obligations)? How will humans be involved here (e.g., in the form of an “AI auditor”)?
  • What are the rules for AI “learn and forget”? How can they be customized, and by whom?
  • How can the quality of information used for learning be monitored?
  • How will the machine evaluate when a human must be involved in a decision or recommendation?
  • What are the limits for decisions or proposals created by the system (process and data perspectives)?
  • In which cases is a process comparable to the four-eyes principle, or another trust-building measure, required?
  • Can computational results (and how they were achieved) be understood by humans (“transparency”)? Is this possible immediately, or only after additional actions, and which ones?
  • Can computational results be reproduced (both without and after additional learning steps)? If not, how will the users be informed about this?
  • In which process steps are “escape doors” required or reasonable? To what extent should they stop AI functions? How can they be tested?
  • Is the data used for learning free of bias? If not, state the sources of bias. Can the level of bias be analyzed and quantified (see the sketch after this list)?
  • To what extent does bias affect the system’s computational results?
  • How can the level of bias be shown to the user or other systems using the results?
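
To make the bias questions more tangible, here is a minimal sketch of one way the level of bias in training data or in the system’s decisions could be quantified, using the demographic parity difference across groups. The column names, the example data, and the interpretation are assumptions for illustration; a real project would pick metrics and groups that fit its domain.

```python
from collections import defaultdict


def demographic_parity_difference(records, group_key, outcome_key):
    """Difference in positive-outcome rates across groups (0 = no measured disparity)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        positives[record[group_key]] += record[outcome_key]
    rates = [positives[group] / totals[group] for group in totals]
    return max(rates) - min(rates)


# Illustrative (hypothetical) data: loan decisions per applicant group.
data = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

gap = demographic_parity_difference(data, "group", "approved")
print(f"Demographic parity difference: {gap:.2f}")
# Prints 0.33 here: group A is approved twice as often as group B.
```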

Legal compliance and data security are topics strongly related to digital ethics. Let’s look at some fundamental questions without going into details.

Compliance and security of learning systems

  • Which legal compliance topics must be covered (including data privacy)? Which internal company policies are applicable? How can compliance be monitored during development and operations?
  • Self-regulation of AI providers: Are the AI development plans compliant with the company’s own standards?
  • Could there be liability issues resulting from AI-made decisions? In which situations?
  • Would it be possible to teach the system the wrong things intentionally? How can security breaches, including incorrect teaching, be uncovered?
  • How can the implementation of backdoors be avoided and detected?
  • How can potential hidden override directives be unveiled?
  • Will user behavior be monitored or logged? How will such data be secured (see the sketch after this list)?
  • In which cases must the system’s lifetime be ended? What happens to the learned information when its lifetime ends?
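
One possible answer to the question of securing monitored user behavior is a tamper-evident log in which every entry is chained to the previous one by a hash, so that later manipulation of earlier entries becomes detectable. The sketch below only illustrates the idea; the class and field names are assumptions, and a production system would additionally need access control, encryption, and retention rules.

```python
import hashlib
import json
import time


class AuditLog:
    """Minimal tamper-evident log: each entry stores a hash covering the previous one."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the first entry

    def record(self, user: str, action: str) -> None:
        entry = {"timestamp": time.time(), "user": user,
                 "action": action, "prev_hash": self._last_hash}
        self._last_hash = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain and check that no stored entry was altered."""
        prev = "0" * 64
        for entry in self.entries:
            payload = {key: value for key, value in entry.items() if key != "hash"}
            if entry["prev_hash"] != prev:
                return False
            prev = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
            if entry["hash"] != prev:
                return False
        return True


log = AuditLog()
log.record("alice", "accepted the AI proposal")
log.record("bob", "used the escape door and overrode the AI decision")
print(log.verify())  # True as long as no entry was modified afterwards
```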

Following Amara’s law, we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. Therefore, it makes sense to take a look at potential long-term ethical questions.

Potential long-term impact

  • How can a long-term risk assessment be done? What will the impact on society be, if any, when tasks are automated on a large scale?
  • If the system is widely used, would humanity lose knowledge or capabilities?
  • How can “behavioral changes” of the AI system be detected (see the sketch after this list)?
  • How can unintended or irrational computational results be monitored – and be avoided or filtered?
  • How can we detect if the system starts working independently of any human-defined task? How will we react in such a case?
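
One way to watch for such “behavioral changes” is to compare the system’s recent output distribution with a reference period and alert when the shift exceeds a threshold. The sketch below uses the population stability index (PSI) over decision categories; the example data and the 0.2 threshold (a commonly cited rule of thumb) are assumptions for illustration.

```python
import math
from collections import Counter


def population_stability_index(reference, current):
    """Compare two categorical output distributions; larger values mean larger shifts."""
    categories = set(reference) | set(current)
    ref_counts, cur_counts = Counter(reference), Counter(current)
    psi = 0.0
    for category in categories:
        # A small floor avoids division by zero for categories unseen in one sample.
        ref_share = max(ref_counts[category] / len(reference), 1e-6)
        cur_share = max(cur_counts[category] / len(current), 1e-6)
        psi += (cur_share - ref_share) * math.log(cur_share / ref_share)
    return psi


# Illustrative decisions from a reference month and from the current week (hypothetical).
reference_outputs = ["approve"] * 70 + ["reject"] * 30
current_outputs = ["approve"] * 45 + ["reject"] * 55

psi = population_stability_index(reference_outputs, current_outputs)
print(f"PSI = {psi:.3f}")
if psi > 0.2:
    print("Behavioral change detected: trigger a human review, e.g., by an AI auditor.")
```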

Running through the checklist helps to identify the requirements for a business system that acts in a sustainably ethical way. The most important checkpoint has not been mentioned yet: the list must be reviewed and updated on a regular basis. As AI systems become more intelligent, the checklist items will probably have to become more granular.

Upcoming technological steps must be foreseen so that AI does not develop faster than the human ability to define and implement its ethical foundation. We have already learned that concepts such as bias handling, escape doors, and AI auditors are required. Most probably, many more pitfalls and challenges are waiting out there before we can add a real “ethical conscience” to what is called artificial intelligence.
