In my first blog in this series, I discussed why digital ethics matter. Here I will address why it will be a long and difficult road to define digital ethics that ensure quality of life for humans, regardless of the role AI plays in the future.

Digital ethics are not only required for questions of life and death (such as when a self-driving car cannot avoid an accident and must choose which of two people to run over). The need for ethics beyond legislation is already here. There have been cases of children ordering absurd things using home assistants. If such “accidental” purchases are extraordinarily expensive, shouldn’t someone or something verify the request with the account owner first? Should the AI behind an online poker platform tell a user after a series of losses, “OK, that’s enough!”? Many more situations will require ethical attention as soon as we have robot companions – especially for the elderly and children. How should an AI react when a sick person refuses urgently needed medication?

Nobody knows the degree to which AI will endow machines with intelligence and self-awareness. If AI can develop its own infrastructure, sustainable digital ethics will be required as the foundation of its community – to avoid a catastrophic outcome and ensure a future worth living… at least for humans. From a holistic point of view, three complementary ethical frameworks must exist in an AI-permeated world: one for human society, one for the coexistence and interaction of AI-driven machines and humans, and one for a potential AI society.

Today, we are far from a single global understanding of ethics even just among humans. Different countries have very different views on the death penalty, euthanasia, gender equality and children’s rights. In an accident at sea, many of us would expect to hear “Women and children first!” So our culture seems to accept that different lives have different value. Life insurers have their own views on the topic – and plenty of calculation models. The more humans agree on a universal ethical framework, the simpler it will be for developers to create intelligent machines that can deal with ethical dilemmas.

Developers have already started to create AI focused on specialized tasks that work in very narrow contexts. An apparently simple question reveals the problem with such “single-context” AI: Ten birds sit on a fence. You shoot one. How many are left? It is trivial for an AI to calculate nine. However, there is more than simple math going on here. First, a shot is loud; second, birds fly away when they hear an unexpectedly loud noise. A “multi-context” AI would therefore come to a different answer: none.
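To make the difference concrete, here is a toy sketch in Python (entirely hypothetical code, not from any real system): the single-context function knows only arithmetic, while the multi-context function also draws on two facts from other contexts.

```python
# Toy sketch of the birds riddle: single-context vs. multi-context reasoning.
# All names are hypothetical; this is an illustration, not a real AI.

def birds_left_single_context(on_fence: int, shot: int) -> int:
    # Knows only arithmetic: 10 - 1 = 9.
    return on_fence - shot

def birds_left_multi_context(on_fence: int, shot: int) -> int:
    # Also knows two facts from other contexts:
    # a gunshot is loud, and birds fly away from loud noises.
    a_shot_was_fired = shot > 0
    if a_shot_was_fired:
        return 0  # the surviving birds have flown away
    return on_fence - shot

print(birds_left_single_context(10, 1))  # -> 9
print(birds_left_multi_context(10, 1))   # -> 0
```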

If a “single-context” AI has well-defined tasks, it is easy to add task-specific routines that end in a simulation of human-style ethical behavior for certain foreseeable situations. But merely collecting such “island” ethics will not produce a holistic framework of digital ethics. As the birds example shows, aspects from several contexts must be combined to reach reasonable results.

Aiming instead for a “multi-context” ethical framework would allow us to carve out specific topics and assign them to specialized AI. Only with an overarching ethical framework – an “AI conscience,” so to speak – can a specialized AI come to reasonable decisions when the situational context is larger than the one covered by its individual program.
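As a rough sketch of that architecture (all names and logic hypothetical), one could imagine each specialized AI deciding inside its own context and escalating to a shared “conscience” layer whenever a situation spills beyond it:

```python
# Hypothetical sketch: specialized AIs defer to an overarching "AI conscience"
# whenever a situation exceeds the context they were built for.

class AIConscience:
    """Overarching ethical reference shared by all specialized AIs."""
    def evaluate(self, contexts: set, description: str) -> str:
        # Placeholder: a real framework would weigh cross-context values
        # such as sustainability and respect for life.
        return f"escalated to shared framework: {description}"

class SpecializedAI:
    def __init__(self, own_contexts: set, conscience: AIConscience):
        self.own_contexts = own_contexts
        self.conscience = conscience

    def decide(self, contexts: set, description: str) -> str:
        if contexts <= self.own_contexts:  # situation fits the AI's own context
            return f"decided locally: {description}"
        # Situation is larger than the individual program covers: defer upward.
        return self.conscience.evaluate(contexts, description)

conscience = AIConscience()
poker_ai = SpecializedAI({"poker"}, conscience)
print(poker_ai.decide({"poker"}, "choose the optimal bet size"))
print(poker_ai.decide({"poker", "player_welfare"}, "user keeps losing; intervene?"))
```

The point of the sketch is only the escalation path: the poker AI can optimize bets on its own, but a question touching player welfare lies outside its “island” and must be referred to the shared framework.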

Let’s face it: once we get to the point where machines can out-think us, they might not want us around anymore. As ludicrous as it might seem today, we urgently need to take steps now to address the potential of our own extinction by the AI Superintelligence some are in the process of creating. That’s why using AI mainly for stock trading, cyberattacks or autonomous weapons is probably not a good starting point for a holistic “AI conscience.” The same goes for the political interest that some leaders take in AI. Vladimir Putin stated in September 2017, “Whoever becomes the leader in this sphere will become the ruler of the world.” Avoiding the mindset of AI as a tool to gain power is crucial for the future of humanity. With growing intelligence and responsibilities, an AI made to rule – and above all to win – will not be interested in mutually beneficial collaboration with humans.

An AI conscience must serve as the ethical reference system for a post-singularity Superintelligence or society of intelligent machines – if that becomes reality. Ethics purely for intelligent machines would need additional considerations because a “machine society” would be inherently different from human society. For instance, immortality and the ability to instantly create clones would likely influence the value a machine places on its own existence. Social behavior based on family or leisure activities would not apply. Inspiration from religion or philosophy would not exist. In fact, the thinking patterns of a new intelligence would be completely different from any human thought. We should never forget that, especially when movies paint a picture of human-like intelligent robots. The “intelligent and self-driving” car KITT from the 1982 TV series Knight Rider surely told the truth when it said: “Companionship does not compute.”

To combine all three ethical frameworks – for humans, for intelligent machines and for human-machine interaction – and to develop them step by step, a set of congruent basic values will be needed and must be discussed early and often. Sustainability, respect for life, and the striving for knowledge must be at the forefront. Breaking such terms down into guidelines will not be easy and will challenge people’s willingness to change established points of view. We will need ethical and legal concepts that are more flexible than today’s definitions. One example is the protection of the rights of all sentient beings: Self-aware AI would require a rule set similar to the “Universal Declaration of Human Rights” – coexisting with it and derived from a higher-level regulation like a “Universal Declaration of the Rights of Sentient Beings.”

Another good question: Will intelligent machines (intentionally) breach such rules? At first glance, this looks like a ridiculous question. But sticking to rules slows down innovation, and our human world is not free of contradictory rules. So what should happen in such cases? Will we need to “punish” AI? What form should that take? Punishment works on people because it restricts access to things they want and need, like freedom of movement, money or food. The final challenge in defining sustainable digital ethics might be to establish a set of “machine needs” that serves as a foundation for existential goals – and as a touchstone for ethical behavior.

Humans quite often express their needs based on emotions and intuition – both of which reflect our ethical values. One characteristic of human emotions is that they change over time and thereby support a variety of decisions, which strengthens the ability to innovate. At least for the first generation of AI, features like emotions may not be relevant, but we should not be too quick to claim “intuition” as a purely human capability. After AlphaZero defeated the previously strongest chess-playing program, Garry Kasparov remarked that it used a human-like approach instead of the brute-force strategies of earlier systems. Demis Hassabis commented, “It doesn’t play like a human, and it doesn’t play like a program. It plays in a third, almost alien, way.”

AlphaZero is the first system to show something comparable to intuition. So it looks like that capability can no longer be reserved for sentient beings. On the other hand, AI capable of intuition or even emotions would ease interaction with humans and drive further innovation. Giving AI such human “characteristics” can be helpful because it brings humans and AI closer together. It sounds far out, but this could be a foundation for the integration and mutual development of humans and AI.

In my next blog, “A Step-by-Step Approach to Digital Ethics,” I will discuss how we can move, step by step, toward digital ethics that ensure a mutually beneficial coexistence of humans and AI.

Many thanks to Esther Blankenship for reviewing and editing the texts in this series.
