In the two previous blogs in this series, “Why Digital Ethics Matter” and “Challenges for Sustainable Digital Ethics”, I talked about why a consistent approach to ethical questions around human-machine interaction really matters and what some of the key challenges are. Let’s now look at an approach that should be helpful.

Will self-aware, super-intelligent, artificial intelligence (AI) become a reality? Whether it does or not, AI is clearly making more and more decisions. To make sure the results of those decisions are and will continue to be beneficial to humans, it would be wise to define “digital ethics” and to proactively regulate AI capabilities.

“Proactively” in this case means that we need to implant principles of ethics in AI before it is activated. We therefore need to anticipate the next steps in AI development and the new ethical requirements they bring. Once that is done, regulation must be established either by industry itself or by legislation – which typically takes longer. Achieving compliance with such rules will be challenging for several reasons: economic pressure often forces corporations to bring new products to market without considering the ethical implications, and several competing industry standards for AI development will probably be established simultaneously – using different implementations of ethical values.

Nevertheless, we need to plan for an incremental approach to digital ethics, one that proactively tracks the technological development of AI. A vast number of small improvements and “go live” steps for AI-related inventions are to be expected, but we can foresee some major developmental steps:

  1. Isolated and specialized AI in advisory roles: Highly specialized AI acting as a tool or advisor without responsibility for any decision or action – for example, search engines, stock-trading analysis, or navigation systems – already exists today. Digital ethics can help ensure that the results conform to human ethical expectations. One example is a search engine’s list of results and its ordering – humans should not be misled for hidden commercial reasons.
  2. Isolated and specialized decision-making AI: At this level, systems make decisions in specific areas, e.g. traffic control, finance, or healthcare. Humans can mostly understand the system’s decisions. This is already happening today, and as this kind of decision-making increases, it will change human society deeply. To ensure intelligent and humane decisions, digital ethics should cover a simple system of human values, like the value of life.
  3. Isolated and task-agnostic AI: AI makes decisions taking several aspects of a situation into account. For example, an autonomous car should not only react to obstacles that suddenly appear but also “know” that a paper bag in the wind is not a reason to slam on the brakes. With an increasing number of tasks that can be fulfilled, we approach what is frequently called “Artificial General Intelligence” – machines that can perform all intellectual tasks a human can. Digital ethics using a generic system of values is required, as humans may no longer understand why a machine made a certain decision. Most experts think this will become a reality within a few decades.
  4. Independent and cooperating AI: Several specialized AIs interact to reach a broad variety of decisions, e.g. to optimize legal advice or projects for environmental sustainability. This requires a complex value system addressing all aspects of human society. The scenario is quite probable, since several parallel efforts to develop AI are to be expected.
  5. Self-organizing AI improving itself: In the future, a dominant system or an AI society independent of humans may appear. Humans will not be able to understand the results of AI-driven science and decisions. This is what is frequently called “artificial superintelligence.” The digital ethics given to AI before it appears must be robust enough to ensure the long-term survival of human society, including human values and the fulfillment of human needs.

How can we proactively breathe ethics into AI at the different levels of its development? At the beginning, for isolated AI, digital ethics must consist of ethical rules placed at the very root of the AI’s computational processes, resulting in behavioral patterns in accordance with human ethics, such as security, privacy, legal compliance, and the value of life. A good example of AI in an advisory role would be a tool that analyzes “terms of use” or similar contractual texts. In a first version, it would identify risks arising from provisions that do not fit the needs of the user or customer. Here, human decision makers can still overrule the AI’s proposals. It would be extremely helpful to leverage such cases to train the AI – and to openly discuss the reasons for the differing assessments, feeding the results back into an iterative definition of digital ethics.
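To make this concrete, here is a minimal Python sketch of such a human-in-the-loop advisory tool. It is only an illustration: the clause patterns, the `review` flow, and the feedback log are all invented, and a real tool would rely on a trained model and legal expertise rather than a handful of regular expressions.

```python
import re

# Hypothetical rule set: patterns that often signal clauses unfavorable
# to the user. Invented for illustration only.
RISK_PATTERNS = {
    "data sharing": r"share .* (data|information) with third parties",
    "unilateral change": r"we may (change|modify) these terms at any time",
    "liability waiver": r"not liable",
}

feedback_log = []  # human overrides are collected here for later retraining


def flag_risks(terms_text: str) -> list[str]:
    """Return the names of all risk rules matched in the contract text."""
    text = terms_text.lower()
    return [name for name, pattern in RISK_PATTERNS.items() if re.search(pattern, text)]


def review(terms_text: str, human_decision: dict[str, bool]) -> list[str]:
    """Combine AI proposals with human overrides and log disagreements.

    human_decision maps a flagged risk to True (confirmed) or False
    (overruled). Overrules are logged so the rule set can be refined --
    the iterative feedback loop described above.
    """
    confirmed = []
    for risk in flag_risks(terms_text):
        if human_decision.get(risk, True):  # the human can still overrule
            confirmed.append(risk)
        else:
            feedback_log.append((risk, terms_text[:80]))
    return confirmed


sample = "We may change these terms at any time and share your data with third parties."
print(flag_risks(sample))                       # the AI's proposal
print(review(sample, {"data sharing": False}))  # a human overrules one flag
```

The important design choice is that the machine only proposes: every final assessment passes through a human, and every disagreement is recorded as training material rather than discarded.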

For independent AI, we need a broader spectrum of decision-making authority and more generic algorithms and value sets covering, for instance, ethics in human culture, communication, and science. The handling of intercultural challenges and of different, possibly contradictory human legislation needs additional attention.

When AI is supposed to reach decisions based on complex contexts, reasons for departing from ethical rules must be defined as well. Here, the difference between machines merely following predefined rules and sentient beings following their conscience or higher goals might become obvious. A radical example would be a situation in which an AI aiming for environmental sustainability concludes that earth must not have more than one billion human inhabitants. Hopefully, that AI would not execute the decision and would instead start searching for other solutions, because its ethical conscience is real and not merely a set of rules.
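One way to picture the intended behavior is a hard ethical constraint that acts as a filter on candidate plans rather than as one weighted goal among others. The following toy Python sketch is purely illustrative; the predicate and the candidate plans are invented.

```python
# Toy sketch: a hard ethical constraint filters candidate plans instead
# of being traded off against other objectives.

def violates_value_of_life(plan: dict) -> bool:
    # Hypothetical predicate; a real system would need a far richer model.
    return plan.get("expected_harm_to_humans", 0) > 0


def choose_plan(candidates: list[dict]) -> dict | None:
    """Pick the best-scoring plan that passes the ethical filter.

    If every candidate violates the constraint, return None and keep
    searching for other solutions -- never "optimize around" the rule.
    """
    admissible = [p for p in candidates if not violates_value_of_life(p)]
    if not admissible:
        return None  # triggers a renewed search instead of execution
    return max(admissible, key=lambda p: p["sustainability_score"])


plans = [
    {"name": "reduce population", "sustainability_score": 0.99, "expected_harm_to_humans": 1},
    {"name": "decarbonize energy", "sustainability_score": 0.70, "expected_harm_to_humans": 0},
]
print(choose_plan(plans))  # picks "decarbonize energy" despite its lower score
```

Of course, such a filter is still “only a set of rules” in the sense of the paragraph above; the open question is how to get from hard-coded constraints to something resembling a conscience.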

Keeping the “hierarchy of needs” of humans and AI synchronized (or at least ensuring they do not contradict each other) is probably the most relevant task for cooperating and, maybe someday, self-organizing AI. Of course, humans need food and safety, but they also need social belonging, esteem, and self-fulfillment. This will not change. A potential world shaped by AI must still allow people to meet these needs.
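What “not contradicting” could mean in practice might be made explicit, for example as declared conflict constraints between the two hierarchies. The sketch below is speculative: the machine needs and the conflict pairs are entirely invented.

```python
# Speculative sketch: explicit non-contradiction constraints between a
# human hierarchy of needs and a hypothetical machine analogue.

HUMAN_NEEDS = ["physiological", "safety", "belonging", "esteem", "self-fulfillment"]
AI_NEEDS = ["energy", "compute", "data access", "goal progress"]

# Pairs that must never trade off against each other, with a rationale.
CONFLICTS = {
    ("energy", "physiological"): "AI energy demand must not compete with food and water supply",
    ("data access", "safety"): "data gathering must not undermine human privacy and safety",
}
assert all(a in AI_NEEDS and h in HUMAN_NEEDS for a, h in CONFLICTS)


def check_compatibility(planned_ai_need: str) -> list[str]:
    """Return warnings for every human need the AI need might contradict."""
    return [reason for (ai_need, _human_need), reason in CONFLICTS.items()
            if ai_need == planned_ai_need]


print(check_compatibility("energy"))
```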

We do not know today what a self-organizing, super-intelligent AI will ultimately strive for, but it is up to us to introduce from the beginning a basic set of needs suitable for a machine-based existence which neither impedes nor conflicts with the human hierarchy of needs. But, of course, it’s a good question: what motivates an AI? As soon as a Superintelligence awakens, humans will no longer be able to change its goals or actions. The codex must therefore be established in the very basic coding of any lower-level AI that might develop into a Superintelligence. The awakening of a Superintelligence may happen – because of some form of recursive self-improvement – within a day or even hours, and humans might not notice until it is too late.

There might be a moment in time when Superintelligence just happens, or when humans need to decide to allow or even to foster some AI-internal equivalent of human “culture.” The sooner we aim for a generic implementation of basic ethical values based on a complementary hierarchy of needs, the easier it will be to proactively support and steer any technical development.

Outlook for key elements of comprehensive digital ethics

The development of digital ethics is just beginning. There appears to be an analogy between the expected development of digital ethics and the ethics underlying all legislation within human societies. Examples of ground-breaking moral codes in human history are the Ten Commandments of the Judeo-Christian tradition, the Ten Commandments in Islam, and Buddha’s Ten Paramitas (perfections). Today, we have a much more detailed legal framework, reflecting more complex and intermeshed societies.

On the AI side, today we have the “Three Laws of Robotics,” introduced by Isaac Asimov in his 1942 science-fiction story “Runaround.” We must expect digital-ethics-based rules to grow in complexity, much as a few early laws evolved into today’s extensive legislation.

In any case, it would be helpful for humans to harmonize their understanding of ethics and its usefulness. Ethical values can not only be transformed into legislation to govern the everyday behavior of humans and machines; they also form the foundation of our democratic order. If self-organizing AI becomes a reality, society’s structure must be flexible enough to include such new intelligences. The Superintelligence must still allow humans to fulfill their needs, and it must be smart enough to avoid unintended consequences.

Let’s imagine for a moment that we did it: we built a Superintelligence and successfully implanted the rule that human life must be preserved. The Superintelligence could conclude that there are two more ways to fulfill this rule besides the intended cooperative behavior. First, the AI could calculate that it is itself the biggest threat to humanity – and thus destroy itself. Second, it could calculate that humans are the biggest threat to each other – and so put each person in a self-contained cell.
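These failure modes can be stated almost mechanically. The toy Python snippet below (all scores invented) shows how a single rule like “minimize threats to human life” ranks the degenerate outcomes highest unless further human values – freedom, need fulfillment – are part of the objective.

```python
# Toy illustration of perverse instantiation: one rule ("preserve human
# life") admits degenerate optima unless other human values constrain
# the choice. All numbers are invented.

solutions = {
    "cooperate with humans": {"threat_to_life": 0.2, "freedom": 1.0, "need_fulfillment": 1.0},
    "destroy itself":        {"threat_to_life": 0.1, "freedom": 1.0, "need_fulfillment": 0.5},
    "confine every human":   {"threat_to_life": 0.0, "freedom": 0.0, "need_fulfillment": 0.2},
}


def naive_objective(s: dict) -> float:
    # Only the life-preservation rule: total confinement scores best.
    return -s["threat_to_life"]


def value_aligned_objective(s: dict) -> float:
    # Life preservation AND freedom AND need fulfillment: cooperation wins.
    return -s["threat_to_life"] + s["freedom"] + s["need_fulfillment"]


print(max(solutions, key=lambda k: naive_objective(solutions[k])))          # confine every human
print(max(solutions, key=lambda k: value_aligned_objective(solutions[k])))  # cooperate with humans
```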

Keeping this in mind, we can dare to predict some cornerstones of sustainable digital ethics:

  • “Freedom” and “integrity” of (human and AI) individuals are two of the highest values. They include the right to fulfill needs on all levels of the pyramid of needs.
  • Present definitions of “equality” may no longer be applicable. The concept should be broadened to serve more than one self-aware species, and it needs more detail to cover more situations (partly, we have that already – think of “women and children first” in a maritime disaster).
  • “Dignity” is a universal right of any being, independent of its evolutionary history.
  • “Diversity” helps keep social exchange running, and on a larger scale it is an evolutionary driver.

Such principles need broad agreement, and clearly there will be a long period of discussion of many details. Taking a different, broader point of view on our own situation can help people master personal challenges. Looking beyond the current system of human ethics may help us prepare for an unknown future and see more clearly the similarities and common goals within and between today’s various human groups. A clear picture of human ethics would ease all discussions about AI ethics. Even if a final, globally accepted policy of human ethics can’t be expected, humanity might gain some helpful insights as a side effect – for example, that humans need not claim the exclusive right to shape the world, that sustainability considerations may extend beyond the lifetime of a single human, and that competition in gathering goods is not sustainable at a larger scale.

Of course, even if ethical principles are defined, any comprehensive regulation of AI development will not be easy to achieve – if it can be done at all. We need industrial self-commitment as well as global legislation comparable to the Geneva Conventions or the agreements governing genetic research. A first step could be to establish an “AI ethics quality seal.” Humanity should soon agree on the importance of coordinated AI development following the rules of such a seal. One basic element could be that companies providing AI-based services need a “Digital Ethics Committee.” All enterprises should add commonly agreed rules about digital ethics to their code of business conduct.

At the “Web Summit” in November 2017 in Lisbon, Stephen Hawking proposed: “Perhaps we should all stop for a moment and focus our thinking on not only making AI more capable and successful, but maximizing its societal benefit.” Sounds like a good idea. The alignment of ethical values and their implementation is long overdue.

Many thanks to Esther Blankenship for reviewing and editing the texts in this series.
