This series of three blogs invites you to explore a future coexistence of people and intelligent machines – and pinpoints what we need to do to make sure AI benefits, rather than harms, humanity. This first part discusses the importance of “digital ethics,” the second covers the challenges we need to address before sustainable digital ethics can be defined, and the third presents a potential approach for defining them.

Artificial intelligence (AI) is both exciting and unsettling. While technology experts and movie producers are excited by AI’s potential, many others are unsettled by it because it cracks open a door to an unimaginable future. Will this technology propel us into a Terminator-like apocalypse or send us on a Star Trek-like journey?

What changes will AI bring to our minds and bodies? What effect will it have on our relationships with each other, our interaction with machines and the state of the environment? How radically different will our children’s lives be when they are adults compared to our lives now? Will our kids scoff at us in 15 years for not technologically enhancing our brains with AI, the way many of us shake our heads today at our parents’ ineptitude with computers?

Shaping the future for the best possible outcome is why defining digital ethics is an imperative for us today. In two follow-up blogs I will talk about the challenges involved and a potential step-by-step approach.

The unwritten future of AI

Let’s first consider some positive effects of AI: Self-driving cars mean less time driving in traffic; digital assistants relieve us of the need to sift through piles of paper to find one important document; robots can take over heavy physical work; and powerful new tools allow scientists to gain helpful and important insights from large amounts of IoT or marketing data.

Then there is the scarier side: automation means the loss of jobs as we know them. In contrast to similar upheavals in the labor market during the first industrial revolution, this time white-collar workers will be hit as well. For the moment, let’s assume our society can balance that out with new kinds of jobs, working just a few hours a week, or a universal basic income. But this is not the most important reason why the likes of Stephen Hawking and Elon Musk have warned against the incalculable risks that AI could pose to humans. Some philosophical and macro-evolutionary reflections reveal that there are indeed more fundamental risks: humans should watch out. It looks like it is up to the current generation of technology providers and politicians to ensure a future worth living.

Based on Gordon Moore’s observation about the exponential growth of computational power and Alan Turing’s definition of how machine intelligence can be identified, Raymond Kurzweil formulated predictions concerning the long-term future of intelligent machines[1]. Kurzweil maintained that a computer will pass the “Turing Test” by 2029 and must therefore be called “intelligent.” A survey conducted by V. Müller and N. Bostrom in 2013 revealed that 50% of AI experts believe that AI with human-like capabilities will be developed by 2040; 90% of those experts think it will be available by 2075 at the latest.
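To get a feel for the scale of that exponential growth, here is a rough back-of-the-envelope sketch – assuming, purely for illustration, that computing power doubles roughly every two years (the figures are my simplification, not taken from Kurzweil’s book):

```python
# Back-of-the-envelope sketch: how capability compounds if computing power
# doubles roughly every two years (illustrative assumption only).

def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """How many times more computing power is available after `years`."""
    return 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    for horizon in (10, 20, 40):
        print(f"After {horizon} years: roughly {growth_factor(horizon):,.0f}x")
```

Under that simplifying assumption alone, hardware becomes roughly a thousand times more capable after twenty years and a million times more capable after forty – which is the intuition behind such long-range predictions.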

According to Kurzweil, the real change will happen around 2045, when machines begin to construct themselves without any help from human engineers or software developers. Kurzweil calls this event a “Technological Singularity.” He argues that, much as with physical singularities such as black holes, it is in principle impossible to predict what will happen beyond that boundary.

Movie makers have of course speculated about the future. There are “good guy” scenarios like the Star Trek universe or the nice robot in Bicentennial Man, where the machines always follow the humans’ orders and humans are (almost) always in control of the interaction. On the other hand, Hollywood has also created “bad guy” scenarios in which machines follow their own agenda. Perhaps best known is the scene from 2001: A Space Odyssey in which the spaceship’s computer, the HAL 9000 (or just “Hal” for short), murders one member of the two-man crew and attempts to do the same to the remaining astronaut. In Terminator and The Matrix, machines are clearly out to do more than save their own skins; they aim to dominate humanity. Other movies, like Transcendence, explore transhumanist goals of overcoming human limitations (such as aging, computational skills and memory) through science and technology. And yet, the ability to upload our minds to a machine might not be a future which everybody finds attractive. Kurzweil sees even more steps down this path, for example nanobots which could transform the human body more and more into a transhuman machine. Kurzweil believes, “There will be no distinction, post-Singularity, between human and machine or between physical and virtual reality.”

Can there be other post-singularity scenarios besides machines being the “good guys” or “bad guys”? Everything could turn absurd, as in the novel “QualityLand”[2], where algorithms control human life, not always with the intended results. But let’s trust in some reasonable development and have a look at how things unfolded over the course of nature’s evolution.

Beyond extremes

From the first moments after the creation of the universe, things have increased in complexity. Fundamental particles, atoms, molecules, unicellular organisms, multicellular life and finally self-aware life evolved from one level of complexity to the next, as Kurzweil stated[3]. Interestingly, this process seems to continue in humanity’s history of innovation. From the first flint stone to the CERN research facility – probably the most complex closed-system machine currently on earth – complexity keeps growing. The same goes for emerging structures like the internet or social media, which evolve even faster than any physical machine.

A second macro-evolutionary driver can be called “game-changing events.” The sudden extinction of the dinosaurs or the oxygen crisis 600 million years ago were events that had a major impact on evolution. Other events initiated an ongoing reshaping of the earth: the appearance of life or the rise of self-awareness, for example.

Good questions to ask are: Will there be more “game changers” in the future? Can there be further evolutionary levels? The short answer is: there is no reason why not. And we need to be aware that there is no guarantee that humans will remain at the forefront of evolution. Maybe humans can offer something to prepare and initiate the next major evolutionary step – but unfortunately, there is no law saying that the result of that step needs to be based on humans, or even on biological life. What we call “artificial intelligence” – a term coined by John McCarthy in 1955 – might require the most complex machinery ever seen, but it could potentially open the door to a new level of evolutionary abstraction, frequently called “superintelligence.” In this context, a question worth thinking about is: “Is there anything ‘artificial’ in what WE call ‘artificial intelligence’?” Why, for instance, don’t we call planes “artificial birds”? Apparently, for some reason, we don’t want to give a potential new intelligence on earth a name – yet.

Elon Musk said humans might just be “the biological boot loader for digital superintelligence.” What can developers and investors learn from such a prospect? Today, the most intelligent algorithms are mainly used to optimize advertising, financial trading and autonomous weapons. These areas of application have one thing in common: the goal is to win. But to create a peaceful joint future of machines and humans, this should not be the first and foremost value that we teach a budding “new intelligence.”

In fact, we need the mindset of a parent or teacher. Ethical principles are needed for people and a “new intelligence” to coexist. To mitigate the risks inherent in this new order, we must proactively agree on these ethics. Elon Musk said in July 2017: “AI is a rare case where we should be proactive in regulation. By the time we are reactive in AI regulation, it is too late.”

The first steps, like the Partnership on AI, are underway. Yet they must be extended and enforced on a global scale. Furthermore, to create ethics which serve not only humans and intelligent machines but especially their coexistence, we need a clear picture of which ethics will be required before the technology is developed. It sounds like an epic challenge, and it is. But the journey already began when we started to let machines make decisions, and evolution does not wait.

In my next blog, “Challenges for Sustainable Digital Ethics,” I will talk about why defining sustainable digital ethics will be an ongoing effort. The third blog of the series, “A Step-by-Step Approach to Digital Ethics,” will discuss how we can finally achieve digital ethics that ensure a livable coexistence of humans and AI.

[1] Ray Kurzweil, 2005, “The Singularity Is Near: When Humans Transcend Biology”, ISBN 978-0739466261

[2] Marc-Uwe Kling, 2017, “QualityLand”, ISBN 978-3550050237

[3] Ray Kurzweil, 2012, “How to Create a Mind: The Secret of Human Thought Revealed”, ISBN 978-0670025299

Many thanks to Esther Blankenship for reviewing and editing the texts in this series.
