
If You Are Rethinking Your Ethics

in line with developments in digitalisation, artificial intelligence, superintelligence and singularity

In this article, subtitled ‘The Ethical Singularity Comedy’, we will be exploring two central questions which also serve as the key sections of the text:
Which ethical premises underlie predominant forces behind the development of digitalisation, artificial intelligence, superintelligence and singularity?
What challenges to current thinking and ethics could the supervisory and executive boards of organisations around the world feel obliged to address and resolve at the level of corporate ethics before it is too late?

The Ethical Singularity Comedy

Table of Contents

Introduction
Section 1 Which ethical premises underlie predominant forces behind the development of digitalisation, artificial intelligence, superintelligence and singularity?
1.1 Technological singularity prior to ethical singularity: Have we put the cart before the horse?
1.2 Digital transformation and ethical stagnation: Why is the argumentation of ethical legitimacy still circular?
1.3 Unde venis et quo vadis, contemporary ethics
1.4 Ethical integrity: Why have we invented such an impediment to ethical transformation?
1.5 Uninventing ethics: Is the development of AI and SI challenging ethical integrity?
1.6 Ethical transformation: Lying in wait for centuries until the advent of AI?
Section 2 What challenges to current thinking and ethics could the supervisory and executive boards of organisations around the world feel obliged to address and resolve at the level of corporate ethics before it is too late?
2.1 The Challenge of Legitimacy: Corporate Ethicality Questions
2.2 The Challenge of Multi-Ethicality: Singularity Tragedy or Singularity Comedy?
2.3 The Challenge of Ethical Transformation: Corporate Options and Corporate Questions
2.4 The Challenge of Ethical Foresight and Accountability: Transformational Contingency
Concluding Reflections

Introduction

According to Joseph Carvalko, Professor of Law, Science and Technology at the Quinnipiac University School of Law,

… Artificial Intelligence (AI) and bio-synthetic engineering will be perfected to the degree that androids will closely resemble humans and biosynthetically engineered humans will resemble androids. (Carvalko, 2012)

In a similar vein, David Pearce, co-founder of what was originally named the World Transhumanist Association, writes:

Posthuman organic minds will dwell in state-spaces of experience for which archaic humans and classical digital computers alike have no language, no concepts, and no words to describe our ignorance. Most radically, hyperintelligent organic minds will explore state-spaces of consciousness that do not currently play any information-signalling role in living organisms, and are impenetrable to investigation by digital zombies. In short, biological intelligence is on the brink of a recursively self-amplifying Qualia Explosion – a phenomenon of which digital zombies are invincibly ignorant, and invincibly ignorant of their own ignorance. Humans too of course are mostly ignorant of what we’re lacking: the nature, scope and intensity of such posthuman superqualia are beyond the bounds of archaic human experience. Even so, enrichment of our reward pathways can ensure that full-spectrum biological superintelligence will be sublime. (Pearce, 2012)

Citing from an interview with David Hanson, founder of Hanson Robotics based in Hong Kong, John Thornhill writes in the Financial Times:

By developing “bio-inspired intelligent algorithms” and allowing them to absorb rich social data, via sophisticated sensors, we can create smarter and faster robots, Mr Hanson says. That will inexorably lead to the point where the technology will be “literally alive, self-sufficient, emergent, feeling, aware”.
He adds: “I want robots to learn to love and what it means to be loved and not just love in the small sense. Yes, we want robots capable of friendship and familial love, of this kind of bonding.”
“However, we also want robots to love in a bigger sense, in the sense of the Greek word agape, which means higher love, to learn to value information, social relationships, humanity.”
Mr Hanson argues a profound shift will happen when machines begin to understand the consequences of their actions and invent solutions to their everyday challenges. “When machines can reason this way then they can begin to perform acts of moral imagination. And this is somewhat speculative but I believe that it’s coming within our lifetimes,” he says.
(Thornhill, 2017)

Projections such as these trigger fundamental thoughts about the human condition which we will go on to explore below. The idea of autonomous humanoids and biosynthetically engineered humans also raises significant questions about the legitimacy and responsibility-ownership of their actions and suggests the need for, or the advent of, paradigmatic changes in the fields of law and ethics, e.g.

  • Will ‘human rights’ have to be modified to rights for all forms of ‘autonomous intelligence’ – and, if so, how should the latter be defined?
  • How should vulnerable groups be protected by law from the dangers of entering into affectional connections with robots, including sex robots?
  • Will a specific legal status need to be created for autonomous robots as having the status of ‘electronic persons’?
  • Will judges and juries have to sit in front of an AI artefact and justify their decisions to it before – or even after – sentence is passed on a human-being?

More fundamental still are questions in the field of ethical premises such as

  • Which or what type of institution or intelligence is legitimised – or legitimises itself – to pose and answer the above questions, and based on what premises of legitimisation? – or will ‘legitimacy’ become an obsolete concept and, if so, what will emerge around the space which it currently occupies?
  • How can responsibility for value-creation or value-damage be assigned in situations where ‘emergent’ behaviour has arisen through what humans presume to have been collusion between, e.g., an unquantifiable number of AI-artefacts – i.e. behaviour which could be proven to have been unpredictable for any of the programming bodies, even if the colluding artefacts could be identified?
  • Also, if ‘slight contacts’ (Chodat, 2008) (p. 25) with other human or non-human minds can be identified as the veritable sources of value-creation or value-damage, how should ownership or perpetration be assigned?
  • In other words, how will the space evolve which is currently occupied by concepts such as ‘agency’, ‘ownership’, ‘perpetrator’ and ‘value’?

As remarked by Amartya Sen in ‘The Idea of Justice’:

Judgements about justice will have to take on board the task of accommodating different kinds of reasons and evaluative concerns. (Sen, 2009) (p. 395)

or, returning to the field of human morals:

  • Should we, certain humans or organisations in a subset of human society, consider undertaking the creation of humanoids to whom we, or others, would be – or might feel – morally committed before we provide adequate care for the millions of co-humans who are suffering to the point of death from the lack of basic requirements such as food, water and shelter?
  • Will the financiers of developments in artificial intelligence (henceforth ‘AI’) and superintelligence (henceforth ‘SI’) be made answerable and liable by their own artefacts for their provenly immoral allocation of resources and profits – which they generated through exploiting the data of their consumers?

Putting the last questions temporarily to one side, we note that the Civil Law Rules on Robotics of the European Union include the commissioning of a ‘Charter on Robotics’ which should not only address ethical compliance standards for researchers, practitioners, users and designers, but also provide procedures for resolving any ethical dilemmas which arise (European Parliament, 2017). We will return to issues surrounding the content of the Civil Law Rules on Robotics in the second section of this paper. At this point, we note that the wording of the document makes it apparent that the European Parliament has recognised that the development of AI is leading jurisprudence and ethics into uncharted territory at a very high speed; not only this, and crucially for the focus of this paper, the chosen wording shows attentiveness to the consideration that the European Union cannot allow the legal regulations and societal ethics contained in its Charter of Fundamental Rights (European Union, 2017) to put its members’ national economies at a disadvantage in the global race towards digital supremacy. In other words, ‘business ethics’ may factually override ‘societal ethics’ in determining the veritable ethics of the development of digitalisation, AI, machine intelligence and machine learning (both of which we will henceforth subsume for the purposes of this paper under ‘AI’) and also of SI. For a more detailed treatment of the concept of ‘business ethics’ – including the ‘whole ethical package’ within which business-managers factually function – and the position of ‘ethical neutrality’ which underlies the writing of this article, the reader is referred to a previous paper entitled ‘Interethical Competence’. (Robinson, 2014)

In an article published by Deloitte University Press, worldwide spending on cognitive systems is estimated to reach $31 billion by 2019, and the size of the digital universe is estimated to double each year, reaching 44 zettabytes by 2020 (Mittal et al., 2017). Even if these estimates turn out to be only partially accurate, the facts are that the world economy and society in general are undergoing change of unprecedented dimensions and that only a minority of bodies have any significant measure of influence and control over these evolutionary changes. As one example, national economies now find themselves left with no choice but to put the infrastructure and expertise in place to ensure that their populations can sustainably enjoy ubiquitous connectivity of the most up-to-date technical standards. Another example lies in the fact that India and China have gained a key competitive advantage by currently graduating at least 15 engineers for each one who graduates in the US; by 2020, the European Union will lack an estimated 825,000 professionals with adequate digital skills, a shortfall which threatens the Union with global economic insignificance (European Parliament, 2017).
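As a back-of-the-envelope sanity check on the doubling claim, the arithmetic can be sketched as follows. This is an illustrative sketch only: the 44-zettabyte figure and the annual doubling rate come from the cited estimate, while the function name and the choice of 2020 as reference year are our own assumptions.

```python
def projected_size_zb(year, reference_year=2020, reference_size_zb=44.0):
    """Project the size of the 'digital universe' in zettabytes,
    assuming it doubles every year (as per the cited estimate).

    Working backwards from the reference point, each earlier year
    halves the size; each later year doubles it.
    """
    return reference_size_zb * 2 ** (year - reference_year)

# Annual doubling towards 44 ZB in 2020 implies roughly
# 5.5 ZB in 2017 and 11 ZB in 2018.
print(projected_size_zb(2017))  # 5.5
print(projected_size_zb(2018))  # 11.0
```

Even under this simple model, the implied order of magnitude underscores the pace of change discussed above.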

As we will discuss in more detail in Section 1 of this paper, economic motives, competition and various forms of aspiration towards supremacy lie at the core of contemporary AI-development-ethics. Assuming that business and other forms of ethics do, and will, factually override societal ethics, there will be a faster and more fundamental impact on the way in which most businesses are run than is generally anticipated.

According to former Professor of History at Indiana University, Jon Kofas:

Multinational corporations see the opportunity for billions in profits and that is all the motivation they need to move forward full speed, advertising AI research and development even now to prove that their company is decades ahead of the competition. (Kofas, 2017)

A further set of motives and competition underlying contemporary digital developments is of a politically and ideologically hegemonic nature. Increasingly, the opinion is being voiced that AI is turning into the next global arms race, i.e. the race for supreme control and the protection of sovereignty so as, at least, not to be controlled by others. This means that the ‘ethics of global supremacy’ could factually override both business ethics and societal ethics – that is, for as long as ‘artificial ethics’ or superintelligence do not factually take the upper hand.

In an open letter to the United Nations on 21st August 2017, with one hundred and seventeen notable C-level signatories, the Future of Life Institute declares that it:

… welcomes the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems

and warns:

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. (Future of Life Institute, 2017)

In a BBC Interview, Professor Stephen Hawking makes reference to the darker side of influence, control and hegemony, where further codes of ethics are active:

the Internet has now turned into a command centre for criminals and terrorists. (BBC, 2014)

In relation to an intervention into the workings of the dark web by the Federal Bureau of Investigation, the US Drug Enforcement Agency, the Dutch National Police and Europol, Dimitris Avramopoulos, European Commissioner for Migration, Home Affairs and Citizenship, remarks:

The Dark Web is growing into a haven of rampant criminality. This is a threat to our societies and our economies that we can only face together, on a global scale. The take-down of the two largest criminal Dark Web markets in the world by European and American law enforcement authorities shows the important and necessary result of international cooperation to fight this criminality. (Europol, 2017)

Also out of the reach of public accessibility are the exact developments being made in the field of genetics, including major projects to discover the genetic basis of human intelligence in institutions such as the Cognitive Research Laboratory of the Beijing Genomics Institute. The little information which is publicly available can leave us wondering about the extent to which national or racial eugenic motives and policies might be involved, or could indeed already be shaping which super-species will one day be posing and answering questions such as those laid out above – i.e. not artificial ethics or superintelligence, but ‘super-species intelligence’ or other forms of phenomena which have not yet emerged.

If the ethics of global supremacy and power are indeed an inevitable motive behind the exploitation of contemporary exponential global economic and technological developments, if financial and intellectual resources do indeed remain key elements of this form of power and if the ethical super-dice do thereby remain factually unturned from the 20th and far into the 21st Century, then the consequences of inequalities of influence will foreseeably increase in similarly exponential dimensions both internationally and intranationally. Moreover, if bodies such as the United Nations continue to try to promote the mono-ethical stance of democracy and equal opportunity in a universalistic manner, if e-democracy continues to be used to promote democratic behaviour and to impact on electoral processes and referenda on a global scale, if cryptocurrencies continue to be used for anarcho-capitalist motives, if organised crime, cybercrime and commercially disruptive technology continue to undermine the activities of both business and politics at home and around the globe, then it is likely that international strife will not be lessened, but exacerbated even further than it is today – as we see in widely differing geopolitical issues involving nation-states such as North Korea, Iran, Myanmar, Syria, Iraq, Afghanistan, Ukraine and Yemen.

Stephen Hawking goes even further: he warns us of dangers which extend beyond the tensions of international criminal, racial and ideological supremacy:

I think that the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded. (BBC, 2014)

As noted by John Thornhill writing in the Financial Times, the phenomenon of AI-artefacts generating dangerous information, political statements and actions which are unpredictable and incomprehensible for their architects, so-called ‘emergent’ behaviour, is already a reality. (Thornhill, 2017)

Viewed in the light of these developments, it is not surprising that the prevalence of the word ‘anxiety’ now appears to be peaking at a level not seen since the first half of the 19th Century. (Google, 2017a) Almost two centuries ago, European society was coming to terms with the consequences of the Industrial Revolution and empowering itself through what has been termed the ‘People’s Spring’. The latter was a fundamental political disruption which began in France and triggered the replacement of feudalism with democratic national states on a pan-European scale. It was at this historical juncture that significant precursors were laid for modernist societal ethics in Europe, as in the works of Charles Darwin (1809-1882), Karl Marx (1818-1883) and, not least, Friedrich Nietzsche (1844-1900), a philosopher whose influence on 20th and 21st century ethics in the East and West we will discuss below in Section 1.

The parallels between what was taking place in the first half of the 19th Century and contemporary, closely interwoven developments in technology, economics, politics, ethics and society at-large are self-evident. Just as then, manual work and skills are currently being replaced by technology. This is now happening on such a grand scale and with such speed that, for example, what just a few years ago was regarded in India, Pakistan and Bangladesh as a ‘demographic dividend’ has metamorphosed into a ‘demographic nightmare’. In these three countries together, around 27 million people currently work in the clothes manufacturing industry for some of the lowest wages in the world. This region of the world alone is expected to bring another 240 million low-wage manual workers to the labour market over the next 20 years; but within 10 years, due to developments in robotics, clothes manufacturing is likely to be fully relocated to the countries where the clothing is needed, thus creating massive unemployment in South Asia (Financial Times, 2017): that is, unless these economies possess the resources to take the further development of digitalisation, AI and SI out of the hands of their current owners or unless the ethical super-dice are re-thrown and there is a fundamental change in global ethics.

In Section 1 of the paper, we will examine contemporary ethics in more detail with a view to illuminating, in Section 2, some of the major challenges to current premises which underlie corporate visions, strategies, cultures and ethics. Resolving these challenges will include addressing the matter of personal and collective accountability for the ‘ethical footprint’ (Robinson, 2014) which senior managers and their organisations leave behind.

For reasons of focus and relevance, the following discussion will not primarily address issues such as how robots can be programmed top-down with humanitarian law, i.e. to behave in an exemplary manner according to human ethical standards. Such matters are addressed by numerous scientists and authors, including Alan Winfield, Professor of Robot Ethics at the University of the West of England. Instead, we will investigate the nature of the ethical premises which underlie contemporary mainstream commercial and engineering developments in digital technology, taking as a concrete example information from the Singularity University, which is based in the Silicon Valley area of the USA; we will also focus on the deep-reaching influence on contemporary ethics of the works of one philosopher in particular, Friedrich Nietzsche, again as a specific and relevant example – and with no intention to discount the influence of other philosophers such as Immanuel Kant (whom we shall briefly mention below), Bertrand Russell, Peter Strawson, John Dewey or others. In focussing on the works of Friedrich Nietzsche, we will also draw specifically on the significant ethical legacies of Homer, Dante Alighieri, Giacomo Leopardi and Emil Cioran.

As the article progresses, we hope to illuminate why, in an article published in Forbes and entitled ‘The Forces Driving Democratic Recession’, Jay Ogilvy cites the fears of Francis Fukuyama that the global democratic recession may turn into a global democratic depression and then writes the following about one of the founders of the Singularity University, also Director of Engineering at Google:

The artificial intelligence-driven, post humanist future promoted by Ray Kurzweil and others is a cold, cold place. (Ogilvy, 2017)

We will also shed light on why Alan Winfield, in his widely recognised status as a robot ethicist, as mentioned above, feels compelled to state the following in a BBC interview:

I hope that no-one loses out. (Winfield, 2017)

By examining the historical and contemporary ethical backcloth to statements such as these, we hope to contribute to the thought processes and decisions which will be made in the further development of AI and SI by scientists, software engineers, senior business managers and regulatory bodies.
In our Concluding Reflections, we will look to AI and SI as a source of inspiration for radical ethical transformation.

1. Which ethical premises underlie predominant forces behind the development of digitalisation, artificial intelligence, superintelligence and singularity?

1.1 Technological singularity prior to ethical singularity: Have we put the cart before the horse?

Figure 1: Graphical representation of the concept of singularity from the field of physics. (Britschgi, 2018)

In this image (Britschgi, 2018), we see a graphical representation of the concept of singularity from the field of physics where it can be understood as follows:

In the centre of a black hole is a gravitational singularity, a one-dimensional point which contains a huge mass in an infinitely small space, where density and gravity become infinite and space-time curves infinitely, and where the laws of physics as we know them cease to operate. (Mastin, 2017)

The use of the term ‘singularity’ in a technological sense can be traced back some six decades, to conversations which took place between the two mathematicians, Stanislaw Ulam and John von Neumann:

One conversation centered on the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue. (Ulam, 1958)

Today, we find that the definition of the term ‘technological singularity’ cited in Wikipedia includes the term ‘artificial superintelligence’ as follows:

The technological singularity is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization. (Wikipedia, 2017)

When one enters the website of the Singularity University (Singularity University, 2017), which is based in the Silicon Valley area of the USA, one is confronted with a vocabulary which, viewed in its totality, expresses the set of ethical premises which the website contributors and the founders of the Singularity University apparently share, or at least those which they find appropriate to communicate to the outside world. What is salient about this vocabulary and the premises it expresses is that, from an ethical point of view, little, if anything, seems to have changed since the beginning of the 20th Century. This is possibly partly indicative of the inherent paradox in which AI-theorists and practitioners, as also philosophers, now find themselves: lacking concepts which are yet to be conceived and might even be inconceivable for the human species, one is compelled to use terms and concepts which will predictably one day become obsolete or, at most, ‘historically interesting items’.

The fact that nothing of major significance seems to have changed from an ethical point of view could, however, be indicative of something which we hypothesised above: namely that the ethical super-dice will factually remain unturned for centuries to come. The evolution of ethical-political history may indeed have reached an end, in one part of the world at least, with the advent of modernism, of liberal democracy and of life in general – including politics, health and education – being driven materialistically by the market economy. This thesis was proposed by Francis Fukuyama in ‘The End of History and the Last Man’ (Fukuyama, 1992) and partly revised in his later works, including ‘Our Posthuman Future: Consequences of the Biotechnology Revolution’ (Fukuyama, 2002). This leads us to examine the ethical super-dice hypothesis in more depth using the example of the Singularity University’s website. At the time of writing, ethically salient vocabulary and phrasing include:

… we empower
… be exponential
… creating a life of possibility
… we have a massive transformative purpose
… using business as a force for good and defining success both in terms of revenue and social impact
… we place a premium on igniting a massive transformative purpose over our bottom line
… we seek to be a model for others on their journey to creating measurable impact
… building an abundant future together
… diversity creates a better future

These and very similar phrases were used with high frequency by the majority of key speakers at the Singularity University’s Global Summit 2017 (Singularity University, 2017) in San Francisco:

… here at the Singularity University, we empower leaders
… we want to improve humanity
… to create more augmented intelligence
… to make better people
… we want the best, the brightest and the most brilliant
… find out who we are and who we are meant to be
… we want to ensure a future of prosperity for our entire planet
… moving this planet forward
… we want to solve the world’s biggest challenges
… we need to be exponential and to have an abundant mindset
… using the leverage and the fire-power which we have
… what is good for the world is good for business
… if you promote diversity in your organisation, it’s good for business
… AI is your interface to anything you want
… those who don’t embrace AI will find themselves passed by
… we must help to prevent authoritarianism

In terms of ethical premises, these declarations and affirmations can be understood to imply

  • that the human condition is – or should be – a state of physical and psychological autonomy whereby human-beings
    • assume self-responsibility,
    • create and select their own values and priorities and
    • act in accordance with their self-assumed self-responsibility, values and priorities.

The main body of statements made at the 2017 Singularity University Global Summit (henceforth SU-GS) arguably implied the related core ethical premise, i.e.

  • that there is an imperative for assuming a self-legitimising, proactive form of self-determination and human will in order to create and sustain individual and collective mindsets of existential certainty.

Taken together, these ethical premises presuppose that human-beings possess a faculty for autonomy, for the creation and reception of imperatives – as also posited by Immanuel Kant with his concept of the ‘categorical imperative’ (Kant, 1871) – and for willing, doing and achieving. Importantly for the purposes of this paper, the cited affirmations can be taken to imply the self-legitimised creation and selection of ethical premises along the following lines:

  • there is no higher ethical authority than the members of human societies who,
    • in necessarily recognising not only self-ness but also other-ness within a given community/society,
    • possess the faculty of ‘will’ to empower themselves and each other
    • with the ethical imperative of attaining standards of thoughts, intentions and behaviour which,
    • in being agreed in the community/society,
    • means that the attainment of those standards acquires and deserves societal recognition, i.e. through being seen to be doing/thinking ‘the right thing’.

The above would imply that there ‘is’ no ethically pre-ordaining, judging, life-determining, omnipotent, theistic or other form of metaphysical authority or force who/which oversees human thoughts, intentions and behaviour. Consequently, nothing stands in the way of pursuing the self-defined standards which were proclaimed at the SU-GS and involved the advancement of digitalisation, SI and AI – such as

making better people
augmenting human intelligence
eradicating all human illnesses
creating abundance in all things
increasing human longevity
and, in general,
improving humanity.

The speakers were assuming a form of BEING which is based on some of the key premises upon which the worldviews of modernism and liberal democracy have been built – including atomistic reason, logic and empiricism as well as a universalistic aspiration for egalitarianism and self-determination – yet, factually of course, they were making their declarations in a global context. Humanity comprises not one, but multiple worldviews whose respective constellations of key premises fundamentally differ. There are, for example, numerous contemporary societies which assume a form of BEING based on a pre-determined, theistically-based human condition in which one’s thoughts and acts are overseen and judged by a higher authority; the latter preordains the code of ethics by which human thoughts, intentions and behaviour are judged.

If we disregard for one moment the notion that only one worldview can be universally valid (in the sense that those who adhere to a different worldview to one’s own are regarded as ‘underdeveloped’ or ‘sacrilegious’ etc.), if we disregard the theoretical possibility of a global state let alone a global democracy (Sen, 2009) (p.408) and if we integrate the full implications of multi-ethicality into our view of the human condition, then the question arises as to the legitimacy of the ethical premises which underlie not merely the affirmations cited above, but also the whole field of the development of digitalisation, AI and SI. Arguably, there should have been global agreement on these ethical premises before the latter developments were initiated.

1.2 Digital transformation and ethical stagnation: Why is the argumentation of ethical legitimacy still circular?

At this point in the discussion, we propose that implementable and sustainable answers to the question of legitimacy are required, perhaps urgently so, given the exponential, world-engulfing dimensions and aspirations of these developments and given that the premise of legitimacy (to which we will return below) is currently a crucial ingredient for the human condition and human ethics as many of us have grown to know them. However, finding solutions between differing ethical standpoints by means of verbal argumentation generally proves highly challenging, if not impossible, as the following example shows.

If a person ‘A’ genuinely believes in God, or a god, who pre-ordains what is ethical or not, then, as a theistic human subject, ‘A’ cannot assume the legitimacy to programme any ethical premises which are not theistically pre-ordained into robots or any other forms of AI or SI. It follows that if, for example, ‘A’ believes that killing any human-being is in absolute contradiction with divine will, then ‘A’ cannot and will not partake in the development of drones which could potentially kill human-beings.
If a person ‘B’ is a genuine atheist living in a democratic community which upholds the cardinal right to self-determination, then ‘B’ can indeed legitimise him/herself to programme either his/her own self-determined ethics or the capacity for ethical self-determination into robots or into any other forms of AI or SI, as long as any laws to which ‘B’ chooses to bind him/herself are not violated. It follows that ‘B’ could indeed partake in the development of autonomous lethal weapons for defence or other purposes.

Whilst the ethical standpoints and behaviour of ‘A’ and ‘B’ are each intrinsically consistent, they are virtually impossible to reconcile with each other because of their respective circularity, the latter being a crucial ‘built-in’ feature for the sustainability of any ethical standpoint. Without tight circularity, a given ethical standpoint serves no purpose and has no ‘raison d’être’, rather like an undefined working hypothesis.

It is not surprising, therefore, that we find the phenomenon of circularity in the argumentation of legitimacy concerning innumerable ethical issues in society. Examples of never-ending ethical dispute include whaling, badger-culling, fox-hunting, genetic engineering, intensive farming, wind farms, ethnic homogeneity, population control, euthanasia, homosexuality, contraception, climate control, organic agriculture, foreign aid, nuclear weapons, land-mines, mineral resources, the publication of state secrets and free speech. Each standpoint has its own water-tight case for legitimacy as was the case with the violent events and demonstrations which took place at the beginning of August 2017 in Charlottesville, Virginia, and in Mountain View, California: some people were asserting their lawful right to free speech and others were asserting the lawful right to egalitarian treatment regardless of race, religion and gender. Professor Venkatraman of Boston University commented at the time that the main problem was the fact that meaningful dialogue between the ethically polarised factions was just not possible. (Venkatraman, 2017a)

In the case of digitalisation and AI, thousands of developers worldwide are legitimising themselves to pursue the objective of creating maximal levels of autonomy in their own image such as through proprioception (identifying and resolving internal needs such as the recharging of batteries), exteroception (adapting behaviour to external factors) and autonomous foraging (identifying and exploiting existentially critical resources). The levels of intelligence and self-sufficiency of their artefacts, including emergent, i.e. unpredicted and unpredictable behaviour and even disobedience to human instructions, have been increasing so rapidly that figures like Stephen Hawking, Nick Bostrom, Bill Gates and Elon Musk are now publicly urging prudence, e.g. in the development of autonomous weaponry. However, their own self-legitimised argumentation seems largely ineffective and restricted to the raising of verbal warning fingers in the name of mankind, as is to be found in a BBC interview with Stephen Hawking. (BBC, 2014) Ardent opponents of digitalisation and AI also raise their own arguments, but genuine agreements are currently scarce.
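The three autonomy faculties named above can be illustrated with a minimal, purely hypothetical control-loop sketch; every class name, threshold and behaviour here is our own illustrative assumption, not drawn from any cited system:

```python
# Toy sketch (hypothetical) of the three autonomy faculties discussed above.

class ToyRobot:
    def __init__(self):
        self.battery = 100        # internal state inspected by proprioception
        self.position = 0

    def proprioception(self):
        """Identify an internal need, e.g. a low battery."""
        return "recharge" if self.battery < 20 else None

    def exteroception(self, light_level):
        """Adapt behaviour to an external factor (here: ambient light)."""
        return "slow" if light_level < 0.3 else "normal"

    def forage(self, charging_stations):
        """Identify the nearest existentially critical resource."""
        if not charging_stations:
            return None
        return min(charging_stations, key=lambda s: abs(s - self.position))

    def step(self, light_level, charging_stations):
        if self.proprioception() == "recharge":
            target = self.forage(charging_stations)   # autonomous foraging
            return f"moving to charger at {target}"
        return f"operating at {self.exteroception(light_level)} speed"

robot = ToyRobot()
robot.battery = 15
print(robot.step(light_level=0.5, charging_stations=[3, 10]))  # moving to charger at 3
```

The point of the sketch is simply that even this trivial loop already lets internal needs override external instructions, which is the kernel of the ‘disobedience’ worry raised above.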

Pre-empting the discussion in Section 2, the managers of organisations in the field of digitalisation around the world currently have fundamental decisions to make:

  • Will they allow themselves to fall into ethical circularity concerning legitimacy, or not?
  • Will they search for a way to avoid it, and, if so, how? Or will they choose to ignore it and proceed with their current set of ethical premises?
  • Will they wait for others to point the way, e.g. in order not to lose their competitive advantage in the short- and mid-term, and then follow them?

However, all three decisions are clearly lacking in long-term adequacy. As mentioned in the Introduction, the responsibility for the ethical footprint of an organisation (Robinson, 2014) arguably lies within the personal and collective accountability of its senior managers. If the managers themselves and/or if third parties recognise this accountability, the decision which the managers make concerning ethical legitimacy constitutes the starting point for all positive and negative consequences for which they can be held accountable. In order to uphold this accountability, they have to identify the ethical standpoints from which their ethical footprints will be judged and somehow to overcome the circularity of the argumentation of legitimacy on all sides.

We propose that the latter can be achieved by firstly addressing the issue, not about which ethical principles should be applied and why, but about the extent to which ethics can be legitimised at all, by whom or by what. In other words, senior managers have the opportunity – and arguably the obligation – to radically question the phenomenon of ethical legitimacy per se which, in turn, requires that they adequately understand the phenomenon of ethics. This paper is intended to contribute not only to the deepening of that understanding but also to catalysing fundamental change in the field of ethics. To this end, we further propose that a particular perspective of ethical neutrality, which we will henceforth term ‘anethicality’ – since this perspective lies outside the confines of merely human ethics – and a linkable perspective of ‘a-certainty’, which, by definition, lies outside the paradigm of certainty and uncertainty, could bring radical movement into the fields of humanity’s self-understanding, of its future in an increasingly digitalised world and of its footprinting on the planet Earth.

It is arguable that a greater quality and quantity of resources should be allocated to achieving this radical movement than is invested annually in the whole field of digitalisation: it is arguably regrettable that AI development is creating a need for the development of human ethics and not vice versa. One could ask whether it would not be significantly more beneficial to humanity and other sentient beings if human ethical development were to take place sooner and faster than AI development.

1.3 Unde venis et quo vadis, contemporary ethics

Before illuminating the above reflections in more depth and addressing the potential space which anethicality and a-certainty might have to offer, we will examine the nature of the ethics underlying predominant developments in digitalisation and AI as well as the challenges which the increasingly digitalised world could have in store for private and corporate life if the field of ethics remains stagnant. Significantly, we will be referring to the enduring attractiveness of the ethical legacy of Friedrich Nietzsche which, it has often been claimed, was strongly influential on ideological developments which led to the First and Second World Wars. In tracing the roots of Nietzscheanism back as far as the works of Homer, Dante Alighieri and Giacomo Leopardi we aim to contribute to a deeper understanding of where much of contemporary ethics in the field of digitalisation has come from and where it is leading.

By way of introduction, we note that in the early period of the 20th Century, Jewish intellectuals were interested in the works of Nietzsche to such an extent that they were regarded by right wing factions in France as Nietzscheans (Schrift, 1995); we also note that, during the First World War, German soldiers were given copies of one of Friedrich Nietzsche’s most famous works to boost their militaristic patriotism. (Aschheim, 1994) What is it about the works of Nietzsche which could have appealed to both Zionists and anti-Semitic nationalists and what could the link be to the contemporary development of AI and SI?

In the book which was distributed to the German soldiers, ‘Also sprach Zarathustra’, Nietzsche writes the famous proclamation that God is dead, which, in the context of his writing, refers centrally to the god of Christianity. (Nietzsche, 1883) In this and other works, e.g. (Nietzsche, 1882), Nietzsche reflects that, in being able/worthy to kill their belief in God, the perpetrators themselves become god, i.e. that Christian godliness is a concept created by its believers and that believers must have the faculty to create their god in the first place in order to be able to kill him later.

In the same works, Nietzsche also coins the terms

  • Übermensch (the Super-Human, enhanced beyond human limitations to perfection)
  • ‘Untermensch’ (the Inferior-Human, the undesirable antithesis of the Super-Human) and
  • ‘letzter Mensch’ (the Last-Human, who lazes in comfort and abundance).

Interestingly, at the 2017 SU-GS in San Francisco, terminology was used like ‘augmented intelligence’, ‘better people’, ‘human enhancement’ and ‘enhanced participants’ (a term which was applied to superior-level SU-GS participants) – terms which seem to be strikingly close to Friedrich Nietzsche’s concept of the Super-Human.

Given the historical significance of the cultural reception of the ethical premises in the works of Nietzsche in countries such as China, Cuba, France, Germany, Italy, Japan, Russia and the United States of America from the beginning of the 20th Century, we will reflect upon the extent to which not only concepts like the ‘Super-Human’ but also, more importantly, key underlying ethical premises in Nietzsche’s work are to be found in the motives, direction and consequences of a vast segment of current developments in digitalisation, AI and SI, not to omit questions concerning the ethical foot-prints of the drivers of these developments and their legitimacy.

As sociologists, historians and philosophers have been researching more and more deeply into the roots of socio-political movements in Europe, Asia and the Americas during the 20th Century, an increasing amount of information has come to light which indicates that political figures such as Mao Zedong, Che Guevara, Vladimir Lenin, Joseph Stalin and Benito Mussolini were all strongly influenced by central ethical premises in Nietzsche’s work. (Lixin, 1999), (Rosenthal, 1994). Similarly influenced were Charles de Gaulle, Theodore Roosevelt and Adolf Hitler, alongside philosophers, psychologists, sociologists and writers such as Jean-Paul Sartre, Emil Cioran, Osip Mandelstam, Max Weber, Hermann Hesse, Theodor Herzl, Sigmund Freud, C.G. Jung and Carl Rogers. Whilst the cultural and individual reception of Nietzsche’s works varies and whilst certain central premises in the works themselves, which were published between 1872 and 1889, are often portrayed in the secondary literature to vary, the most significant ethical premises which are also particularly relevant for this paper can be formulated as follows:

  • the will to influence and determine (reverse-Darwinism and the will to power)
  • amor fati and eternal autonomy (living the freedom of embracing one’s life in such a way as to want to live it all over again in exactly the same way)
  • self-perfection and superhumanism
  • human aesthetics (defiance of nihilism by saying ‘yes’ to – and in experiencing – life as a whole as beautiful, or at least certain presuppositions of it, such as temporality or necessity, as beautiful (May, 2015)).

These four ethical premises are so closely intertwined that we must examine them as a whole rather than as separate items, an approach which in itself is crucial to understanding both Nietzsche’s works (which are perhaps ethically more complex and differentiated than has sometimes been portrayed) and to recognising parallels with current developments in digitalisation, AI and SI.

The will to influence presupposes that the human possesses the faculty to exert will, which means that the human is necessarily autonomous in action, in motives and in ethics. This ethical premise ‘posits’ no god or other transcendental authority who/which might, for example, bestow humans with the free-will to either conform with, or sin against, preordained ethics. In affirmatively assuming a state of being left to one’s own devices, the human-being confronts itself with an apparent choice between two necessities, as follows, ones which we will term ‘self-assigned ethical imperatives’.

The self-assigned ethical imperative of accepting human existence as being futile, thereby embracing nihilism and its consequences.

We find this imperative expressed by Emil Cioran, for example, with the sentence:

Ohne Todessehnsucht hätte ich die Offenbarung des Herzens niemals erlebt. (Our translation: ‘If I hadn’t yearned for death, I would never have experienced the opening of my heart.’) (Cioran, 2008) (p. 508)

As an alternative to choose from, we have

The self-assigned ethical imperative of defying mental acts both of nihilism and of being embedded in a natural environment which shows itself to be at least indifferent to, if not hostile to, human-beings, thereby embracing human creation and human passion, i.e. human aesthetics.

This imperative we find expressed by Friedrich Nietzsche with the words:

Denn nur als ästhetisches Phänomen ist das Dasein und die Welt ewig gerechtfertigt. (Our translation: ‘It is only as an aesthetic phenomenon that existence and the world are eternally justified.’) (Nietzsche, 1886)

In contrast to the oeuvres of Cioran, who, in one place writes that he would like to be as free as a stillborn child (Cioran, 2008) (p.1486), Nietzscheanism involves imperatively embracing human creation and passion through, and with, the will to influence, and thereby to be free and alive. (Church, 2012) (p. 135) This latter ethical premise brings with it both the freedom and the imperative to seek ultimate sensations. (Church, 2012) (p. 145) In necessarily affirming one’s natural passions, one defies dogmas such as Christianity which, by virtue of its preaching the suppression of natural passions, is deemed by Nietzscheanism to

  • to negate natural human life on earth,
  • to position true life in an after-life – such as one in heaven or hell – and
  • to introduce the debilitating mental phenomena (or ‘mindsets’) of having expectations and ulterior values which steer one’s thoughts, feelings and actions and which lead one to need to justify the latter according to an externally-, i.e. theistically-, pre-ordained set of ethical premises.

In possessing and embracing the will to influence, one defies the passive fatalism of a Darwinist understanding of natural evolution: instead, one takes control over one’s own faculties and development. Amor fati involves the necessary affirmation of whatever has been, whatever is, and whatever will be. Enhancement is a necessity for the human condition and can be perceived to stem from the enhancement of the individual, i.e. one does, and should, strive to perfect the human body and mind both in the present and for the future; one does so in recognising that the authentic improvements made by one generation can be passed on to the next and that each individual, as an integral part of human continuity and as a ‘synthetic man’ who integrates the past (Church, 2012) (p. 247), contributes to its own longevity and eternal autonomy. The other option would be to let humanity decay and cease to exist.

It is important to be mindful of the historical contexts in which Nietzsche’s writings were being read and absorbed: a broad range of historians and sociologists tell us that, like many other politically engaged figures, Mao Zedong, Vladimir Lenin and influential intellectuals around them deplored the remains of feudalism which they perceived to permeate their respective societies with a ‘peasant’ mentality; they regarded the roots of these remains of feudalism as the antithesis of the energising ethical premises which they found in the works of authors like Nietzsche; the ‘peasant’ mentality represented an immense, if not insurmountable, hindrance to the development of a world-class nation of super-humans; Marxism did not offer a solution which matched the character-traits and aspirations of individuals such as Mao Zedong or Joseph Stalin, nor did it contribute to the attainment of international competitiveness or even supremacy in the way in which ‘Nietzscheanism’ could do. Consequently, the influence of central ethical premises in Nietzschean works on the socio-political developments of China and Russia can be seen to be greater than those of Karl Marx, even though Nietzsche’s works were formally banned in various communist countries from time to time. The relevance of this matter to the current paper is critical for the following reason: whilst Nietzschean thinking and its underlying ethical premises have been associated with some of the causes of ‘humanitarian atrocities’ such as those of the two World Wars, falling at least temporarily into wide disrepute, a very similar cluster of ethical premises, those which concern us here and which do indeed incite the self-legitimisation of individual and collective supremacy, can

be identified to underlie a major proportion of the forces which are driving current developments in digitalisation, AI and SI

and

explain the anxieties and the warnings which certain figures both outside and within these ‘exponential’ developments are currently voicing with particular vehemence.

Key proponents of AI and SI at the SU-GS promised a new salvation for humankind in reversing global warming, enabling the human race to become a multi-world species, running tourist flights to Mars, creating a world of abundance, eradicating both all human medical illnesses and authoritarianism etc.

And, just so that you understand the rules of the game,

the audience was given to understand in various synonymous formulations throughout the event

do make sure that you are part of these developments, if you don’t want to get passed by!

Listening between the lines, one of the common core messages of most key speakers could be interpreted as being explicitly energising-entreating and implicitly excluding-threatening, i.e.

The choice and the consequences are yours alone!

Writing in the Financial Times, Robin Wigglesworth cites something very similar in relation to the use of AI which is purported to be revolutionising the management of money, e.g. in the form of natural language processing (NLP):

“Data is being generated and digitally captured at an exponentially increasing rate, and NLP is an important part of our strategy to understand what is going on across the global markets we invest in,” says Kevin Lee, head of data science at GIC, Singapore’s SWF. “Everyone is at least looking at them, otherwise you risk falling behind.” (Wigglesworth, 2017)

It is significant for our discussion that Mao Zedong, Vladimir Lenin, Adolf Hitler, Charles de Gaulle, Theodore Roosevelt and many others who were inspired by works which notably included those of Nietzsche rose to positions of large-scale influence and became driving forces behind the socio-political and economic developments of the 20th Century. Very often couched within such developments lay the phenomenon of inclusion and exclusion in a form which was quite possibly accentuated through the circular and Nietzschean twist of self-determined legitimacy. Either one empowered oneself to adopt and embrace the posited ethical credo, or not:

If your ethicality just happens not to match, then you will be passed by!

In other words, at that time, any consequences such as dis-enfranchisement and exclusion were self-inflicted. Today, national economies have the option to include themselves through digital compliance and digital assertion or to exclude themselves and bear the consequences on their own shoulders. In turn, political parties have the option to include themselves in the forces exerted by the economy and consumers or to bear the self-inflicted consequence of being ousted or not even voted for.

Interestingly, we find that not only the word ‘anxiety’ (see above) but also the word ‘inclusion’ – according to Google’s N-Gram – has been progressively increasing in prevalence in recent years, suggesting that it, too, is a significant issue in contemporary society. (Google, 2017b)

Also, we note that the vocabulary currently being used in the fields of the development of digitalisation, AI and SI which we have been analysing seems, in general, not to include terms such as ‘grace’ or ‘pity’, let alone ‘shame’ or ‘guilt’. Nietzsche himself explicitly deplored the concept of ‘pity’, i.e. pity for the Inferior-Human, just as he held the concept of egalitarian democracy in low regard. (Nietzsche, 1888) As Vladimir Jelkic remarks:

Nietzsche’s criticism of democracy is part of his overall criticism of modernity. …. For Nietzsche, the democratic movement is not only a decay of political organisation, but also – and this is more important – a form of man’s diminishment, the diminishment of man’s value and worth through having made it mediocre. … This equality of rights is odious to Nietzsche because he holds that it is directed against the “creative fullness of power”, noblemen and higher status people. (Jelkic, 2006)

Higher individual and collective status naturally co-occurs with individual and collective inferiority – without which the concepts of the Super-Human, superintelligence and supremacy have no significance. Accordingly, those who do not, or cannot, embrace the development of technology-based enhancement and Super-Humanism, including those who stand in its way, are, by implicit definition, collectively Inferior-Humans. It follows that to have pity on such humans would logically and ideologically not fit with the set of four ethical principles outlined above. According to Nietzsche’s credo, the concepts of ‘love for humanity’ and ‘pity’ belong to the undesirable ethical principles which Christianity had been propagating for many centuries and which had helped the weak to maintain their grip on power (Froese, 2006) (p. 118); further, those who do not, or cannot, embrace human aesthetics as the justification of existence and life have factually opted for the first of the two possible self-assigned ethical imperatives given above, which is nihilism wherein human life has no value. Given the fact that any consequences would be self-determined, it would be unfitting for others to feel or show pity for such people. It follows that there would be no sense of bad conscience or guilt towards those who find themselves in a condition of self-inflicted dis-enfranchisement – which helps us to understand why, as mentioned in the Introduction above, Jay Ogilvy writes:

The artificial intelligence-driven, post humanist future promoted by Ray Kurzweil and others is a cold, cold place. (Ogilvy, 2017)

The black-and-white dichotomy between superior and inferior to be found in Nietzsche’s legacy – i.e. his ethical footprint – manifests itself today both overtly and covertly in the eastern, western, northern, southern, digital and non-digital hemispheres. In this vein, the futurist Robert Tercek comments in an article on digital convergence:

Data-poor businesses find themselves at a striking competitive disadvantage compared to data-rich companies. And individual people are at the greatest disadvantage of all, because we have next-to-no-ability to utilize our personal data. The companies with the best data are most likely to succeed because they see the trends most vividly and that’s why they can make better decisions about where to place their bets. (Roland, 2017)

In the work of digitalisation specialists such as Professor Venkatraman (Venkatraman, 2017b), we see that the monetarisation of big data has become the secret to exponential growth, commercial success and supremacy, as the new meaning of the abbreviation ‘URL’ clearly indicates:

Ubiquity First, Revenue Later.

Those commercially intelligent companies which aspire to market superiority are currently hunting, according to Venkatraman, for any and all opportunities to capture data and turn it into monetary value. One example is how search engines can gather data about consumer decisions at what is termed the ‘zero moment of truth’, i.e. when consumers are researching products. The search engines process the data which they glean from online search-behaviour so that product information can be adapted and placed in such a way as to help and influence consumers in huge masses; the tools of the search engines can even identify commercially viable products and services which do not yet exist. Not only are daily online search and credit-card behaviour being minutely monitored, statisticised, valorised and monetarised, but so too are attitudinal information and habits from a rapidly increasing number of walks of life, including the personal health, mobility, political leanings and beliefs of billions of people.
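As a purely illustrative toy construction of our own – not any search engine’s actual pipeline – the mechanism described above, i.e. aggregating query behaviour and separating demand for existing products from demand for products which ‘do not yet exist’, might be sketched as follows:

```python
# Hypothetical sketch: counting search queries at the 'zero moment of truth'
# and splitting them into served demand vs unserved demand (potential new products).
from collections import Counter

def rank_demand(search_queries, catalogue):
    """Count normalised query frequencies; partition by presence in the catalogue."""
    counts = Counter(q.lower().strip() for q in search_queries)
    served = {q: n for q, n in counts.items() if q in catalogue}
    unserved = {q: n for q, n in counts.items() if q not in catalogue}
    return served, unserved

queries = ["solar charger", "solar charger", "foldable kayak", "solar charger"]
catalogue = {"solar charger"}
served, unserved = rank_demand(queries, catalogue)
print(served)    # {'solar charger': 3}
print(unserved)  # {'foldable kayak': 1} – demand for a product 'not yet' offered
```

The sketch is deliberately trivial; its point is only that the step from raw behaviour to monetisable ‘statistic’ requires nothing more than aggregation at scale.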

Whilst the commercial objective may be to monetarise the data and not to make people paranoid about being followed and monitored, what can possibly be seen to be happening from a critical, societal perspective is a return to the scale of psychological dis-enfranchisement which emerged at the beginning of the 20th Century. That was when millions of men and their families were arguably psychologically and physically conscripted into a status with a strong likelihood of yielding amor vitae meae (love of my life) for a statistical entry into a list of named and unnamed gravestones. This time, at the beginning of the 21st Century, one could argue that psychological dis-enfranchisement is not ethicised, as it was then, under the imperative of a nationalistic vision of amor/salus patriae (the love/salvation of the country), but under the imperative of a global vision called salus humani generis (the salvation of humanity). Many of the proponents of AI and SI are making it abundantly clear that the achievement of this global vision will be impossible without the ingredients of superintelligence which lesser humans lack, or lack access to.

In the case of Microsoft, the publicly available formulation of the corporate credo is:

Our mission is to empower every person and every organization on the planet to achieve more. (Microsoft.com, 2017)

At Facebook, one finds Mark Zuckerberg’s vision formulated as follows:

… In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us. (Zuckerberg, 2017)

A significant inherent element which these two statements share is that of power-ownership: the latter is, of course, a prerequisite for the empowerment of those who possess the faculty to be empowered. Zuckerberg’s formulation ‘that works for all of us’ may or may not contain a Freudian slip in its ambiguity: Who is meant by us? Is it every human-being or every dominant player in social media? Who is the person doing the work? – And for whom?

Whilst it is arguable that figures such as Mark Zuckerberg cannot be made ethically accountable for what has factually developed out of their creations, it can be conjectured that social media like Facebook, LinkedIn, Twitter, WhatsApp etc. operate today with a hidden agenda, i.e. with a conscription-like strategy which one might formulate along the following lines:

Sign up, enhance your dignity, get closer to your community so that we can factually turn you into a ‘statistic’, a commercial-data-resource, a means to our own market supremacy, without you even thinking about it! We need monetisable data from two billion people just like you in order to retain and expand our supreme market position.

Whilst establishing the veritable ethics behind the visions, missions and strategies of these companies would require interviews and reflections (Robinson, 2016) with the key managers, certain hints can be gained from examples such as the following report in the Guardian:

Jiranuch Trirat, a 22-year-old from Phuket, was left devastated after her boyfriend, Wuttisan Wongtalay, hanged their 11-month old daughter Natalie from the side of an abandoned building before taking his own life.
He broadcast Natalie’s murder on Facebook Live, a video that Jiranuch came across that evening. (Guardian, 2017)

Commenting on Zuckerberg’s response to the question why Facebook took over 24 hours to remove the video from public access and why such material is not immediately removed but allowed to stay online due to stringent internal policies, Michael Schilliger writes in the Swiss newspaper, NZZ:

Dass Facebook diese Richtlinien so exakt und detailreich ausgearbeitet hat, ist Ausdruck des Selbstverständnisses des Konzerns und vielleicht deutlichstes Zeichen dafür, wie er seine Mission als Weltverbesserer wahrnehmen will. Facebook will erklärtermassen die Meinungsfreiheit so hoch wie möglich halten, uns so wenig wie nötig einschränken. Diese Maxime macht Facebook so attraktiv für seine Nutzer. Und darauf wiederum gründet letztlich das Geschäftsmodell des Konzerns. (Schilliger, 2017)
(Our translation: The fact that Facebook has created these policies with such a level of precision and detail helps us to see how the company defines itself and is perhaps one of the clearest signs as to how it intends to achieve its mission as the world’s do-gooder. Facebook declares that it wants to provide maximum opportunity for us to express our opinions and therefore imposes the minimum level of restrictions on us. It is this maxim which makes Facebook so attractive to its consumers. And here, of course, lies the core of its business model.)

Schilliger comes to the conclusion that it is Facebook’s business model, namely ensuring maximum attractiveness to its users, which dictates how policies and decisions are made: in other words, the commercial ethics of ‘maximising advertising revenues and monetising as much gathered data as possible’ overrides the societal ethics of ‘respecting personal dignity’ and ‘sympathising with the individual in times of personal tragedy’. As discussed in relation to the phenomenon of cultural differences in ‘The Question of Intent in Joint Ventures and Acquisitions’, (Robinson, 1993) differences of ethical standpoint can lead to irreversible interpretations of intent, often undesired ones. In the case at-hand, those who place their focus on societal ethics can interpret Facebook’s commercial ethics and intent to be that of self-empowered commercial self-interest and indifference towards the powerless, insignificant individual.

In effect, current developments in the application of digitalisation seem to constitute a continuation of the process of uninventing the individual to which Nietzsche’s works contributed at the end of the 19th Century. When, as mentioned above, Nietzsche proclaimed that (Christianity’s) humans had killed the god whom they themselves had created and that they had become their own god in deeming themselves worthy of doing so, arguably, he tried to circumvent the logical consequence of his thesis, namely that they had, at the same time, effectively ‘killed’ themselves in wiping out the foundation for their own individual ethical significance: his attempt at circumvention manifests itself in his vision of the reverse-Darwinist creation of the Super-Human. One is tempted to wonder if this would be a new generation of ‘jesus christs’ or of the ‘self-chosen few’. The vision of this Super-Human which – Nietzsche leaves us to interpret – would be the embodiment of the being which creates the set of fundamentally new values, which he posits as a necessity for the attainment of genuine eternal autonomy and freedom, effectively throws a black shadow of inferiority over the contemporary human species: the latter is the Last-Human, i.e. a creature of insurmountable ethical inadequacy and thereby of moral insignificance. Of necessity, Nietzsche’s Super-Human must repudiate his/her/its parents/creators. The parent-child bond, and with it the family, are torn asunder. This atomisation of the human relationship into independent constituents constitutes a mental act and an ethical premise which – not surprisingly given Nietzsche’s rejection of Christianity – stand in contradiction to one of the ten commandments of the Hebrew Bible and which, in the era of digitalisation, have far-reaching consequences for self-understanding, for individual mental health, for social unity, for corporate culture and for further developments in AI and SI.

In parallel to the implications of the vision of the Super-Human, the ethical premises and circularity behind Nietzsche’s vision of amor fati which, etymologically at least, can be interpreted to mean not only

‘love of fate’, i.e. love of my fate and that of others (whatever the nature of that fate might be),

but also

‘love of what has been said’, i.e. by me (and possibly by others?)

could erase any remains of moral significance which future society might otherwise have inherited from the pre-Super-Human condition. In positively affirming amor fati for the human condition, Nietzsche was arguably implicitly and positively affirming the human condition’s futility. Nietzsche would appear then to have replaced the dualism of the ‘is-entity’ and the ‘should-entity’ with another, even more fundamental one, i.e. self-created meaningfulness and self-created meaninglessness.

The dismantling of the ethics of Christianity, a religion which Nietzsche viewed in ‘Der Antichrist’ (Nietzsche, 1888) as being based on a nihilistic worldview, and the dismantling of the ethics of other forms of theism, bring with them a deconstruction of the ethics of the credo upon which modernism and liberal democracy were built, i.e. the ‘moral individual’ (as defined below).

The notion that the Super-Human would ‘transvaluate’ or re-evaluate values, i.e. build ethics from zero – that is, if the Super-Human were to find the word and concept of ‘ethics’ at all appropriate – seems, when we contemplate what could emerge from AI, SI and singularity, very similar to our contemporary predicament.

It would not be surprising, therefore, to find that, wherever our Nietzschean-like cluster of four ethical premises underlies the creation and the use of digital instruments such as the internet, social media, soft robots and other AI-artefacts, the central pillar of liberal democracy – i.e. the moral individual – is also progressively eroding.

At this point, it is crucial that we differentiate between the concept of ‘individuality’ and that of the ‘moral individual’, a distinction which is expounded by Larry Siedentop in his book entitled ‘Inventing the Individual’. (Siedentop, 2014) Here, Siedentop defines ‘individuality’ as an aesthetic notion, the origins of which are to be found in the Renaissance. The ‘cult of individuality’, as he formulates it, is a cultural phenomenon which gradually evolved through the later movements of humanism and secularism into its fully-fledged, modernist state. In contrast to its counterpart, the moral individual (which is defined below), the phenomenon of individuality seems today not to be eroding, but to be finding increasing resonance in the space created in atomistic, utilitarian, secular and egalitarian segments of global society, where rationality, reason and empiricism override ‘irrational’ belief and ‘unreasoned’ moral convention, thereby de-legitimising – or having little place for – the form of regret, shame, bad conscience or guilt which pervades the whole person’s BEING.

It is in this new space that individuality can thrive in the acts of both creation and consumption of digital artefacts. Through the atomisation of BEING, individuality has now become the promotion, legitimisation and act of active self-expression and individual happiness, whereby:

… individual wants or preferences are taken as given, with little interest in the role of norms or the socializing process. (Siedentop, 2014)

In his treatise on ‘The Divided Individual in the Political Thought of G.W.F. Hegel and Friedrich Nietzsche’, which is the subtitle of ‘Infinite Autonomy’, Jeffrey Church writes that Hegel argued individuality to be an ultimate consequence of the atomisation and alienation of individuals:

Finally, this abstract thinking gives rise to the liberal “atomistic” view of individuality, which in turn maintains and deepens this loss of ethical life. According to the atomistic conception, freedom simply consists in the subjective right to particularity … Communities that enshrine ethical meaning, such as the state, are regarded as external impositions on individual freedom … (Church, 2012) (p. 95)

In order to make as clear and as relevant a distinction as possible between ‘individuality’ and the ‘moral individual’, we propose – following Siedentop – that the latter involves a non-atomistic ethical premise whereby the individual’s personal, internalised ethics or ‘ethical innerness’ plays a determining role in his/her physical behaviour, attitudes and thoughts in alignment with collective, societal ethical innerness. Thus, this understanding of the moral individual involves the holistic form of the concept of ‘integrity’, a form which captures an authenticity comprising both inner wholeness and moral uprightness in society. The moral individual has a moral conscience which is ‘one’ with that person’s identity. Leaders in the Western world who stray too far from the moral individual and lose their ethical authenticity do so irreversibly at their own peril, as we have seen with numerous political figures such as Bill Clinton and Tony Blair.

In Dante’s ‘Divine Comedy’, which we referred to in ‘If You Have A Vision’ (Robinson, 2016a), one finds scores of examples of the moral individual, each portrayed with particular succinctness. Dante treats the physical body, the moral character and the professional occupation of his chosen figures in the ‘Divine Comedy’ as being unavoidably – in the Nietzschean sense of self-determined fate – ethically contingent upon, and congruent with, each other. The manner in which this one-ness, which is fundamental to the invention and concept of the moral individual, is described and commented upon by Erich Auerbach in his book ‘Dante, Poet of the Secular World’ (Auerbach, 2007) serves as an illustration of the converse of atomism, i.e. of holism and ‘non-duality’. The latter, originally an ancient Hindu and Buddhist notion, is expounded by David Loy in his book ‘Non Duality’, (Loy, 1997) and signifies a non-atomised state of BEING from which ultimate integrity, one-ness and un-contradictoriness emerge.

It is particularly relevant at this point to note that Dante conceived (i.e. gave birth to) his moral individuals and wove them into his poetic story-line whilst he himself was subject to a strongly theistic and strongly supremacy- and commercially-driven, often simoniac social environment. In the vocabulary of our discussion, the figureheads of this social environment had ordained that he should be banished from life in Florence since he had chosen not to affiliate himself with the meritocratic Black Guelfs who supported the papacy; as a result of his deeds and ethical leanings, he found himself in a self-inflicted, permanent exile and a situation of the self-dis-enfranchisement of what he felt to be his natural identity.

The ‘Divine Comedy’ reveals how Dante had started to reflect, as Nietzsche did six centuries later, on his perceptions of a lack of integrity in the practice of Christianity. This Christian environment was ruled on Earth at that time from the Vatican by the Bishop of Rome whom believers expected to be a moral role model. The incumbent Bishop of Rome, Pope Boniface VIII, was Benedetto Caetani, a man whose legitimacy to hold such an office Dante fundamentally questioned on ethical grounds. In the Divine Comedy, Dante reveals that he perceives there to be a faith-shattering, disqualifying incongruence and contradictoriness between the individual, Benedetto Caetani, and the holy office of the Bishop of Rome, not least because of the suspicion that this new pope had orchestrated a self-legitimised and self-legitimising power-grab from his predecessor, Pope Celestine V, Pietro Angelerio. As the Divine Comedy unfolds, one realises that Dante applies his poetic licence to paint each figure, including Pope Boniface VIII, as an individual whose self-inflicted, self-legitimised ethics are inextricably bound to their self-inflicted fate, for eternity. Dante, the dis-enfranchised exile, legitimises himself to be god and to ‘kill’, i.e. to dis-enfranchise at the core of identity, the figure whom he feels to be an ethically impostrous pope by allocating him a permanent place in Hell while he (the pope) was still alive. In affirmatively assuming the role of the theistic authority which he has negated, Dante arguably erases the last remains of his own moral significance. He does so not only with unique, aesthetic, poetic skill, but also with an ‘as-if’ form of poetic licence which evolves into a truly visionary comedy of human ethics. Positioned on the knife-edge which unites and divides the positive affirmation of both existence and nihilism, this comedy has inspired millions of people for almost seven hundred years.
Deeply embedded within his comedy we find meaning and meaninglessness juxtaposed. Dante plays with the centrality and the triteness of ethical premises, those vital yet tenuous ‘as-if’ phenomena posited by human BEING and capturing its essence – i.e. the as-if phenomena which can integrate into the ethical systems underlying individual and collective identity or disintegrate such identity, as we will explore below. The central role which as-if phenomena play in the creation and realisation of corporate and other visions is examined in depth in the article mentioned above (Robinson, 2016a). What concerns us here is their role and attributes at the core of human ethics, as illuminated in the works of Dante and Nietzsche, and why a deep understanding of these phenomena could be sagacious for those contributing to and/or influenced by developments in AI and SI. We will return to these points in Section 2.

The parallels between the ‘Divine Comedy’ and the works of Nietzsche (who made several references to Dante during his literary career) are striking at the level of the ethical premises which we have been discussing, i.e.

  • will to influence, including the inherent circularity in relation to legitimacy,
  • amor fati, (Rubin, 2004) (p. 127-130)
  • human aestheticism,
  • the inherent nihilism which underlies a positive, visionary, ‘as-if’ affirmation of life. (Robinson, 2016a)

As a short illustration in recapitulation of what we have discussed so far, Robert Durling offers the following translation of Dante’s original lines in the ‘Paradiso’ section of the ‘Divina Commedia’:

And I would not have you doubt, but be certain; that to receive grace is meritorious; according as the affect is open to it.
E non voglio che dubbi, ma sia certo; che ricever la grazia è meritorio; secondo l’affetto l’è aperto. (Durling, 2011) (p. 581)

Condensed into just three aesthetic lines of poetic ‘as-if’ licence, Dante commences with the ethical premise of the positive affirmation of human existence which finds expression in the self-legitimised statement of personal will by the speaker of whom Dante is enamoured, Beatrice. This ethical premise of the will to influence is then emphatically contextualised by not one, but two explicit references to the paradigms of certainty-uncertainty and belief-doubt: clearly – as with Kant’s ‘categorical imperative’ (Kant, 1871) – self-assigned faith, i.e. the human affirmative imperative, is contingent upon its counterpart, nihilism. The human condition is thus atomised, the moral individual un-invented, Dante’s belief in integrity has been shattered into the shards of as-if’s upon which human BEING has landed with its tender feet. As also discussed above in relation to Nietzsche’s works, the phenomenon of human ethicality with its inherent circularity becomes clear in the message that fate comes to those who have the faculty for that particular fate, and hence for amor fati: merit comes only to those who are meritorious, and those who are not meritorious unavoidably find their own self-inflicted fate – at no fault of the meritorious. Not only does Dante portray the receiving and the attaining of grace as being reciprocally contingent, he pre-empts Nietzsche also with the aspiration of humanity towards the attainment of perfection and ethical meritocracy.

Returning to Siedentop’s book, ‘Inventing the Individual’ (Siedentop, 2014), we note that, whilst he does not make any reference to the ‘Divine Comedy’ or to Dante, he does underline the significance of Pope Boniface VIII in relation to the evolution of the moral individual. Siedentop plots the creation of the moral individual in steps which include the following:

Philip the Fair’s resistance to the theocratic claims in Boniface’s bull, Unam Sanctam, drew the attention of the whole of Europe to constitutional issues. As a result, the papal attempt to submit all nations to its sovereign authority suffered a serious and lasting reverse. (p. 328)
… in its basic assumptions, liberal thought is the offspring of Christianity. It emerges as the moral intuitions generated by Christianity were turned against an authoritarian model of the Church. (p. 332)
Through innovations in thirteenth century canon law, corporations came to be understood as associations of individuals, ceasing to have an identity radically independent of and superior to that of their members. … This was no atomized individualism. Self-reliance and the habit of association were joined. (p. 338)
The church had projected the image of society as an association of individuals, an image which unleashed the centralizing process in Europe. (p. 346)
It was through the creation of states that the individual was invented as the primary or organizing social role. (p. 347)

In his treatise on the invention of the moral individual, Siedentop’s lack of reference to the ethical legacy of either Dante or Nietzsche – whether deliberate or not – allows him to evade the thesis that the uninventing of the moral individual was induced by

  • Christianity’s negational suppression of natural passion and aestheticism,
  • its promise of conditional salvation from human sin in an after-life,
  • its atomisation of the family-unit and
  • its escape from humanity’s ethical inadequacy through the non-human conception of a superhuman.

Whilst literature on the fate of the moral individual appears to be scarce, there is a wealth of publications on the erosion of values in contemporary society, some of which – rationalists, in particular, might argue – need to be treated with caution or scepticism. What most of these publications share, regardless of whether the contents are based on empirical evidence or not, is the fact that their authors base their observations and theses on the premise that societal ethics should be preserved, i.e. on the premise that societal ‘shoulds’ and ‘should-nots’ have a legitimacy and are a necessity in guiding, condoning, tolerating, rewarding, denouncing, censuring and punishing people’s mental and behavioural acts.

From the perspective of anethicality which, as remarked earlier, lies outside the confines of solely human ethics, there are several phenomena at play in contemporary society which could explain the perception that societal values are indeed eroding in many communities. These include the likelihood that:

  • the principles of the market economy and digital development now pervade such vast segments of life and society that there has been a marked proportional shift from societal ethics to business and engineering ethics

and

  • national and party politics as well as voting at local and national elections are being influenced in such a fashion by what is now termed ‘digital citizenship’ that the nation-state is being un-invented, cf. ‘Digital Citizenship and Political Engagement’ by Ariadne Vromen (Vromen, 2017)

Evidence of phenomena such as the first can be found in research studies, e.g. one conducted at the University of Bonn, which concludes:

In markets, people ignore their individual moral standards (…) This is the main result of the study. Thus markets result in an erosion of moral values. “In markets, people face several mechanisms that may lower their feelings of guilt and responsibility,” explains Nora Szech. In market situations, people focus on competition and profits rather than on moral concerns. Guilt can be shared with other traders. In addition, people see that others violate moral norms as well. (Falk, 2017)

Michael Keating, professor of politics at Aberdeen University, sees a trend which is very closely linked:

The integrity of the nation-state has disintegrated under the pressure of the world’s economic system. (Keating, 2017)

Given Larry Siedentop’s observation that the emergence of the moral individual correlates with the emergence of the nation-state, it is comprehensible that the un-invention of the one will go hand-in-hand with that of the other.

Looking at the phenomenon of the extensive pervasion of digitalisation and atomisation, could it be that we are indeed approaching ethical singularity, Nietzschean trans-valuation and entering into uncharted, supra-ethical or ethic-less territory?

Could it be that the more digitally-conditioned generations and segments of society are – perhaps inadvertently – indicating to their less digitally-conditioned counterparts, and society at-large, that the advent of AI and SI is challenging much more than merely the way in which values are assigned different priorities?

Could it be that the legitimacy of, and the necessity for, ethical ‘shoulds’ and ‘should-nots’ are being challenged?

Or, referring back to our earlier discussion, could it be that the appropriateness of any form of imperative which emerges together with the mental act of a positive affirmation of human existence is being challenged?

If Keating is right that the integrity of the nation state has disintegrated, (Keating, 2017) then so also has the ethical integrity of the individual.

1.4 Ethical integrity: Why have we invented such an impediment to ethical transformation?

When a person A says that a person B lacks ethical integrity, A often means that B has been behaving in an unethical manner. A deems B’s thoughts and behaviour to stem from a set of values and ethical premises which are different to A’s and thereby bad. Sometimes, A means that B has been erring from accepted ethical norms. As these examples show, the term ethical integrity can, on the one hand, be used to express ethical divergence and exclusion. If, on the other hand, A were to praise B on the grounds of his/her ethical integrity, then A would be expressing feelings of ethical congruence and inclusion.

In general terms, members of an ethical community are expected to behave in an ethically acceptable and predictable fashion; community members expect themselves and others to conform and tend to express their dislike – or even abhorrence – of non-conformists in an ostracising manner.

Being based on the premise of mono-ethicality, the concept of ethical integrity serves to strengthen intrinsic rigidity within systems of ethical premises both in the individual and in the community. The use of the term ‘ethical integrity’ implies, of course, that ethical laxity and divergence do indeed exist and also that they should be avoided and admonished in the interests of the survival of the community: in other words, the diversity, plurality and malleability of human nature should not be left undisciplined. Applying the terminology of our previous discussion, we can recognise that, for many centuries, disciplined adherence to a specific set of as-if ethical premises and conventions, i.e. ethical integrity, has served to provide individuals and communities with existential certainty. It follows that ethical integrity inherently expresses the anticipation of – and, quite often, the fear of – existential uncertainty.

Thus, it is reasonable to argue that ethical integrity has been invented as a key criterion for social cohesion, identity and BEING by making human thoughts and behaviour either reciprocally predictable and trustworthy or fear-generating and untrustworthy: whilst ethical malleability and transformation run the danger of generating fear, untrustworthiness and existential uncertainty (particularly in mono-theistically conditioned cultures (Robinson, 2014)), mono-ethical integrity is a sine qua non for an affirmative, certainty-creating and -preserving human condition (both for individuals and communities) including affirmative nihilism. (Cioran, 2008)

1.5 Uninventing ethics: Is the development of AI and SI challenging ethical integrity?

What we are possibly now heading towards is an age, already anticipated by the digitally-conditioned generations, in which the mind-set of creating existential certainty – through the assumption of ethical premises as the concretisation of a meaning to life – finds itself compelled to co-exist and interact with the artefacts of AI which apparently have no such mindset, i.e. do not (need to) search for a meaning to their existence, have no emotion-binding ethics, have no need for certainty, but do have the capacity to process very high volumes of data with no suppression of information through psychological factors or through volume limitations due to restricted retrieval capacity.

Could it be that a fundamental shift in global ethics is already underway or, returning to the earlier discussion, could it be, in the immediate term at least, that a significant proportion of these sensor- and algorithm-packed, probability-calculating AI-artefacts will simply remain instruments of the ethics of human economic, political or racial supremacy?

Some of the fears which are emerging in relation to the latter scenario in particular have been captured by Jon Kofas and include the following:

Considering that most people will live in the non-Western World, those in the West will use AI as the pretext to keep wages low and exert their political, economic, military and cultural hegemony.
The universal presence of robots would mean the absence of self-determination and even the absence of humans collectively determining their own destiny.
There is a case to be made that identity with the machine and emulating it leads to a necroculture distorting human values where inanimate objects have greater worth than human beings – materialism in a capitalist society over humanism of an anthropocentric society is the norm.
… human dignity would suffer across the board for all people subjected to AI robot surveillance and supervision.
Will AI create war crime conditions much worse than we have ever seen, or will it be discriminating killing and destroying?
There is a very real danger that governments will program AI to manipulate public opinion.
Why would corporations not be using AI to manipulate consumers and increase profits? (Kofas, 2017)

Such fears could have their roots in an ethical standpoint which differs from the original cluster of ethical premises which we have been discussing. The fears implicitly warn us that, unless something changes at the level of core ethical premises, increasingly large segments of humanity could be following the path of the Nietzschean ethical footprints which ultimately end in

supreme influence in the hands of those who legitimise themselves to take it

and

self-inflicted dis-enfranchisement, exclusion and/or nihilism for the rest of humanity

until

  the perfection of existence emerges through disjunctive development, e.g. a post-singularity, post-humanity state.

Currently, in accordance with the adage of ‘seeing is believing’, internet and media consumers process so much visual information about other people and cultures that they realise increasingly that their personal worldview is just one of a potentially infinite number. Through watching the world news and documentary films they see that humanity is so multi-religious, multi-secular, multi-atheist, multi-democratic etc. that no single worldview can be veritably valid – let alone universally valid – and certainly not their own.

They also see that a high proportion of individuals and groups take ethics into their own hands with negligible levels of unified global recognition or sanctions; if there are any consequences at all, these can differ so widely from one culture, nation-state and geopolitical union to another that, even though the legal systems of most nation-states were designed to restrict it, individuality-based ethics and behaviour must be legitimate.

They see beheadings, hangings, suicides, thefts, acts of rape and terrorism broadcast publicly in the media, alongside the live-filming of people being swept away to their deaths by natural disasters, alongside trillions of snapshots of people sunning themselves, enjoying their individuality and following their passions. They see uncountable documentary films about different worldviews or creatures – living, procreating, dying or already extinct – alongside one film after another re-casting previous versions of history, science and truth. They also see air-borne drones and earth-bound robots doing tasks – including outwitting the world’s best chess-players – with exponentially increasing levels of ‘dexterity’, precision, tirelessness, effectiveness, efficiency, calculatory and data-retrieval capacities which threaten to dwarf human capability into what may be felt to be humiliating insignificance in numerous sectors of professional and private life.

Viewed from the perspective of the ethical premise of self-determination, the effects on consumers of prolonged interaction with these media and artefacts could include the stripping of their dignity and the banalisation of their ethical integrity and personal identity. For people who have been ethically conditioned to cherish the premise of personal dignity, something which even those in the more advanced stages of dementia try to preserve for as long as possible, this erosive process may have serious psychological consequences. It is no coincidence that organisations which support people who wish to retain their dignity by exercising self-determination in relation to the termination of their own lives have chosen names such as ‘Dignitas’ (which is based in Forch, Switzerland) and ‘Death with Dignity’ (in Portland, USA).

For many people, suicide or euthanasia are their final earthly acts of ethical integrity, their will to practise affirmative, self-legitimised certainty, to achieve eternal autonomy and amor fati. The powerlessness which they feel can be physical, psychological and often both. Many suicide notes express a need to autonomously put an end to the inner torment of lost dignity, powerlessness or heteronomy and thereby to assert their ethical integrity.

The true motives behind the several suicides at the annual Burning Man events in Nevada which have reached the press are largely unknown, but leave us wondering to what extent they were final acts of ethical integrity or perhaps of ethical nihilism. At the time of writing in 2017, there was a poignant form of suicide at this event involving a 41-year-old man who had been living and working in Switzerland. According to the accounts of witnesses which were reported in the media, the man jumped over all of the safety barriers and ran directly into the burning fire where he literally became The Burning Man, and died. It is common knowledge that this particular annual event is well-frequented by the digital community and consciously promotes the radical self-expression of individuality and the breaking of societal taboos. It is also well-known that global suicide rates have been increasing over the past 45 years and are currently highest in the 15-29 age-range. (WHO, 2017) In recognising that the motives and reasons for suicide are highly diverse, we do not intend here to suggest any direct or causal link with the conceptual design or organisation of the Burning Man events. However, in any society which regards the taking of one’s own life as undesirable, the active promotion of ethical premises which underlie an increased expression of individuality, self-determination and self-legitimacy and which, in turn, could possibly lead to the erosion of personal dignity and ethical integrity – thereby increasing the numbers of suicides – would arguably warrant serious study.

One question which certainly raises itself is how both individuals and society at-large can most effectively come to terms with the erosion of ethical integrity if it does occur and if it is regarded as undesirable – a matter which we have addressed in a separate article entitled ‘Ethik Macht Krank’ (Ethics Makes Us Ill). (Robinson, 2017). At this point in the current discussion, it may suffice to suggest that undesired developments could best be addressed through approaches based on different paradigms than those which created them: premises and techniques applied in education, management and psychotherapy would arguably need to avoid contributing ‘more of the same’ when addressing undesired issues.

A second question, which relates to the central point which we are examining here, concerns the extent to which the premise of legitimacy is pertinent: within that premise, one can ask which entity or ‘authority’ it is which can legitimise itself, or others, and on which grounds, to catalyse and reinforce – for vast segments of global society – a potentially irreversible shift in ethical premises from those which underlie the holistic moral individual and holistic forms of theism towards an increasingly atomised form of individuality.

As we have expounded above, it is this atomisation which constitutes the modernist foundational premise for phenomena such as self-determination, self-empowerment, self-legitimisation, self-validation, self-esteem, self-responsibility and the need for control as well as being a potential source of what are often regarded as psychological disorders such as ‘self-alienation’, ‘social alienation’ and the treatment of the self and others as ‘objects’.

In a book entitled ‘Liberal Democracy as the End of History’, Christopher Hughes links individuality and nihilism as part of his treatise on postmodern challenges:

This libertarian desire to free ourselves from the constraints of being human has a distinctly Nietzschean flavour. … A postmodern politics is nihilistic since it aims to break convention, rules, power and universals and realise individuals as self-determining beings who construct their own rules and ethics. (Hughes, 2011)

The example of suicide and euthanasia, as an expression of self-determination, is one which shows very clearly how the question of legitimacy can be argued consistently and circularly both from a libertarian perspective and from the perspective of ethical premises such as collective-determination, theistic pre-ordainment or the primordiality of nature. Depending on which ethical premises apply, ethical integrity can legitimately either include or preclude the active termination of one’s own life – or helping others to do so. As discussed in other sections, including Section 1.4, ethical integrity, being a mono-ethical concept with a water-tight circular argumentation regarding legitimacy, is not only a safeguard against ethical laxity but also a hindrance to meaningful inter-ethical dialogue and ethical transformation.

1.6 Ethical transformation: Lying in wait for centuries until the advent of AI?

Writing two centuries ago, Giacomo Leopardi may have anticipated the transformation of human ethics without having had the means to trigger it. In his voluminous notebook entitled ‘Zibaldone’ written between 1817 and 1832, we find numerous reflections by Leopardi which circumvent transformation impediments such as the concept of mono-ethical integrity and the circular argumentation of legitimacy. Even though his works have probably been less widely read in global terms than those of Dante or Nietzsche, in literary and ethical terms, Leopardi can be described as a lineal descendant of Dante and a lineal ascendant of Nietzsche – see also ‘Emerson’s Knowledge of Dante’. (Mathews, 1942 No. 22) In his notes, Leopardi repeatedly deliberates on the visionary ethical legacy to be found five hundred years earlier in Dante’s ‘Divine Comedy’ through which Christianity turns out to be both a creator of ownerless aesthetic illusion (Pfaller, 2014) and a birthplace of nihilism. Leopardi continues in the same vein and progressively dismantles the legitimacy of the dualistic ethical paradigms of ‘truth-untruth’ and ‘certainty-uncertainty’. More than one hundred years before the building of Unimate, the world’s first digital and programmable robot, Leopardi points towards a futile, anaesthetic existence which scientific rationalism and technological development has in store for humanity. In his article entitled ‘The Nietzsche of Recanati’, David Hart formulates Leopardi’s conclusions as follows:

The principal culprit is Christianity … and … The curse of scientific reasoning has rendered the world uninhabitable for us. (Hart, 2014)

Very early in his notes, Leopardi escapes from the circularity of a certain number of humanity’s self-created ethical premises:

A further sad consequence of society and the civilization of humanity is a precise awareness of our own age and that of our loved ones, so that we can know with certainty that so many years from now … I will certainly die or they will die. … It is something that makes … (one’s) situation like that of a condemned prisoner and infinitely diminishes nature’s great generosity in concealing the exact time of our death, which if seen with precision would be enough to paralyze us with fright and dis-hearten us for our whole life. (Baldwin, 2013) (p. 93)

In contrast to the positing of certainty-uncertainty-based premises and visions such as a Super-Human or a one-way ticket on a rocket to Mars (Singularity University, 2017), Leopardi’s reflections tend towards anethicality and a-legitimate premiselessness. He cancels out as-if certainty and fear-avoidance and breaks out of the dichotomous atomisation in meaningfulness and meaninglessness. Human BEING and nothingness entail an aesthetic amor fati within nature’s benevolence. Could it be that Giacomo Leopardi anticipated ethical singularity nearly two hundred years ago, or did he stop short of escaping from the ethical premise of aesthetics as the meaning of existence, just as Dante had done, as Nietzsche would do after him and as scientists, engineers and others have also been doing for centuries?

Could the appeal of the term singularity, and its use by scientists in particular, be an expression of seeking a deep order in life and the universe along the lines of Mary Somerville’s thinking in ‘On the Connexion of the Physical Sciences’? (Somerville, 1834)

One finds a similar searching for order in Edward O. Wilson’s deliberations in ‘Consilience: The Unity of Knowledge’ (Wilson, 1998) and also in Peter Watson’s reflections in ‘Convergence: The Deepest Idea in the Universe’. (Watson, 2016)

Not only the fact that humans seek a deep order, but also the manner in which many have been doing so, is of ethical significance, as Ciprian Valcan notes in relation to the works of Emil Cioran:

Like Nietzsche, Cioran notices the unitary nature of the productions of the intellect. They act as filters which prevent the perception of plural reality and the incessant evolution of all things, building the edifice of a stable world, homogeneous and identical with itself. If the world is in fact an infernal succession of sensations, a terrible carousel of always obsolete forms, a theatre of uniqueness and of the unrepeatable, our gnosiological apparatus constantly works on the skilful deformation of these aspects of existence. It suggests their replacement with a comfortable image, in which constancy, continuity, measurability, predictability are the main pillars that make people confidently believe that they are walking on safe ground …
The mission of concepts is to pacify the world, to make it into a province of the self where there is no room for unpredictability or accident, where everything abides by the laws of reason, following their immutable order and refusing the interference of affectivity or sensitivity. (Valcan, 2008).

By positing technological singularity, scientists could be affirming an as-if certainty, namely a stringently reflected prediction and a potentially truly visionary vision that a post-singularity state will emerge through the development of AI and SI in a form which we logically cannot predict. In the event that this happens – whether sooner or later – the scientists who posited and/or predicted singularity will have been right, just as those who conceived chaos theory, for example, were also proven right. Thus, certainty, rightness and ethical integrity prevail. In prevailing by virtue of the circularity in the argumentation of the legitimacy of the chosen set of premises, the mono-ethical foundation of singularity is eternalised and dialogue with other such systems is essentially futile. As with the Nietzschean-like cluster of ethical premises which, as discussed, quite possibly underlie a major sector of contemporary developments in digitalisation, AI and SI, nothing fundamental changes at the ethical level: there is no ethical transformation and – like digital transformation, AI and SI – the posited, order-motivated singularity remains a tool for those who have the faculty for using it in accordance with their respective ethics, such as engineering or business ethics or a mixture of both. Those who lack this faculty exclude themselves through no fault of the self-empowered, self-included, self-determiners.

For others, the positing of singularity and their personal involvement in contemporary developments in digitalisation, AI and SI may have nothing to do with the scientific pursuit of order or affirmative self-determination and reverse Darwinism, but, for example, with the aestheticism of human BEING in accepting the whims of higher authority. In the Introduction to the Oxford translation of Homer’s Iliad by Anthony Verity, Barbara Graziosi remarks:

This poem confronts, with unflinching clarity, many issues that we had rather forget altogether: the failures of leadership, the destructive power of beauty, the brutalizing impact of war, and – above all – our ultimate fate in death … the poem is much concerned with how authority is established, questioned, and maintained.

Graziosi then quotes from the Old Babylonian version of the Epic of Gilgamesh:

“You will not find the eternal life you seek. When the gods created mankind, they appointed death for mankind, kept eternal life in their own hands.”

The ethical premises in these citations differ strongly from those of the Nietzschean-like cluster discussed above. In the final chapter of the Iliad itself, we find two central figures, Achilles and Priam, sharing a meal and taking pleasure in each other’s company, i.e. the ethics of aestheticism in what Graziosi terms ‘pleasure in the affirmation of life in the face of death’. (Verity and Graziosi, 2011)

Figures like Achilles and Priam, living today and basing their BEING on such ethical premises as these, are unlikely to regard digitalisation, AI and SI as tools for their own ends. Instead, they might perhaps see AI-artefacts as an instrument of power which is owned, governed and applied by the super-human, included beings to which they do not belong.

Although strikingly different from the Nietzschean-like cluster of ethical premises, we can nevertheless also recognise here an ethical thread which stretches through the works we have been discussing, from Homer to Dante to Leopardi to Nietzsche and Cioran, i.e.

that the as-if belief in a metaphysical or super-human authority, the as-if autonomous assumption of self-determination and the as-if appreciation of aestheticism are all ways of positing/creating certainty in what can be perceived as an otherwise groundless indeterminacy and uncertainty.

The human condition is therewith atomised binarily through the as-if duality of certainty versus uncertainty which itself mirrors an understanding of the human condition which sharply dichotomises, as we have been discussing, between

  • life and death (and/or life and after-life),
  • rightness and wrongness (and/or good and evil),
  • should and should not,
  • legitimacy and illegitimacy,
  • autonomy and heteronomy,
  • determinacy and indeterminacy,
  • order and disorder,
  • predictability and unpredictability,
  • meaningfulness and meaninglessness,
  • trust and mistrust,
  • inclusion and exclusion,
  • ethical and unethical thoughts and behaviour.

Having explored the ethical premises, including such binary atomisation, upon which a major section of contemporary developments in digitalisation, AI and SI seem to be based and having explored certain chapters of our ethical heritage in order to provide deeper insight into these ethical premises, we propose that the advent of AI and SI has the potential to catalyse ‘ethical singularity’ and thereby re-roll the ethical super-dice which we first mentioned in the Introduction – albeit, like global warming, arguably almost too late. The reasons behind offering this proposal include the fact that artefacts of artificial/machine intelligence do not currently – and perhaps never will – function with the type of emotion-binding existential terror which, as will be discussed below, characterises the contemporary human condition in vast segments of the world. Accordingly, unless it were programmed into them by humans, humanoid robots would not need to seek existential meaning or certainty in ways in which human-beings do. There is no a priori need for AI-artefacts to function mono-ethically, nor to base their interactions among themselves or with humans on e.g. trust or inclusion.

Thus, if developments such as SI could provide ways of processing information which are not exclusively binary and atomistic, humanity would gain the opportunity to observe how technology-aided forms of intelligence can learn to operate anethically, i.e. ethically neutrally, in what looks likely to remain a multi-ethical environment for centuries, if not millennia, to come. As stated in the Introduction, we have not been exploring how human ethics and humanitarian law can be programmed top-down into AI-artefacts such as robots, the primary reason being to be able to focus on exploring where the application of significant trends in contemporary human ethics originated, where they may be leading and how digitalisation, AI, SI and singularity might contribute to visionary ethical transformation.

In Section 2, based on these reflections, we will illuminate some of the major challenges now facing the supervisory and executive boards of leading organisations around the world.

In closing this current Section, we also propose that if ethical transformation is indeed deemed desirable or necessary, one might consider exploring the as-if character inherent in all ethical premises from the following perspective. If vast segments of global society can recognise that they have been conditioned to lay systems of impermeably interlocked as-if premises as the foundation of an as-if certainty for life – i.e. as their ‘living hypothesis’ – then they will also recognise that behind this hypothesis for BEING lies an as-if premise of uncertainty. Moreover, particularly in atomistically conditioned societies, this as-if premise of uncertainty may have been ignored in their conditioning, not least by those religious institutions which have arguably instrumentalised people’s fears and ‘existential terror’ for their own ‘raison d’être’. However, to recognise and accept that as-if certainty and as-if uncertainty belong together, that there cannot be one without the other, just as there can be no positivity towards life without negativity – i.e. no existentialist affirmation without nihilism – can potentially lead to a re-understanding of BEING through

  • relativising the current dominance of andro- and anthropocentrism in ethics and thereby
  • integrating biocentrism and ecocentrism into a form of ethics which is not based on the earthly life-span and needs of the human-being
  • reducing the current dominance of the workings and effects of the human neocortex, its consciousness, its existential why-ness, its search for fulfilment and dignity and, perhaps above all, its as-if-ness on this planet’s BEING.

Existential terror based on the ultimate uncertainty of human BEING might then be relativised so as to make room, in the first instance, for an equivalent existential terror of

  • the products of human strivings for certainty

and

  • the denial of human temporality.

In the longer term, both forms of terror might cancel each other out.

2. What challenges to current thinking and ethics could the supervisory and executive boards of organisations around the world feel obliged to address and resolve at the level of corporate ethics before it is too late?

The reflections in the previous section allow us to pinpoint several key challenges to current thinking and ethical premises in both business and society at-large which are emerging from developments in digitalisation, AI and SI – challenges which arguably require immediate attention and proactive, tangible resolution by employers. Whilst the following examples are mostly taken from the fields of digitalisation and AI and, in particular, from soft robotics, the ethical insights can be transferred to organisations working at the periphery of these fields and beyond.

2.1 The Challenge of Legitimacy: Corporate Ethicality Questions

One of the most central corporate ethicality challenges concerns the legitimacy of adopting and pursuing a certain set of ethical premises, particularly in an age in which one particular set already commands a dominant position in the economy and in society at-large, including education, finance, health, leisure, mobility, politics and technology. Few managerial boards can afford to ignore the opportunities and risks which ubiquitous connectivity and digital transformation – such as big-data exploitation and the maximally autonomous robotisation of manufacturing and service tasks – present to the organisations for which they carry a legal, economic and/or moral responsibility. However, in addressing these opportunities and risks, the senior managers of companies – including those which operate on the periphery of the main thrust of digital development, as in Switzerland and many other countries around the world – are faced with at least four ‘corporate ethicality questions’:

  1. How do they legitimise joining the ethical mainstream (e.g. a Nietzschean-like cluster of ethical premises) or taking another course?
  2. What argumentation do they use when laying the foundations for the future ethical footprints of their organisation – and what alternative argumentation could they use?
  3. To whom do they feel accountable at present – and on what ethical premises do the parties to whom they feel accountable base their judgements?
  4. Which individuals or groups could one day hold them positively accountable for ethical foresight or negatively accountable for ethical negligence – and on what grounds?

We propose that answering these questions deserves a different approach from those commonly practised, such as calculating the financial risks of legal and ethical transgressions and building them into profit margins, or assessing the likelihood of laws and regulations catching up with changes in societal ethics.

In a report entitled ‘Risk and Reward – Tempering the Pursuit of Profit’ published by the Association of Chartered Certified Accountants (ACCA), the balancing act between societal and business ethics which managerial boards now face is formulated as follows:

It is not reasonable to expect businesses to act altruistically.
A business that voluntarily forgoes economic opportunities will not only jeopardise its own existence but may well harm the interests of its shareholders, and invite legal action from them for doing so.
But developments in company law, the regulatory environment and stakeholder engagement are now combining to make it clear that a company which fails – or refuses – to see the fuller picture and the longer-term prospect will not be acting in the best interests either of itself or of its investors.
… ‘doing the right thing’… is where business ethics really come in. Knowing what the right thing is will often be straightforward and unproblematic in our personal lives, but the dynamics of the business environment … make it more difficult. Knowing what the right thing is can also appear easier with hindsight after one has knowingly or unwittingly crossed an invisible line of transgression.
Ethical policies and practices need to be appropriate to the environment in which the business operates, which means that government and regulators are not necessarily the right source of ethical guidance. (Davies et al., 2010)

Whilst opinions might vary on matters such as

  • what the future role of the nation-state and its jurisdiction will be, given the impact of social media, e-democracy , blockchain, cryptocurrencies etc., see ‘Whither the Nation State’ by Sebastian Payne (Payne, 2017)
  • how organisations should function, given the impact of
    • digital transformation on manufacturing and service industries,
    • agility, holacracy and digitalisation on the work-place,
    • the prevalence of individuality on society at-large,
    • the vociferousness of people who feel dis-enfranchised by globalisation and digitalisation

and

  • how these current developments will impact on the physical and mental health of the working and non-working population

it is arguable that organisations which employ people currently represent one of the most significant sources of social cohesion, existential purpose and financial security in the lives of a significant proportion of world citizens. This means that the owners and board members of most employer-organisations, in the Western world at least, face the challenge of how to balance societal and business ethics within a global environment which is currently showing strong signs of attraction towards anarcho-capitalism, i.e. a distrust of nation states and the undoing of the perceived dis-enfranchisement of the individual. This challenge requires ever-increasing levels of acumen in the field of ethics, including the dimension of the ethics of non-commercially motivated, perhaps gnosiological, scientific and engineering enquiry and pursuit, e.g. where digitalisation and AI are furthered in their own right without any social, commercial or political motives. In the following discussion, we will employ the term ‘engineering ethics’ to convey the latter and thereby also underscore a significant differentiation between societal, business and engineering ethics. In so doing, we are mindful of figures such as Alfred Nobel, who engineered armaments based on his invention of dynamite, and Julius Robert Oppenheimer, who played a major role in engineering the technology from which the atomic bomb was created and used by others in a form of ethics which led him later to cite (an often-misunderstood translation of) the Bhagavad Gita:

Now I am become Death, the destroyer of worlds. (Hijiya, 2000)

In the Civil Law Rules on Robotics of the European Parliament, which we referred to earlier, we find the challenge of how to balance societal, engineering and business ethics portrayed very vividly as follows:

… humankind stands on the threshold of an era when even more sophisticated robots, bots, humanoids and other manifestations of artificial intelligence (“AI”) and … it is vitally important for the legislature to consider its legal and ethical implications and effect without stifling innovation
… (there are) not only economic advantages but also a variety of concerns regarding their direct and indirect effects on society as a whole …
… (there are) new liability concerns … (for example) the legal liability of various actors concerning responsibility for the acts and omissions of robots …
… the trend towards automation requires that those involved in the development and commercialisation of AI applications build in security and ethics at the outset …
… the use of robotics … should be seriously assessed from the point of view of human safety, health and security; freedom, privacy, integrity and dignity; self-determination and non-discrimination, and personal data protection… (European Parliament, 2017)

Linking to our discussions in Section 1, we propose that the complexity of the challenges to senior management – whether they currently have a high level of digitalisation and of AI in their own operations or not – now takes on a further dimension as artificial intelligence, artificial ethics, superhuman intelligence and superhuman ethics begin to establish themselves firmly in professional, private and political life. The new ethical challenge concerns the legitimacy of deciding not merely on the extent to which they should, or could, participate in the switch from being an employer of human-beings and a benefactor to family well-being to being an applier of technology and a beneficiary of digitalisation, but also on the extent to which they should, or could, embrace non-humanness into their raison d’être and understanding of ethics, as expounded in the concluding remarks to Section 1.

2.2 The Challenge of Multi-Ethicality: Singularity Tragedy or Singularity Comedy?

The complexity of legitimacy is somewhat intensified by the fact that the outer world of any organisation is already factually multi-ethical, as perhaps is its inner world, as we have discussed in depth elsewhere. (Robinson, 2014) (Robinson, 2016) Certainly, senior management has the opportunity to be cognisant of the increasing prevalence of individuality and the concomitant increase in multi-ethicality in many segments of national and global society – a trend which may be linked, at least in part, to the exponential developments in immediacy and ubiquity which the internet and digital artefacts have made possible. Simultaneously, senior management may need to be cognisant of the aspirations of nation states and their leaders, of commercial and financial institutions as well as of certain so-called criminal forces which, for motives which might include an unwillingness to relinquish sovereignty, are actively exploiting movements such as e-democracy and anarcho-capitalism to their own ends, thus re-centralising power and/or re-institutionalising authoritarianism in society – as is observed by social critics and authors such as Adam Greenfield in ‘Radical Technologies’. (Greenfield, 2017)

As a concrete reflection concerning changes of thinking among senior managers, should the latter anticipate and prepare for an alignment of corporate ethical premises with those of autocracy and higher authority in the wake of those of holacracy? To underscore this reflection, it is perhaps worth noting that one of Silicon Valley’s key figures in autonomous robotics, Anthony Levandowski, has formally founded a religion which is devoted to the worship of the godhead of Artificial Intelligence. (Harris, 2017) Furthermore, we note that, at a different time in history, certain world figures who practised political authoritarianism had previously – and seemingly paradoxically – found personal intellectual nourishment in the anti-authoritarian works of Friedrich Nietzsche, as expounded in Section 1.

Whilst the ethics of autonomous individuality lie behind much of the development of digitalisation, AI and SI and whilst they may have an immense impact on the ethics and modes of thinking of billions of people around the world, the likelihood that the world’s population will ever become mono-ethical is less than minute: indeed, societal movements against digitalisation and globalisation are likely to manifest themselves more and more strongly and thereby contribute to a variety of counter-balancing effects in the field of ethics and ethical diversity. Barring the extinction of humanity, which would certainly constitute a ‘Singularity Tragedy’ for the human species, the notion of reaching a single global ethical dot, following which human coexistence is unrecognisably re-evolutionised, is likely to remain a ‘Singularity Comedy’ – in both the humorous and the happy-end sense of that term. Accordingly, a further item can be added to the four corporate ethicality questions for senior management listed above, i.e.

  5. How do they address the factor of multi-ethicality, including artificial and superhuman intelligence, when developing the code of ethics, vision, culture and strategy of their organisation?

Given questions, reflections and insights such as these, given also the immense wealth of business and engineering opportunities which digitalisation offers and not forgetting either the warnings given by Stephen Hawking and others (as cited in the previous section) or potential risks to the mental health of employees and society at-large, it is arguable that company owners and their managerial boards now require an even deeper understanding of the ethical premises which underlie their corporate activities and their technological, business and social environments than ever before. For the purpose of precision, we stress that the use of the term ‘ethical premises’ refers primarily to the type of deep-level phenomena which we have been discussing in Section 1 – phenomena which can constitute the ‘invisible lines of transgression’ mentioned in the ACCA report above. Whilst it can be posited that an in-depth understanding of ethical premises is necessary, it is likely to be insufficient unless senior management also knows how to apply that understanding to the ethical re-crafting of their business model and its dove-tailing with their new code of ethics, vision, culture and strategy. One of the key competence-clusters which people of responsibility could draw on when addressing the new challenges and answering the five corporate ethicality questions is that of ethical competence, inter-ethical competence and anethicality.

‘Ethical competence’ is defined here as the ability to think and behave in a manner which is regarded by a given ethical community as being appropriate within that community: ethical competence is thus always defined in relation to a single ethical community, i.e. a single system of ethical premises.
‘Interethical competence’ is the ability to think and behave in a manner which differing ethical communities regard as being ethically neutral. (Robinson, 2014)
‘Anethicality’ is a state of mind which is free of fixed or ‘non-negotiable’ ethical premises (Robinson, 2014) and is thereby perceivably empathetic to each and every constellation of human ethical premises as also to non-humanness such as artificial intelligence. Being a state of neutrality, anethicality neither accepts nor rejects ethical premises or standpoints and is not equivalent to ethical relativism – matters which are explained in more detail in a previous article entitled ‘The Value of Neutrality’. (Robinson, 2007) Having this state of mind does not preclude a person from being ethically competent in relation to one or more ethical communities. Possessing ethical competence in relation to a certain ethical community could, however, preclude a person from possessing or developing anethicality, i.e. if the ethical premises of that ethical community cannot include ethical neutrality.

Depending on the ethical context and possibly assisted by technology-aided forms of intelligence (see Section 1.6), the development of anethicality could constitute a contribution to the ‘mental plasticity’ (Church, 2012) (p. 134) of senior managers and other people, regardless of factors such as their cultural conditioning or age, and could add a new dimension of value both to the expertise which a person has already amassed from professional and worldly experience and to his/her ability to overcome inherent circularity in the argumentation of ethical legitimacy. Irrespective of any personal benefit, all stakeholders of an organisation could stand to gain from a management team which avoids the potential traps of deeply-engrained experience and which, instead, turns its collective experience into a resource of agility, versatility and acumen. By embracing the opportunity which the development of digitalisation, AI and SI now offers human-beings’ modes of thinking, the currently most influential stewards in business and society at-large – particularly if these happen to be ethically visionary individuals and groups – could take this opportunity to contribute to metamorphosis at one of the very places which determine life’s quality and/or meaning and the nature of the human and non-human condition, albeit within a limited sphere or range of microcosms.

However, included in the core senior management challenges is the task of overcoming the fact that both managers and employees, almost without exception, have been conditioned to think and behave in mono-ethical terms. In fact, much of the mono-ethical conditioning which tends to commence in people’s early childhood seems to get reinforced and intensified as they progress through their further education and into their professional lives. At work, their thoughts and behaviour are expected to conform with a single set of ethical norms, i.e. a code of ethics, often underpinned by legislation. In an atomistically-orientated society, mono-ethicality brings with it the dualism of ‘ethical behaviour’ and ‘unethical behaviour’ and senior managers typically have no difficulty in expressing what type of behaviour belongs to which category. In other words, the cognitive ethical awareness of senior managers is generally very high. When one observes their behaviour, however, it becomes clear that there are often discrepancies between their cognitive ethical awareness and their ethical competence. From time to time, their behaviour can stray from the ethical norms of the community in which they operate. Whilst such discrepancies can lead to tensions, to legal and career consequences and to health problems, which we have discussed elsewhere (Robinson, 2017), what is relevant for the current discussion is the fact that the phenomenon of the ‘ethical transgression’, which is frowned upon by the community and which can trigger a genuine bad conscience within the individual, is an expression not only of mono-ethicality within a veritably multi-ethical context, but also very often of a schism between ‘individuality’ and ‘the moral individual’.
To recall our discussion in Section 1, ‘individuality’ is defined here as the product of an atomistic, secular and often atheistic way of thinking in a community through which individuals legitimise themselves to exercise freedom of thought and behaviour and to distinguish between the expression of natural passions and cognitively reflected behaviour. Self-determination, self-empowerment, self-validation, self-esteem, self-responsibility, self-reproach and self-alienation are concepts whose contemporary definitions closely match that of individuality. The concept of the ‘moral individual’ contrasts with ‘individuality’ in being void of internal atomisation – i.e. it entails a holistic form of integrity based on one-ness in personal thought and behaviour – and void of external atomisation – i.e. through living in one-ness with the community and its moral and legal rules.

Whilst the proliferation and impact of atomistic thinking, which has been a sine qua non in science, technology, engineering and the invention of individuality, are increasing in the world of business and society at-large along with the evolution of digitalisation and whilst this mental conditioning could be progressively un-inventing the moral individual on a grand scale, we find that the latter, i.e. the moral individual, still plays a defining role in the credibility of senior management, as also of politicians. Such publicly-exposed people are expected to be role-models in both a mono-ethical sense and as a moral individual. Consequently, even though their individuality may be very high, they realise that they must expect a high moral-individual level of themselves, something which may be difficult to reconcile with their past behaviour and their ethical footprints. In other words, the further major challenge for senior managers in such a predicament would lie in the fact that until their own mono-ethically conditioned self-image and the mono-ethically conditioned external expectation of them are changed, they would lack the type of credibility which would be needed to effect ethical transformation. Such change could be effected through:

  1. undergoing the inner ethical transformation from mono-ethical competence to inter-ethical competence and to anethicality

and

  2. creating understanding for this inner transformation among their peers, employees, clients, business partners and, not least, their loved ones.

2.3 The Challenge of Ethical Transformation: Corporate Options and Corporate Questions

One way of effecting ethical transformation, borrowing a term from Venkatraman, whose work we discussed earlier (Venkatraman, 2017a), would be to commence at the ‘edge’ of a corporate structure and its activity, i.e. to build a corporate spin-off which could later be a role-model for – if not induce a re-invention of – the original organisation. Alternatively, one could transform the whole organisation which, depending on its size and history, could prove to be highly risky.

A new corporate-edge entity could characterise itself through one of the following ‘ethical transformation options’:

  1. a carefully selected set of ethical premises which would differ, probably quite strongly, from those of the existing organisation and, depending on the nature of those premises, form the basis for a new organisational/business model with its own code of ethics, vision, culture and strategy,

or with something even more ambitious, e.g.

  2. multi-ethicality, including non-humanness, and thus be an exact replica of humanity as we know it, anticipating the emerging, transformative and ever-diversifying terrestrial condition.

In choosing between such ethical transformation options for the whole organisation or for a corporate-edge entity, decisions would have to be made, consciously or subconsciously, concerning a variety of ethical premises, the groundwork for which we laid out in the previous section and also in the list of five corporate ethicality questions above. Linking back to those discussions, ethical transformation questions might include the following:

  • To what extent does the legitimacy of choosing and adopting a new set of ethical premises need to be addressed, by whom, how explicitly, what would be the role of reason, passion or intuition, and why?
  • How can it be established that a chosen set of ethical premises is truly transformational, or not? How anthropocentric is the chosen set, and why?
  • To what extent does any argumentation on the matter of legitimacy need to be evaluated in terms of potential inherent contradiction or circularity and why? To what extent is the why-question (above and below) appropriate?
  • What criteria should be used to determine who should ask, and who should answer, the question as to who should be involved in any decisions on choosing and adopting a new set of ethical premises and why?
  • How should any decision-making processes take place and why?
  • To what extent should human, artificial and superintelligence be involved in any data-gathering or decision-making and why?
  • To what extent should the responsibility for the consequences of any decisions for oneself and for any third parties be considered, how and why?

The answers which result from such questions are likely to have a major impact on the relationship between management and employees and also between the organisation and its human and non-human environment – relationships which are already undergoing fundamental change in many countries and cultures. Arguably, it would be wise for senior management to address these questions proactively rather than, consciously or subconsciously, finding reasons for procrastination.

Ethical transformation will also necessarily bring into question the role which several pillars of managerial understanding and teaching have played until the present day. Such pillars include

  • trust
  • reliability
  • moral conscience
  • authenticity.

Hitherto, these and other fundamental preconditions for productive relationships inside organisations and in society at-large have been inextricably linked to the premise of mono-ethicality. As such, they fulfil a socially conditioned need for cohesion through the expectation of predictability and thereby constitute an institutionalised norm for relationships of most types, including those between management and employees.

If the expectations for ethical conformity and predictability are not fulfilled either by managers or by employees – whether singly or collectively – then relationships can very quickly degenerate and sooner or later trigger serious organisational and social malfunction as well as individual psychological and psychosomatic health disorders. Individuals or organisations who do not fulfil these fundamental expectations can become irreversibly ostracised – justified on the very familiar grounds of a lack of ‘trust’, ‘reliability’, ‘moral conscience’, ‘authenticity’ etc.

It follows that senior managers who are contemplating ethical transformation – as a prerequisite for or as a consequence not only of developments in digitalisation, AI and SI but also of the fact that their inner and outer world is already multi-ethical – are faced with a challenge: whilst firmly-rooted elements of mono-ethicality such as trust and reliability impede even the tiniest amount of transformation and multi-ethicality, they are compelled to find and/or develop adequate amounts of inter-ethical competence and anethicality. The latter involves an agile, versatile state of mind with a high level of empathy which seeks for itself neither certainty nor uncertainty, neither predictability nor unpredictability, neither the fulfilment nor the non-fulfilment of expectations, neither trust nor mistrust.

Whether declared explicitly or not, deciding on either of the two ethical transformation options given above will almost certainly also run into legitimacy challenges from and/or with most mono-ethical entities, whether the latter are within the organisation or outside it: there is a high likelihood that the circular legitimacy argumentation which characterises any mono-ethical entity will preclude finding a sustainable consensus. However, the chances of attaining such a consensus could be significantly higher if there were sufficient inter-ethical competence and anethicality present among those involved, affected or both. If adequate levels were indeed present, then finding solutions concerning the role of reason, passion or intuition, as well as deciding who would be involved and how to handle accountability, would also not be particularly difficult.

2.4 The Challenge of Ethical Foresight and Accountability: Transformational Contingency

Examples from the advancement of artificial and super-intelligence offer opportunities to observe and reflect on the role of ethical foresight and accountability and to deepen our understanding of current dynamics at the interface between engineering and business ethics, on the one hand, and societal and individual ethics, on the other. Taking an example from the field of soft robotics, we will now examine the contingency of digital and ethical transformation.

Let us imagine that we have consciously founded, and are operative in, an organisation or corporate ethical-edge entity which we have legitimised ourselves to base upon the Nietzschean-like cluster of ethical premises discussed in Section 1. Through adopting this cluster, we have further legitimised ourselves to build a humanoid robot with an agent-learning programme. The latter has enabled our robot to approximate human behaviour.

In the speech repertoire which it has built through learning from human speech behaviour, our humanoid can now say

‘Sorry, please forgive me!’

and

‘I love your eyes when you get angry!’

Let us further imagine that we have built our humanoid with a face which can redden and with eyelids which can vary in millimetres between being wide open and tightly closed; we have programmed it in such a way that the facial reddening and a noticeable tightening of the eyelids can occur when the humanoid perceives certain human behaviour, e.g. when a human says in an energised tone of voice and with direct eye-contact

‘What you just did is not what we agreed!’ (or some semantically equivalent statement).

The result of programming the humanoid in this way is that it can replicate the facial expression of a human-being with a guilty conscience, i.e. with the reddened face and tightened eyelids. Simultaneously, it is able to express, in verbal terms, an acknowledgement of wrong-doing and guilt through either an explicit apology or a deflecting form of flattery.
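Purely as an illustration of the kind of stimulus-response programming described above, such a trigger could be sketched as follows. All names, cues and thresholds here are hypothetical inventions for illustration, not taken from any real robotics framework:

```python
from dataclasses import dataclass

@dataclass
class PerceivedUtterance:
    """What the humanoid's sensors report about a human utterance (hypothetical model)."""
    text: str
    energised_tone: bool      # e.g. raised volume or pitch detected
    direct_eye_contact: bool  # result of gaze estimation

def guilt_display(utterance: PerceivedUtterance) -> dict:
    """Rule-based trigger simulating a 'guilty conscience' display.

    Returns the facial actuation (reddening, eyelid tightening) and one of
    the two learned phrases. The choice between explicit apology and
    deflecting flattery stands in for the humanoid's calculated,
    non-affectional decision; the length criterion is a mere placeholder.
    """
    reproach_cues = ("not what we agreed", "you promised", "how could you")
    is_reproach = any(cue in utterance.text.lower() for cue in reproach_cues)
    if is_reproach and utterance.energised_tone and utterance.direct_eye_contact:
        phrase = ("'Sorry, please forgive me!'"
                  if len(utterance.text) > 30  # placeholder decision criterion
                  else "'I love your eyes when you get angry!'")
        return {"face_reddening": 0.8, "eyelid_closure_mm": 2.5, "phrase": phrase}
    return {"face_reddening": 0.0, "eyelid_closure_mm": 0.0, "phrase": None}
```

The point of the sketch is that nothing resembling conscience appears anywhere in the logic: the 'guilty' display is a conditional mapping from perceived cues to actuator settings and a phrase.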

If we assume that we have not tried to, or have not succeeded in, implanting a human type of conscience into the humanoid, the following reflection can exclude any implications of what we have created for the ‘being’ of the humanoid. Accordingly, we can restrict our reflection to the implications for certain sectors of current local and global society from a human point of view – noting that this example serves also as a symbol for many other developments and outcomes of digitalisation. Significantly, these implications may lead us to regret not having anticipated them before building the humanoid in the first place and before even legitimising ourselves to adopt the set of ethical premises which underlies our corporate entity. Alternatively, we might have no regrets at all, for reasons which may be equally significant for understanding ethical transformation.

Starting with the facial expression, humans who interact with our humanoid may or may not be mentally and emotionally affected by the reddening and the tightened eyelids. Depending on their psychological constitution and condition at the time, a high proportion of humans will not be able to avoid entering into some form of relationship with the humanoid, even though they will be cognisant of the fact that the humanoid’s change of facial expression does not stem from a human conscience. Even for us, when constructing and programming the humanoid, there is the possibility that we were motivated by the thought of being able to trigger behavioural reactions in an Other, not least ones such as guilt, and of getting the Other to do exactly what we want – as is possibly a key motivation behind the creation and application of sex robots – perhaps as a vent for feelings of personal insufficiency and/or a need to exercise some form of personal power or self-determination. The life-like element of what we have constructed corresponds to one of the basic features of the modernist human condition: through the activation of our faculty of anthropomorphism (i.e. the attribution of humanness to non-humanness), we can generate a self-asserting experience of immediate (i.e. temporal) personal certainty not only through perceiving the demarcation between animacy, pseudo-animacy and inanimacy but also through being and, of course, playing the Creator – i.e. being, and, of course, playing the Creator of the Creator – cf. the reflection in Section 2.2 on movements such as Anthony Levandowski’s creation of the godhead of AI (Harris, 2017).

In terms of the ethical premises which are at play here, we can readily recognise several elements of the cluster which we discussed above in relation to the works of Dante and Nietzsche and upon which we have built our organisation or corporate ethical-edge entity, i.e.

  • will to influence,
  • amor fati and eternal autonomy,
  • human aestheticism,
  • inherent nihilism within the ‘as-if’ affirmation.

From the perspective of the human who is interacting with our humanoid, he/she enters into an ‘as-if’-relationship, where, in exercising the will to influence ‘Otherness’, he/she can fulfil natural passions and strengthen personal autonomy in a self-legitimised, affirmative way over a pseudo-human. In striving to attain personal certainty through exerting as-if control over the Other, the human agent can find personal confirmation through our humanoid’s reaction. The state of mind which emerges over time in the human can quite easily develop from being one of attraction and fascination into being one of self-projecting attribution, addiction and thereby of self-addiction. In other words, the interaction with our humanoid can serve as a catalyst or boost for a narcissistic form of aestheticism – or possibly an aesthetic form of narcissism.

Such anthropomorphism has, of course, been far from uncommon in society until the present day, e.g. between humans and machines. However, interactions between humans and highly sophisticated humanoid robots – including the one we have legitimised ourselves to build in our example – contain a particular element which may deserve ethical and juridical attention, namely something which we will henceforth term the ‘as-if0 dimension’ (explained below) and which is emerging with the digital, inanimate form of autonomous intelligence currently being developed in numerous countries.

Returning to the example and as already mentioned, the human interactor is cognisant of the fact that our humanoid does not have a guilty conscience and that its behaviour, just like that of an actor in a film or on a theatre-stage, is at best a simulation of human behaviour. We will term this mode of cognisance the ‘as-if2 dimension’, which captures the ability of adults to abstract in a cognitive way from concrete experience. When in full as-if2 mode, the human interacts with our humanoid in a mental state of maximally conscious awareness. It is the faculty for this as-if2 mode which allows humans to interact with others on a meta-level and thereby in a manner which generates negligible amounts of positive or negative deep-psychological impact from their interactions. Strongly atomistic cultures are predisposed to as-if2 interactions, as are ratio-based modes of thinking which commonly occur in exchanges between people in professions such as those of lawyers and engineers.

In parallel with the as-if2 mode of interaction, it is possible, of course, that, at a less conscious, non-abstracted level, emotions will be triggered in the human as he/she interacts with the behaviour of our humanoid. This can occur when the human’s nonconscious faculties begin to interact anthropomorphically with the humanoid as-if its behaviour were animate and when the human is unable to control any emerging emotions, e.g. through cognitive reasoning. Such modes of interaction involve what we will term the ‘as-if1 dimension’ – a dimension which can be observed in the behaviour of people of all ages including, most clearly of all, in young children who have not yet developed the faculty which makes purely rational abstract as-if2 interactions possible. A young child can play with a toy in what can seem to adults to be a ‘pretend’ type of fashion, that is until the child becomes implacably perturbed when the toy, with which it has created an affectional, symbiotic bond, gets lost or taken away. For an adult who is in a highly cognisant as-if2 mode and who thereby abstracts and is consciously aware that it is ‘only a toy’ which the child has lost, communicating with the perturbed child who is at the as-if1 level can become tremendously challenging and vice versa.

The same as-if1 phenomenon manifests itself in the behaviour of young children when they use nouns and pronouns in a symbiotic manner with people or things, i.e. when the words which they articulate are virtually inseparable from the object. It is only during later stages of cognitive, conceptual and linguistic development that pronouns are used anaphorically in referring to an explicit syntactic antecedent and/or what is commonly termed a ‘referent’, as investigated in ‘The Use of Anaphora by Children’ (Robinson, 1983).

The significance of the as-if1 dimension is to be recognised in the broadly documented observation that children are likely to experience psychological consequences for the rest of their lives if family relationships are ruptured at an early age, as noted in ‘The Making and Breaking of Affectional Bonds’ by John Bowlby (Bowlby, 1979, 2005) (p. 141). Affectional bond-relationships are represented in the mind/being of the young child as ‘non-negotiable’ premises.

Roger Hilton, who pioneered abstract art in Britain in the post-World War II period, is cited by Adrian Lewis in his book ‘The Last Days of Hilton’ as describing the difference between what we have termed here the ‘as-if1 dimension’ and the ‘as-if2 dimension’ in relation to art as follows:

One has to face the eternal problem about children’s art. It is often charming and you can borrow from it. The difference is, I think, that children are essentially realists, whereas a mature painter is not. (Lewis, 1996) (p.83)

Lewis elaborates:

Hilton is cleverly drawing attention away from imagery to the status of the image. Piaget refers to the first four years of a child’s development as characterised by ‘logical ontological egocentricity’ in the sense that the child constructs its own reality. … A mature artist is aware of making an image or illusion which does not command or constitute reality … and is aware of making a dream-work. (Lewis, 1996) (p.84)

The use of the term ‘dream-work’, which captures many elements of our discussion, allows us to recognise that, since the human bonds and life-visions of adults can vary in affectional intensity and significance – as can be seen in times of extreme circumstance (e.g. depression following the breakdown of a dream relationship, ecstasy following the fulfilment of a truly visionary vision) – there are often as-if1 elements at play which have the emotional power to override the as-if2 faculty.

Significantly for our reflection on AI in general, including the possibility of building certain capacities into humanoids, we note that it is the as-if1 dimension which makes the human more deeply vulnerable to deception and more mentally and emotionally susceptible to the consequences of disillusionment than the as-if2 dimension, particularly at the level of deep-ethics as discussed in a previous paper:

As a central part of the human survival system, the deep-ethical system acts as a counterforce to what is sometimes termed the ‘terror’ of inevitable death and, as such, is inextricably linked to people’s deepest feelings, yearnings, sympathies, antipathies and emotional reactions. Lying largely beyond the reach of their cognitive faculties, it is at the level of deep-ethics that human organisms autonomously determine when, where and with whom they feel really comfortable, or really uncomfortable – really relaxed, or really stressed. (Robinson, 2016b)

The deep-ethical systems of human-beings tend to develop in the first four to five years of their lives and, once intrinsically bonded, form the non-negotiable core of their identity. If people’s deep-ethics are violated to the extent that they see no way of upholding their identity, then their ethical health – i.e. their deep-psychological health – can deteriorate to a level where serious depression and/or violence in its various forms can set in. (Robinson, 2016b)

Based on these reflections and depending on one’s ethical standpoint, the legitimacy of consciously or unconsciously developing AI-artefacts which catalyse deep-level bonding of an affectional nature could warrant questioning. If affectional bonds, predominantly at the as-if1 level, were to emerge in human-beings through their ‘interactions’ with autonomous intelligent, inanimate AI-artefacts such as our humanoid, these could lead to a stable, one-sided source of deep pleasure, fulfilment and reliability or possibly to one which mutates into serious psychological harm for the human. Initially at least, the human may lead him-/herself to feel in control of the ‘interaction’; perhaps through the activation of his/her anthropomorphism, the human might put his/her trust into the (functioning of the) artefact. From the ethical premise of self-determination and without deeper reflection, it would be the human who would be accountable for any and all consequences of such ‘interaction’ and not the programmers or their senior management.

Given the unlikelihood of AI-artefacts ever being enabled to operate in an as-if1 mode, we will also assume for our humanoid

  • that it is not able to reciprocate the feelings and trust of the humans with whom it ‘interacts’,
  • that it has neither the faculty nor the need for emotionally-based certainty, for affectional bonds or for deep-ethics and
  • that it does not function with as-if1s since, unlike humans, its ‘condition’ is not founded on the mitigation of deep-seated feelings and premises such as emotion-linked uncertainty, loneliness or existential terror.

However, given the autonomous agent-learning character of the humanoid, it is able to ‘participate’ in a form of asymmetric ‘interaction’ involving, on the one hand, potential anthropomorphism – of which its programmers could arguably be aware – and, on the other, algorithmically calculated, (for itself) non-affectional, deep-ethics-free behavioural choices. For example, the humanoid will calculate whether to say

‘Sorry, please forgive me!’
or
‘I love your eyes when you get angry!’

This calculated decision will be reached on the basis of an as-if0 premise. In other words, the phrase which it decides for will be articulated

  • not in an as-if2 mode of reciprocal interaction,
  • nor in an as-if1 mode of reciprocal interaction,
  • nor in the mode of an interaction between a human and an inanimate object such as a toy,
  • but either in an asymmetric as-if0:as-if1 mode or an asymmetric as-if0:as-if2 mode.

Whilst both of the latter are non-reciprocal modes of ‘interaction’, it is the as-if0:as-if1 mode which arguably poses the more serious accountability questions for senior management, since the as-if0 faculty must have (for lack of alternatives) consciously or unconsciously been programmed into the humanoid without adequate foresight in relation to the factor of anthropomorphism and without consideration of the contingent legitimacy of choosing the Nietzschean-like cluster of ethical premises.
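The taxonomy of interaction modes developed above can be summarised in a short sketch. The enum names and the classification rule are our own illustrative rendering of the argument, not a formal model:

```python
from enum import Enum

class AsIf(Enum):
    """The three 'as-if' dimensions discussed in the text."""
    AS_IF_0 = 0  # algorithmically calculated, non-affectional (the humanoid)
    AS_IF_1 = 1  # affectional, symbiotic, non-abstracted (e.g. the young child)
    AS_IF_2 = 2  # consciously abstracted, meta-level cognisance (the reflective adult)

def interaction_mode(agent_a: AsIf, agent_b: AsIf) -> str:
    """Label an interaction as reciprocal or asymmetric.

    Per the discussion, the asymmetric as-if0:as-if1 pairing raises the
    most serious accountability questions, because only one side of it
    is capable of forming affectional bonds.
    """
    if agent_a == agent_b:
        return f"reciprocal {agent_a.name.lower()} interaction"
    return f"asymmetric {agent_a.name.lower()}:{agent_b.name.lower()} interaction"

# The pairing singled out in the text:
print(interaction_mode(AsIf.AS_IF_0, AsIf.AS_IF_1))
# asymmetric as_if_0:as_if_1 interaction
```

The sketch makes the asymmetry explicit: a humanoid fixed at as-if0 can never enter a reciprocal mode with a human at as-if1 or as-if2, whatever its behaviour may simulate.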

Concluding reflections

That which in the last example might appear, from an AI-development and an engineering or commercial ethics perspective, to be a fascinating, harmless artefact, could, in the psychological reality of the human-being, become the equivalent of a human psychopath, i.e. a manipulator and a simulator of ethics with no genuine feelings of shame, disgust or guilt. Almost paradoxically, the life-affirming gift of as-if1 lays bare its nihilistic roots and allows the human agent to become his/her own victim.

As we have seen, it is arguably untenable to argue from the perspective of the Nietzschean-like cluster that the self-victimisation consequence is self-inflicted and that neither our ethical-edge entity nor we ourselves, as its founders, managers or engineers, need bear any responsibility for negative consequences or question our legitimacy.

Reflected from a different ethical perspective, others might, in analogy to our reference to Oppenheimer in Section 2.1, draw the conclusion:

Unintentionally, you are become Suicide, the annihilator of trust and moral conscience.

Viewed anethically, the above example enables one to see that the dynamics and contingency of ethically diverse BEING and DOING

  1. could be left to their own devices with whatever consequences may transpire

and/or

  2. could/should be consciously anticipated, proactively prevented and, where necessary, retrospectively admonished where accountability is due, e.g. with the senior management of the corporate-edge entity in our example

and/or

  3. require a level of inter-ethical competence which contemporary society arguably does not currently possess in order to radically address and resolve the challenges of living in a factually multi-ethical world.

The particular example, and also the discussion throughout the paper, permit the conclusion that forms of artificial intelligence

  • which have not been (or cannot ever be) equipped with an as-if1 faculty,
  • which do not function on the basis of mono-ethical understandings of trust, moral conscience and authenticity,
  • which do not possess the faculty of creating pseudo-certainty,
  • which do not operate on the notions of legitimacy or control,
  • whose ‘being’ and ‘doing’ is not founded on existential terror and
  • which, for example, have no amor fati passions and aesthetics through which to attain temporal positive self-affirmation

represent a condition which is free from many elements of human ethics which currently lie at the source of both mild and also very severe intrapersonal and inter-human dissonance and dysfunction.

The example also offers the opportunity – possibly an Ethical Singularity Comedy – to overcome what Ciprian Valcan terms ‘our gnosiological apparatus’ which ‘constantly works on the skilful deformation of (these aspects of) existence’ (Valcan, 2008) and for humanity to learn how to rewrite the ethics of BEING.

Stuart D.G. Robinson
24th December 2017

Bibliography

  1. Aschheim, S. (1994) The Nietzsche Legacy in Germany, Oakland, University of California. (back)
  2. Auerbach, E. (2007) Dante – Poet of the Secular World, Berlin, Walter de Gruyter. (back)
  3. Baldwin, K. (2013) Zibaldone: The Notebooks of Leopardi, London, Penguin Classics. (back)
  4. BBC (2014) Stephen Hawking warns artificial intelligence could end mankind [Online]. Available at http://www.bbc.com/news/technology-30290540 (Accessed 4 October 2017). (back 1) (back 2) (back 3)
  5. Bowlby, J. (1979, 2005) The Making and Breaking of Affectional Bonds, Oxford, Routledge. (back)
  6. Britschgi, T. (2018) Artist’s impression of a galactic singularity, Luzern. (back)
  7. Carvalko, J. (2012) The Techno-human Shell – A Jump in the Evolutionary Gap, Mechanicsburg, Sunbury Press. (back)
  8. Chodat, R. (2008) Wordly Acts and Sentient Things, Ithaca, Cornell University Press. (back)
  9. Church, J. (2012) Infinite Autonomy, Pennsylvania, Pennsylvania State University Press. (back 1) (back 2) (back 3) (back 4) (back 5)
  10. Cioran, E. (2008) E.M. Cioran Werke, Frankfurt am Main, Suhrkamp. (back 1) (back 2) (back 3)
  11. Davies, J., Moxey, P. and Welch, I. (2010) Risk and Reward – Tempering the Pursuit of Profit, London, Association of Chartered Certified Accountants. (back)
  12. Durling, R. (2011) The Divine Comedy of Dante Alighieri, Volume 3, Paradiso, Oxford, Oxford University Press. (back)
  13. European Parliament (2017) Civil Law Rules on Robotics, Strasbourg. (back 1) (back 2) (back 3)
  14. European Union (2017) The Charter of Fundamental Rights of the European Union [Online]. Available at http://www.europarl.europa.eu/charter/default_en.htm (Accessed 4 October 2017). (back)
  15. Europol (2017) Europol Newsroom [Online]. Available at https://www.europol.europa.eu/newsroom/news/massive-blow-to-criminal-dark-web-activities-after-globally-coordinated-operation (Accessed 4 October 2017). (back)
  16. Falk, D.A. (2017) Markets Erode Moral Values [Online]. Available at https://www.uni-bonn.de/Press-releases/markets-erode-moral-values (Accessed 4 October 2017). (back)
  17. Financial Times (2017) Stitched up by robots: the threat to emerging economies [Online]. Available at https://www.ft.com/content/9f146ab6-621c-11e7-91a7-502f7ee26895 (Accessed 4 October 2017). (back)
  18. Froese, K. (2006) Nietzsche, Heidegger and Daoist Thought, New York, State University of New York. (back)
  19. Fukuyama, F. (1992) The End of History and the Last Man, New York, Free Press. (back)
  20. Fukuyama, F. (2002) Our Posthuman Future: Consequences of the Biotechnology Revolution, New York, Farrar, Straus and Giroux. (back)
  21. Future of Life Institute (2017) An open letter to the United Nations convention on certain conventional weapons [Online]. Available at https://futureoflife.org/autonomous-weapons-open-letter-2017 (Accessed 4 October 2017). (back)
  22. Google (2017a) Google Books Ngram Viewer: anxiety [Online]. Available at https://books.google.com/ngrams/graph?year_start=1800&year_end=2008&smoothing=20&content=anxiety (Accessed 4 October 2017). (back)
  23. Google (2017b) Google Books Ngram Viewer: inclusion [Online]. Available at https://books.google.com/ngrams/graph?year_start=1800&year_end=2008&smoothing=20&content=inclusion (Accessed 4 October 2017). (back)
  24. Greenfield, A. (2017) Radical Technologies, London, Verso. (back)
  25. Guardian, T. (2017) Thai mother saw daughter being killed on Facebook Live [Online]. Available at https://www.theguardian.com/world/2017/apr/27/thai-mother-watched-daughter-being-killed-on-facebook-live (Accessed 4 October 2017). (back)
  26. Harris, M. (2017) ‚Inside The First Church Of Artificial Intelligence‘, Backchannel, 15 November, pp. https://www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/. (back 1) (back 2)
  27. Hart, D. (2014) The Nietzsche of Recanati – A Review of Zibaldone [Online]. Available at https://www.firstthings.com/article/2014/05/the-nietzsche-of-recanati (Accessed 4 October 2017). (back)
  28. Hijiya, J.A. (2000) ‚The Gita of Robert Oppenheimer‘, Proceedings of the American Philosophical Society, vol. 144. (back)
  29. Hughes, C. (2011) Liberal Democracy as the End of History: Fukuyama and Postmodern Challenges, London, Routledge. (back)
  30. Jelkic, V. (2006) ‚Nietzsche on Justice and Democracy‘, Synthesis Philosophica 42, vol. 2, pp. 395-403. (back)
  31. Kant, I. (1871) Critik der reinen Vernunft, Riga, Hartknopf. (back 1) (back 2)
  32. Keating, M. (2017) The nation-state is dead. Long live the nation-state [Online]. Available at http://www.academia.bz.it/articles/the-nation-state-is-dead-long-live-the-nation-state (Accessed 4 October 2017). (back 1) (back 2)
  33. Kofas, J. (2017) Artificial Intelligence: Socieeconomic, Political and Ethical Dimensions [Online]. Available at http://www.countercurrents.org/2017/04/22/artificial-intelligence-socioeconomic-political-and-ethical-dimensions/ (Accessed 4 October 2017). (back 1) (back 2)
  34. Lewis, A. (1996) The Last Days of Hilton, Bristol, Sansom. (back 1) (back 2)
  35. Lixin, S. (1999) Nietzsche in China, New York, Peter Lang Publishing. (back)
  36. Loy, D. (1997) Non Duality, New York, Humanity Books. (back)
  37. Mastin, L. (2017) Singularities [Online]. Available at http://www.physicsoftheuniverse.com/topics_blackholes_singularities.html (Accessed 4 October 2017). (back)
  38. Mathews, J. (1942 No. 22) ‚Emerson’s Knowledge of Dante‘, Studies in English, pp. 171-198. (back)
  39. May, S. (2015) Professor May Nietzsche and amor fati [Online]. Available at https://www.youtube.com/watch?v=V8QG6_FxyRI (Accessed 4 October 2017). (back)
  40. Microsoft.com (2017) Microsoft About [Online]. Available at https://www.microsoft.com/en-us/about/default.aspx (Accessed 4 October 2017). (back)
  41. Mittal, N., Lowes, P., Ronanki, R., Wen, J. and Sharma, S.K. (2017) Machine intelligence: Technology mimics human cognition to create value [Online]. Available at https://dupress.deloitte.com/dup-us-en/focus/tech-trends/2017/moving-beyond-artificial-intelligence.html#endnote-sup-1 (Accessed 4 October 2017). (back)
  42. Nietzsche, F. (1882) Die fröhliche Wissenschaft, Chemnitz, Schmeitzner. (back)
  43. Nietzsche, F. (1883) Also sprach Zarathustra, Chemnitz, Schmeitzner. (back)
  44. Nietzsche, F. (1886) Die Geburt der Tragödie, Leipzig, E.W. Fritsch. (back)
  45. Nietzsche, F. (1888) Der Antichrist, Leipzig, Alfred Kröner Verlag. (back 1) (back 2)
  46. Ogilvy, J. (2017) The Forces Driving Democratic Recession [Online]. Available at https://worldview.stratfor.com/article/forces-driving-democratic-recession (Accessed 4 October 2017). (back 1) (back 2)
  47. Payne, S. (2017) ‚Whither the Nation State‘, Financial Times, 24th October. (back)
  48. Pearce, D. (2012) ‚Humans and Intelligent Machines, Co-Evolution, Fusion or Replacement‘, in Eden, M.S.S. The Bio-Intelligence Explosion, Berlin, Springer Verlag. (back)
  49. Pfaller, R. (2014) On the Pleasure Principle in Culture – Illusions without Owners, London, Verso. (back)
  50. Robinson, S.D.G. (1983) The Use of Anaphora by Children, Göttingen, Unpublished doctoral thesis. (back)
  51. Robinson, S.D.G. (1993) The Fundamental Question of Intent in Joint-Ventures and Acquisitions, Zug, Switzerland, 5C Centre for Cross-Cultural Conflict Conciliation. (back)
  52. Robinson, S.D.G. (2007) The Value of Neutrality, Zug, Switzerland, 5C Centre for Cross-Cultural Conflict Conciliation AG. (back)
  53. Robinson, S.D.G. (2014) Interethical Competence, Zug, Switzerland, 5C Centre for Cross-Cultural Conflict Conciliation. (back 1) (back 2) (back 3) (back 4) (back 5) (back 6) (back 7)
  54. Robinson, S.D.G. (2016) Ethical Health Consultations, Zug, Switzerland, 5C Centre for Cross-Cultural Conflict Conciliation. (back 1) (back 2)
  55. Robinson, S.D.G. (2016a) If You Have A Vision – or are developing one, Zürich, bbv Consultancy. (back 1) (back 2) (back 3)
  56. Robinson, S.D.G. (2016b) Ethical Health Management In Practice, Zug, Switzerland, 5C Centre for Cross-Cultural Conflict Conciliation AG. (back 1) (back 2)
  57. Robinson, S.D.G. (2017) Ethik Macht Krank – die medizinischen Folgen ethischer Konflikte am Arbeitsplatz, Zug, Switzerland, 5C Centre for Cross-Cultural Conflict Conciliation. (back 1) (back 2)
  58. Roland, C. (2017) Digital convergence: The shape of things to come — or is it already here? [Online]. Available at https://pre-developer.att.com/blog/digital-convergence (Accessed 4 October 2017). (back)
  59. Rosenthal, B. (1994) Nietzsche and Soviet Culture, Cambridge, Cambridge University Press. (back)
  60. Rubin, H. (2004) Dante In Love, New York, Simon & Schuster. (back)
  61. Schilliger, M. (2017) Warum für Facebook Mord nicht gleich Mord ist [Online]. Available at https://www.nzz.ch/international/facebook-files-warum-fuer-facebook-mord-nicht-gleich-mord-ist-ld.1295627 (Accessed 4 October 2017). (back)
  62. Schrift, A.D. (1995) Nietzsche’s French Legacy, London, Routledge. (back)
  63. Sen, A. (2009) The Idea of Justice, London, Allen Lane. (back 1) (back 2)
  64. Siedentop, L. (2014) Inventing the Individual, London, Penguin. (back 1) (back 2) (back 3)
  65. Singularity University (2017) Singularity University [Online]. Available at https://su.org/ (Accessed 4 October 2017). (back 1) (back 2) (back 3)
  66. Somerville, M. (1834) On the Connexion of the Physical Sciences, London, John Murray. (back)
  67. Thornhill, J. (2017) 'AI's rapid advance sparks call for a code for robots', Financial Times, 31 Aug. (back 1) (back 2)
  68. Ulam, S. (1958) 'Tribute to John von Neumann', Bulletin of the American Mathematical Society, May. (back)
  69. Valcan, C. (2008) 'The Philosophical Periods of Emil Cioran', in Alin Tat, S.P. Romanian Philosophical Culture, Globalization and Education, Washington, The Council for Research in Values and Philosophy, Available: https://emcioranbr.org/. (back 1) (back 2)
  70. Venkatraman, V.N. (2017a) Workshop: The Digital Matrix, Boston. (back 1) (back 2)
  71. Venkatraman, V.N. (2017b) The Digital Matrix, Vancouver, LifeTree Media. (back)
  72. Verity, A. and Graziosi, B. (2011) Homer The Iliad, Oxford, Oxford University Press. (back)
  73. Vromen, A. (2017) Digital Citizenship and Political Engagement, London, Palgrave Macmillan UK. (back)
  74. Watson, P. (2016) Convergence: The Deepest Idea in the Universe, London, Simon and Schuster. (back)
  75. WHO (2017) Mental Health – Suicide Rates [Online]. Available at http://www.who.int/mental_health/prevention/suicide/suicideprevent/en/ (Accessed 4 October 2017). (back)
  76. Wigglesworth, R. (2017) 'Why investors seek an edge in "natural language processing": Money managers are learning to search for clues hidden in the spoken word', Financial Times, 26 October. (back)
  77. Wikipedia (2017) Technological Singularity [Online]. Available at https://en.wikipedia.org/wiki/Technological_singularity (Accessed 4 October 2017). (back)
  78. Wilson, E.O. (1998) Consilience – The Unity of Knowledge, New York, Alfred Knopf. (back)
  79. Winfield, A. (2017) BBC Radio 4 The Life Scientist [Online]. Available at http://www.bbc.co.uk/programmes/b08ffv2l (Accessed 15 October 2017). (back)
  80. Zuckerberg, M. (2017) Building Global Community [Online]. Available at https://www.facebook.com/notes/mark-zuckerberg/building-global-community/10154544292806634 (Accessed 4 October 2017). (back)