
Artificial Intelligence - Ethics and Meaning from a Jungian Perspective

Updated: Oct 28, 2023


I'm excited to announce a collaboration with Margaret Estabrook Stienstra, a Jungian analyst. Margot is a training analyst at ISAPZurich. I met Margot during a trip she organised to Carl Jung's Küsnacht home and soon discovered our shared interest in the ethics and psychological significance of AI. We have since been working together to explore AI from a Jungian perspective.

 

Terms of Reference


Nicholas Toko (Jungian Analyst-in-training, MA, BA, Saïd Business School, University of Oxford AI Programme) and Margaret Estabrook Stienstra (Jungian Analyst (ISAP Zurich, AGAP, IAAP), Psychotherapist (ASP), M.I.M., Lic. Phil.) share a keen interest in Artificial Intelligence. ‘Artificial Intelligence: Ethics and Meaning from a Jungian Perspective’ is an editorial collaboration between Nick and Margot dedicated to exploring Artificial Intelligence from a uniquely Jungian psychological perspective, with a specific focus on the meaning and ethical implications of artificial intelligence technology for individuals, groups and society.


Margot's profile


Artificial Intelligence has been hailed as the new electricity. AI will transform the personal and working lives of humans, particularly in the field of healthcare, for many years to come. However, there are three major pitfalls associated with AI: privacy, bias and replication. There is also a question about the purpose or significance of AI for the psyche. These three major pitfalls and the question of meaning will form the pillars from which Nick and Margot will explore AI through a Jungian lens.


What is Artificial Intelligence?

The term ‘artificial intelligence’ (AI) was coined by American academic John McCarthy in 1956; however, the origins of AI are credited to the British mathematician Alan Turing, who made the first real scientific contribution to AI (University of Oxford AI Programme). Turing developed an abstract mathematical model of computation, now known as the Turing machine, which laid the theoretical foundations of the modern computer. Turing was also fascinated by the idea that computers could be intelligent. In 1950 he published a paper in the journal Mind called ‘Computing Machinery and Intelligence’, in which he proposed what is now known as the Turing test.


The Turing test, originally called the Imitation Game by Alan Turing, is a test in which a human engages in a text-based conversation with both a human and a machine, and then decides which of the two they believe to be human. If the human interrogator is unable to distinguish between the human and the machine based on their responses, the machine is said to have passed the Turing test. The Turing test is widely used as a benchmark for evaluating machine intelligence.


John McCarthy built upon Turing's work. He organised a summer workshop at Dartmouth College for people interested in Turing's ideas. He had to write a grant proposal for the workshop and give the field a name; he called it Artificial Intelligence. The name stuck and has been used by the AI community ever since.


Artificial intelligence is about building machines that can do things that currently require a human or animal brain, and potentially a body, to accomplish. Google CEO Sundar Pichai states that ‘AI is one of the most important things humanity is working on and it is more profound than electricity or fire’ (Clifford, 2018). Although there is a lot of optimism about AI, others, such as entrepreneur Elon Musk and the late theoretical physicist Stephen Hawking, have warned that AI may pose a threat to the human race (Sulleyman, 2017; Cellan-Jones, 2014). While Hawking described AI as ‘either the best, or the worst thing, ever to happen to humanity’, Musk emphasized the importance of AI regulation, stating that ‘there should be some regulatory oversight, maybe at the national and international level’ to ensure that AI does not become a risk to humanity in the future (Hern, 2016; Marr, 2017).


There is no single, all-encompassing definition of AI, but rather a wide range of definitions that attempt to capture what AI truly means. According to Russell and Norvig (Artificial Intelligence: A Modern Approach, 2016:1), these definitions can be categorised into four dimensions: thinking humanly, acting humanly, thinking rationally, and acting rationally.


Thinking Humanly

This refers to the ability of a machine or AI to think like a human by solving problems and making decisions. This category is based on the type of AI that aims to mimic human cognitive capabilities and includes the following definitions of AI:


‘The exciting new effort to make computers think… machines with minds, in the full and literal sense’ (Haugeland, 1985).


‘[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning…’ (Bellman, 1978).


An AI engineer draws on human cognitive capabilities to develop an algorithm that models the internal functioning of the human brain in a computer programme. An algorithm is a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer. Algorithms are essential to the way AI processes data and makes decisions.


Human behaviour can therefore be used as a 'map' to guide the performance of the algorithm.
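

To make the idea of an algorithm concrete, here is a minimal, purely illustrative Python sketch: a fixed set of rules, loosely modelled on how a person might make an everyday decision. The scenario, function name and thresholds are hypothetical examples of our own, not drawn from any real AI system.

```python
# A minimal, hypothetical sketch of an algorithm: a fixed set of rules,
# loosely modelled on how a person might decide whether to take an umbrella.

def decide_umbrella(chance_of_rain: float, already_raining: bool) -> str:
    """Return a decision by following a small, explicit set of rules."""
    if already_raining:
        return "take umbrella"          # rule 1: it is raining now
    if chance_of_rain >= 0.5:
        return "take umbrella"          # rule 2: rain is more likely than not
    return "leave umbrella at home"     # rule 3: otherwise, do without

print(decide_umbrella(chance_of_rain=0.7, already_raining=False))
# -> take umbrella
```

The human behaviour being imitated (checking the sky, weighing the forecast) acts as the 'map' the rules follow; the computer simply executes those rules mechanically.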


Thinking Rationally

This refers to the ability of a machine or AI to perceive information, make judgements, reason, and then act based on the result. This type of AI uses logic to solve problems and includes the following definitions of AI:


‘The study of mental faculties through the use of computational models’ (Charniak & McDermott, 1985).


‘The study of the computations that make it possible to perceive, reason, and act’ (Winston, 1992).


This approach to AI is based on the ‘laws of thought’, or the human reasoning process, best exemplified by the Greek philosopher Aristotle, who tried to codify human reasoning by crafting patterns for argument structures, or syllogisms, that always lead to a correct conclusion when their premises are true.


An example of one of these reasoning patterns is: ‘Socrates is a man; all men are mortal; therefore, Socrates is mortal’. Aristotle’s syllogisms were supposed to serve as a roadmap for how the human mind operates, and they led to the field now referred to as logic.
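

As a purely illustrative sketch of this ‘thinking rationally’ approach, the short Python example below encodes the Socrates syllogism as one fact and one rule, and then applies the rule mechanically. The representation is a toy of our own devising, not a real reasoning engine.

```python
# A toy sketch of Aristotle's syllogism expressed as rule-based inference.
# The fact and the rule are the classic example from the text; the code
# simply applies the rule mechanically until nothing new can be derived.

facts = {("man", "Socrates")}                 # Socrates is a man
rules = [(("man", "X"), ("mortal", "X"))]     # all men are mortal

def infer(facts, rules):
    """Apply each rule to every matching fact until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (premise_pred, _), (conclusion_pred, _) in rules:
            for fact_pred, subject in list(derived):
                new_fact = (conclusion_pred, subject)
                if fact_pred == premise_pred and new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

print(infer(facts, rules))
# contains ('mortal', 'Socrates') -> therefore, Socrates is mortal
```

The point is simply that, once premises and rules are written down explicitly, the conclusion follows by mechanical computation rather than human judgement.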


Acting Humanly

This refers to the ability of a machine or AI to do the things that humans can and includes the following definitions of AI:


‘The art of creating machines that perform functions that require intelligence when performed by people’ (Kurzweil, 1990).


'The study of how to make computers do things at which, at the moment, people are better’ (Rich & Knight, 1991).


This type of AI is based on the Turing test, which involves testing the intelligence of a machine. The testing is done by a human interrogator who poses questions in order to gain insight into the respondent’s intelligence. The machine is considered intelligent if the human is unable to tell whether the responses come from a human or from a machine.


To pass the Turing test, a machine or AI needs the following capabilities (a toy sketch of these moving parts follows the list):

  • Natural language processing: This enables the machine to communicate in a human language.

  • Knowledge representation: This gives the machine the ability to store what it already knows, as well as any new information it receives.

  • Automated reasoning: This allows the machine to use its stored information to draw conclusions.

  • Machine learning: With this, the machine can adapt to new circumstances and identify and infer patterns.
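The following deliberately tiny Python sketch is nowhere near a system that could pass the Turing test; it is a hypothetical toy of our own, with invented names, patterns and stored facts, meant only to show the four capabilities side by side: crude pattern matching standing in for natural language processing, a dictionary standing in for knowledge representation, a lookup standing in for automated reasoning, and simple storage of new facts standing in for machine learning.

```python
# A deliberately tiny, hypothetical sketch of the four capabilities above.
# It only shows the moving parts; real systems are vastly more sophisticated.

import re

knowledge = {"Socrates": "philosopher"}        # knowledge representation

def respond(question: str) -> str:
    """Very crude 'natural language processing': match one question pattern."""
    match = re.match(r"who is (\w+)\??", question.strip().lower())
    if not match:
        return "I do not understand the question."
    name = match.group(1).capitalize()
    role = knowledge.get(name)
    # 'automated reasoning': draw a conclusion from what is stored
    return f"{name} is a {role}." if role else f"I know nothing about {name}."

def learn(name: str, role: str) -> None:
    """A stand-in for 'machine learning': adapt by storing new information."""
    knowledge[name] = role

learn("Plato", "philosopher")
print(respond("Who is Plato?"))      # -> Plato is a philosopher.
print(respond("Who is Turing?"))     # -> I know nothing about Turing.
```

Real systems replace each of these stand-ins with far more capable components, but the division of labour is recognisably the same.
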

Acting Rationally

This refers to the ability of a machine or AI to act and behave rationally, that is, to accomplish tasks with the best possible or best expected outcome in any given scenario, and includes the following definitions of AI:


‘Computational intelligence is the study of the design of intelligent agents’ (Poole, Mackworth & Goebel, 1998).


‘AI is concerned with intelligent behaviour in artifacts’ (Nilsson, 1998).


A Jungian Perspective on AI

What is the significance of AI for the psyche? AI will surely bring many benefits, but it will also present ethical and legal challenges for society. A long-term concern about AI is what happens if we reach the ‘Singularity’ - the hypothesized point at which AI becomes smarter than people.


On February 29, 1952, Jung wrote a letter to British neuropsychiatrist, neuroscientist and neurophilosopher, John Raymond Smythies. In the final sentence of that letter, Jung summarises his speculations about the equivalence of psychic energy and mass into a single equation, ‘Psyche = highest intensity in the smallest space’. The highest intensity of mass imaginable is infinite density, while the smallest space is zero volume, which is the precise definition of a gravitational singularity, both at the origin of the Big Bang and in black holes. This equivalence allows us to restate Jung's equation as: Psyche = Singularity (Timothy Desmond, Psyche and Singularity, 2018:103). This formula could therefore serve as a pivot point from which we will interpret AI from a Jungian perspective.


Hypothesis or speculative thoughts

AI in the Anthropocene, our current geological age characterized by the dominance of human activity over the environment, can be seen as a leap in the individuation process of the collective. As such, we are not at this point superimposing a judgement of positive or negative, good or evil; rather, we acknowledge the psychological fact of an evolutionary development of consequence in the making. Singularity in this sense can be seen as the creative process of a “coming into one’s own” towards the making of meaning.


Analytical psychology helps us to understand consciousness, and therefore it provides a unique perspective on, and insight into, the significance and ethical implications of AI for individuals, groups and society, in a Zeitgeist demanding new accountability in the human, more-than-human, and human-constructed spheres.


One key hypothetical question is, ‘Is the implication that the creator of the AI machine algorithm is acting as the Self within the psyche?’


From the perspective of analytical psychology, this is a compelling question. Artificial Intelligence is predicted to impact humanity in far greater ways than fire, electricity and even the industrial revolution. AI puts the human psyche into relationship with an intelligent machine, and we're curious to know what role the unconscious mind plays in the algorithm, and vice versa.


Subscribe, join as a #JBOI member and stay up to date with our latest articles on AI, ethics and meaning from a Jungian perspective.


Objectives

  • undertake a comparative analysis of analytical psychology and the theoretical framework of AI.

  • consider the ethical impact of AI from a Jungian perspective.

  • develop a uniquely Jungian perspective of AI, by exploring the meaning of AI for the individual, group and society.

  • develop an advisable set of ethics for AI: how to be in right relationship with AI for the ‘more-than-human’ common good, and an acceptable ethical stance toward the human world as it extends into AI.

Outcomes

  • a series of monthly blogs on Nick’s #JungianBitsofInformation website discussing our findings and conclusions.

  • a series of presentations (online or in-person seminars) setting out our findings and conclusions to interested individuals and groups.

  • a series of podcast discussions setting out our findings and conclusions.

  • an article or essay setting out our hypothesis, findings and conclusions.

Audience

  • AI practitioners e.g. scientists, engineers

  • Business or organisational professionals

  • Jungian analysts, Jungian analysts-in-training, training candidates, matriculated auditors, psychoanalysts, psychotherapists, counsellors

  • Individuals with a general interest in AI

  • Individuals with a general interest in psychology

  • Jungian training institutes

  • Institutes promoting Jungian thought leadership or psychosocial and psychoanalytic thinking

References

Russell, S. & Norvig, P. 2016. Artificial Intelligence: A Modern Approach. 3rd ed. Essex, United Kingdom: Pearson.


Desmond, T. 2018. Psyche and Singularity: Jungian Psychology and Holographic String Theory. Nashville, Tennessee: Persistent Press.


Artificial Intelligence Programme: University of Oxford.

