
The Unconscious v Artificial Intelligence Hallucinations: A Jungian Psychoanalysis of ChatGPT

Updated: Aug 27, 2023

A series of blogs that explores the pros and cons of Artificial Intelligence (AI) in the workplace and provides a comparative analysis of AI and psychology from a uniquely Jungian or analytical psychology perspective.

 

ChatGPT

A couple of weeks ago, I received an email from the BCS (British Computer Society), also known as The Chartered Institute for IT, about a webinar taking place on Wednesday 3 August entitled 'AI: Using Generative Pre-trained Transformers (GPT) Without Hallucinations'. As a member of BCS, I often receive invitations to their events, but none had piqued my interest until now. I was very excited to read that, although the venue tickets had sold out, there was capacity to join the session on Zoom. The event was billed as a presentation in which attendees would learn how to build a GPT AI application on their own data - perfect for developers, data engineers, and anyone interested in building intelligent chatbot applications.


I am planning to use ChatGPT in two places: on my website, #JungianBitsofInformation, and on the British Association for Psychological Type (BAPT) website. My vision for installing ChatGPT on the #JungianBitsofInformation website is top secret for now; I will share my ideas over time. My plans for the BAPT website are not so secret. I am a board member of BAPT responsible for digital projects, and one of my priorities is to modernise the organisation's website; I am overseeing the design and implementation of a new site. I think ChatGPT is a useful AI tool that can replace traditional search functionality, allowing BAPT members to quickly find information from the BAPT website and from internal data that is not publicly available (internal documents, meeting notes, archives, etc.).
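
For the technically curious, the pattern behind this kind of site chatbot is to retrieve relevant passages first and then ask the model to answer from them. Below is a minimal sketch, assuming the openai Python library as it stood in mid-2023 and a hypothetical search_site_content() helper; it is an illustration of the approach, not the finished BAPT build.

```python
# A minimal sketch of a 'chat over your own data' answer function, assuming
# the pre-1.0 openai Python library (the API current in mid-2023).
# search_site_content() is a hypothetical retrieval helper - in a real build
# it would query an index of the site's pages and internal documents.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def search_site_content(question: str) -> str:
    # Hypothetical stub: return the passages most relevant to the question.
    return "(relevant passages from the BAPT site and internal documents)"

def answer_member_question(question: str) -> str:
    context = search_site_content(question)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer using only the context provided. If the "
                        "answer is not in the context, say you don't know.\n\n"
                        "Context:\n" + context},
            {"role": "user", "content": question},
        ],
        temperature=0,  # a low temperature reduces inventive answers
    )
    return response.choices[0].message["content"]
```

Grounding the model in retrieved text like this also helps limit the hallucinations discussed later in this blog.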


What is ChatGPT? I thought about this for a few seconds before I said to myself, 'How would ChatGPT describe itself?' This is its response to the question.


'ChatGPT is a type of artificial intelligence language model developed by OpenAI (www.openai.com). It belongs to the GPT (Generative Pre-trained Transformer) family of models. GPT models are designed to generate human-like text based on the input they receive. They are pre-trained on a massive amount of text data and can then be fine-tuned for specific tasks.


ChatGPT is tuned explicitly for generating conversational responses, making it well-suited for interactive and natural-sounding conversations. It can understand and generate text in a way that resembles human conversation, allowing it to engage in discussions, answer questions, provide explanations, and more.


The model is trained to predict the next word in a sentence based on the context provided by the preceding words. This enables it to generate coherent and contextually relevant responses. It can be used for a wide range of applications, including customer support, virtual assistants, tutoring, content generation, and more.


It's important to note that while ChatGPT can produce impressively human-like text, it doesn't possess true understanding or consciousness. Its responses are generated based on patterns in the data it was trained on, and it lacks genuine comprehension or emotions'.


Interesting response. I think it summarises its purpose quite well, and I like the reference to consciousness and emotions, given the nature of my blog. Essentially, I see ChatGPT acting as a chatbot on the new BAPT website, responding to questions posed by anyone visiting the site. Its response time is fast, and its responses are mostly accurate and reliable. As an AI application, it needs to be trained on good-quality data so that it can generate equally good-quality responses. ChatGPT is an incredible leap in emerging AI technology, one that will have a significant impact on our personal and working lives.
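
To make the 'predict the next word' idea concrete, here is a toy sketch of my own (not how GPT is actually built - GPT uses a transformer network trained over sub-word tokens): it counts which word follows which in a tiny corpus, then generates text by sampling likely continuations.

```python
# Toy illustration of next-word prediction: a bigram table built from a tiny
# corpus. Real GPT models use transformer networks with billions of
# parameters, but the principle - predict the next token from the preceding
# context - is the same.
import random
from collections import Counter, defaultdict

corpus = ("the psyche has a conscious part and an unconscious part "
          "the unconscious part of the psyche produces dreams").split()

# Count which word follows which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Sample the next word in proportion to how often it followed.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```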

The webinar was incredibly insightful; however, a few presentation slides into the session, things became very technical. This was where my learning came to an abrupt end, and where I need technical expertise to help me install ChatGPT on my website and on BAPT's. It was interesting to hear from the presenter that AI application skills are in huge demand and that potential earnings are very high. I have been looking for a freelance ChatGPT expert for some time now, so far without success. The search continues - please do get in touch if you have the expertise and are available: www.nicholastoko.com/contact.


Hallucinations

The one thing I learned most about from the webinar was the idea of 'hallucinations', an interesting take on the psychological phenomenon.


'Distortions of reality in the response of ChatGPT to a question or problem.'

(Source: Akmal B Chaudhri, Senior Technical Evangelist, AI: Using Generative Pre-trained Transformers (GPT) Without Hallucinations, SingleStore)


Hallucinations in the context of AI refer to the phenomenon where an artificial intelligence system generates outputs that are not based on actual data or reality but rather are a result of the model's internal processes. This can lead the AI application to produce content that seems vivid, creative, or even bizarre, but lacks a genuine connection to the input it received or the real world.


I love the Hallucinations infographic, which wonderfully visualises the 'normal' and 'abnormal' output of an AI application such as ChatGPT. You should expect accurate and reliable data, information, knowledge, and insight from an AI application, not to mention an ethical and legal response. However, AI is still a relatively new and emerging technology, and therefore has its 'faults', or hallucinations, in which it gives false, incorrect, or inaccurate responses to a question or problem.


A hallucination from a psychological perspective can be defined as 'perception-like experiences that occur without an external stimulus. They are vivid and clear, with the full force and impact of normal perceptions, and not under voluntary control. Hallucinations may occur in any sensory modality, but auditory hallucinations are the most common in schizophrenia and related disorders; they are usually experienced as voices, whether familiar or unfamiliar, that are perceived as distinct from the individual's own thoughts' (adapted from the DSM-5, Diagnostic and Statistical Manual of Mental Disorders). Typically, patients with schizophrenia experience auditory hallucinations consisting of a voice that commands them to perform some act or comments on their thoughts and actions.


Hallucinations are therefore a sensory experience that is a product of a person's psyche and does not exist in the outside world. From a psychoanalytic perspective, a hallucination is an apparent perception of an external object not actually present, or a subjective image that the person experiences as an external phenomenon. Owing to the failure in 'reality testing' involved in hallucinating, it tends to be regarded as psychotic, though this is not always so (Critical Dictionary of Psychoanalysis). Reality testing refers to a person's capacity to distinguish between mental images and the external world - between fantasy and external reality - and to correct subjective impressions by reference to external facts.


Hallucinations and delusions are often seen as failures in reality testing. Hallucinations can occur as a result of fever, brain disease, religious ritual, drugs, and sensory deprivation, as well as for psychotic reasons, and stand in stark contrast to delusions: falsely held beliefs that cannot be influenced or corrected by contradictory evidence.


I asked ChatGPT to describe hallucinations in an AI context. It said that hallucinations in AI can occur for various reasons, including:


Overfitting: If an AI model is trained too well on a specific dataset, it might start generating content that mimics the training data too closely and loses the ability to generalize. This can result in the model generating outputs that are essentially memorized fragments of the training data, even if they don't make sense in a broader context.


Data Bias: If the training data contains biased or incorrect information, the AI might generate outputs that reflect those biases or errors. These outputs could be considered hallucinations as they are not accurate representations of reality.


Model Complexity: Highly complex AI models, such as deep neural networks, have many layers and parameters. Sometimes, the interactions between these components can lead to unexpected and unrealistic outputs.


Lack of Clear Context: AI models may generate content without a clear understanding of the context or the world. This can lead to outputs that seem coherent but are disconnected from reality.


Noise in Training Data: If the training data contains noise or inconsistencies, the AI might learn to reproduce this noise in its outputs, resulting in hallucinatory content.


Generating Novel Content: Some AI models are designed to be creative and generate novel content, which can occasionally lead to outputs that seem imaginative but lack a grounding in real-world data.


It's important to note that while AI hallucinations can be intriguing and sometimes even amusing, they can also raise ethical concerns. AI-generated content, especially if it contains misinformation or biased perspectives, can have real-world consequences. Researchers and developers of AI systems strive to reduce and mitigate hallucinatory behavior through techniques like careful dataset curation, model regularizations, and ongoing monitoring and fine-tuning of AI systems.
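
Of these causes, overfitting is the easiest to see in miniature. The sketch below is my own illustration, not ChatGPT's architecture: a high-degree polynomial fitted to ten noisy points reproduces its training data almost perfectly, then produces wildly wrong values just outside it - a crude analogue of a model that has memorised its data and 'hallucinates' beyond it.

```python
# Overfitting in miniature: a degree-9 polynomial fitted to 10 noisy points
# matches the training data almost exactly, but its predictions just outside
# the training range are wildly wrong - a toy analogue of hallucination.
# (NumPy may warn that the fit is poorly conditioned; passing exactly through
# every point is the point here.)
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)

coeffs = np.polyfit(x_train, y_train, deg=9)  # enough parameters to memorise

print("on a training point :", np.polyval(coeffs, x_train[3]))  # ~= y_train[3]
print("true value there    :", y_train[3])
print("just outside range  :", np.polyval(coeffs, 1.2))  # typically huge, wrong
print("true sin(2*pi*1.2)  :", np.sin(2 * np.pi * 1.2))
```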


A Psychoanalysis of ChatGPT

A fascinating response from ChatGPT. I picked out words such as 'fantasy', 'disconnected from reality', 'inaccurate representations of reality', 'hallucinatory content', and 'imagination', which parallel the characteristics of hallucinations from a psychological perspective. This is in itself interesting from a psychoanalytic perspective because of the AI engineer's apparent subject-object relationship with the AI application, or ChatGPT, or even the algorithm; by this I mean that ChatGPT's output is analysed as if it came from a conscious entity with faults similar to those of the human psyche. The output of ChatGPT is described as if it issued from a conscious or unconscious psychological perspective.

 

'Our brains make us intelligent; we see or hear, learn and remember, plan and act thanks to our brains. In trying to build machines to have such abilities then, our immediate source of inspiration is the human brain, just as birds were the source of inspiration in our early attempts to fly'

(Alpaydin, E. Machine Learning: The New AI)

 

ChatGPT hallucinations are perceived as a product of the AI application's inner workings: neural networks, or neural nets, a subset of machine learning. Their name and structure are inspired by the human brain, mimicking the way biological neurons signal to one another. Neural networks enable machines to learn, acting like a 'black box' in which the algorithm takes an 'input', e.g. a problem or question, and provides an 'output', i.e. a response or solution to that question or problem.


Neural networks are at the heart of emerging innovation and development in AI applications such as ChatGPT. Deep learning is an approach to machine learning in which artificial neurons are connected in networks to generate an output based on weights associated with the inputs (Wooldridge, M. Artificial Intelligence). The workings of the brain act as an inspiration for neural networks, replicating how neurons (nerve cells) in the brain receive inputs, process them, and then produce an output, i.e. the activation of a synapse.
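
To ground the analogy, here is a minimal sketch of a single artificial neuron along the lines Wooldridge describes: inputs are multiplied by weights, summed, and passed through an activation function - a loose software echo of a biological neuron firing.

```python
# A single artificial neuron: weighted inputs are summed and passed through
# an activation function, loosely echoing a biological neuron firing.
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs (the 'dendrites'), plus a bias term.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the sum into (0, 1) - the 'firing' strength.
    return 1 / (1 + math.exp(-total))

print(neuron(inputs=[0.5, 0.8], weights=[0.9, -0.4], bias=0.1))
```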


Sigmund Freud, the founder of psychoanalysis, drew an intriguing model of a neural network in which changes in the gain of synaptic connections ('permeability') among neurons are predicted to be the basis of learning and memory. The drawing was made in 1895: the arrow on the left represents incoming energy to the neural network, the small circles represent neurons, and the perpendicular double lines represent synapses.

Before his rise to fame as the founding father of psychoanalysis, Freud trained and worked as a neurologist. He carried out pioneering neurobiological research, which was cited by Santiago Ramón y Cajal, the father of modern neuroscience, and which helped to establish neuroscience as a discipline (Costandi, M. 'Freud was a pioneering neuroscientist', The Guardian). Freud lost interest in his work as a neurologist, preferring to delve deeper into the unconscious part of the psyche. However, his brief neurological career inspires my work exploring the similarities between the inner workings of the psyche and AI.

(Source: The design of a neural network, the University of Oxford AI Programme)


A helpful definition of a neural network by IBM:


'Artificial neural networks (ANNs) are comprised of node layers, containing an input layer, one or more hidden layers, and an output layer. Each node, or artificial neuron, connects to another and has an associated weight and threshold. If the output of any individual node is above the specified threshold value, that node is activated, sending data to the next layer of the network. Otherwise, no data is passed along to the next layer of the network.


Neural networks rely on training data to learn and improve their accuracy over time. However, once these learning algorithms are fine-tuned for accuracy, they are powerful tools in computer science and artificial intelligence, allowing us to classify and cluster data at a high velocity. Tasks in speech recognition or image recognition can take minutes versus hours when compared to the manual identification by human experts. One of the most well-known neural networks is Google’s search algorithm'
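
IBM's description translates almost line for line into code. A minimal sketch with made-up weights and thresholds: data flows from an input layer through a hidden layer to an output node, and each node passes data on only when its weighted sum clears its threshold.

```python
# IBM's definition in miniature: an input layer, one hidden layer and an
# output node. Each node fires (outputs 1) only when its weighted input sum
# exceeds its threshold; otherwise it passes nothing (0) to the next layer.
import numpy as np

def layer(inputs, weights, thresholds):
    # Weighted sums for every node in the layer, then the threshold test.
    sums = weights @ inputs
    return (sums > thresholds).astype(float)

inputs = np.array([1.0, 0.0, 1.0])            # input layer

hidden_weights = np.array([[0.5, 0.2, 0.9],   # weights into 2 hidden nodes
                           [0.1, 0.8, 0.3]])
hidden = layer(inputs, hidden_weights, thresholds=np.array([1.0, 0.5]))

output_weights = np.array([[0.7, 0.7]])       # weights into 1 output node
output = layer(hidden, output_weights, thresholds=np.array([0.5]))

print(hidden, output)                         # [1. 0.] [1.]
```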


ChatGPT's inner workings (i.e. the neural network) act as an underlying structure whose hidden layer and processes may manifest as a hallucination: an incorrect, false, or imagined outcome. Neural networks are not the same as the human brain, but AI engineers take the brain's processes as inspiration in pursuit of effective machine intelligence.


Similarly, the psyche has its own structure and dynamics: mental processes whose activity mirrors that of the algorithms in the 'hidden layer' of a neural network. The hidden layer is a replication of the unconscious in the psyche, which may also produce hallucinations, particularly in an abnormal psyche.


Neural networks can process data to identify patterns that drive decisions and enable innovations such as ChatGPT. However, the patterns may be imagined, false, or incorrect, resulting in a hallucinatory outcome. Similarly, the psyche may imagine or falsely perceive something, especially in an abnormal state, resulting in a sensory hallucination that manifests in the person's external world: the AI's and the psyche's hallucinatory outcomes mirror each other.


Both ChatGPT and the psyche may have a distorted view of reality. ChatGPT may provide an outcome based on biased data - a response or answer that is simply not true. The psyche, especially under the duress of psychosis, may also produce external images or voices that the person experiencing the psychosis believes to be real. The role of the AI engineer is to let ChatGPT know that its response is incorrect so that it adjusts its learning accordingly; similarly, the psychoanalyst or psychiatrist helps bring self-awareness to a person experiencing hallucinations, helping them to distinguish between fantasy and reality.
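
In machine-learning terms, 'letting the model know it is wrong' means feeding an error signal back to adjust the weights. A minimal sketch using the classic delta rule (a simplification - ChatGPT itself is refined with far more elaborate techniques, such as reinforcement learning from human feedback):

```python
# The feedback loop in miniature: the delta rule nudges the weight and bias
# in the direction that shrinks the error between the model's output and the
# correct answer - wrong answers adjust the learning.
weight, bias, rate = 0.2, 0.0, 0.1
x, target = 1.0, 1.0          # input and the 'correct' answer

for step in range(20):
    output = weight * x + bias           # the model's current answer
    error = target - output             # the engineer's correction signal
    weight += rate * error * x          # adjust towards the right answer
    bias += rate * error

print(round(weight * x + bias, 3))      # approaches 1.0
```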


There also appears to be a dynamic relationship between ChatGPT and the AI engineer: each is in a dialectical relationship with the other, the AI scientist with their unconscious and ChatGPT with its neural network. The unconscious and the neural network both act like a black box - a hidden realm or layer where problems are solved, data is processed, and fantasies are expressed. Each transforms the other: ChatGPT becomes more intelligent, and the AI engineer makes faster decisions based on the insights provided - a productive subject-object relationship. However, ChatGPT's insights could just be a hallucination.


In conclusion...


Hallucinations in the context of AI refer to the phenomenon whereby an AI system generates outputs that are not based on actual data or reality but are instead a product of the model's internal processes. Hallucinations in the context of psychoanalysis are a product of the psyche's internal processes: an apparent perception of an external object not actually present. Both the technological and the psychological processes are distortions of reality, i.e. hallucinations. ChatGPT comes with a health warning: what it tells you may just be a figment of 'its' imagination. Similarly, the psyche is also prone to fantasies and imagination. Does the use of the term 'hallucination' imply that ChatGPT is a conscious entity? There seems to be a case for saying that there is a dynamic subject-object relationship between the AI scientist's conscious and unconscious psyche and ChatGPT's neural network and hidden layer. Is the AI scientist's vision for ChatGPT a hallucination or a delusion? We already know that ChatGPT can and does hallucinate. Watch this space for more psychoanalysis of hallucinations in ChatGPT and the psyche.


Learn about ChatGPT at www.openai.com

 

The Self in Jungian Psychology v the Singularity in Artificial Intelligence

Coming up in the next blog in the series, The Unconscious v Artificial Intelligence, I will explore the uncanny similarities in the ultimate strategic goals of Jungian psychoanalysis and AI.


In Jungian psychology, the concept of the #Self is a central and fundamental idea. The Self represents the totality of an individual's psyche, encompassing both conscious and unconscious aspects. It is the archetype of #wholeness, integration, and #individuation. A Jungian psychoanalysis is a journey towards psychological health and fulfilment, involving self-realisation and the integration of the various parts of the personality - a psychological form of 'wholeness'.


The term "Singularity" in the context of AI often refers to the hypothetical point in the future when artificial intelligence surpasses human intelligence and becomes capable of thinking humanly, acting humanly, thinking rationally and acting rationally - a technological form of 'wholeness'. Essentially, AI capable of self-improvement, leading to rapid and potentially uncontrollable technological growth. This concept has gained significant attention in discussions about the future of AI and its potential implications for society.


I hope you found the blog insightful. That's all I aim to achieve with my writing, anything else is a blessed bonus. Thanks for taking the time to read the blog, feel free to get in touch should you have any questions or comments.







