
The Unconscious v AI: Interview with an AI engineer

Updated: May 25


Two psychoanalysts/therapists meet an AI Engineer...


In our latest exploration of Artificial Intelligence (AI) from a Jungian perspective, Nicholas Toko and Margot Stienstra interviewed Praveen Selvaraj, an AI practitioner at the Alan Turing Institute - the UK’s national institute for data science and artificial intelligence.


A world view of AI

For the Jungian analyst or psychotherapist, the acronym AI refers to a practice within C.G. Jung’s Analytical Psychology known as Active Imagination: a means of engaging consciously, by deliberate intention, with what Jung termed the Collective Unconscious - the universally shared, evolutionary layer of the unconscious. But in our effort to learn about today’s pressing technological influences and their ethical implications through a Jungian lens, we turn our attention to the AI that commands today’s Zeitgeist: Artificial Intelligence. To better understand this phenomenon, we had the valuable opportunity to meet with AI engineer Praveen Selvaraj. So, two very different worldviews interface to explore common ground and divergence.


Nick met Praveen by chance in their apartment building’s shared residents’ lounge, where they were both working. They introduced themselves, and Nick discovered that Praveen works in AI at the Turing Institute. Nick happened to be writing this blog at the time, so it felt like a synchronistic moment: he invited Praveen to an interview with him and Margot.



All three of us live an international lifestyle. Praveen hails from India and also grew up in the Middle East. Nicholas was born in Uganda, grew up in the U.S. and the UK, and spent time in Argentina before training as a Jungian analyst in Zurich, Switzerland, and practicing in London, where he previously earned a bachelor’s degree in international business studies and Spanish, a postgraduate degree in human resource management, and a master’s degree in Jungian and post-Jungian studies; he also completed a training programme in Artificial Intelligence at the University of Oxford. Margot is from the northeastern U.S., holds a bachelor’s degree in European humanities with an unofficial minor in the history of Chinese thought and a master’s in international management, has lived her adult life internationally as an educator, and trained and practices as a Jungian analyst/psychotherapist in Zürich, Switzerland.


So, from these different perspectives we sought congruence in our respective and shared curiosity about Artificial Intelligence and its current trajectory.


The nature of AI learning

Much of our conversation focused on the nature of learning. In the AI world this is differentiated into Supervised Learning, a teach-and-reward interaction between the AI engineer and the AI algorithm, and Unsupervised Learning, whereby one awaits more independent processing by the AI algorithm.


Machine Learning

Machine Learning, a field of artificial intelligence, refers to the process in which a machine is trained to learn from data without explicitly being programmed to do so. The process of machine learning is iterative, and as machines are fed more data, they're able to independently adapt. There are three learning approaches in Machine Learning: Supervised Learning, Unsupervised Learning, and Reinforcement Learning.

(Source: Oxford AI Programme, 2023)
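
To give a flavour of the third approach before the two defined below, here is a minimal reinforcement-learning sketch in Python (our own illustration; the reward probabilities and learning rate are made up, not from the Oxford material). An agent improves its estimates of two actions purely through trial, error and reward:

```python
# Reinforcement learning in miniature: no labelled answers, only rewards.
import random

rewards = {"left": 0.3, "right": 0.7}  # hidden probability of reward per action
values = {"left": 0.0, "right": 0.0}   # the agent's running value estimates

for _ in range(1000):
    # Epsilon-greedy: usually exploit the best-looking action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    reward = 1 if random.random() < rewards[action] else 0
    values[action] += 0.05 * (reward - values[action])  # nudge estimate toward reward

print(values)  # "right" should end up with the higher estimated value
```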

Supervised Learning

Supervised learning refers to the type of learning where a machine learns to map an input variable to an output variable based on labelled examples. The labelled examples constitute what is known as a training set. The algorithm processes the training set and then attempts to identify unlabelled examples, i.e. the test set. The most common learning methods associated with supervised learning are regression and classification, both of which require structured data to enable the algorithm to learn.
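
As a concrete illustration, here is a minimal supervised-learning sketch in Python (our own example, using scikit-learn; the dataset and model are arbitrary choices, not part of the Oxford material):

```python
# Supervised learning: fit on labelled examples (the training set),
# then label examples the model has never seen (the test set).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # inputs and their labels - the supervision
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)  # a simple classification model
model.fit(X_train, y_train)                # learn the input -> label mapping
print("accuracy on unseen examples:", model.score(X_test, y_test))
```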


Unsupervised Learning

Unsupervised learning refers to the type of learning where an algorithm learns to map an input variable to an output variable without using any labelled examples. Thus, unlike supervised learning, there's no training set given to the algorithm from which to learn. Rather, the algorithm attempts to model the distribution of the data provided in a test set by looking for hidden patterns. The most common learning method associated with unsupervised learning is clustering, which allows the machine to produce a model out of disorganised data by grouping together data that displays similar features (Oxford AI Programme, 2023). This has parallels with Jung’s Analytical Psychology, since clusters and patterns also emerge in the aforementioned Collective Unconscious, as an early stage of emergent consciousness and meaning-making.
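
For contrast, a minimal clustering sketch (again our own illustration, with synthetic data): the algorithm receives no labels at all, yet the two groupings emerge from the data itself:

```python
# Unsupervised learning: k-means groups unlabelled points by feature similarity.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two unlabelled "blobs" of 2-D points - hidden structure we hope to recover.
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.labels_)  # cluster assignments, discovered without any labels
```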


Praveen confessed this remains a big debate among those who work in AI. To what extent does the algorithm’s current capability extend beyond “memorization” of its input data, and to what extent has it developed something approaching the singularity - the ability to process that data into new applications and to think humanly, act humanly, think rationally, act rationally?


The ethical implications of AI

Our conversation developed towards degrees of ethical concern for the known, and as yet unknown, implications of increasingly extensive use of AI. One such concern is a flaw in how algorithms are developed: facial recognition AI has so far been trained with limited access to a diverse set of facial images, and is therefore biased towards interpreting white faces. In Jung’s Analytical Psychology this is indicative of the shadow, or human shadow complex. So Praveen and others investigate ways to expand the databases towards images truly representative of the totality of humanity, which can then be accurately interpreted.

Nick has had personal experience of AI being unable to recognize his face because of his dark skin tone. He recalled a time when he was using a card machine/ATM which required him to show his face to the screen as a means of security validation. The machine kept asking Nick to take off his helmet, though in fact he was not wearing one; his particular skin tone was not immediately recognized by the machine. Notwithstanding Nick’s feelings about not being recognized as a human by an AI machine, the episode raises questions about the ethics of deploying AI that has not been trained on data drawn from a diverse human population. The implication in this scenario is that anyone with dark skin could be unable to access the services of the card machine/ATM purely because of the colour of their skin. This is but one example of an ethical aspect of AI as currently understood. All such revelations, and those not yet detected, must be considered shadow aspects of AI, as they elude conscious reckoning.

The Shadow

Jung defined the shadow as ‘the thing a person has no wish to be’ (Carl Jung’s Collected Works 16, para. 470), or in other words the negative side of the personality: the sum of all the unpleasant qualities one wants to hide, the inferior, worthless and primitive side of man’s nature, the ‘other person’ in one, one’s own dark side. Jung emphasizes that each of us has a shadow, that everything substantial casts a shadow, that the ego stands to shadow as light to shade, and that it is the shadow which makes us human (Samuels A., Shorter B., Plaut F., A Critical Dictionary of Jungian Analysis, p. 138).

Everyone carries a shadow, and the less it is embodied in the individual's conscious life, the blacker and denser it is. If an inferiority is conscious, one always has a chance to correct it. Furthermore, it is constantly in contact with other interests, so that it is continually subjected to modifications. But if it is repressed and isolated from consciousness, it never gets corrected and is liable to burst forth suddenly in a moment of unawareness. At all events, it forms an unconscious snag, thwarting our most well-meant intentions

(Carl Jung’s Collected Works 11, para. 311).


Jung argues that there is a split in the modern human being between the light and dark sides of the psyche, a duality whose existence the enlightened optimism of Western Christianity and the scientific age has sought to conceal or deny.


The shadow can be found in both the personal and the collective unconscious. It is a living part of the personality and, if unconscious, it can behave in an autonomous way. Given that the shadow is also an archetype, it has deep unconscious roots; its contents are thus powerful, creating strong affect in the human - obsessional, possessive, autonomous - and capable of overwhelming the ego. The contents of the shadow typically show up as strong, irrational projections, positive or negative, upon one’s neighbour. Here Jung argued convincingly that the personal antipathies and the cruel prejudices and persecutions of our time can be traced back to the Shadow Complex on a collective level.


It is no wonder, then, that AI engineers are not exempt from the Shadow Complex. The inability of some AI algorithms to recognise a non-white face is due to the lack of diversity in the training data and/or unexamined prejudice in the development of the AI algorithm.

     

Another ethical consideration arises from the many users who have no technical understanding of how AI works. The two psychologists among us approach the question with a high degree of caution, especially as we note a startling readiness among so many, including highly educated colleagues and clients, to turn to ChatGPT and its kin for initial readings on a variety of questions, from “opinions” on intellectual work and articles to unconscious material such as dreams. Essentially, AI is granted a higher level of authority than one’s own thinking. Praveen noted that AI is a new Oracle; Nick and Margot would see that as an archetypal image of a universal need or longing.


One aspect we discussed was a notable impulsivity aroused by curiosity around problem-solving. Does it suggest, we wondered together, a lack of self-confidence in one’s own intellectual discernment? Or on an emotional level, a tendency towards mistrust in self and other?


Praveen is a self-described optimist and has been able to maintain a scientist’s neutral curiosity and objectivity, whereas psychology, a social science that makes every effort to be scientific, accepts the paradox that all knowledge is ultimately filtered through the subjectivity of the human psyche. Both lenses share a healthy concern for the unknown: that which we are innately curious about, and which can act upon us beyond our awareness. It should be mentioned that the shadow also has positive potential, as it may become illuminated into greater consciousness. One post-Jungian, among others, who has focused on this is Melissa Kim Corter.

  

AI and the human brain

We then looked at comparisons to the human brain’s neural networking capability.



The above image is a neural network drawn by the founder of Freudian psychoanalysis, Sigmund Freud, in 1895 while writing a textbook for neurologists entitled The Project for a Scientific Psychology. Freud trained as a neurologist in medical school. In our research on Freud’s interest in neuroscience, it remains unclear whether he lost interest in his neurological work because he preferred to delve deeper into the unconscious part of the psyche, or because of the limitations of the neuroscience of his time.


In this drawing, the left arrow represents incoming energy to the neural network, the small circles represent neurons, and the perpendicular double lines represent synapses. Similarly, the hidden layers of an artificial neural network are thought to be the basis of Supervised and Unsupervised Learning and of an AI machine’s memory.


Neural Network

(Source: Oxford AI Programme, 2023)


The images are strikingly similar. Neural networks and the unconscious can be thought of as ‘hidden’ layers in the totality of an AI machine and the human psyche respectively, and in which certain processes [algorithms or psychic processes] interact to produce an output [solution, prediction, recommendation, attitude, perspective] from an input [question, problem] posed by a human [AI engineer, ego, consciousness].
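
For readers who like to see the ‘hidden layer’ in the most literal terms, here is a toy sketch in plain NumPy (our own illustration; the network is untrained and its sizes and weights are arbitrary):

```python
# A network with one hidden layer: only input and output are visible to the
# user; the intermediate activations are the "hidden" part.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)        # input: the "question" posed to the network
W1 = rng.normal(size=(4, 3))  # input-to-hidden weights (cf. synapses)
W2 = rng.normal(size=(1, 4))  # hidden-to-output weights

hidden = np.tanh(W1 @ x)      # hidden-layer activations
output = W2 @ hidden          # output: the network's "answer"
print("hidden:", hidden, "output:", output)
```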


As our perspectives converged, we noted a tendency for AI users to short-cut psychological understanding and adopt psychological interpretations and jargon that psychological clinicians must employ far more precisely, given professional diagnostic considerations. For example, AI engineers sometimes describe the erratic behaviour of an AI algorithm as that of a ‘bipolar girlfriend’, a comment which suggests the algorithm behaves in wildly extreme ways and that this is perceived as ‘pathological’ and out of the ordinary.


We concluded with an enhanced appreciation of, and commitment to, ongoing dialogue and reciprocal education, and a readiness to modify diagnostic terminology that could be misconstrued beyond a psychological/psychiatric context.


Synopsis

Our interview with Praveen, an AI engineer, delves into the complexities of AI algorithms, their learning processes, and their implications for both AI and its practitioners. The discussion highlights how working with AI has prompted Praveen to reflect on learning and understanding - in the AI itself, in AI practitioners, and in the general public.


The Relationship between the AI algorithm and the AI practitioner

Working with AI has made Praveen think more about his own learning, because when an AI algorithm is developed, you are not really giving it any specific rules. You are just giving it as much data as you possibly can, and it appears to learn by itself; you do not even know what it learns. This makes him think a lot about the ways we teach AI to learn - a process we can actually apply to ourselves. Hence our ethical vulnerability to what we don’t yet properly understand (see AI Safety Concerns and Trust in AI algorithms, below), as well as our ethical responsibility for what we do already recognize, such as the facial recognition example mentioned above.

AI's Learning Process

AI learns through two main methods: supervised learning, where it is provided with direction on the right response by the AI engineer, and unsupervised learning, where it predicts outcomes without direction from the AI engineer.

Pitfalls of Artificial Intelligence

Comparison with the Human Brain

The learning mechanisms of AI are likened to human brain processes, where both remain largely mysterious. However, neuroscientists emphasize that the complexity of the human brain surpasses that of AI. Sigmund Freud, the founder of psychoanalysis, drew an image of a neural network illustrating the brain's mental processes, an image which is strikingly similar to the neural network of Machine Learning.

AI Safety Concerns

There is ongoing debate within the AI community regarding the potential for AI to develop goals and consciousness, which could pose safety risks if not managed properly.

Trust in AI algorithms

Attitudes towards AI tend to polarise: some people do not trust it at all, while others trust it completely, passing on AI-generated output without checking it for accuracy - a tension explored in the interview below.

Interview


Nick: Hello Praveen. Could you please introduce yourself and tell us a little bit about yourself.


Praveen: Okay. Brief background about myself. I studied computer science and worked as a software engineer for a few years. Then I kind of sat down one day - actually not one day, but for a few weeks - and thought about where the field is heading in the future. This was around 2016, and it looked like AI and robotics were beginning to take off. I decided to study AI and robotics around 2019, and then I switched from software engineering to robotics and AI. Now, although I have a background in robotics and also worked for a bit in robotics, I'm right now only doing AI.


Nick: Where did you train? Where did you study AI and robotics?


Praveen: I studied both at the University of Leeds and at University College London (UCL).


Nick: Where were you born, Praveen, and where did you grow up?


Praveen: I grew up half in the Middle East, in Saudi Arabia, and half in India, because my dad was working in Saudi even before his marriage. My parents are still there, actually. A lot of South Asian immigrants went to different Gulf countries because of the oil boom they have had since the 1970s.


Nick: Yes, so you're an oil boomer.


Praveen: [chuckles] Yes, I guess so.


Nick: Which part of India are you from?


Praveen: I'm from the south, so below Bangalore. It's a city called Chennai.


Nick: Oh, yes.


Praveen: What about you Margot. I think I know a bit about you. Could you introduce yourself?


Margot: Sure, Praveen. I have been living a long time now in Europe, like-- I don't know, 25 years or something like that. Before that, I was an international expatriate in various countries. I am originally from Boston in the United States and was educated in Connecticut and Massachusetts in very innovative schools and very classical traditional ones, so I have both of these leanings. My master's degree was in international management where I was trying to balance a very qualitative orientation in humanities and arts with this more quantitative and pragmatic study. Then somehow-- Well, I won't bore you with how, but found my way to analytical psychology in Zurich, but I actually have stayed on and live here most of the time, so I divide my time between Zurich and the United States.

Prior to this work as an analyst, I was a teacher in international schools in different countries, including here. So, it's a combination of humanities and arts and more pragmatic things, and I'm interested very much in eastern as well as western parts of the world. A lot of dichotomies, and Jung speaks a lot about holding the tension of the opposites and awaiting the transcendent third that might emerge.

Transcendent Function

A collaboration between conscious and unconscious. The psychic link created between ego-consciousness and the unconscious as a result of the practice of dream interpretation and active imagination, and therefore essential for individuation (Stein, Murray. Jung's Map of the Soul: An Introduction. Open Court. Kindle Edition).


The functioning of the human brain is often characterized in terms of two cerebral hemispheres, the interaction of which is central to human mental functioning. The left hemisphere is the site of brain activity connected with linguistic ability, logic, aim-directed actions, and the laws of time and space; it may be characterised as analytical, rational and detailed in its operations. The right cerebral hemisphere is the site of emotions, feelings, fantasies, a general sense of where one is in relation to everything else, and a holistic capacity to grasp a complex situation in one bound, in contrast to the left hemisphere's more piecemeal approach. The transcendent function has been described in terms of an intercommunication between the hemispheres, whose physiological counterpart is the corpus callosum.


The corpus callosum is a structure in the middle of your brain that connects the right and left hemispheres (sides). The structure is made up of nerve fibres. Nerve fibres help the left and right sides of your brain talk to each other by sending signals. This communication pathway allows you to coordinate many essential functions that are part of your daily routine.


The purpose of the corpus callosum is to connect the left and right hemispheres of your brain so they can communicate. Your corpus callosum functions as a bridge. It allows nerve signals to move between the two sides of your brain. Nerve signals are like people crossing the bridge. Each person carries a message to a different location in your brain. These messages help you coordinate your:


  • Senses (vision, hearing, touch, taste, smell).

  • Movement (telling your muscles to move).

  • Cognitive function (memory, language processing, problem-solving and reasoning).

Praveen: Oh, yes, I love that.


Margot: I know me too. It's actually hard work sometimes, but that seems to be the general overarching pattern for me.


Praveen: I've read a lot of Nietzschean philosophy, and there too, there's a lot of emphasis on contradictions, and you get a lot out of contradictions.


Margot: Yes, that's right. You can look at your life's pattern or notice different patterns over the course of your life, and maybe make meaning from them, but some of it becomes clear and some of it remains elusive. Anyway, I love my work as an analyst and feel very lucky. I think if I would say something about that, I'm really grateful to my parents for teaching me very early the importance of work, but also that you can enjoy it. I've been very lucky to pretty much always enjoy the work I do. Whatever it is, however stressful, I felt lucky to do it.


With AI, I'm out of my depth, which I think most of us are, but I'm trying to navigate, trying to see how it jives with my Jungian background, and trying to see what feels the most important. Anyway, that's probably enough about me. It's such a privilege for me to be able to hear from you, someone who's really deeply in the field, to see what you think is interesting and important. Out of all these questions that we gave you, I'm most interested in whatever interested you.


Praveen: Okay.


Nick: I would agree, whatever in our questions interested you most, because I think it'll be great actually for the interview. As for me, I was born in Uganda, and I grew up in the United States, spent some time back in Uganda and Kenya, and then moved to London. I am training as a Jungian Analyst in Zurich, Switzerland, where I met Margot. I decided to become an analyst after a trip to India for about eight weeks. I had such a profound experience, dare I say, spiritual experience in India, and made the decision to train to be an analyst. As you know, I've just come back from a recent trip to India. India has a very, very special place in my heart, absolutely. Out of all the questions you saw, which ones draw your attention?


Praveen: I think working with the AI has made me think more about learning myself, because-- I don't know if you know, but when you develop the AI, you're not really giving it any specific rules. You're just giving it as much data as you possibly can, and it sort of learns by itself. You don't even know what it learns. That makes me think a lot about the ways that we teach the AI to learn - ways we can actually use on ourselves.


For example, summarization. You could read a bunch of text, and I often notice that when I'm reading a book, a few weeks down the line, you can often forget a lot about it. But if you make yourself actively engaged, like you read, but you force yourself to summarize what you read in your own words, those concepts sink deeper than just listening to an audiobook or whatever. I often think about that, like how to learn better myself. Does that make sense? I wanted to ask you guys, how do you think the AI is developed? Because a lot of the time, people assume that programmers sit down and give it a bunch of rules, which is how computer programmes were historically developed. But this is a totally new paradigm where it just learns by itself, and you don't even know what it learns.


Margot: Praveen, about what you say about this emphasis on learning, what is learning? What is it? Is it retention? What comes to me is the now old model of Bloom's Taxonomy, which is just a pyramid toward higher-order thinking. The bottom of the pyramid is rote learning, and then the higher you go, the more higher order into things like application. I wonder what you think-- It's so interesting that you say, with AI, we don't even know what it learns, but it would be helpful to also get a grip on what we think learning really is.


Praveen: That's actually a big debate in the field. People argue about it.


Margot: Really?


Praveen: Yes. People argue about, does the AI-- when you ask it a question, does it really understand the reply that it's giving you, or is it just giving you an answer that it's seen somewhere else like it's a kind of memorization? It's like, "Oh, this particular--" whatever question you asked it "--it seems similar to whatever I've seen before in this particular book, and I'm just going to rephrase that," or does it understand enough to-- Do you know what I mean?


Nick: To form its own response, right?


Praveen: Yes.


Nick: I understand that in AI there is what’s called Supervised Learning and Unsupervised Learning, that some AI engineers can say to the algorithm, "No, that's not correct. Go back and try again," or "Yes, that's correct. I'm going to reward you for doing that by giving you more data." So, you're almost teaching it and rewarding it, or you just give it a problem, and it comes back or whatever, and you don't supervise it. You just develop it.


Praveen: So, Supervised Learning started off in image recognition. You show it lots of images of different objects: humans, cats, dogs, and you give it a specific label, "What you're looking at is a picture of a dog." You do that for thousands or millions of images. That's Supervised Learning, because you're giving it the answer. Self-supervised Learning is more like you're just giving it bunches and bunches of text, and you make it-- like, I give it a sentence and I want it to predict what the next sentence is. That's turned out to be unexpectedly powerful. Maybe you can think about it this way: if it's digesting a particular book and you've given it a few chapters, then for it to predict the next chapter, it has to truly understand all the characters in the book and all the themes and so on. People are always surprised that something as simple as predicting the next word or the next sentence ended up giving us such powerful AI systems.
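
Praveen's "predict the next word" point can be sketched in a few lines of Python (our illustration, not his code): the training pairs are manufactured from the text itself, with no human labelling.

```python
# Self-supervised data: each "label" is simply the next word of the text.
text = "the cat sat on the mat".split()

pairs = [(text[:i], text[i]) for i in range(1, len(text))]
for context, target in pairs:
    print(context, "->", target)
# A language model is trained to predict `target` from `context`,
# over billions of such pairs drawn from books and the web.
```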


Nick: Am I correct in thinking, and this is based on my learning on AI, that the unknown is the neural network? It goes into a neural network to solve the problem, right?


Praveen: Yes.


Nick: And that's where, as I understand it, if the courts said, for example-- If, let's say someone challenged a decision by AI, the AI engineer couldn't give evidence of how the algorithm made the decision because it went into the neural network, right?


Praveen: Yes. It's actually pretty similar to our own brains in the sense we don't actually know what's going on in our brains. We have theories as to what's going on, but from my limited exposure to neuroscience, it feels like we don't really know what goes on at the level of neurons, or even populations of neurons, like how do we process information. It's very similar with the AI.


Margot: How do you spend your days at work? Are you researching and building hypotheses, or are you more interested in direct application? What does it mean to be an AI engineer, given what you just said?


Praveen: My research is very much more applied. To give you an example, one of the types of AI is face recognition or face verification. An issue with face recognition models, which maybe you've heard about, is that they don't work well for all types of people. They often don't work well for darker skinned people. When I say it doesn't work well, it's something like, let's say you have-- I work within the context of digital ID systems. In India, for example, they have a national digital ID system.


How that works is you get a digital ID, and once you get an ID, you can access different services like banking or buy a SIM card. When you show up, they'll be like, "Okay, tell me what your national ID is. Then also, I need an on-the-spot verification, like a picture of your face, or maybe your fingerprint." They're going to use that picture that they took on the spot to match it back to whatever was in the system at the time of you registering. To match that, they use an AI model to match it, and like I said, it doesn't work equally well for different people. Darker skinned people, it doesn't work well.


Nick: That's so interesting. I think it's interesting because I had to do a security check recently in India, a facial security check, and I knew it was an AI algorithm. The machine kept telling me, "Take off your helmet." I was like, "No, I'm not wearing a helmet." I kept looking into the camera. "Please take off your motorcycle helmet," is what it was telling me. Eventually, it realized I was a person, and then it worked. You're right, it doesn't work well on dark skinned people, so it thought I was wearing a helmet. This is a story I tell people about some of the pitfalls of AI, particularly around skin color, but what do you think-- We would call that potentially shadow stuff, right?


Praveen: The reason is actually deceptively simple. It's because the algorithm is only as good as the data it was trained on. A lot of the data it was trained on was collected in Western countries, where obviously people are much lighter skinned, so it's only used to looking at light skinned people. It's trained on majority lighter skinned populations, so it's not as good for darker skinned people. In my work, what I try to do is-- The obvious answer is that we need much more data on darker skinned people. To do that means actually going around to specific demographics, like specific countries and so on, and collecting images of darker skinned people - or you could use the AI itself to generate images of darker skinned people. That's much easier, because they are not real people, so you don't need consent.
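
The kind of audit Praveen describes - checking how well a face matcher works for different groups of people rather than in aggregate - can be sketched as follows (our illustration; `matcher` and `samples` are hypothetical placeholders, not a real library API):

```python
# Per-group accuracy: an aggregate score can look fine while one group's
# score is far lower - the failure mode an unbalanced training set produces.
from collections import defaultdict

def accuracy_by_group(matcher, samples):
    """samples: iterable of (image, true_identity, group) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for image, identity, group in samples:
        totals[group] += 1
        hits[group] += matcher(image) == identity
    return {group: hits[group] / totals[group] for group in totals}
```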


Margot: Oh, really?


Praveen: Yes.


Nick: Why do you think-- I know you said it's deceptively simple, isn't it? That's a really good quote actually, but why don't the AI engineers just have a wider, diverse group of people to test it on? Why is that not just an initial starting point that the data should always be really diverse? Do you have any sense of why? I mean, I get it, they're testing on Western populations.


Praveen: It's not just testing. I mean, testing is one thing. Before you put out a system, you test it, and if your team happens to not have darker skinned people, you may not catch it. But even if they tested it, they would kind of be stuck because the datasets that were available for them to train their system on doesn't have that many darker skinned people because of the population of the country you live in. If you collect data from Switzerland, for example, even if you tried your best, it's not going to have that many darker skinned people as a ratio.


Margot: Praveen, if we could kind of unpack that a little bit, are we speaking about countries with certain scientific orientation or curiosity? Are we talking about educational institutions that might be promoting this? What is it? I don't mean to be stupidly naive here, but I'm really curious, and I think it's really important that we figure out what are the preconditions that would start this off being a lighter skin person's domain. That's horrifying to me.


Praveen: No, it's only because a lot of the research happens in Western countries. When you do a piece of research, you're often building off of other people's work, so if I'm about to create an AI model, let's say, I would actually not collect the data myself. I would go online and see what the available datasets are. Because it's more like a feasibility thing, right? I could be part of a small team with maybe X amount of money, so I'm not going to be collecting data myself. I'm like, "Okay, these are the datasets that are available, so I'm just going to use them to train my model." Then you test it, you find out that it doesn't work well for darker-skinned people. Then you're like, oh, it goes back to the data. It only has, I don't know, 200 images of darker-skinned people, and thousands of lighter-skinned people.


Margot: To wave a wand, if you like. You have your day, your job and whatever you get paid for and so forth, but in your field of interest and priorities, what would you like to see happen? Would you like to employ a team of people to go out and come up with fresh data, or do you hope somebody else does that? How important is this to you?


Praveen: I am trying to solve it by using the AI to create more balanced datasets.


Margot: Okay, so you're actively involved in that.


Praveen: Yes. The harder thing to do would be to actually maybe fund different universities or research teams in different countries with diverse populations so they can collect their own datasets and then make it public, which can then go back into training the models. Obviously, that's going to take a lot of money, and also issues like you have to get consent from all the people that you collect data from.


Nick: Ah, interesting. There are some constraints on getting more diverse data: A, you need different populations; B, you need consent; and C, there are the costs involved.


Praveen: There's even so many more issues. For example, because celebrities are very comfortable with their pictures out there, a lot of the face datasets happen to be of celebrities or public figures, right? Maybe politicians, actors, whatever. Then even if we were to spend a bunch of money to create a dataset of celebrities in India, for example, those are again going to be biased towards lighter skinned people. Because a lot of the celebrities in India happen to be lighter skinned, if you know. I don't know if you've noticed that.


Nick: I know. Whenever I go to India, I'm always struck by all the billboards having extremely light skinned Indians, where the general population are like me. I mean, I was meeting people with my skin tone. But then you look at these massive billboards around India, and it's very light skinned.


Praveen: Yes. I also have this hypothesis that maybe it's similar in countries which have diverse skin tones in their populations. For example, Brazil, they often diverge towards a lighter brown being more desirable, for whatever reason. I don't know why, but yes.


Nick: Interesting. How about neuroscience? Sigmund Freud, who founded psychoanalysis, trained as a neurologist. Some people describe AI as like the mind, even going as far as to say that a neural network is like the mind. Freud drew a diagram of the mind in the late 19th century, specifically about how neurons interact with each other to solve a problem, and the drawing is so interesting because it's very similar to the concept of the neural network and how it solves problems.


Praveen: When I speak to neuroscientists, they always make it a point to tell me that the brain is actually way more complicated than an AI. Like, one particular neuron can have connections to 50,000 other neurons, which is incredibly dense. That's not the case with an AI model, which is much more simpler. It's interesting that even though it's simpler, we have still been able to create very capable systems, but I guess you could argue, a brain is much more efficient, right? I mean, think about how little you eat.


Nick: Yes, because we're psychologically much more obviously nuanced. We have feelings, we have intuition, we have thinking. We have sensation, thinking, feeling, and-- what's the fourth one?


Margot: Intuition?


Nick: Intuition, yes. The algorithm doesn't have all the nuances of being human. What do you think about the nuances of the algorithm?


Praveen: Yes, it's really confusing. Again, it's a point of debate. Like, does it really have internal thoughts? Does it have goals? Or is it just a text generation machine, or a data generation machine? People don't really know.


Margot: Really?


Praveen: Yes. There's a huge subfield within AI called AI safety, where people worry a lot about, oh, what if it actually can have goals? Then it could have goals that are different from what you give it, and it could subvert whatever system you put into, because it has separate goals by itself. Like, it could be deceptive. What if it's deceptive like humans? It's much more powerful than humans because a particular AI can copy itself over multiple systems and then coordinate across thousands of machines.


Nick: Oh, really?


Praveen: I mean, it's not possible yet, but these are the kinds of things that people in AI safety worry about.


Margot: That's very, very interesting, especially since one of our strong interests is ethics, but it's hard to develop ethics around something that has no clear perimeter. One thing I'm really curious about, Praveen, is I notice in my work and general life, so many people's, I would almost call it an impatience, to turn to AI to solve problems and answer questions. And this incredible-- what's to me kind of surprising is the willingness to feed it, even whole bodies of work, essays, articles, whatever, that people have worked terribly hard on. While I understand their impulse for curiosity, I'm shocked that they would depend on it so much. As you said, it's only as good as the data we give it, as far as we know.


I'm looking at the human behavior, the psychology of so many people's impatience, and desire, and almost desperation to just feed AI without any discerning thought about-- first of all, thinking for themselves. What do they think before they give it to AI, and how much is that a matter of self-trust or something? When you speak of this idea of AI safety, clearly many people are either not aware that there's the possibility that AI has different goals than we give it, that it could subvert. Either they don't know that or they don't care. What do you think? What's your understanding of the general population's knowledge with respect to this? Do we need to guide or warn people that, "Hey, this is far more powerful, and if you think people can be deceptive, look at this exponentially." That's so fascinating, but what would you like to see happen for the human race to be able to use this productively?


Praveen: I don't think you're going to like my answer, which is that it's inherently uncertain - there's so much debate that goes on about AI. There are some people who believe that it's already conscious, because it's very good at pretending to be conscious. If you ask it, "Are you conscious? Do you feel pain?", it can give you answers that make you think that it's conscious. But then again, is it just taking that from the data it's been trained on, or is it actually, like a human being, giving you an answer it thinks for itself? Or is it just really good at picking pieces of data and mirroring you?


Even in terms of the danger, again, there's a lot of debate. A lot of people think it's just a data production machine. It doesn't have its own goals, it doesn't have its own thoughts. Then there are others like, what if it suddenly develops it? The moment it develops it, because it's so powerful, things can get out of hand really fast. So, even if it doesn't have it now, one day if it develops it, then things can get out of hand, like maybe in a matter of days or hours. That's something that AI safety people worry about.


Margot: Right. Do you consider yourself one of these AI safety people? Is that in your domain of concern?


Praveen: I think it's a good idea to look into it. I can't predict the future, but there are some people, for example, who are working on putting the AI in a very controlled-- like a box or a very controlled setting, so that even if it's sort of deceptive, even if it wants to do harm, it can't. For example, it doesn't have access to machines through which it can control or through which it can copy itself and so on.


Margot: Are you sort of fatalistic about this? A curious observer to just see where it goes, or do you feel worried?


Praveen: I'm not worried, but again, it goes back to my personality. I just happen to be really-- I struggle to be pessimistic. I mean, obviously, I have my bad moments, but eventually, it only lasts for like one evening or so for me, but other people can be much more neurotic about it. I've met people who believe that we're all going to die in five years.


Margot: Yes, there's always a few of those. I'm encountering a lot of people in different locations expressing far more anxiety, not so much about AI per se, but in terms of uncertainty and/or a bleak political-economic outlook. Well, also, you strike me as having a very reasonable and scientific attitude toward this. You're careful and trying to keep a certain objectivity, but we're-- I think Nick and I, as analytical psychologists, have to prepare ourselves to possibly educate, but also to work day-to-day in our practice with people who come to us a bit unraveled in daily life by certain life expectations not turning out in manageable ways. We're dealing with vulnerability and volatility, and our job as Jungian analysts is not to preach or interpret necessarily, but to try to work with the person to evoke new self-understanding. We're trying to find that middle ground between a more objective attitude and a readiness to accept subjectivity as well. Do you have any advice for us as we try to apply AI in this work?


Praveen: I actually had a question I wanted to ask both of you, which is--


Margot: Oh, please do.


Praveen: --kind of related to what you're asking, which, is a lot of people, like what you said before, they're feeding it their whatever reports, their emails, and so on, but they're also learning to-- Even I sometimes, I'm learning to cook and so on with the AI. It's just way faster, right?


Margot: Yes.


Praveen: You're giving it things to do, but similarly, a lot of people are also using it as a therapist, right? Now, the thing is, if it's good enough to give people a base amount of good advice, do you think it's a good thing that so many people would have access to that? For example, maybe someone's catastrophizing or whatever, and they ask the AI like, "Oh, no, I feel terrible, blah, blah, blah." They describe their condition and the AI tells them, "You need to calm down. This is not the case." Is that a good thing?


Nick: What I see in a work context is AI being used as a sort of companion. That's what the organization is typically saying to everyone, that we'll give you AI for you to do your job better or to minimize workload, but what I'm seeing at a very basic level is-- So, I'm working with clients on different projects, and we often have workshops with 10, 15, 20 people on the call on Zoom or Teams. The workshop is recorded, and usually there is someone taking the minutes and actions. What I see is that they literally allow the AI tool to record the minutes and actions and summarize it, then they just send it to people and say, "Here you go."


They don't check it for accuracy, they don't top and tail it. They just say, "Here," so they trust it completely. That's what I'm seeing: people trust the algorithm completely. They don't question it or top and tail it. They don't think, "Oh, let me just edit what you said a little bit." It's like there is this implicit trust of the algorithm. Actually, I think there are polar opposites: people who don't trust AI and people who trust it completely.


Margot: Yes. That's a good point, Nick. To elaborate on that, I think I see a lot of people-- the way I see that in my practice, for example, is people who are not necessarily trusting themselves. Not just my practice, colleagues even, who just turn to AI when they have any doubt at all about something that they've written or that they're contemplating teaching, as though AI always knows better. These are smart people, often academic, highly educated people with a capacity to be discerning, really fine-tuned thinking. I'm always astonished that such people are so ready to trust a dubiously understood external resource to that degree. Trusting, not trusting. Trust, not to digress too much here, but I think in a Jungian way of looking at it, trust could be said to have a lot to do with the mother archetype. Whether we had sort of a positive experience archetypally or more personally, or whether we had a negative context and a negative developing sense of a mother or mothering. That would definitely affect our sense of trust. I don't want to make any over-reductionist statements here, but if I just had to take what we're looking at right now and I had to put it in Jungian terms, that's what would come up first. That there's a connection between this lack of self-trust and the personal or archetypal mothering that the person has experienced to date.

Mother Archetype

The Jungian idea of an archetype refers to a kind of universal, symbolic pattern or image that comes from the collective unconscious - a part of the unconscious mind that Carl Jung believed is shared among all humans.


Archetypes aren't learned or personal; they're inherited mental structures that shape how we perceive and react to the world. They show up across cultures and time in myths, dreams, art, and religion. Think of them like recurring themes or characters that live deep in the human psyche.


Some classic examples of Jungian archetypes include:


  • The Hero - The courageous figure who overcomes trials (e.g., Hercules, Harry Potter)


  • The Shadow - The darker, hidden side of ourselves that we often repress.


  • The Anima/Animus - The feminine side in men (anima) and masculine side in women (animus).


  • The Wise Old Man/Woman - The sage or guide figure.

  • The Mother - A nurturing, protective figure (can also be destructive).


So, in short, archetypes are like deep-rooted blueprints of human experience - shaping how we think, feel, and behave, often without us realizing it.

Nick: Yes. I'll also add that that's why, for me, the relationship between the individual and the algorithm is sort of-- I'm really interested in that relationship because I think the individual has-- As you know, we're influenced by internal and external forces, so to speak, like your family, your culture, et cetera, and then your own internal world, and I find it so interesting that people just trust it implicitly.


Margot: Including very bright people with a high capacity, who in any other context would prefer for people to break a problem down, look at the pros and cons. And it often appears people are surprisingly prepared to forego that.


Praveen: You know, there is a positive way to look at it, which is a lot of the time when you're thinking about a particular problem, you just want things to try out, and you're sort of using the AI as like a bouncing board of ideas. Like, "Oh, here's the thing I'm doing, what do you think?" And maybe it lets out five different things. It's not that you even trust the AI. It's more like, it's things for you to try. If you have an experimental trial and error kind of mindset, it gets you going, right? It gives you momentum, and that can often be good.


Margot: Yes, I agree with you. I'm not trying to judge this. I'm just saying that in the context in which I work, day to day with colleagues and clients, there's this tendency to just jump in, desperately needing or preferring AI's take on things. I totally agree with you, and I think that's great. It's reassuring to hear you say that for many people, it's still just something to try, to see what it has to say as another perspective.


Nick: Interesting you mentioned the therapy bit, because we're seeing a lot of people going to apps to have a therapist, and the app gives them daily things to consider and think about. It surprises me, I must admit, that people trust an algorithm so much with their own mental health and take its advice seriously. Apparently this feels safer to many: the distance of an online experience may bring a needed sense of objectivity and rationality, with less of the personal risk of therapeutic intimacy. If I can offer something positive around this, from a psychological perspective: dream analysis is an important part of analysis, and I am quite interested in the idea of using an algorithm to analyse a dream or to pose questions to the dreamer.


Praveen: Like I said before, I'm a natural optimist, so to put it in the most positive or optimistic way you could think, what if we have an AI system that's like a distillation of human knowledge across the world, but also across hundreds of years, because we have all that data. Then you just give it to everyone - like a really wise oracle - that you just go to and ask questions. Maybe, hopefully they don't allow it to dictate their whole lives, but more like suggestions, oh, this is something you can try out.


Margot: That's super interesting, Praveen. That also puts it in the realm, if we use Jungian speak, of the collective unconscious and the archetypal realm, which is ubiquitous. That's a really nice attitude for approaching the unconscious. A lot of what we do is help people find an attitude or a way of approaching the unconscious that doesn't overwhelm them, but informs them and enriches their life. That's really interesting, what you said.

I must say that, in my so far limited experience with clients who have turned to AI to get input on a dream before bringing it into the practice, my observation to date is that they get reassurance from that. This may be self-serving, but what they get out of the analytic session, working on the dream in a deeper way, is more personalized, more enriching and more relevant to themselves, which is what we're after; yet they seem to find value in what they got from AI, as if AI is giving them reassurance that they're on the right track before they will take it into the session. That's just a very preliminary observation.


Praveen: I think you could be somewhat right there, because a lot of these AI systems are developed by companies as a product for customers. They have a bias towards making the AI super positive. Also, maybe it's too obsequious to you, it's always affirming you, and if you say it's wrong-- in the early days they had this problem, where if you tell the AI, it's wrong, it'll immediately admit that, oh, I'm sorry, I was wrong, you're right, kind of thing.


Margot: Interesting. Wow. That's amazing. I didn't realize its commercial aspect.


Nick: Are there any Jungian concepts that you could apply in your work as an AI engineer? Do you think any of those concepts play a role in your work, consciously or unconsciously?


Praveen: I've seen a lot of people on Twitter talk about how-- you know how there's different AI services. Different companies have their own AI services. They often talk about there's a different vibe with each AI. Google has its own AI, Anthropic, OpenAI, whatever. It's almost like they have different personalities, and people don't quite know where that comes from. I don't know if that's making sense to you. It's similar to how different people can be a little more introverted or extraverted.


Margot: I'm intrigued, Praveen, what you're saying about different companies giving this a different slant to reach their particular client base, so I'm supposing there's a difference between commercial applications of AI and, say, academic institutions, or whoever's trying to keep a more nonprofit-oriented orientation. Have you noticed that? Is that the case?


Praveen: When it comes to the chatbot, like GPT and stuff, there's only commercial players. There's no research-grade system out there.


Margot: Really?


Praveen: Yet. Yes, because it takes a lot of money to train from scratch, millions of dollars. It's only commercial players who've done it so far. Although a few of them open source their models.


Margot: Did you know that, Nick?


Nick: No, I didn't. That's so interesting.


Margot: Now talk about shadow.


Nick: I guess what I'm learning here on this call, particularly, is how heavily commercial costs have determined the development of AI. I'm also interested to know where all this data is stored.


Praveen: Which one?


Nick: The one you generate when you talk to it, or the data itself, the data sets. We have a mind that can contain a lot of information. As you said, we don't know what happens in the brain, but what about AI, its data, its brain? Where's that? Is it a server somewhere in, I don't know, Norway? I heard they take so much energy that they have to take them to colder countries.


I don't know whether this may have been somewhat exaggerated, but I heard somewhere that all these data sets have to be held in cooler climates because they take up so much energy. They've got to be in temperatures that are relatively cool. If you went to, let's say, some parts of Africa where it's just hot all the time, it wouldn't be an ideal environment for some of the data that you have to gather for AI. Is that so?


Praveen: Once you train the AI, there's no separate data - it's just the neural network, it's all in there. They are often run in huge data centers. Like you said, running them all the time would make the machines really hot. The data centers are climate controlled - whatever you want to call it - air conditioned. Obviously, if you're in a climate that's really hot, I'm assuming you have to use a lot of energy to keep it at a particular temperature. Let's say the ideal temperature is, I don't know, 10 degrees.


Margot: Now, not to digress too much, because I know we're rounding out our hour here, but are we then starting to encroach on the poor ecology? Are we doing damage with that high use of energy to the environment?


Praveen: Again, because I'm an optimist, I don't see it that way. The way I see it, a lot of companies are getting into developing their own energy sources. Some of them are investing in nuclear energy now. I know some people are really against nuclear, but countries like France have shown that you can actually go nuclear and it's reasonably safe. Even the nuclear waste from current-generation nuclear tech is really low, and you can store it in concrete blocks so that it's pretty safe. I'm hoping that this whole AI boom leads to a lot of adoption of nuclear energy. Then we can have really cheap energy, which is a good thing.


Nick: I wanted to say something, because I wanted to see how you'd both react to this. Some of the questions I sent out were generated by AI.


Praveen: Oh, really.


Nick: I went to the AI app - Claude - and it recommended the last few questions. I thought it was very interesting that there was a common theme in the questions it suggested, which focused on how the AI engineer deals with ambiguity and uncertainty, and with handing problems to AI to solve. I thought that was quite interesting, just as an observation. Also, it asked: have the AI engineer's dreams changed since they started working with AI? So I was interested to hear - do you dream? Have you noticed any changes since you started working in AI?


Praveen: No. I know I dream, but I'm also one of these people who don't retain a lot of their dreams when I wake up. I sometimes notice that I dream. I think it's called hypnagogia, where you're in-between. You're not quite asleep but you're in a dreamy state. I've been in those situations before, so I do know that I dream. Some people, because they don't retain their dreams, think they don't dream. Actually, they could be dreaming, but they don't retain it when they wake up. I don't retain a lot of the dreams.


The thing that I found most interesting was the whole collective unconscious thing, because that's the thing that's most relevant to the AI, because you're showing it as much data that's been generated on the internet by different people. It's picking up all these things that people are giving it. A collective unconscious. In the early days, back in 2023, I think it was Microsoft, they released their AI algorithm, and it used to behave in a very bipolar way.

People used to joke that it's like a bipolar girlfriend or something. Then over time, the companies tuned it to make the responses less insane. There's always a question of what's underneath. Even Claude, for example, is pretty stable. It's nice to talk to, but is there something lurking underneath?


Nick: I've noticed AI engineers tend to borrow terms from psychopathology. Interestingly, 'bipolar'. They also use 'hallucinations': when you ask AI a question and it comes back with false information, they call it a hallucination.


Praveen: Maybe it goes back to the very beginning of the field where people wanted to create a useful system, but they wanted to replicate whatever's going on in our brains. That's why I think they keep borrowing terms from psychology and so on. That's another criticism that some people have. The AI industry is using all these terms that's meant for human beings for machines.


Margot: It's all interesting and good. It's only that it might be fruitful for there to be continued dialogue between AI engineers and psychologists to refine what that is. It totally makes sense. One quick example would be with hallucinations and delusions, and so forth. We talk about bizarre and non-bizarre: are they outrageous or are they plausible? Then people put a positive or negative spin on these. Whereas in the Jungian world, we're trying to stay a little bit less judgmental, not tending to pathologize people because of the phenomena they're experiencing. I would hope that we can continue sharing information and sharing points of view, melding perhaps a transcendent third between AI engineering and our work.


Praveen: By the way, I just want to quickly mention the hallucination thing. There's a criticism that it's not a term people deliberately came up with; it was just based on a random blog post someone wrote where they used the word "hallucination", and then people picked it up. The criticism is that it's not quite hallucinating - meaning, sometimes when you talk, you could be trying to remember a source, like the name of a book or whatever, and you could be misremembering it.


That's very different from hallucinating something that's there, like, "Oh, your table is on fire," or whatever. People say that you should use a more accurate term - that maybe it's a confabulation rather than a hallucination.


Margot: That's really interesting because that brings us into this area of objectivity and subjectivity. Ultimately, Jung would say, and others, that all our perceptions are ultimately subjective no matter how objective we try to be, and memory is so faulty. On the other hand, we can gradually become conscious of the possibility that we may be misremembering, as opposed to hallucinating.


Praveen: Have you guys heard of Substack?


Margot: Yes.


Praveen: One of my favorite Substackers happened to be a neuroscientist before. He talks about how we are always dreaming - it's just that when you're awake, your dream happens to correspond a lot with reality. If it doesn't, that's what hallucinating is.


Margot: That's really interesting. Wow, I like that.


Nick: I love that. Great. Listen, thank you so much, Praveen, for the chat. I've really enjoyed it. Actually, it's been quite a rich conversation. There are quite a lot of different topics in there that I think people will find quite interesting to read. We'd love to keep in touch and, obviously, as our work develops, to continue to use you as a reference point for certain things, if you don't mind.


Praveen: Of course. I find it interesting.


Margot: Really lovely to meet you, Praveen. I feel very privileged. Thanks so much.


END OF INTERVIEW

 
 
 