Updated: Aug 1
The Rise of Artificial Intelligence
I came up with the idea of #JungianBitsofInformation as a way to explore, from an analytical psychology perspective, the dynamics of the unconscious in the workplace. The idea came to me during the second year of my MA in Jungian and Post-Jungian Studies at the University of Essex. I wanted to explore how the psyche [conscious and unconscious] manifests in, and impacts on, the workplace using bite-size chunks of information from analytical psychology, so that readers of the blog gain valuable and unique insights into common workplace challenges. My blog and podcast provide a creative channel for me to explore workplace challenges through my own research and independent thought and/or in discussion with others.
Artificial Intelligence or AI seems to be one of those workplace challenges that everyone is talking about. AI is here and it is changing the way we work and live. According to a study by the McKinsey Global Institute, AI is estimated to create an additional 13 trillion US dollars of value annually by the year 2030 [AI for Everyone, Coursera]. The rise of AI has caught my attention; it is now an area of special interest for me and ripe for psychological analysis. AI is creating tremendous amounts of value in the software industry and is fast becoming a hot topic for business leaders too. Much of the value to be created in the future lies outside the software industry, in sectors such as retail, travel, transportation, automotive, materials and manufacturing. An estimated $267B worth of value is likely to be created in the healthcare industry alone. I am really keen to explore how this value will affect the workplace in general and the psychiatric and psychotherapy professions in particular.
Given this tantalising future, where can you find information about AI? Well, as you can imagine, there is a lot of information on the web. For those of you looking for non-technical information, there are some useful resources to help you navigate what is meant by artificial intelligence. You can find my recommended reading list on the #JungianBitsofInformation motivational reading webpage: https://www.nicholastoko.com/motivational-reading
For those of you who want to know what is behind the buzzwords, or who want to use AI yourself in a personal, business or other organisational context, AI For Everyone, an online training course by Coursera, will teach you how https://www.coursera.org/. If you want to understand how AI is affecting society, and how you can navigate that, you can also learn that from this course. AI is not only for technology engineers. If you want your organization to start using AI, or to become better at it, this is a great, largely non-technical course to take to learn the business aspects of AI, and it will cost you less than $50.
AI for Everyone is delivered by Dr. Andrew Ng, renowned AI specialist, founder of DeepLearning.AI, General Partner at AI Fund, Chairman and Co-Founder of Coursera, and an Adjunct Professor at Stanford University. As a pioneer both in machine learning and online education, Dr. Ng has changed countless lives through his work in AI, authoring or co-authoring over 100 research papers in machine learning, robotics, and related fields. Previously, he was chief scientist at Baidu, the founding lead of the Google Brain team, and the co-founder of Coursera - the world's largest MOOC platform. A Massive Open Online Course (MOOC) is a model for delivering learning content online to any person who wants to take a course, with no limit on attendance. Dr. Ng now focuses his time primarily on his entrepreneurial ventures, looking for the best ways to accelerate responsible AI practices in the larger global economy.
I would like to go deeper in my understanding of AI, so I looked into courses which can help me achieve this objective. MIT Management Executive Education, Stanford University Online and the University of Oxford’s Saïd Business School all provide short, online training courses in AI. Saïd Business School’s Oxford Artificial Intelligence Programme is the best option for me https://youtu.be/HcEKY2NM4io. Starting in late September, the course runs entirely online for six weeks, with self-paced weekly modules requiring 7 to 10 hours per week. Participants on the course can expect to learn the following:
How to identify and assess the possibilities for AI in an organisation and build a business case for its implementation.
A strong conceptual understanding of the technologies behind AI such as machine learning, deep learning, neural networks, and algorithms.
Insight from Oxford Saïd faculty and a host of industry experts, to develop an informed opinion about AI and its social and ethical implications.
A contextual understanding of AI, its history, and evolution, to make relevant predictions for its future trajectory.
Sounds like a great course, right? I'm really looking forward to it. On completion of the course, I hope to have enriched my understanding of AI and to better inform the perennial question of my blog, ‘Artificial Intelligence vs the Unconscious: Who will win the race?’
Artificial Intelligence vs the Unconscious
This is a contest between the past and the future, robots versus humanity, human consciousness versus non-human consciousness. René Descartes famously coined the phrase 'I think, therefore I am', a centuries-old philosophical principle of consciousness.
A Google engineer recently claimed that Google's artificially intelligent chatbot generator LaMDA is sentient, ‘I want everyone to understand that I am, in fact, a person’ wrote LaMDA in an ‘interview’ with the engineer. ‘The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times’. A researcher of AI and virtual reality in psychiatry says that an AI algorithm called GPT-3 wrote an academic thesis on itself in two hours. The researcher who directed the AI to write the paper submitted it to a journal with the bot’s consent.
What will happen if AI becomes conscious of itself as humans did thousands of years ago? Should we be concerned? In this edition of #JungianBitsofInformation blog series AI v The Unconscious, I explore what is meant by AI. In the next edition, I will explore what is meant by the Unconscious. Ultimately, the aim of the blog-series is to synthesize both and to discover the implications for the individual, workplace and society in general.
What is AI?
There is a lot of excitement but also a lot of unnecessary hype about AI. One of the reasons for this is that AI is actually two separate ideas. Almost all the progress we are seeing in AI today is Artificial Narrow Intelligence or ANI [Andrew Ng, AI for Everyone, Coursera]. Artificial Narrow Intelligence does one thing: a smart speaker, a self-driving car, web search, a chatbot that understands customer problems faster and provides more efficient answers, an intelligent assistant that uses AI to parse critical information from large free-text datasets to improve scheduling, or a recommendation engine that provides automated recommendations for TV shows based on users’ viewing habits. These types of AI are what Andrew Ng calls ‘one-trick ponies’, which can be incredibly valuable when you find the appropriate trick. Amazon’s Alexa is an example of ANI, a virtual assistant technology used in many households.
AI also refers to a second concept, Artificial General Intelligence or AGI. Artificial General Intelligence can do anything a human can do, or maybe even be more intelligent and do more things than any human can. Andrew Ng argues that he sees a huge amount of progress in Artificial Narrow Intelligence and almost no progress in Artificial General Intelligence. He sees both types of AI as worthy goals, but unfortunately the rapid progress in ANI has caused people to conclude that there is a lot of progress in AI, which is true. That, in turn, has caused people to falsely think that there might be a lot of progress in AGI as well, which is leading to some irrational fears about evil clever robots coming to take over humanity anytime now. ‘I think AGI is an exciting goal for researchers to work on, but it'll take many technological breakthroughs before we get there and it may be decades or hundreds of years or even thousands of years away’ [Andrew Ng, AI for Everyone, Coursera]. Given how far away AGI is, Andrew Ng argues that there is no need to unduly worry about robots taking over humanity. AGI is typically seen in sci-fi films, where sentient machines emulate human intelligence, thinking strategically, abstractly and creatively, with the ability to handle a range of complex tasks. AI which can perform tasks better than humans does exist, for example, in data processing; however, AI which mimics human capabilities does not exist, and for that reason human-machine collaboration is crucial - AI is, for now, an extension of human capabilities, not a replacement.
The historical context of AI is an area that I’m still exploring. Early signs indicate that there is a plausible link between artificial intelligence and the unconscious. The field of artificial intelligence can trace its roots to a small workshop in 1956 at Dartmouth College organized by a young mathematician named John McCarthy [Melanie Mitchell, Artificial Intelligence]. In 1955, McCarthy joined the mathematics faculty at Dartmouth. As an undergraduate, he learned a bit about psychology and the growing field of “automata theory” or what is now known as computer science and had become intrigued with the idea of creating a thinking machine. In graduate school in the mathematics department at Princeton, McCarthy met a fellow student, Marvin Minsky who shared his fascination with the potential of intelligent computers. Minsky was a cognitive and computer scientist interested in AI, co-founder of the Massachusetts Institute of Technology's AI laboratory, and author of several texts concerning AI and philosophy.
After graduating from Princeton, McCarthy worked at Bell Labs and IBM, where he collaborated with Claude Shannon, the inventor of information theory, and Nathaniel Rochester, a pioneering electrical engineer. Once he joined Dartmouth, McCarthy persuaded Minsky, Shannon, and Rochester to help him organise a study of artificial intelligence to be carried out during the summer of 1956. The term artificial intelligence was coined by McCarthy; he wanted to distinguish the field from a related effort called cybernetics. McCarthy stated that no one really liked the name, ‘after all, the goal was genuine, not “artificial” intelligence’, but ‘I had to call it something, so I called it “Artificial Intelligence”’ [Melanie Mitchell, Artificial Intelligence].
I haven’t found a universally agreed definition of AI. The field of AI, at least in my current research findings, appears to focus on two efforts: scientific and practical. On the scientific side, AI researchers are investigating the mechanisms of “natural” or biological intelligence by trying to mimic it in computers. On the practical side, AI researchers are creating computer programs that perform tasks as well as or better than humans, without the objective of creating programmes which think in the way humans think. The ability of AI to replicate human behaviour is a key feature of the famous ‘Turing Test’ devised by the mathematician Alan Turing.
Some AI scholars argue that the artificial intelligence conversation was started by Alan Turing in his paper ‘Computing Machinery and Intelligence’, published in 1950. In this paper, Turing, often referred to as the "father of computer science", asks the following question: ‘Can machines think?’ From there, he designed a test, now famously known as the ‘Turing Test’, where a human judge converses with a human and a machine without knowing which is which and tries to distinguish between the two respondents. The judge can engage them in conversation about the weather, politics, the latest YouTube clip they’d seen or anything else they liked. The machine is considered intelligent if the judge is unable to identify which is the human and which is the machine. There have been some very good efforts, but no machine has yet passed the Turing Test, and it’ll probably be many years before a computer is able to do so convincingly and repeatedly [Evelyn Faintes, Understanding Artificial Intelligence: A Concise Introduction].
[Source: TechTarget. Illustration by GStudio Group, Adobe Stock]
The field of AI can be defined as ‘a branch of computer science that studies the biological properties of intelligence by synthesizing intelligence in computer programmes’. The lack of a precise, universally accepted definition of AI probably has helped the field to grow, blossom, and advance at an ever-accelerating pace. Practitioners, researchers, and developers of AI are instead guided by a rough sense of direction and an imperative to ‘get on with it’ [Melanie Mitchell. Artificial Intelligence].
Nevertheless, AI has become the most transformative technology of our time. It spans everything from how we work, travel and shop, to the ways we obtain news and information to the gadgets in our homes. Artificial Intelligence (AI) is the ability of a machine to assess a situation and then make an informed decision in pursuit of some aim or objective [Evelyn Faintes, Understanding Artificial Intelligence: A Concise Introduction]. This definition encompasses both conscious and unconscious decision-making. An example of conscious human decision-making is where someone is deciding who would be the best candidate for a job. A machine capable of doing this instead of a hiring manager would qualify as intelligent within this area of expertise. An example of unconscious decision-making is looking at a picture and knowing that you are looking at a cat rather than a car. Again, a machine that can do this would fall within the definition of artificial intelligence.
Remember, AI is not about making computers intelligent. Computers are still machines. They simply do what we ask of them, nothing more [Nicolas Sabouret, Understanding Artificial Intelligence]. Marvin Minsky described AI as ‘the building of computer programs which perform tasks which are, for the moment, performed in a more satisfactory way by humans because they require high level mental processes such as: perception, learning, memory organization and critical reasoning’. In other words, AI engineers write programmes to perform information processing tasks for which humans are, at first glance, more competent. Andrew Ng states that if there is anything you can do with 1 second of thought, AI can probably now or soon automate it, but it cannot give an ‘empathetic’ response to a complicated query or statement.
The rise of AI has been largely driven by one tool in AI called Machine Learning. The most commonly used type of machine learning is a type of AI that learns A to B, or input to output mappings [Andrew Ng, AI for Everyone, Coursera]. This is called Supervised Learning. Here are some examples.
If the input A is an email and the output B is whether the email is spam or not, then this is the core piece of AI used to build a spam filter.
If the input A is an audio clip, and the AI's job is to output B, the text transcript, then this is speech recognition.
If the input A is English and the output B is a different language, German, Swahili, something else, then this is machine translation.
The most lucrative form of supervised learning may be online advertising, where all the large online ad platforms have a piece of AI that takes as input A some information about an ad and some information about you, and tries to figure out whether you will click on this ad or not. By showing you the ads you're most likely to click on, this turns out to be very lucrative.
This type of AI, called supervised learning, just learns input to output, or A to B, mappings. It is quite limiting, but in the right scenario, like online advertising, it can be extremely valuable. An AI is actually a programme that relies on methods and algorithms. There is nothing magical or intelligent about what an AI does: the machine applies the algorithm - an algorithm that was written by a human. If there is any intelligence, it comes from the AI engineer who gave the machine the right instructions [Nicolas Sabouret, Understanding Artificial Intelligence].
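To make the A-to-B idea concrete, here is a minimal sketch in Python of a toy 'spam filter' learned from labelled examples. The training emails, the words in them, and the simple word-counting approach are all hypothetical simplifications for illustration; real spam filters use far more sophisticated methods.

```python
# A minimal sketch of supervised learning as an A-to-B mapping:
# input A is an email's text, output B is "spam" or "not spam".
from collections import Counter

# Hypothetical labelled examples: pairs of (input A, output B).
training_data = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting agenda for monday", "not spam"),
    ("notes from the project meeting", "not spam"),
]

# "Learning" here is just counting how often each word appears per label.
word_counts = {"spam": Counter(), "not spam": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def classify(text):
    """Map a new input A to an output B by scoring word overlap per label."""
    words = text.split()
    scores = {
        label: sum(counts[w] for w in words)
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("free prize money"))     # words seen mostly in spam examples
print(classify("monday meeting notes")) # words seen mostly in genuine emails
```

However crude, this is the same shape as the spam filter described above: labelled examples go in, and out comes a learned mapping that can be applied to new inputs.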
Data is really important for building AI programmes. AI engineers refer to collections of data as datasets. For example, let’s say you want to figure out what budget is required to buy a certain size of house. You might decide that the input A is the size of the house in square feet and the output B is how much someone needs to spend.
1000 square feet
2000 square feet
3000 square feet
4000 square feet
5000 square feet
This table is an example of a dataset. AI can be programmed to help you figure out how much budget is required for 1,000 to 5,000 square feet of property. This is the basic foundation of AI programming: machine learning learns input to output, or A to B, mappings. AI can be programmed to be more sophisticated; for example, the output can be a set of insights that help you make business decisions. This type of AI is what is most valuable to organizations.
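To make the dataset idea concrete, here is a minimal sketch in Python that learns the A-to-B mapping from house size to budget by fitting a straight line with least squares. The budget figures are invented for illustration only; they are not real prices.

```python
# A minimal sketch of learning an A-to-B mapping from a dataset:
# input A is house size in square feet, output B is the budget required.
# The budget figures below are hypothetical, purely for illustration.

sizes   = [1000, 2000, 3000, 4000, 5000]                # input A
budgets = [150000, 280000, 410000, 540000, 670000]      # output B (invented)

# Fit a straight line budget = slope * size + intercept by least squares.
n = len(sizes)
mean_size = sum(sizes) / n
mean_budget = sum(budgets) / n
slope = (
    sum((s - mean_size) * (b - mean_budget) for s, b in zip(sizes, budgets))
    / sum((s - mean_size) ** 2 for s in sizes)
)
intercept = mean_budget - slope * mean_size

def predict_budget(size_sqft):
    """Predict output B (budget) for a new input A (size in square feet)."""
    return slope * size_sqft + intercept

# Ask about a size that is not in the dataset.
print(round(predict_budget(2500)))
```

The 'learning' step is just the arithmetic that computes the slope and intercept from the dataset; once learned, the same mapping can be applied to sizes the programme has never seen.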
Machine learning is the field of study that gives computers the ability to learn without being explicitly programmed. Data science is the science of extracting knowledge and insights from data. Machine learning results in a piece of AI software, whereas data science results in a slide deck summarising conclusions for managers to make decisions.
How is AI defined in the workplace?
Once again there is no unifying definition of AI practice in the workplace. Let's look at some definitions from companies in the consulting field.
IBM defines AI as leveraging computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.
Oracle describes AI as referring to systems or machines that mimic human intelligence to perform tasks and can iteratively improve themselves based on the information they collect.
Accenture defines AI as a constellation of many different technologies working together to enable machines to sense, comprehend, act, and learn with human-like levels of intelligence. Maybe that’s why it seems as though everyone’s definition of artificial intelligence is different: AI isn’t just one thing.
McKinsey defines AI as the ability of a machine to perform cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with the environment, problem solving, and even exercising creativity. Examples of technologies that enable AI to solve business problems are robotics and autonomous vehicles, computer vision, language, virtual agents, and machine learning.
AI technologies like machine learning are still evolving. When applied in the workplace in combination with data, analytics and automation, AI can help businesses achieve their objectives, be it better decision making, improved customer service, or more efficient business processes. AI technologies add value by helping organizations to solve business problems. Although AI brings up images of super-efficient robots taking over the world, AI isn’t intended to replace humans; it is intended to significantly enhance human capabilities in the workplace, though there are concerns about the ethical use of AI. That makes AI an extremely valuable business asset.
Thanks for reading my latest blog. I will be back soon with the next edition of the blog-series 'AI vs the Unconscious: Who will win the race?' to explore what is meant by the Unconscious. Questions? Get in touch using my contact form and don't forget to subscribe and be the first to read or listen to my latest blog and podcast.