Updated: May 4
The ethical and legal concerns relating to AI must be taken seriously by organisations looking to implement it. While scientists, lawyers, and activists agree that the rapid pace of AI development and implementation demands ethical and accountability considerations, it remains uncertain who should define the ethical framework for AI implementation, and who should be responsible for enforcing it (Hao, 2018).
Considerable uncertainty surrounds the ethical and legal impact of AI applications. The potential risks associated with AI must be identified and limited in order to guide AI development and implementation, and to minimise the technology's potential negative impact on humans in society (Latonero, 2018).
A recent study shows that more than 70% of business leaders are taking steps to ensure the ethical use and deployment of AI within their business operations. Some business leaders even establish ethics committees to review the use of AI and confirm that everything is above board prior to implementation of the technology (Oxford AI Programme, 2022).
Science fiction author Isaac Asimov devised a set of ethical rules known as the Three Laws of Robotics, also called The Three Laws or Asimov's Laws: https://youtu.be/AWJJnQybZlk
The Three Laws of Robotics are:
1. A robot may not injure a human being, or through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such order would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov’s laws are not practical in an organisational setting. Instead, organisations should develop their own prescriptive and specific ethical guidelines to ensure transparency and accountability in the way AI is deployed.
Once the leadership team of an organisation has decided to implement AI, it should weigh the legal and ethical implications of the technology in order to be prepared for the consequences that could follow from that decision. These legal and ethical considerations relate to the three common pitfalls of AI - Privacy, Replication, and Bias.
#JungianBitsofInformation's AI Implementation Framework
Taking the deployment of AI technology for strategic workforce planning as an example, I would create an AI Framework designed to ensure that the organisation’s leaders take the ethical concerns of AI seriously, and to enable them to take the necessary steps to ensure that the technology deployed within the business does not harm the organisation or its stakeholders (Accenture, 2018).
I would set up an AI Delivery Board, representing leaders from across the organisation, to oversee the deployment of AI technology in accordance with the framework. I would also set up an AI Stakeholder Group, a working group reporting to the delivery board and consisting of (a) representatives nominated by each delivery board member, whose role is to deploy AI within their business area in accordance with the framework, and (b) a representative group of employees from a range of diverse backgrounds, whose main role is to inform the development of the organisation’s AI policy.
The framework is designed to ensure a step-by-step approach is taken by the delivery board and stakeholder group towards the development of an AI policy. The first step is to agree a Mandate to implement AI technology. The second step is to agree a Shared Vision for an ethical and legal AI ecosystem. The third step is to agree a Problem Statement which explains the problem(s) that AI is seeking to solve. The fourth step is to agree a set of Guiding Principles which underpin the deployment of machine learning across the business. The fifth and final step is to develop an AI Policy which details the organisation’s approach to AI and instils best practice standards which must be applied to their AI applications.
I would facilitate dialogue within and between the stakeholder group and delivery board to agree a set of Guiding Principles. The purpose of the principles is to ensure a consistent approach to the deployment of Machine Learning (a form of Artificial Intelligence), and to develop a real sense of accountability for its outcomes. The principles will be far-reaching, addressing the three common pitfalls of AI - privacy, replication, and bias - that can negatively impact the lives of certain groups of individuals subjected to the algorithm.
The guiding principles must contain detailed provision for the prevention of algorithmic bias in Machine Learning, including the following ethical principles based on recommendations and studies from the Oxford AI Programme 2022 (Principles 1, 2 & 3), Ali Jahanian et al (MIT News, 15 March 2022) from the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT (Principle 4), and Aminah Aliu, Innovator at The Knowledge Society (Principle 5).
#JungianBitsofInformation's Guiding Principles for Ethical and Legal AI
1. Ensure data is representative of the diverse characteristics of the organisation. This follows the general law of statistics: the better the data, the better the AI prediction that can be made on its basis. When training machine learning algorithms, and to avoid bias in a prediction, it is mandatory to gather more and better data until it is fully representative of the diverse profile of the organisation.
2. Training models should be updated on a regular basis to ensure predictions are based on current data. Relying on past data patterns adds even more bias to predictions over the period of time that the training model goes without updating.
3. Use mathematical de-biasing models where appropriate, in line with organisational policy. For example, where the organisation is actively seeking to increase the representation of particular races or genders in a specific profession, the algorithms may be altered manually to avoid predictions biased in favour of white males.
4. Adopt a two-model machine learning approach: train a generative model on real data to produce synthetic data that resembles the real data but has characteristics such as race and gender removed from the original datasets. The synthetic data is subsequently used to train a second model, a contrastive representation learning model, which makes the actual AI predictions.
5. Carry out an equality impact assessment of protected characteristics at the design, build and implementation stages of Machine Learning deployment and regular post-implementation audit checks of algorithms.
6. Everyone involved in the deployment of AI technology must complete mandatory training and sign up to an ethical code of conduct in relation to the organisation’s AI framework and policy. The training includes but is not limited to ethics, equality, diversity & inclusion, privacy, replication and bias. Business leaders may be required to participate in external training programmes to further their understanding of the legal and ethical considerations of AI.
7. AI practitioners involved in the deployment of Machine Learning must be suitably certified, accredited and/or qualified in AI to ensure the organisation’s ethical standards are maintained at all times.
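To make Principle 1 concrete, here is a minimal sketch of a representativeness check. The function name `representativeness_gap` and the toy records are my own illustration, not part of any cited study: it compares the distribution of a protected attribute in the training data against the organisation-wide distribution and reports the largest shortfall.

```python
from collections import Counter

def representativeness_gap(training_records, org_records, attribute):
    """Largest absolute difference in proportion, per attribute value,
    between the training data and the organisation as a whole.
    0.0 means the training data is perfectly representative."""
    def proportions(records):
        counts = Counter(r[attribute] for r in records)
        total = sum(counts.values())
        return {value: n / total for value, n in counts.items()}

    train_p = proportions(training_records)
    org_p = proportions(org_records)
    values = set(train_p) | set(org_p)
    return max(abs(train_p.get(v, 0.0) - org_p.get(v, 0.0)) for v in values)

# Hypothetical example: training data over-represents one gender
# (80/20) relative to the organisation's actual profile (55/45).
train = [{"gender": "M"}] * 80 + [{"gender": "F"}] * 20
org = [{"gender": "M"}] * 55 + [{"gender": "F"}] * 45
print(round(representativeness_gap(train, org, "gender"), 2))  # 0.25
```

A gap above an agreed threshold would trigger Principle 1's mandate to gather more data before training proceeds.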
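Principle 3 mentions mathematical de-biasing without naming a method; one well-known example (my choice of illustration, not specified in the sources above) is reweighing in the style of Kamiran and Calders, where each training record is weighted so that the protected attribute and the outcome become statistically independent in the weighted training set:

```python
from collections import Counter

def reweighing_weights(records, protected, label):
    """Weight each record by P(protected) * P(label) / P(protected, label).
    Under-represented (attribute, outcome) pairs get weights above 1,
    over-represented pairs get weights below 1."""
    n = len(records)
    p_counts = Counter(r[protected] for r in records)
    y_counts = Counter(r[label] for r in records)
    joint = Counter((r[protected], r[label]) for r in records)
    return [
        (p_counts[r[protected]] / n) * (y_counts[r[label]] / n)
        / (joint[(r[protected], r[label])] / n)
        for r in records
    ]

# Hypothetical hiring data: men are hired 6/8 times, women only 2/8.
data = ([{"g": "M", "hired": 1}] * 6 + [{"g": "M", "hired": 0}] * 2
        + [{"g": "F", "hired": 1}] * 2 + [{"g": "F", "hired": 0}] * 6)
weights = reweighing_weights(data, "g", "hired")
print(round(weights[0], 2))   # weight for a hired man: 0.67 (down-weighted)
print(round(weights[8], 2))   # weight for a hired woman: 2.0 (up-weighted)
```

These weights would then be passed to the training step (for example via a `sample_weight` argument, which most scikit-learn estimators accept), rather than editing predictions after the fact.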
AI can perform highly complex problem-solving (such as unravelling intricate cancer diagnoses), but it can also suffer major setbacks (such as the potential for racial discrimination) (Oxford AI Programme, 2022). In recent months, #JungianBitsofInformation started to offer a new service to organisations: an Artificial Intelligence service that identifies opportunities for AI in your organisation and provides guidance on the ethical considerations needed to address the common pitfalls of AI, with a unique perspective from #analyticalpsychology. Contact me for more information - it would be great to hear from you!