Guide to Artificial Intelligence in Jersey’s Finance Industry

13 November 2024

History of AI

1950s
“Can machines think?”
This was the question posed by Alan Turing in his 1950 paper ‘Computing Machinery and Intelligence’. While the term ‘artificial intelligence’ wasn’t coined until 1956 by John McCarthy, the 1950s marked the start of the first Golden Age of Artificial Intelligence.

In the 1950s, Turing also introduced what we now know as the Turing Test: if a machine could trick the human it is chatting with into believing it too was human, the machine could be assumed to be capable of thinking.

1950 - 1970
Computer Capacity Grows
Between the 1950s and the 1970s, work on AI flourished as the capacity of computers continued to grow, with programs such as ELIZA showing promise that artificial intelligence could be expected in the near future.

 

1970s
First AI Winter
The first ‘AI winter’, in which optimism about AI declined, occurred during the 1970s, when the technological and theoretical limitations of the period were recognised. At the time, computers were unable to store enough information or process it fast enough for AI to be a reality.

 

1980s
Expert Systems
During the 1980s there was prolific development of expert systems, which led to a second surge of AI. This was driven by the advent of machine learning. Expert systems are designed to solve complex problems by applying if-then rules to a knowledge base. Towards the end of the 1980s, the limitations of these systems’ ability to acquire knowledge led to the second AI winter.
1990 - 2000
Second AI Winter
The bottleneck of expert systems meant that there wasn’t much movement towards AI until the early 2000s, and the second AI winter continued.

 

2000s onwards
Generative AI
Increasing IT capabilities led to a surge in AI development. In this period, big data and deep learning became more mainstream, laying the foundation for other AI applications and for the development of generative AI in the form we know it today.

Glossary of Key AI Terms

Artificial general intelligence

Artificial general intelligence (often shortened to general intelligence) is the ability to accomplish virtually any goal or cognitive task, including learning at a level equivalent to human intelligence, without human input

Artificial intelligence

Non-biological intelligence

Backpropagation

The algorithms that enable artificial neural networks to learn, through a process of incrementally reducing the error between known outcomes and model predictions during training cycles
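The idea can be sketched in a few lines for the simplest possible case. This is a minimal illustration, not a full neural network: a single linear “neuron” whose prediction is w * x, with invented data and a learning rate chosen for the example.

```python
# A minimal sketch of the backpropagation idea for a single linear
# "neuron" (prediction = w * x): each training cycle measures the error
# between the known outcome and the prediction, then nudges the weight
# to reduce it. Data and learning rate are illustrative assumptions.
def train(data, lr=0.1, epochs=50):
    w = 0.0                        # start with an untrained weight
    for _ in range(epochs):
        for x, target in data:
            error = w * x - target # forward pass: compare prediction to outcome
            w -= lr * error * x    # backward pass: incrementally reduce the error
    return w

# Learn the relationship y = 2x from a few known outcomes.
w = train([(1, 2), (2, 4), (3, 6)])
```

After training, w settles very close to 2, the value that makes the model’s predictions match the known outcomes.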

Deep learning

A concept loosely based on the brain that recognises patterns in data to gain insight beyond the ability of humans; for example, to distinguish between the sonar acoustic profiles of submarines, mines and other sea life, a deep learning system doesn’t require human programming to tell it what a certain profile is, but it does need large amounts of data

Deep neural network

Uses sophisticated mathematical modelling to process data in complex ways, through a greater number of layers than a neural network

Generative models

Existing data is used to generate new information; for example, predictive text looks at past data to predict the next word in a sequence
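The predictive-text example can be sketched with a simple bigram model: it counts which word follows which in past text, then generates the most likely next word. The sample “history” string is an invented illustration.

```python
from collections import Counter, defaultdict

# Toy bigram model: learn from existing data which word tends to follow
# which, then generate the likeliest next word in a sequence.
def build_model(text):
    words = text.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1            # learn from past data
    return following

def predict_next(model, word):
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]  # generate the likeliest word

history = "the market rose and the market fell and the market rose"
model = build_model(history)
```

Here `predict_next(model, "market")` returns `"rose"`, because “rose” followed “market” more often than “fell” in the past data.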

Intelligence

The ability to achieve complex goals

Narrow intelligence

The ability to achieve a narrow set of goals, such as playing chess

Natural language processing (NLP)

When a computer interprets and understands human language and the way and context in which it’s spoken or written; the aim is to deliver more human-like outputs or responses

Neural network

A group of interconnected ‘neurons’ that have the ability to influence each other’s behaviour

Machine learning

The ability of a machine to learn without being programmed; the algorithms used improve through experience, either predictively using historic data or generatively using new data

Predictive analytics and models

Similar to machine learning but narrower in scope, predictive analytics has a very specific purpose, which is to use historical data to predict the likelihood of a future outcome; for example, risk-based models on when a stock may fall

Reinforcement learning

A type of machine learning technique that enables an AI system to learn in an interactive environment by trial and error using feedback from its own actions and experiences
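A toy sketch of this trial-and-error loop, using Q-learning on an invented environment: an agent on a five-cell track learns, from reward feedback alone, to walk right towards a goal in the last cell. The environment, rewards and hyperparameters are all illustrative assumptions.

```python
import random

random.seed(0)
N, GOAL = 5, 4
actions = (-1, +1)                      # step left or right
q = {(s, a): 0.0 for s in range(N) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2       # learning rate, discount, exploration

for _ in range(200):                    # episodes of trial and error
    s = 0
    while s != GOAL:
        if random.random() < eps:       # occasionally explore at random
            a = random.choice(actions)
        else:                           # otherwise act on experience so far
            a = max(actions, key=lambda a: q[(s, a)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else 0.0  # feedback from the environment
        best_next = max(q[(s2, b)] for b in actions)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# The learned policy: the best action in every non-goal cell.
policy = [max(actions, key=lambda a: q[(s, a)]) for s in range(GOAL)]
```

No one tells the agent the right answer; it discovers from its own actions and the resulting rewards that “go right” (+1) is the best move in every cell.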

Robotic process automation (RPA)

Software that’s built to automate a sequence of primarily graphical repetitive tasks

Supervised learning

Supervised learning uses labelled datasets to train algorithms in order to predict outcomes and recognise patterns.
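As a hypothetical sketch of supervised learning: a simple perceptron is trained on a small labelled dataset and then used to predict labels for unseen inputs. The data and the 0.5 threshold are illustrative assumptions.

```python
# Train a one-feature perceptron on labelled examples; the labels
# supervise each weight update.
def train_perceptron(samples, lr=0.1, epochs=20):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if w * x + b > 0 else 0
            w += lr * (label - pred) * x   # correct the weights when wrong
            b += lr * (label - pred)
    return w, b

# Labelled dataset: inputs above 0.5 carry label 1, below 0.5 label 0.
data = [(0.1, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if w * x + b > 0 else 0
```

Once trained, the model generalises the pattern in the labels to inputs it never saw, such as 0.95 (predicted 1) or 0.05 (predicted 0).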

Artificial Intelligence Technologies
Refinement of Generative AI
Advances in Computer Vision
Ethics and Regulation
Automated Customer Service
Fraud Detection and Security
Algorithmic Trading
Regulatory Compliance (RegTech)
Low-code and No-code AI Platforms
Cyber Security
Personalised AI Services
Credit Scoring and Risk Management
Personalised Banking
AI Governance
Autonomous Systems
Breakthroughs in Artificial General Intelligence (AGI)
Wealth Management and Robo-advisors
Quantum Computing in Finance
Decentralised Finance (DeFi)
AI and Ethics

Key aspects to consider in relation to the use of AI in an ethical manner:

Transparency

Transparency refers to the ability to understand and explain the decisions made by AI algorithms. As an example, AI models used for investment trading can be designed to produce not only trading recommendations but also explanations for the underlying reasoning behind the recommendation. This can involve generating an auditable report that outlines the key factors, indicators, and market conditions that influenced an investment decision.

This transparency can help in validating the integrity of the AI-driven decisions and actions, ensuring that they align with regulatory requirements and market expectations. Having the ability to review the decision-making rationale can assist in identifying potential biases or anomalies in automated actions and decisions allowing for measures to be taken to address any issues.

Data Traceability

Understanding the origin and quality of the data used to train and validate algorithms is key to the ethics of AI. These datasets may contain sensitive information and may be subject to biases or inaccuracies. For instance, AI systems can track the lineage of financial data, documenting which sources it has received its information from such as market feeds, customer transactions, or economic indicators. Additionally, AI algorithms can be trained to flag potential biases or inaccuracies within the data, providing transparency into the quality and integrity of the information used to arrive at a decision or action.

Decision Traceability

Tracing the decision-making process of AI systems to understand how inputs are transformed into outputs is also key for transparency. This includes documenting the sequence of calculations, rules, or features used by AI algorithms to generate predictions or recommendations.

In risk assessment, AI algorithms can provide a clear audit trail of the factors and data points that contributed to the assessment of financial risk, offering transparency into the decision-making process. This can include the identification of key variables, statistical models used, and the underlying rationale for risk predictions.

User Interface Design

AI systems should have user-friendly interfaces that facilitate understanding and interaction. Visualisations, dashboards, and interactive tools can help users explore AI outputs, interpret results, and gain insights into underlying patterns or trends.

Communication and Disclosure

When using AI, financial institutions should provide transparent explanations to external stakeholders, including clients, regulators, and the public, of how AI is used in their products and services, covering its limitations, risks, and potential impacts.

Continuous Monitoring and Auditing

Regular assessments of model performance, data quality, and compliance with ethical and regulatory standards are needed to maintain transparency and ensure the responsible deployment of AI. This could form part of a compliance monitoring programme, or similar review, to ensure businesses are complying and managing their risks on an ongoing basis.

Fairness and bias in AI are complex technical challenges that require careful consideration and specialised techniques to address effectively. Addressing them requires continuous monitoring and evaluation, and should form part of the periodic reviews undertaken.

 

Human Oversight and Intervention

Integrating human oversight and intervention mechanisms into AI systems can help detect and correct biases that may not be apparent from data alone. Instead of relying solely on accuracy, using alternative evaluation metrics can provide a more comprehensive assessment of model performance, especially on imbalanced datasets.
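A small illustration of the point about evaluation metrics, with invented figures: on an imbalanced dataset, a model that never flags the rare event still scores high accuracy, while its recall exposes the failure.

```python
# Invented example: 2 positive cases among 100 records, and a model
# that never raises a flag. Accuracy looks excellent; recall does not.
actual    = [1] * 2 + [0] * 98    # 2 rare positives among 100 records
predicted = [0] * 100             # a model that never flags anything

accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)
true_pos = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
recall = true_pos / sum(actual)   # share of real positives caught
```

Accuracy comes out at 0.98 while recall is 0.0: the model catches none of the cases that matter, which only the alternative metric reveals.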

When including human oversight as a bias prevention mechanism, it is important to recognise that humans can also be blind to biases and therefore shouldn’t be the sole measure used to defend against biases in AI algorithms.

Local Industry Perspective
Resources and Cost
Resource availability and cost were significant concerns, particularly for organisations facing cost pressures and a need to demonstrate a return on investment.
Accuracy
Reliable and accurate results from AI systems are crucial to organisations when considering reputation.
Data Quality
The quality of the data used by AI models is crucial for accuracy, and organisations must consider its impact on operations.
Regulator
A clear message and direction from the financial services regulator on the use of AI would support adoption.
Collaboration
Collaboration between industry participants and technology providers is essential to drive innovation in the industry and develop a robust ecosystem.
Findings

Respondents’ Profile

80%

have worked in FRPS for 10+ years

40%

have been in their role for less than 2 years


Understanding of AI

59%

have a moderate understanding of AI

Only 2%

rate their understanding as low or very low


Training and Upskilling

70%

haven’t received any training or education in AI


Challenges with Adopting AI

Inaccuracies
Data Privacy and Security

Opportunities and the Positive Impact of AI

Enabled New Capabilities
Improved Accuracy
Increased Efficiency
Reduced Workload

The Threat of AI

75%

do not perceive AI to be a threat

However

job security is the biggest threat identified


Using AI for Tasks

Reporting
Communication
Analysis
Document Creation
Skills
Through automating routine tasks, AI provides an opportunity to free up employees’ time to focus on higher-value activities such as strategic decision-making, planning and risk management, whilst ensuring optimal data governance and security. However, as AI increasingly automates tasks, employees will also need to develop new skills – both technical and ‘soft’ – to effectively use these tools and technologies. Financial institutions will need to invest in training and development programmes to help employees acquire new skills and adapt existing skills to the changing landscape.
Data Bias Identification
Data Manipulation
Data Science & Analysis
Data Visualisation
Knowledge of Big Data Technologies
Programming
Communication
Problem-solving
Adaptability
Collaboration and Leadership
Creativity
Critical Thinking
Emotional Intelligence
Ethical Awareness
Resilience