- Jersey Finance
- 12/5/25
In the 1950s, Alan Turing also introduced what we now know as the Turing Test: if a machine can convince a human it is chatting with that it, too, is human, the machine can be said to be thinking.
Artificial general intelligence (often shortened to AGI): the ability to accomplish virtually any goal or cognitive task, including learning, at a level equivalent to human intelligence and without human input
Artificial intelligence (AI): non-biological intelligence
Backpropagation: the algorithms that enable artificial neural networks to learn, through a process of incrementally reducing the error between known outcomes and model predictions during training cycles
Deep learning: a concept loosely based on the brain that recognises patterns in data to gain insights beyond human ability; for example, to distinguish between the sonar acoustic profiles of submarines, mines and other sea life, a deep learning system doesn't require human programming to tell it what a certain profile is, but it does need large amounts of data
Deep neural network: uses sophisticated mathematical modelling to process data in complex ways, through a greater number of layers than a simple neural network
Generative AI: existing data is used to generate new information; for example, predictive text looks at past data to predict the next word in a sequence
Intelligence: the ability to achieve complex goals
Narrow intelligence: the ability to achieve a narrow set of goals, such as playing chess
Natural language processing (NLP): when a computer interprets and understands human language and the way and context in which it's spoken or written; the aim is to deliver more human-like outputs or responses
Neural network: a group of interconnected 'neurons' that have the ability to influence each other's behaviour
Machine learning: the ability of a machine to learn without being explicitly programmed; the algorithms used improve through experience, either predictively using historic data or generatively using new data
Predictive analytics: similar to machine learning but narrower in scope, predictive analytics has a very specific purpose, which is to use historical data to predict the likelihood of a future outcome; for example, risk-based models of when a stock may fall
Reinforcement learning: a type of machine learning technique that enables an AI system to learn in an interactive environment by trial and error, using feedback from its own actions and experiences
Robotic process automation (RPA): software that's built to automate a sequence of primarily graphical, repetitive tasks
Supervised learning: uses labelled datasets to train algorithms to predict outcomes and recognise patterns.
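The supervised learning entry above can be sketched in miniature: a classifier is given labelled examples and predicts the label of a new point from them. This is a minimal 1-nearest-neighbour illustration, not any specific product; the feature values and risk labels are invented for the example.

```python
# Minimal supervised learning sketch: 1-nearest-neighbour classification.
# The labelled dataset and its (income, debt) -> risk-band labels are
# invented purely for illustration.

def predict(train, point):
    """Return the label of the labelled training example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda ex: dist(ex[0], point))
    return label

# Labelled dataset: (features, label) pairs.
labelled = [
    ((1.0, 9.0), "high-risk"),
    ((2.0, 8.0), "high-risk"),
    ((8.0, 1.0), "low-risk"),
    ((9.0, 2.0), "low-risk"),
]

print(predict(labelled, (8.5, 1.5)))  # nearest labelled examples are low-risk
```

The labels in the training data are what make this "supervised": the algorithm never needs to be told the rule, only shown examples of correct outcomes.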
Transparency refers to the ability to understand and explain the decisions made by AI algorithms. As an example, AI models used for investment trading can be designed to produce not only trading recommendations but also explanations of the underlying reasoning behind them. This can involve generating an auditable report that outlines the key factors, indicators, and market conditions that influenced an investment decision.
This transparency can help in validating the integrity of AI-driven decisions and actions, ensuring that they align with regulatory requirements and market expectations. The ability to review the decision-making rationale can assist in identifying potential biases or anomalies in automated actions and decisions, allowing measures to be taken to address any issues.
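The kind of auditable recommendation described above can be sketched simply: the model returns its decision together with the weighted factors that produced it. This is a hypothetical illustration, not a real trading model; the signal names, weights, and threshold are all invented.

```python
# Hypothetical sketch of an explainable trading recommendation: the output
# includes the contribution of each factor, forming an auditable record.
# Signal names and weights are invented for illustration.

def recommend(signals, weights):
    """Score weighted signals; return (recommendation, auditable explanation)."""
    contributions = {name: signals[name] * weights[name] for name in signals}
    score = sum(contributions.values())
    decision = "buy" if score > 0 else "sell"
    explanation = {
        "decision": decision,
        "score": round(score, 3),
        # Factors sorted by influence, largest first - the audit trail.
        "factors": sorted(contributions.items(), key=lambda kv: -abs(kv[1])),
    }
    return decision, explanation

signals = {"momentum": 0.6, "valuation": -0.2, "sentiment": 0.1}
weights = {"momentum": 1.0, "valuation": 1.5, "sentiment": 0.5}
decision, report = recommend(signals, weights)
```

A reviewer can then see not only that the decision was "buy", but that momentum was the dominant factor, which is exactly the kind of rationale a regulator or compliance team would want to inspect.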
Understanding the origin and quality of the data used to train and validate algorithms is key to the ethics of AI. These datasets may contain sensitive information and may be subject to biases or inaccuracies. For instance, AI systems can track the lineage of financial data, documenting the sources from which information was received, such as market feeds, customer transactions, or economic indicators. Additionally, AI algorithms can be trained to flag potential biases or inaccuracies within the data, providing transparency into the quality and integrity of the information used to arrive at a decision or action.
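Lineage tracking of the kind described above can be sketched as records that carry their source with them, plus a basic quality check. This assumes a simple in-memory pipeline; the source names, fields, and quality check are illustrative only.

```python
# Minimal sketch of data lineage and a crude quality check, assuming each
# record carries its documented source. Source names and values are invented.
from dataclasses import dataclass

@dataclass
class Record:
    source: str   # e.g. "market_feed", "customer_transactions"
    values: dict

def lineage(records):
    """Count how many records came from each documented source."""
    counts = {}
    for r in records:
        counts[r.source] = counts.get(r.source, 0) + 1
    return counts

def missing_share(records, field):
    """Share of records missing a field - one simple data-quality signal."""
    missing = sum(1 for r in records if r.values.get(field) is None)
    return missing / len(records)

data = [
    Record("market_feed", {"price": 101.2}),
    Record("market_feed", {"price": None}),
    Record("customer_transactions", {"price": 99.8}),
]
```

Keeping the source on every record means any later decision can be traced back to the feeds that informed it, and simple checks like `missing_share` surface quality problems before they reach a model.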
Tracing the decision-making process of AI systems to understand how inputs are transformed into outputs is also key for transparency. This includes documenting the sequence of calculations, rules, or features used by AI algorithms to generate predictions or recommendations.
In risk assessment, AI algorithms can provide a clear audit trail of the factors and data points that contributed to the assessment of financial risk, offering transparency into the decision-making process. This can include the identification of key variables, statistical models used, and the underlying rationale for risk predictions.
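The audit trail described above can be sketched as a risk score that records every contributing variable alongside its weight and contribution. This is a hypothetical illustration of the pattern, not a real risk model; the variables and weights are invented.

```python
# Hypothetical sketch of a risk-assessment audit trail: each variable's
# value, weight, and contribution are logged so the final score can be
# traced back to its inputs. Variables and weights are invented.

def assess_risk(variables, weights):
    """Compute a weighted risk score plus a step-by-step audit trail."""
    trail = []
    score = 0.0
    for name, value in variables.items():
        contribution = value * weights[name]
        score += contribution
        trail.append({"variable": name, "value": value,
                      "weight": weights[name], "contribution": contribution})
    return score, trail

variables = {"leverage": 0.8, "volatility": 0.5, "liquidity": 0.2}
weights = {"leverage": 2.0, "volatility": 1.0, "liquidity": -1.0}
score, trail = assess_risk(variables, weights)
```

The trail gives exactly what the text calls for: the key variables, the model's weighting of each, and the rationale for the resulting risk prediction, in a form that can be stored and audited.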
AI systems should have user-friendly interfaces that facilitate understanding and interaction. Visualisations, dashboards, and interactive tools can help users explore AI outputs, interpret results, and gain insights into underlying patterns or trends.
When using AI, financial institutions should provide transparent explanations to external stakeholders, including clients, regulators, and the public, of how AI is used in their products and services, including its limitations, risks, and potential impacts.
Regular assessments of model performance, data quality, and compliance with ethical and regulatory standards are needed to maintain transparency and ensure the responsible deployment of AI. This could form part of a compliance monitoring programme, or similar review, to ensure businesses are meeting their obligations and managing their risks on an ongoing basis.
Fairness and bias in AI are complex technical challenges that require careful consideration and specialised techniques to address effectively. They require continuous monitoring and evaluation and should form part of the periodic reviews undertaken.
Human Oversight and Intervention
Integrating human oversight and intervention mechanisms into AI systems can help detect and correct biases that may not be apparent from data alone. Instead of relying solely on accuracy, using alternative evaluation metrics can provide a more comprehensive assessment of model performance, especially on imbalanced datasets.
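The point above about relying solely on accuracy can be shown with a small worked example: on an imbalanced dataset, a model that always predicts the majority class looks accurate while missing every minority case. The class split and labels are invented for illustration.

```python
# Sketch of why accuracy misleads on imbalanced data: a classifier that
# always predicts the majority class scores 95% accuracy but 0% recall
# on the rare positive class (1). The data split is invented.

def accuracy(y_true, y_pred):
    """Share of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred, positive=1):
    """Share of actual positive cases the model correctly identified."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    actual = sum(t == positive for t in y_true)
    return tp / actual if actual else 0.0

y_true = [0] * 95 + [1] * 5   # 95% majority class, 5% rare positives
y_pred = [0] * 100            # a model that always predicts the majority

print(accuracy(y_true, y_pred))  # 0.95 - looks impressive
print(recall(y_true, y_pred))    # 0.0 - misses every rare case
```

Metrics such as recall, precision, or F1 expose this failure immediately, which is why they give a more comprehensive assessment than accuracy alone on imbalanced datasets.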
When including human oversight as a bias-prevention mechanism, it is important to recognise that humans can also be blind to biases and therefore shouldn't be the sole measure used to defend against biases in AI algorithms.
have worked in FRPS for 10+ years
have been in their role for less than 2 years
have a moderate understanding of AI
rate their understanding as low or very low
haven’t received any training or education in AI
do not perceive AI to be a threat
job security is the biggest threat identified