Jersey Finance | 13/8/25
In the 1950s, Turing also introduced what we now know as the Turing Test: if a machine could trick the human it was chatting with into believing it too was human, the machine could be assumed to be capable of thinking.
Artificial general intelligence (often shortened to general intelligence) is the ability to accomplish virtually any goal or cognitive task, including learning, at a level equivalent to human intelligence and without human input
Non-biological intelligence
The algorithms that enable artificial neural networks to learn, through a process of incrementally reducing the error between known outcomes and model predictions during training cycles
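The idea of incrementally reducing the error between known outcomes and model predictions can be sketched with a one-parameter model trained by gradient descent. This is an illustrative toy, not a real neural network; actual systems apply the same principle across millions of weights via backpropagation, and the data and learning rate below are invented.

```python
# Illustrative sketch: fitting y = w * x by repeatedly nudging w
# to reduce the squared error between predictions and known outcomes.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # known (input, outcome) pairs; true w is 2

w = 0.0           # initial guess for the single model weight
lr = 0.05         # learning rate: how far to step each training cycle

for cycle in range(200):                  # training cycles
    grad = 0.0
    for x, y in data:
        error = w * x - y                 # prediction minus known outcome
        grad += 2 * error * x             # gradient of squared error w.r.t. w
    w -= lr * grad / len(data)            # step downhill, shrinking the error

print(round(w, 3))  # converges towards 2.0
```

Each cycle measures the remaining error and adjusts the weight to reduce it, which is the essence of how training algorithms teach artificial neural networks.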
A concept loosely based on the brain that recognises patterns in data to gain insight beyond the ability of humans. For example, to distinguish between the sonar acoustic profiles of submarines, mines and other sea life, a deep learning system doesn’t require human programming to tell it what a certain profile is, but it does need large amounts of data
Uses sophisticated mathematical modelling to process data in complex ways, through a greater number of layers than a neural network
Existing data is used to generate new information; for example, predictive text looks at past data to predict the next word in a sequence
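The predictive-text example above can be sketched as a bigram model: the simplest way to use existing data to generate the next word in a sequence. The tiny corpus is invented for illustration.

```python
from collections import Counter, defaultdict

# Illustrative sketch: a bigram model predicts the next word from past data,
# the simplest form of "existing data is used to generate new information".
corpus = "the market rose and the market fell and the market rose again".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1             # count which word follows which

def predict_next(word):
    # Most frequent continuation seen in the historical data
    return following[word].most_common(1)[0][0]

print(predict_next("the"))     # "market" always follows "the" in this corpus
print(predict_next("market"))  # "rose" (seen twice) beats "fell" (seen once)
```

Modern generative models do the same thing at vastly greater scale, conditioning on far more context than a single preceding word.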
The ability to achieve complex goals
The ability to achieve a narrow set of goals, such as playing chess
When a computer interprets and understands human language and the way and context in which it’s spoken or written; the aim is to deliver more human-like outputs or responses
A group of interconnected ‘neurons’ that have the ability to influence each other’s behaviour
The ability of a machine to learn without being programmed; the algorithms used improve through experience, either predictively using historic data or generatively using new data
Similar to machine learning but narrower in scope, predictive analytics has a very specific purpose, which is to use historical data to predict the likelihood of a future outcome; for example, risk-based models on when a stock may fall
A type of machine learning technique that enables an AI system to learn in an interactive environment by trial and error using feedback from its own actions and experiences
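Trial-and-error learning from feedback can be sketched with tabular Q-learning on a toy environment: an agent on a four-cell track that is never told the rules, only rewarded for reaching the goal. The environment, reward and hyperparameters are invented for illustration.

```python
import random

# Illustrative sketch of reinforcement learning: an agent on a four-cell
# track (states 0-3) learns purely from feedback that moving right reaches
# the reward at state 3.
random.seed(0)
n_states, actions = 4, [-1, +1]            # actions: step left or step right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2          # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != 3:                           # state 3 ends the episode
        if random.random() < eps:           # occasionally explore at random
            a = random.choice(actions)
        else:                               # otherwise exploit what was learned
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == 3 else 0.0    # feedback from the environment
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

policy = ["right" if Q[(s, +1)] > Q[(s, -1)] else "left" for s in range(3)]
print(policy)
```

No one programs the policy; it emerges from the agent's own actions and the feedback they produce, which is the defining feature of this technique.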
Software that’s built to automate a sequence of primarily graphical repetitive tasks
Supervised learning uses labelled datasets to train algorithms in order to predict outcomes and recognise patterns.
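A minimal sketch of supervised learning: labelled examples train a nearest-centroid classifier, which then predicts labels for inputs it has never seen. The features and risk labels are invented for illustration.

```python
# Illustrative sketch of supervised learning: labelled examples "train" a
# nearest-centroid classifier that then predicts labels for unseen inputs.
labelled = [
    ([1.0, 1.2], "low_risk"), ([0.8, 1.0], "low_risk"),
    ([3.0, 3.5], "high_risk"), ([3.2, 2.9], "high_risk"),
]  # (features, label) pairs; values are hypothetical

# Training step: compute the average feature vector (centroid) per label
centroids = {}
for label in {lab for _, lab in labelled}:
    rows = [feats for feats, lab in labelled if lab == label]
    centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]

def predict(features):
    # Predict the label whose centroid lies closest to the new input
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

print(predict([0.9, 1.1]))  # low_risk
print(predict([3.1, 3.0]))  # high_risk
```

The labels supervise the training: the model's only knowledge of "risk" comes from the outcomes humans attached to the historical examples.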
The evolution of Generative AI models is expected to accelerate, with models becoming more nuanced in understanding context, irony, and the subtleties of human communication.
Progress in computer vision will likely yield models that are not only more accurate but also more efficient, capable of running on devices like smartphones and Internet of Things (IoT) devices. This can lead to more real-time applications, such as instant visual translations or advanced augmented reality experiences.
As AI becomes more pervasive, there will be a greater need for ethical guidelines and regulatory frameworks to manage issues of bias, privacy, and fairness. Expect more institutions to form AI ethics boards, and governments to begin drafting and implementing regulations to control AI deployment, especially in sensitive areas such as surveillance, facial recognition, and personal data usage.
AI-driven chatbots and virtual assistants will become more sophisticated, handling a wider array of customer inquiries and transactions, which will help financial institutions reduce costs and improve customer satisfaction.
Enhanced machine learning models will provide more accurate detection of fraudulent activities by recognising patterns across vast datasets that human analysts might miss.
AI will continue to be integrated into trading strategies, improving the speed and efficiency of market transactions, and enabling high-frequency trading firms to capitalise on minute market changes.
With advanced technologies like artificial intelligence and machine learning, RegTech can provide real-time monitoring of transactions and activities. This helps in early detection of suspicious activities and ensures timely reporting to regulatory bodies.
As AI improves, we will see transformative tools designed to make AI accessible to a broader audience, including those without specialised technical skills. By employing graphical interfaces and pre-built elements, low-code platforms offer a balance that appeals to users with some technical knowledge, allowing for minimal coding in developing AI applications.
The deployment of AI offers potent tools for real-time threat detection and monitoring, fundamentally transforming how financial institutions protect their assets and assess the security of their IT infrastructure. However, the swift integration of AI necessitates that the industry proactively addresses potential vulnerabilities introduced by these systems, such as susceptibility to novel forms of cyber-attacks and challenges to data integrity. To effectively manage these risks, institutions must develop robust AI governance frameworks and invest in specialised cybersecurity measures. This proactive approach is essential for maintaining trust and securing financial transactions in the increasingly digital landscape of financial services.
Personalisation engines will become more adept at predicting individual needs and preferences, enabling hyper-personalised recommendations in retail, adaptive learning plans in education, and customised treatment in healthcare. These systems would leverage continuous feedback loops to improve their accuracy over time.
For example, a financial advisory firm could use AI to provide personalised investment advice to its clients. The technology analyses the information provided by the client, and through continuous monitoring of financial markets and economic indicators alongside the client’s risk appetite and financial goals, will provide personalised investment recommendations. Through continuous learning, AI adapts to changes in the client’s situation and provides alerts to help them stay informed and make proactive decisions.
Machine learning models will incorporate a broader set of data points, including non-traditional and unstructured data, to evaluate credit risk more accurately and offer personalised lending rates.
AI will enable more personalised banking experiences through sophisticated algorithms that analyse an individual’s spending habits, investment preferences, and financial goals to provide customised advice and product recommendations.
AI applications will streamline regulatory compliance by automating the tracking and reporting of financial transactions, assisting institutions in staying abreast of regulatory changes and reducing compliance-related costs.
At the international level, we will see the emergence and evolution of AI governance bodies similar to the World Trade Organisation for trade or the International Atomic Energy Agency for nuclear energy, which establish and monitor compliance with global AI standards.
Developments in AI could lead to autonomous systems designed to manage clients’ investments over the long term with minimal human intervention.
For example, using autonomous systems, the client provides their financial goals, risk tolerance and investment preferences during the onboarding process and the AI uses this data to create a personalised investment strategy, automatically rebalancing portfolios based on intelligence from financial markets and the client’s evolving goals. The system can identify and capitalise on short-term market opportunities while adhering to the long-term investment strategy.
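The core rebalancing step in such a system can be sketched in a few lines: given target weights derived from the client's strategy, compute the trades that restore the portfolio to target. The holdings and weights below are hypothetical.

```python
# Illustrative sketch of automatic rebalancing: compute the buy (+) / sell (-)
# amount per asset needed to restore the portfolio to its target weights.
def rebalance(holdings, target_weights):
    """Return the trade amount per asset that hits the target weights."""
    total = sum(holdings.values())
    return {asset: round(total * target_weights[asset] - value, 2)
            for asset, value in holdings.items()}

holdings = {"equities": 70_000, "bonds": 20_000, "cash": 10_000}  # hypothetical
targets = {"equities": 0.60, "bonds": 0.30, "cash": 0.10}         # from strategy

print(rebalance(holdings, targets))
# Equities drifted above target, so the system sells 10,000 of equities
# and buys 10,000 of bonds; cash is already on target.
```

An autonomous system would run this check continuously, with the target weights themselves adjusted as the AI's market intelligence and the client's goals evolve.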
Steps towards AGI could be seen in the form of more ‘transferable’ intelligence, where an AI trained in one domain can apply its understanding to another without starting from scratch. This cross-domain learning ability would be a significant step toward more generalised forms of AI.
The sophistication of AI in analysing market trends and managing investment portfolios will lead to more widespread adoption of robo-advisors, which will provide personalised investment advice at a fraction of the cost of human advisors.
If quantum computing advances as anticipated, it could solve complex financial models exponentially faster than classical computers, potentially revolutionising areas like asset pricing, portfolio optimisation, and risk assessment.
DeFi is a financial system that operates on a decentralised network of computers rather than a central authority such as a bank or government institution. AI might play a pivotal role in this emerging space by providing intelligent contract management, risk assessment, and liquidity analysis, further removing the need for traditional financial intermediaries.
Transparency refers to the ability to understand and explain the decisions made by AI algorithms. As an example, AI models used for investment trading can be designed to produce not only trading recommendations but also explanations for the underlying reasoning behind the recommendation. This can involve generating an auditable report that outlines the key factors, indicators, and market conditions that influenced an investment decision.
This transparency can help in validating the integrity of the AI-driven decisions and actions, ensuring that they align with regulatory requirements and market expectations. Having the ability to review the decision-making rationale can assist in identifying potential biases or anomalies in automated actions and decisions allowing for measures to be taken to address any issues.
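The idea of a recommendation that carries its own rationale can be sketched with a simple rule-based recommender that returns the decision together with the reasons behind it, forming a basic auditable record. The indicators, thresholds and decision rule are invented for illustration; real systems would be far richer.

```python
# Illustrative sketch: a recommender that outputs not just a recommendation
# but the reasons behind it, forming a simple auditable report.
def recommend(indicators):
    reasons = []
    if indicators["pe_ratio"] < 15:
        reasons.append(f"P/E ratio {indicators['pe_ratio']} below 15 (undervalued)")
    if indicators["momentum"] > 0:
        reasons.append(f"positive 90-day momentum of {indicators['momentum']:.1%}")
    action = "buy" if len(reasons) >= 2 else "hold"  # hypothetical decision rule
    return {"action": action, "reasons": reasons}     # decision plus rationale

report = recommend({"pe_ratio": 12.4, "momentum": 0.03})
print(report["action"])       # buy
for reason in report["reasons"]:
    print("-", reason)        # auditable explanation of the decision
```

Because every output carries the factors that produced it, a reviewer or regulator can trace any individual recommendation back to the conditions that triggered it.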
Understanding the origin and quality of the data used to train and validate algorithms is key to the ethics of AI. These datasets may contain sensitive information and may be subject to biases or inaccuracies. For instance, AI systems can track the lineage of financial data, documenting which sources it has received its information from such as market feeds, customer transactions, or economic indicators. Additionally, AI algorithms can be trained to flag potential biases or inaccuracies within the data, providing transparency into the quality and integrity of the information used to arrive at a decision or action.
Tracing the decision-making process of AI systems to understand how inputs are transformed into outputs is also key for transparency. This includes documenting the sequence of calculations, rules, or features used by AI algorithms to generate predictions or recommendations.
In risk assessment, AI algorithms can provide a clear audit trail of the factors and data points that contributed to the assessment of financial risk, offering transparency into the decision-making process. This can include the identification of key variables, statistical models used, and the underlying rationale for risk predictions.
AI systems should have user-friendly interfaces that facilitate understanding and interaction. Visualisations, dashboards, and interactive tools can help users explore AI outputs, interpret results, and gain insights into underlying patterns or trends.
When using AI, financial institutions should give external stakeholders, including clients, regulators and the public, transparent explanations of how AI is used in their products and services, including its limitations, risks and potential impacts.
Regular assessments of model performance, data quality, and compliance with ethical and regulatory standards are needed to maintain transparency and ensure the responsible deployment of AI. This could form part of a compliance monitoring programme, or similar review, to ensure businesses are compliant and managing their risks on an ongoing basis.
Fairness and bias in AI are complex technical challenges that require careful consideration and specialised techniques to address effectively. Addressing them requires continuous monitoring and evaluation, and should form part of the periodic reviews undertaken.
Human Oversight and Intervention
Integrating human oversight and intervention mechanisms into AI systems can help detect and correct biases that may not be apparent from data alone. Instead of relying solely on accuracy, using alternative evaluation metrics can provide a more comprehensive assessment of model performance, especially on imbalanced datasets.
When including human oversight as a bias prevention mechanism, it is important to recognise that humans can also be blind to biases and therefore shouldn’t be the sole measure used to defend against biases in AI algorithms.
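The point about alternative evaluation metrics can be made concrete with a tiny example: on an imbalanced fraud dataset, accuracy rewards a model that flags nothing, while recall exposes its failure. The counts below are invented for illustration.

```python
# Illustrative sketch: on an imbalanced dataset, accuracy alone is misleading.
actual    = [0] * 95 + [1] * 5   # 95 legitimate, 5 fraudulent transactions
predicted = [0] * 100            # a naive model that never flags fraud

accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)
true_pos = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
recall = true_pos / sum(actual)  # share of real fraud actually caught

print(accuracy)  # 0.95 -- looks excellent
print(recall)    # 0.0  -- catches no fraud at all
```

Metrics such as recall, precision or the F1 score surface exactly the failure modes that a headline accuracy figure, and an unwary human reviewer, can miss.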
Out of 108 responses received, under half (40%) had been in their current role for 0-2 years, 24% for 2-5 years, and 21% for 10+ years.
Interestingly, a large majority of respondents (80%) have at least 10 years of experience in the financial and related professional services (FRPS) sector as a whole, indicating both a wealth of experience in the sector and the seniority of respondents (70% being at Director level and above).
have worked in FRPS for 10+ years
have been in their role for less than 2 years
The majority of respondents (59%) indicated moderate levels of understanding of AI. Only 2% rated their current understanding of AI as very low or low.
have a moderate understanding of AI
rate their understanding as low or very low
When asked whether they had received training or education on AI, the vast majority of respondents reported never having received any specific training in their role, highlighting a clear need for employers to introduce AI training and encourage upskilling to future-proof both the business and employees’ careers.
haven’t received any training or education in AI
Of the 36% of respondents who have worked with AI, nearly two thirds (64%) have experienced challenges working with AI in their current roles, with the top challenges cited as data privacy and security and inaccuracies.
Other challenges identified by the respondents included:
Respondents clearly recognised the opportunities that AI offers both in the workplace and in their specific roles. Many highlighted positive impacts, with those already using AI noting improvements in accuracy, efficiency and a reduced workload. These benefits align closely with the broader industry opportunities mentioned across all responses.
Encouragingly, when asked if AI is seen as a threat, three quarters of the respondents did not view AI as a threat.
Job security was cited as a top concern amongst those who view AI as a threat.
do not perceive AI to be a threat
job security is the biggest threat identified
Organisations currently using AI systems or tools were asked about the tasks AI was being used for.
The most common ‘task types’ AI is used for include document creation, reporting and analysis and communication.
Awareness of bias within datasets used in AI is essential. The ability to assess data for bias ensures fairness while enhancing accuracy by identifying and correcting issues that may distort predictions. In addition, data bias identification addresses ethical considerations by promoting responsible decision-making and bolsters trustworthiness, ultimately fostering greater adoption and effectiveness.
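One basic bias check is to compare outcome rates across a sensitive attribute in the training data before a model ever sees it. The records below are invented for illustration, and a real assessment would use proper statistical tests and domain review.

```python
from collections import defaultdict

# Illustrative sketch of a basic dataset bias check: compare approval rates
# across a (hypothetical) sensitive attribute before training a model on it.
records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

totals, approvals = defaultdict(int), defaultdict(int)
for record in records:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]

rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)  # {'A': 0.75, 'B': 0.25} -- a gap worth investigating before training
```

A model trained on such data would likely reproduce the gap, which is why assessing the dataset itself is the first step towards fair and trustworthy AI.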
Data manipulation remains a vital skill in finance roles. High-quality data is essential for training accurate AI models, making the ability to pre-process, clean, and transform data into suitable formats crucial. Additionally, data manipulation skills enable professionals to integrate diverse data sources into cohesive datasets for analysis and modelling, which is essential for informed decision-making.
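The routine pre-processing described above can be sketched as follows: normalising mixed-format transaction records into one clean dataset before any modelling. The sample rows and field names are invented for illustration.

```python
# Illustrative sketch of data pre-processing: cleaning and transforming
# inconsistent transaction records into a suitable format for analysis.
raw = [
    {"amount": "1,200.50", "currency": "gbp", "date": "2024-01-05"},
    {"amount": " 300 ",    "currency": "GBP", "date": "2024-01-06"},
    {"amount": "",         "currency": "gbp", "date": "2024-01-07"},  # missing value
]

clean = []
for row in raw:
    amount = row["amount"].replace(",", "").strip()
    if not amount:
        continue                          # drop rows with missing amounts
    clean.append({
        "amount": float(amount),          # consistent numeric type
        "currency": row["currency"].upper(),  # consistent casing
        "date": row["date"],
    })

print(clean)  # two well-formed rows ready for analysis or model training
```

Unglamorous steps like these determine model quality: an AI model trained on the raw rows would silently mishandle the thousands separator and the missing value.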
Data analysis and interpretation will continue to be essential for finance professionals, as insights from vast financial datasets can be extracted using AI-based techniques. Despite AI’s automation capabilities, human oversight remains crucial for ensuring the accuracy and relevance of AI-generated insights. Data analysis skills allow professionals to discern underlying patterns, trends and anomalies.
Proficiency in data visualisation will be even more crucial with AI adoption. Visualisation tools help professionals communicate complex financial insights clearly, aiding decision-making. They facilitate exploratory data analysis, help interpret AI model outputs and support performance monitoring for regulatory compliance.
Big data technologies are a comprehensive suite of tools and frameworks designed to adeptly handle large volumes of both structured and unstructured data. These technologies can collectively help organisations to extract valuable insights from extensive datasets, facilitating informed decision-making and fostering innovation.
Proficiency in programming languages such as Python or R is highly valued as these languages offer powerful tools and libraries for data analysis, machine learning, and statistical modelling. This enables finance professionals to extract insights, develop predictive models, and seamlessly integrate AI-driven solutions within existing workflows.
Effective communication skills remain key as AI integration increases. Having the ability to bridge the divide between technical experts and non-technical stakeholders is crucial in explaining complex concepts to diverse audiences.
AI adoption introduces unique challenges that require innovative critical thinking skills.
Proficient problem-solving empowers organisations and individuals to enhance the efficacy of their AI systems. While AI algorithms excel at processing large volumes of data and identifying patterns, human intervention is essential for interpreting results, contextualising insights, and devising actionable solutions. Employees who can problem-solve are extremely valuable to the workforce as they have the ability to troubleshoot and continuously optimise AI systems for better performance, accuracy and outputs.
The rapid progression of AI technologies highlights the need for both adaptability among professionals and relevant training resources to be provided by employers. The ability, and desire, to learn and adapt will prove essential in the job market as AI becomes more embedded in the financial service sector.
Collaboration is a fundamental component for the successful incorporation of AI within organisational structures, demanding proficient coordination with multidisciplinary teams to implement and embed AI solutions into operational workflows.
Effective and forward-thinking leadership plays a pivotal role in the adoption of AI by guiding organisational change, driving innovation, fostering collaboration and facilitating external partnerships.
The significance of creative thinking in leveraging AI technologies should not be underestimated. Thinking outside the box enables professionals to explore unconventional approaches to the various areas where AI is deployed, for example, improve investment strategies, mitigate risks and enhance customer services.
In the workplace, critical thinking is central to optimising outcomes across diverse daily functions and analytical endeavours and this can be applied to the application of AI within an organisation. Critical thinking involves the objective analysis of information, the interrogation of assumptions and the consideration of diverse perspectives – all of which are imperative to making well-informed decisions.
Emotional intelligence helps in grasping and managing the emotional impact of AI adoption on individuals, building trust and promoting teamwork. This is essential for those in managerial positions who are introducing AI systems and methodologies within teams that have not previously applied it, as they may encounter different attitudes to the implementation of AI within their business.
Ethical awareness is increasingly recognised as a vital soft skill essential for the effective integration of AI. This competency involves understanding the ethical implications inherent in AI technologies, for example:
Therefore, the importance of incorporating ethical considerations into the design and decision-making processes of AI systems is crucial.
Resilience encompasses the capacity to bounce back from setbacks and adapt to challenges encountered. It requires maintaining motivation, demonstrating perseverance and fostering a positive attitude in the face of adversity. Resilience plays a crucial role in overcoming obstacles inherent in AI-related projects, such as technical glitches, data quality issues and resistance to organisational change. Professionals with resilience can extract valuable insights from failures, refine solutions iteratively and drive continuous improvement in AI initiatives.