FINx Your Questions Answered: AI

26 Feb 2026

The level of engagement across #FINx2025 made one thing clear: there is strong interest in the topics discussed. While we weren’t able to cover every question during the live sessions, we’ll be addressing them as part of our FINx: Your Questions Answered series.

Thank you to the expert panellists from the event, Professor Alan W. Brown, Jill Britton, Tony Moretta and Emma German, for offering further clarity and insight.

Professor Alan W. Brown, Professor in Digital Economy, University of Exeter, UK
Jill Britton, Director General, Jersey Financial Services Commission (JFSC)
Tony Moretta, CEO, Digital Jersey
Emma German, Principal, Monoceros Law
Do you believe in exponential tech change? If so, how close are we to the inflexion point and how will Jersey institutions cope with the pace of change?

Alan Brown: I believe we are witnessing a very significant shift in technical capability. The speed of technological advancement and its adoption is remarkable. There is no doubt that AI-driven technologies are, in many cases, already operating at or beyond human capability (e.g. in some areas of health diagnostics). This will increase in the coming months. Jersey, like others, will need to find ways to cope with these changes and understand the implications for all aspects of business and society. Greater focus on building more agile change mechanisms will be vital.

When talking about AI, do you purely mean large language models (LLMs) or are you referring to different technologies?

Alan Brown: AI covers a broad area of technology. I usually distinguish three broad, overlapping categories.

  • Generative AI is LLM-based and produces outputs by predicting likely content from large amounts of training data. Responses are refined over time as the system learns from feedback.
  • Predictive AI uses historical data and a variety of data science techniques to analyse new situations and predict their characteristics. It is widely used in areas such as health, science and manufacturing.
  • Physical AI covers physical devices controlled and operated by AI, for example in warehouses and car assembly plants. These devices learn from their surroundings and improve their operation through feedback.

There is growing concern about an AI bubble. Do you think this is valid? How are we positioned to handle the situation if the bubble bursts?

Alan Brown: I don’t think there is any doubt we are in an AI bubble. The over-hype and commercially driven excitement about AI are inevitable given the massive investments taking place. The key question, as with all bubbles, is what remains after the bubble bursts, and how can that be leveraged to best effect? For AI, the new algorithms, data analytics techniques and data sources will remain and offer great value to organisations as they move forward.

What can you do to support the safe adoption of AI with controlled risks and at scale? There appears to be a disconnect between enthusiasm and realistic adoption.

Alan Brown: As with all digital transformation, strong governance is essential to success. Adopting new technologies always demands that change management be a key element of the governance approach, and AI is no different in this regard. Equally, expectations must be managed to align with the reality of adopting and scaling new technology. We have enough experience to know that this will take time and require careful management to be successful.

Are we running a risk of not being product agnostic, with some of the focus on local developers who might not offer the best solution or operate across jurisdictions?

Digital Jersey: Yes, there is a manageable risk. Jersey must remain product agnostic and ensure local developers complement (rather than replace) globally competitive solutions. Digital Jersey can mitigate this by maintaining open procurement principles, encouraging multi-vendor ecosystems and supporting local developers to upskill and partner with international firms.

With Jersey being limited in capabilities as a small island, can we ever truly be digitally competitive and independent without relying on outsourced tools?

Digital Jersey: Jersey can be digitally competitive, but not by being fully independent of external tools. Instead, competitiveness comes from becoming a rapid adopter, integrating best-in-class global technologies and building unique value through local specialisation, regulatory nimbleness and talent.

How can the financial services industry in Jersey manage the fragmented tech offering landscape at the moment? How can Digital Jersey help with this?

Digital Jersey: Financial services can manage fragmentation through standardised AI governance, shared best-practice frameworks and cross-industry upskilling. Digital Jersey is providing structure by chairing the AI Council, producing shared guidelines, running AI skills programmes and connecting organisations with vetted global and local vendors.

With AI emerging, will a representative of Jersey's education sector be part of the forum, and will AI be implemented in our education curriculum?

Digital Jersey: There are three Government of Jersey representatives on the AI Council, leading on all aspects of the Government's involvement. Additionally, Digital Jersey is working with CYPES and Skills Jersey to develop the Digital Jersey STEM Pathway, which coordinates opportunities in and after education.

How is Digital Jersey going to help educate people on AI?

Digital Jersey: Digital Jersey will educate people through formal training (AI Leadership training), accessible public learning (AI Insight Series), online resources, and partnership-driven programmes that raise AI literacy across the Island.

Will AI integration in businesses risk reducing internship/trainee opportunities for young people if entry-level tasks are managed by AI?

Digital Jersey: Some traditional entry-level tasks will be automated, which may reduce conventional internships. However, new opportunities will emerge in AI-augmented roles, including data work, AI operations, prompt engineering and digital process oversight. The challenge is redesigning early career pathways, not eliminating them.

Do you expect that businesses will adopt a new hiring strategy to incorporate entry-level roles surrounding AI interactions?

Digital Jersey: Yes. As organisations embed AI into operations, hiring strategies will shift toward roles such as AI support analysts, prompt engineers, context engineers, data stewards and automation coordinators. AI-assisted roles may become the new entry-level norm, combining business understanding with AI tool interaction.

Are there any regulations that organisations need to comply with from the JFSC regarding AI?

JFSC: We are not introducing any specific regulations or rules regarding using AI in business. We will be publishing some guidance, following consultation with Jersey’s AI Council, which we hope will be helpful and support businesses that want to use AI. We expect this to be issued in Q1 of 2026.

How is the JFSC using AI?

JFSC: We’ve taken a practical, people-first approach. All staff now have access to Copilot Chat and we’re trialling full Copilot with senior leaders. We’ve also onboarded a dedicated AI resource to help the JFSC develop AI-enabled process improvements, such as early HR and Comms agents.

Will the JFSC be telling firms exactly how the AI used in examinations works? This would seem to be a key part of governance and transparency.

JFSC: We’ll be transparent about the principles and safeguards behind any AI we use in examinations, but we won’t publish proprietary technical models. What firms can expect is clarity on how AI-supported decisions are governed, the factors we consider and the human oversight built into decision-making, because explainability and accountability are essential to our approach.

Is the JFSC using AI to integrate supervision visit data from firms and, if so, does it tell firms it is doing so and if not why not?

JFSC: We are not currently using AI to integrate supervision visit data. We are exploring where AI may assist in future supervisory analytics, but any such use would follow a risk-based, transparent and proportionate framework. If we were to deploy AI in this area, we would ensure firms understand the purpose, governance and safeguards involved, consistent with our commitment to openness and responsible adoption.

How does the Data Protection (Jersey) Law apply to the use of AI in business and what practical guidelines can organisations follow to reduce the risk of data protection breaches?

Emma German:

Application of the Data Protection (Jersey) Law
The Data Protection (Jersey) Law (DPJL) applies only to the processing of personal data, meaning any data relating to an identified or identifiable living person. A person can be identified directly or indirectly by reference to an identifier such as a name, identification number, location data, online identifier (including IP address), or factors specific to their physical, physiological, genetic, mental, economic, cultural or social identity.

The DPJL is therefore only relevant where AI systems involve data about living people, not data about things or purely commercial or technical information.

The DPJL also applies to:

  • pseudonymised data – where that data can still be attributed to an individual by using additional information (i.e. data other than that originally used to anonymise it) – this is relevant to AI solutions trained on numerous datasets, where re-identification risks may arise through correlation or inference; and
  • special category data – certain categories of personal data are subject to enhanced protection. These include data revealing, relating to, or concerning racial or ethnic origin, political opinion, religious beliefs, genetic and biometric data, health data, sex life and sexual orientation, and criminal activity.

The DPJL sets out how personal data should be processed, requires a lawful basis for the processing and grants individuals rights, including rights of access, rectification, objection and erasure (often referred to as the “right to be forgotten”).

When and how the DPJL applies to AI solutions
Whether and to what extent the DPJL applies to a particular AI solution depends on various factors including:

  1. its purpose and use case;
  2. the nature and provenance of the training data;
  3. the data input when the AI solution is deployed;
  4. the roles of the parties using the AI solution (e.g. controller, processor, joint controller); and
  5. the context in which the data outputs are used, particularly where they influence decisions about individuals.

Where AI solutions are trained exclusively on non-personal data (for example, anonymised datasets or data relating only to products, infrastructure or processes) the DPJL may not apply. However, anonymisation must be assessed carefully (see above regarding pseudonymised data).

Territorial Scope of the DPJL
The DPJL applies to:

  • data controllers and processors established in Jersey;
  • data controllers or processors established elsewhere who use equipment in Jersey to process personal data (other than for transit through Jersey); and
  • processing that relates to Jersey data subjects, is for the purpose of offering goods or services to persons in Jersey, or monitors the behaviour of such persons.

In practice, this means the DPJL may apply where, for example:

  • an AI solution is developed or co-developed in Jersey and the developer acts as a data controller or processor;
  • a Jersey business deploys an AI system that processes or otherwise involves personal data;
  • a Jersey resident uses an AI solution that processes or otherwise involves their personal data;
  • personal data relating to Jersey data subjects is used to train or fine-tune AI solutions; or
  • AI outputs are used to make or inform decisions affecting individuals in Jersey.

Cross-jurisdictional implications
Jersey businesses deploying AI solutions must also consider the data protection regimes in other jurisdictions which may apply concurrently, including where:

  • data subjects are located;
  • processors or sub-processors are established; or
  • infrastructure or cloud services are hosted.

International data transfers
Where AI solutions involve international data transfers (for example to the US), Jersey businesses must ensure the DPJL requirements are met regarding data transfer provisions. This requires due diligence on:

  • where data is stored and accessed;
  • whether data is retained, reused or used for model training;
  • contractual protections and transfer mechanisms; and
  • the supplier’s governance, security and transparency practices.

Recent litigation in the US, Cruz v. Fireflies.AI Corp, highlighted the risks of using AI-powered meeting transcription and voice analysis/speaker identification tools on platforms like Fireflies, Zoom and Microsoft Teams. In the case, the plaintiff alleges that California-based tech company Fireflies.AI Corp. is illegally harvesting and storing individuals’ biometric voice data without their knowledge or consent and without the retention safeguards required by Illinois’ Biometric Information Privacy Act. Jersey businesses must therefore conduct supplier due diligence and relevant risk assessments before deploying similar technology.

Practical guidelines
From a data protection perspective, deploying AI solutions should be approached as an extension of existing DPJL compliance obligations. Practical steps include:

  • Understanding the data
    Identify what personal data is used, whether special category data is involved, and whether data may be inferred or re-identified.
  • Lawful basis and purpose limitation
    Clearly document why personal data is being processed and the lawful basis relied upon. Data collected for one purpose should not be repurposed for AI training or analytics without further assessment.
  • Data protection by design and by default
    Embed data minimisation, access controls and security safeguards into the AI system from the outset.
  • Transparency and documentation
    Ensure privacy notices, internal policies and data processing agreements accurately describe AI use, including any automated decision making.
  • Rights and consent management
    Ensure individuals have consented where required and can effectively exercise their DPJL rights, including access and erasure, even where AI models are involved.
  • Data security and lifecycle management
    Understand how data flows through the system, whether inputs are retained, and whether they are used to train or improve models.
  • Risk assessment and governance
    Conduct Data Protection Impact Assessments (DPIAs) where AI use presents a high risk to individuals and ensure appropriate internal oversight.
  • AI specific policies and training
    Maintain clear AI governance policies and ensure staff understand permitted and prohibited uses.

Regulatory guidance
Jersey’s Data Protection Law is closely aligned with the GDPR and the UK Data Protection Act. As a result, guidance issued by the UK Information Commissioner’s Office (the ICO) on AI is a helpful resource for Jersey businesses. Whilst the ICO guidance is UK-specific, it is likely to be persuasive in Jersey as to how the Jersey Office of the Information Commissioner (JOIC) would approach similar issues. See: Guidance on AI and data protection | ICO.

The guidance covers:

  • accountability and governance;
  • lawfulness and purpose limitation;
  • fairness and bias mitigation;
  • transparency and explainability; and
  • accuracy and ongoing monitoring.

AI-Generated Imagery
The JOIC has also issued a statement raising serious concerns about realistic AI-generated imagery that depicts identifiable individuals without their knowledge or consent (see the JOIC statement). These concerns also extend to non-consensual intimate imagery, defamatory depictions and other harmful content, with heightened risks of cyber-bullying and/or exploitation for children and other vulnerable groups.
Jersey businesses must ensure that their use of AI solutions does not enable or facilitate such harms and ensure that appropriate safeguards are in place.

In addition, the Government of Jersey recently consulted on proposals to strengthen online safety and privacy protections. The proposals included measures aimed at improving the removal of illegal content from social media platforms, websites, and search engines and clarifying what types of images and videos may be lawfully shared. If introduced, this legislation would align Jersey more closely with protections in the UK under their Online Safety Act.

If you joined us at FINx, or you’re exploring how tokenisation might work within your organisation, you’ll find insights spanning real-world use cases and market readiness, regulatory considerations and the role jurisdictions like Jersey play in enabling tokenised structures.

Thank you to the expert panellists from the event, Sarah Townsend, Andrew Evans, Suzanne Howe and Elliot Refson, for offering further clarity and insight.

Andrew Morfill, CTO and one of FINx’s keynote speakers from 2025, gives his insights on your stablecoin-related questions.

Gregg Hutchings, Programme Director, Financial Services Skills Commission and one of the keynote speakers at FINx 2025, gives his insights on your digital skills-related questions.