Financial institutions were among the early adopters of AI, so it's certainly not the new kid on the block any more – but even so, we are seeing a new wave of initiatives that will see it used more widely. With that, of course, come both opportunities and risks. So where have we come from, and where are we headed? What excites and concerns firms and their regulators? We take a look at how AI is set to change the ways FIs do business.

What is the FCA doing?

The FCA has been moving fast. It set up its AI Lab in late 2024 to support innovators in developing new AI models and solutions; the AI Lab and the FCA's "Spotlight" initiative will, among other things, help the FCA to understand what stakeholders are thinking and how that thinking is evolving. One component of the AI Lab is AI Live Testing: applications for the first cohort have already closed, but a second cohort is expected before the end of the year. The FCA prefaced AI Live Testing with an engagement paper (in April 2025) on the potential benefits, opportunities and challenges its proposals raised.

The engagement paper was widely welcomed. Respondents thought the environment would be helpful for:

  • Real-world insights: live production testing was seen as a valuable mechanism for understanding how AI models perform under real-world conditions;
  • Overcoming Proof of Concept (PoC) paralysis: many firms reported that AI PoCs often demonstrate technical merit but fail to progress due to concerns such as regulatory uncertainty and skills shortages;
  • From principles to practice: respondents noted a lack of guidance on how to operationalise and measure key AI principles such as fairness, robustness, safety and security. AI Live Testing could help bridge this gap by providing a structured, repeatable process for assessing performance;
  • Creating trust: respondents emphasised that traditional assurance methods are insufficient in the face of rapidly evolving AI capabilities and that trust in AI must be intentionally designed and transparently demonstrated;
  • Addressing first-mover reluctance: some firms hesitate to use AI in sensitive areas without greater regulatory clarity;
  • Regulatory comfort: being given ‘regulatory comfort’, potentially through individual guidance or other tools, can substantially de-risk innovative AI use and encourage firms to bring beneficial products and services to market more quickly;
  • Collaboration: respondents saw AI Live Testing as a welcome step towards the regulator and the industry jointly navigating the challenges; and
  • Model metrics: AI Live Testing can foster collaboration and help develop a shared technical understanding of complex AI issues such as model validation, bias detection and mitigation, and ensuring robustness.

Alongside AI Live Testing, the FCA has introduced its "super-charged sandbox", which will imminently allow firms to test their use cases with greater computing power, enhanced datasets and more advanced tooling. These sit alongside the AI Spotlight project and the AI Sprint event the FCA held back in January. A key pillar of the FCA's strategy for 2025 to 2030 is to become a smarter regulator.

As for its own operations, we are seeing the FCA forge ahead with large language models to analyse text in applications and with predictive AI to help its supervisory teams.

Where we are now

Last year, we took stock of what we thought, at the time, the future of AI in financial services could look like. Taking stock again now, this is what we're seeing:

  • Mixed messages are coming from FIs as they extend their use of AI – they see the most potential for "safe" use in compliance and automation, and are keen to use it to improve data quality, but many FIs remain nervous about whether AI can help address and mitigate bias;
  • A large proportion of FIs, particularly banks, now use AI at least to some extent in financial crime prevention, helping at both the customer onboarding and transaction monitoring stages – automating the usual and flagging the unusual;
  • The FCA is constantly encouraging firms to innovate with AI to improve efficiency, both operationally and in the customer experience – and there are signs that this is happening beyond the use of AI for basic chatbot customer service functions;
  • Operationally, there are clear opportunities for AI to process large amounts of data for use in risk assessment, monitoring and some advisory services – and it should be equally valuable in product development;
  • But statistics show that there are still a lot of FIs that aren't confident they understand how AI works or how to mitigate the risks;
  • Explainability is the biggest cross-cutting challenge;
  • Using AI as part of the customer journey and interface has both attractions and risks – it can be taught and calibrated, for example, to use language and techniques compliant with Consumer Duty expectations, but a significant concern is how it can be taught to recognise vulnerable customers. A failure to do that properly can compound itself, embedding incorrect biases in the systems as the AI learns to reach the wrong conclusions. So in some ways the Consumer Duty could be a significant blocker: the assessment of good outcomes is so subjective that, in many cases, few generalisations are trustworthy;
  • Some FIs have not worked out how AI fits into their wider strategies, risk appetites and governance structures – an approach that might be due in part to the lack of specific regulation. This is amplified where senior leadership teams are investing in new AI technologies and innovation and therefore want to encourage adoption and uptake; however, these new technologies require training, time and often cultural change within an organisation before large-scale benefits emerge;
  • The FCA is clearly on a drive to encourage AI and to get firms to use its Live Testing and sandbox opportunities, but some firms may be reluctant to engage because they don't want to 'show their hand' to the regulator too early, especially in the current environment of uncertainty around the direction regulation will take. Once firms have been through the exercise and seen the results, they may be more confident about becoming more experimental. The FCA is very keen to engage at all stages: since everyone is still learning, it will need to strike the right balance between intervening too early, when harm is possible but its likelihood unclear, and acting only once harm has occurred;
  • Also, as understanding and use of AI become more common, more individuals within FIs will become confident in their understanding of its limitations and opportunities. We're seeing pilots and trials of AI being key to developing this knowledge;
  • Agentic AI is the next big emerging trend, as can be seen from the DRCF's recent call for views;
  • All that said, there will still be a significant need for humans – not only to set the parameters, but also to monitor, test and adjust behaviours and, particularly in the case of vulnerable customers, to provide the opportunity for discussion and explanation where AI chatbots are unable to help (whether because of programming issues or because of the nature of a customer's problem) or simply because the customer wants to speak to a human. The more AI is used at the customer interface, the greater the need for humans to have the appropriate skills and authority to challenge effectively the outcomes the AI drives. One key accountability question is working out where, how much, and at what stage human involvement is needed.

What else is going on?

The Treasury Committee in Parliament set up an inquiry into AI in financial services in February 2025 to look at the potential impacts of increased use of AI in financial institutions, off the back of research showing that 75% of firms were already using AI in some form. It wanted to hear about things like:

  • Whether some areas of the market were adopting AI more quickly than others, or were better suited to doing so, and whether FIs on the whole were moving more quickly than other business sectors;
  • The best use cases and the key barriers to the use of AI, and whether there are areas in which the industry could adopt GenAI with little or no risk;
  • The key risks arising from AI, including third-party and supplier concentration risks, GenAI hallucination and herding behaviour, and whether AI increases cybersecurity risks;
  • Risks and benefits to consumers, particularly vulnerable ones, and what safeguards need to be in place to protect customer data and prevent bias; and
  • How regulation should address use of AI.

It followed this up with evidence sessions in June, speaking to leading academics and industry associations. Most recently, in mid-September, the Treasury Committee wrote to six large providers, asking for their help as part of its inquiry. It asked each of them a set of questions, including whether they have a particular strategy for AI in financial services, what preparations they have in place for failures or outages in their cloud systems and any AI systems they host, and what the impact would be on the firm of being designated as a "critical third party" for their provision of services to regulated firms.

No entity has yet been designated under the critical third party regime, which has been ready for use since the beginning of 2025. The regime's aim is to extend regulatory oversight of, and place operational resilience and reporting requirements on, suppliers who do not themselves carry on financially regulated activities but who provide services to the regulated sector where disruption to or failure of those services could cause systemic disruption in the sector. When the regime was under consultation, the Government stressed that it did not expect many providers to be designated, but that those who were would likely be providers of services such as cloud services, AI models and market data. Part of the thinking behind the regime is that compliance by regulated firms with the relevant outsourcing rules cannot of itself address the significant concentration risks that the largest providers pose, so regulation needs to manage the operational risks of the providers while also requiring regulated firms to manage their own operational resilience.

From the responses just published, it is interesting that some providers expect to become designated while others would be surprised to be. In part, however, this could be because of the depth and breadth of the services the various respondents provide. The responses also give a good insight into the protection and resilience measures the providers have in place, as well as indications of where some see the financial sector further developing its use of AI.

What next?

It's clear that both the appetite and indeed the necessity to use AI are growing. With increased engagement will come increased understanding, but we think it's fair to say that many FIs are still nervous of pushing the existing use cases too far. We've previously outlined how FIs can build robust frameworks to harness the potential of GenAI while maintaining risk management, compliance and trust. Our report (which we released in April 2025) asked over 300 businesses across many sectors to tell us how they were using AI and what the most significant challenges they faced were.

While the FCA is certainly doing its best to encourage innovation and testing in a safe atmosphere, this of itself won't allay all the concerns. What will be helpful, though, is to see how the results of that testing reflect how real the concerns are – whether from a security perspective or a conduct perspective, particularly around biases and recognition of vulnerabilities.

If firms get more comfortable with those, and with their ability to have staff at appropriate levels who understand the opportunities and capabilities of AI in a way that enables them to assess the risks to the firm and its operational resilience, the next big question – which the Treasury Committee is now starting to address – is whether the main providers are likely to become designated as critical third parties, what the consequences of this would be, and how they will react if they are designated.

And once all that is done, there's the small matter of getting customers to trust in the outcomes – and evidencing Consumer Duty compliance.

This article is for general information only and reflects the position at the date of publication. It does not constitute legal advice.