Originally published on 23 July 2025 by Law360
The Financial Conduct Authority is in a hurry to take action on artificial intelligence in financial services.
It ran a six-week consultation in April on its proposals for a new AI live testing service with plans to launch in September,[1] and has already opened up applications to participate, as well as publishing terms of reference for how it will operate.[2]
The FCA is clear about why it is setting up its new AI live testing service as part of its existing AI Lab. It explains in its April engagement paper on the service that it wants to give firms the confidence and certainty to invest in AI systems to drive growth, and to deliver positive outcomes for U.K. consumers and markets.[3]
However, an FCA and Bank of England survey published in November 2024 found that 13% of firms considered lack of regulatory clarity around the FCA’s consumer duty a constraint on adopting AI. Data protection and privacy were noted by respondents as the greatest regulatory constraint, with 52% identifying it as a large or medium constraint. High regulatory burden was also generally identified as a higher constraint than lack of clarity about regulation.[4]
Interestingly, Innovate Finance, a trade association for fintechs, said in a report released on July 2 that discussions revealed that a fear of media backlash is a major obstacle to adoption of AI in the U.K.[5] The report suggests that clear, practical and accessible advice and guidance on the ethical use of AI from regulatory authorities could help.
The FCA says that AI live testing will explore the challenges that firms face before live AI deployment in U.K. financial markets, and it will provide participants with access to appropriate AI and regulatory expertise. As with the existing regulatory sandbox, the FCA may provide firms using AI live testing with individual guidance, waivers or modifications to requirements.
Of course, the FCA would not be a regulator if it did not also say that it wants to understand the risks of AI and how they can be managed. The FCA plans that where market or sectorwide issues are identified, the insights gained may contribute to its regulatory approach more generally.
So, does the FCA’s AI live testing service pass muster? And how does it fit in with what the FCA is already doing?
To answer these questions, we need to take a step back and look at what the FCA and the government are seeking to achieve more broadly.
The FCA’s Innovation Hub, first launched in 2014 as Project Innovate, has steadily expanded in scope, comprising services such as the regulatory sandbox — real-user testing for innovative products since 2016 — and the digital sandbox — anonymized test datasets and software development tools since 2020, all intended to support the growth of fintech in the U.K.
The FCA’s world-first regulatory sandbox has since been emulated in at least 56 countries.[6] The prime argument for the sandbox approach is that by selecting novel products, innovation is encouraged instead of stifled, with new products brought to market more quickly.
Indeed, an independent study published in The Review of Finance in April 2023 found that sandbox firms are 50% more likely to raise funding than their peers and, on average, secure 15% more investment.[7]
However, sandbox acceptance rates are low, at around 27%, so there is likely to be significant selection bias: Businesses that were already likely to succeed are the ones picked to participate in sandboxes.[8]
The AI Lab, first launched in 2024 with product showcase and collaboration initiatives, was recently extended with the launch of the supercharged sandbox, a similar initiative to the digital sandbox, but providing access to Nvidia Corp.’s AI hardware and development toolkit.
The supercharged sandbox is intended for early-stage AI development, encouraging prototyping and proof-of-concept innovation. It is open for registration until Aug. 11, with three months’ access starting Sept. 30.
The supercharged sandbox and AI live testing are two sides of the same coin: The supercharged sandbox is intended to encourage the development of unique, innovative AI, while AI live testing can resolve regulatory uncertainty by providing formal assurance to nervous firms that their use of AI is FCA-compliant.
The FCA has explained that it sees the supercharged sandbox as focused on developing proof of concept with access to data, and AI live testing as supporting firms that already have a proof of concept but need regulatory comfort and market deployment testing.[9]
The FCA’s statutory operational objectives are to protect consumers, ensure market integrity and promote competition. Additionally, the FCA has a secondary objective to facilitate the international competitiveness of the U.K. economy and its growth.
Each of the FCA’s objectives, both primary and secondary, can be supported by the responsible and effective deployment of AI in financial services. There is a certain neatness to the dual purposes of the AI testing service, to encourage firms to deploy AI technology, and to help the FCA understand how to best regulate AI.
There are important strategic choices that the FCA has taken in its approach to AI that provide important context for the launch of AI live testing.
The FCA recognizes the transformative potential of AI for financial services, but it is also alert to the risks that could undermine consumer trust and market stability if left unchecked.
In a 2022 joint discussion paper with the Bank of England, which underpins its approach to AI, the FCA explored the benefits and risks that AI may pose in areas from consumer protection to financial stability and competition.[10]
The areas of potential risk identified in the discussion paper include bias and discrimination, and AI being used to facilitate collusive pricing or raising the costs of entry to markets. A lack of explainability in financial models can result in firms holding incorrect levels of regulatory capital.
Traditional computer systems rely on formal logic. For example, they may operate on the basis of “if an applicant earns less than £x, reject their loan application.” Most AI systems, however, use neural networks and billions of items of interconnected data, mimicking a human brain.
Explanations of how such an AI system arrives at its decision are post-hoc, and do not reflect the almost infinitely complex mathematics of weighting different links between data to produce an answer. This so-called black box nature makes it challenging for firms, regulators and consumers to understand how decisions are made, which can erode trust and make it hard to question or correct errors.
Some machine learning techniques can derive formal logical rules from complex data, known as explainable AI, but explainability involves a trade-off, typically coming at the expense of performance.
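The contrast with traditional, rule-based systems can be made concrete. A minimal sketch of the formal-logic approach described above, with a purely illustrative income threshold:

```python
def rule_based_decision(income: float, threshold: float = 20_000) -> str:
    """Traditional formal logic: every decision traces to an explicit rule."""
    if income < threshold:
        return "reject"
    return "approve"

# The rule itself is the explanation: a rejected applicant can be told
# exactly which condition failed, and the firm can audit it directly.
print(rule_based_decision(15_000))  # reject
print(rule_based_decision(45_000))  # approve
```

A neural-network credit model offers no equivalent one-line justification: Its output emerges from the weighting of links between vast quantities of data, which is precisely the explainability gap described above.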
The FCA and Prudential Regulation Authority are also concerned about so-called data drift, where the data a model encounters in production diverges from the data it was trained on, degrading performance, and concept drift, where the relationship the model learned between its inputs and outcomes changes over time, undermining its predictions.
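One simple way a firm might monitor for data drift is to compare the statistics of live inputs against those of the training data. The following is a minimal sketch; the tolerance, statistic and sample figures are illustrative assumptions, not regulatory requirements:

```python
import statistics

def mean_shift(train: list[float], live: list[float],
               tolerance: float = 0.25) -> bool:
    """Crude data-drift check: flag if the live mean has moved more than
    `tolerance` times the training standard deviation from the training mean."""
    mu = statistics.mean(train)
    sigma = statistics.stdev(train)
    return abs(statistics.mean(live) - mu) > tolerance * sigma

train_incomes = [28_000, 31_000, 30_000, 29_500, 32_000]
drifted = [40_000, 43_000, 41_500, 44_000, 42_000]  # applicant incomes have risen

print(mean_shift(train_incomes, train_incomes))  # False: no drift
print(mean_shift(train_incomes, drifted))        # True: distribution has shifted
```

Real deployments would track many features with more robust distributional tests, but even this toy check illustrates the kind of ongoing monitoring the regulators have in mind.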
To address the risks identified by the FCA, its AI live testing service is designed as a proactive, collaborative measure. Rather than waiting for problems to emerge after deployment, firms are encouraged to test their AI models in a controlled, supervised environment.
This allows for robust predeployment testing, where firms can identify and address potential issues before their models are exposed to the wider public, as well as ongoing monitoring, meaning that the FCA expects firms to continue monitoring AI performance after deployment.
In contrast with the European Union, which already has an AI Act being brought into force in August, the U.K. has adopted a wait-and-see approach, preferring caution and consultation to early legislative intervention. An AI bill is, however, expected in the 2026 King’s Speech, addressing both safety and copyright issues.
The FCA published its AI update in April 2024, focusing on testing, collaborating and gathering further information to understand the effects of AI on U.K. financial markets.[11] It does not seek to regulate AI separately, framing it within the context of existing FCA regulation and pursuant to the consumer duty.
The FCA’s approach to AI seems to be influenced by the concept of “same activity, same risk, same rule,” as set out in the European Commission’s 2019 Final Report of the Expert Group on Regulatory Obstacles to Financial Innovation,[12] operating on the assumption that there is no need for new AI-specific regulation.
Innovate Finance’s report, mentioned previously, endorses the FCA’s “no new regulations needed” approach, and expresses support for AI live testing as part of a wider program of building confidence in AI. It notes that while most firms are using AI for some applications, uptake is still cautious and limited.
The report argues that the FCA must provide clearer guidance on its expectations, particularly in front-office applications, as well as offering advice on how to manage AI risks. It says that AI can help monitor regulatory compliance, as well as promote equity and diversity by improving credit scoring and providing financial advice to the 12.4 million adults in the so-called advice gap.
It suggests a process of reviewing all barriers to AI, both regulatory barriers and gaps in expertise or guidance, so that the U.K. can enjoy the full benefits of the technology.
Despite the FCA’s position that new regulation for AI is not needed, its engagement paper notes that the U.S. National Institute of Standards and Technology and Singapore’s AI Verify Foundation are developing formal frameworks and standardized technical tests. This forms part of growing regulatory and academic interest in algorithmic audit, the process of formally auditing algorithmic systems, particularly AI.
A paper published in May 2024 by the Royal Society argues that as algorithms increasingly underpin critical decisions in finance, healthcare and other sectors, there is a pressing need for algorithm audits that mirror the rigor of financial audits.[13]
While the FCA has not published formal testing protocols, it provides an illustrative example of an AI credit risk model, suggesting that testing could include checks on the robustness of training data. It could also include metrics to show the effects of the AI model on certain demographics, small businesses and startups, and on the firm’s viability, as well as possible procyclical effects of the model throughout the economic cycle.
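The demographic-effects metrics the FCA gestures at can start from something as simple as comparing outcomes across groups. A minimal sketch, with invented group labels and decisions purely for illustration:

```python
def approval_rate_by_group(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the approval rate per demographic group.

    decisions: list of (group_label, approved) pairs.
    """
    totals: dict[str, int] = {}
    approved: dict[str, int] = {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical model outputs: group "A" is approved twice as often as "B",
# a disparity that predeployment testing would flag for investigation.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(approval_rate_by_group(sample))
```

Production-grade fairness testing uses more sophisticated measures, but a disparity surfaced even by a check like this is the sort of finding the FCA would expect firms to investigate before deployment.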
In addition, automated systems require human supervision and intervention at some point, and testing can ensure that humans are involved in a timely fashion.
The question remains open as to what extent AI is the same as other technology and is perhaps one that AI live testing will help explore. In the meantime, firms can decide whether they want to take part.
Selection for AI live testing will be competitive, with participation linked to a particular AI use case and some flexibility to change this with the FCA’s agreement. The FCA has set out core selection criteria for the service.
By guiding firms and clients through the application and participation process, lawyers can help them gain early regulatory insight, build credibility with the FCA and proactively manage risk as they develop AI capabilities.
However, not all firms will qualify, and lawyers have an important role in assessing whether clients are ready. This includes ensuring they have a working proof of concept, have undertaken thorough predeployment testing and can demonstrate a robust monitoring plan for their AI systems, post-deployment.
In the absence of AI-specific regulation in the U.K., lawyers also have a particularly important role to help firms navigate the uncertain space using existing rules, not least the consumer duty. This requires a flexible, informed interpretation and a willingness to advise in an environment of ambiguity.
Ethics will play a central role. Lawyers must support clients in designing ethical frameworks for their AI systems that address explainability, accountability and appropriate governance. Also important is documenting how those frameworks are implemented in practice.
Testing decisions, risk assessments, oversight measures and escalation procedures should all be properly recorded, both to meet potential regulatory expectations and to serve as a defensible compliance trail if needed.
Finally, there is a wider opportunity for the legal profession to shape the regulatory approach to AI in financial services. With questions still open about how AI should be treated under existing principles, lawyers advising firms involved in AI live testing will be at the forefront of influencing how future rules are framed.
Omar Salem is a partner at Fox Williams LLP.
The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
[1] https://www.fca.org.uk/news/press-releases/fca-set-launch-live-ai-testing-service.
[2] https://nayaone.s3.eu-west-1.amazonaws.com/fca/public/event/50/resource/ai-live-testing-terms-of-reference_1752057892.pdf.
[3] https://www.fca.org.uk/publication/call-for-input/ai-testing-pilot-engagement-paper.pdf.
[4] https://www.bankofengland.co.uk/report/2024/artificial-intelligence-in-uk-financial-services-2024.
[5] https://ww2.innovatefinance.com/wp-content/uploads/2025/07/artificial-intelligence-unlocking-the-potential-of-uk-financial-services-and-empowering-consumers-draft-01.07.25.pdf.
[6] https://www.tandfonline.com/doi/full/10.1080/00036846.2025.2495886.
[7] https://academic.oup.com/rof/article-abstract/28/1/203/7140150?redirectedFrom=fulltext&login=false.
[8] https://www.fca.org.uk/data/innovation-market-insights.
[9] https://www.fca.org.uk/news/speeches/harnessing-ai-and-technology.
[10] https://www.bankofengland.co.uk/prudential-regulation/publication/2022/october/artificial-intelligence.
[11] https://www.fca.org.uk/publication/corporate/ai-update.pdf.
[12] https://finance.ec.europa.eu/publications/final-report-expert-group-regulatory-obstacles-financial-innovation-30-recommendations-regulation_en.
[13] https://royalsocietypublishing.org/doi/10.1098/rsos.230859.