Automating routine and tedious tasks, providing predictive analytics, enabling algorithmic trading, and facilitating more effective risk management are amongst the many possible applications of artificial intelligence (“AI”) in the financial services sector.
Whilst the development and utilisation of innovative AI-powered technologies is an exciting prospect for financial services firms, data scientists and machine learning engineers alike, the utility (and therefore, the monetary value) of AI is dependent on the extent to which it can be lawfully used.
In a joint discussion paper published by the Financial Conduct Authority (FCA), the Bank of England (BoE), and the Prudential Regulation Authority (PRA), the primary drivers of AI risk in financial services were identified as relating to three key stages of the AI lifecycle: (i) data; (ii) models; and (iii) governance, each of which we have considered below.
1. Data Use
From sourcing large amounts of data and creating datasets for training, testing, and validation, through to the continuous analysis of data once the AI system is deployed, the safe and responsible adoption of AI in UK financial services is underpinned by the use of high-quality data. Regulated firms must therefore consider:
(a) Data quality, sourcing, and assurance – Poor quality or inappropriate data can compromise any process that relies on it. The way in which data is sourced and aggregated can impact the overall quality of the data, and the intended outcome of a model. The UK’s current regulatory framework aims to address these specific risk components of the data lifecycle. For example, the Basel Committee on Banking Supervision has published various principles aimed at strengthening prudential risk data aggregation by ensuring the accuracy, integrity, completeness, timeliness, and adaptability of data. The PRA expects banks to adhere to these principles. The PRA’s current rules also require insurers to have internal processes and procedures in place to ensure the completeness, accuracy, and appropriateness of the data used in the calculation of their technical provisions.
Data sourcing is a critical consideration not only because it will fundamentally shape the AI system, but also because of the potential liability that may arise if the data was sourced through inappropriate means. Firms must therefore ensure compliance with any relevant data protection regulation, for example by establishing a legal basis to collect and process the data (see below for further detail). In addition to data privacy concerns, there are intellectual property considerations to bear in mind. Whilst this is not explicitly mentioned in the joint discussion paper, if the data being used is proprietary, firms must have the necessary permission to use it; otherwise the project is exposed to possible civil litigation.
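The completeness and accuracy expectations described above can be made concrete with a pre-ingestion validation step. The following is a minimal sketch only; the field names and rules are hypothetical illustrations, not drawn from the PRA's rules or the BCBS principles.

```python
# Illustrative data quality check run before a record enters a training
# set: flags missing required fields and an implausible value.
# REQUIRED and the premium rule are hypothetical examples.

REQUIRED = {"policy_id", "premium", "inception_date"}

def validate(record: dict) -> list:
    """Return a list of quality issues found in one record."""
    issues = [f"missing field: {f}" for f in REQUIRED - record.keys()]
    if "premium" in record and (record["premium"] is None or record["premium"] < 0):
        issues.append("premium must be a non-negative number")
    return issues

good = {"policy_id": "P1", "premium": 120.0, "inception_date": "2023-01-01"}
bad = {"policy_id": "P2", "premium": -5}
print(validate(good))  # []
print(validate(bad))
```

In practice such checks would sit alongside timeliness and integrity controls across the whole data pipeline, not just at ingestion.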
(b) Data privacy, security, and retention – Data security is important in ensuring information is protected from malicious threats such as unauthorised access, theft, and corruption. The joint discussion paper emphasises that firms must ensure that data privacy and anti-money laundering / counter terrorist financing regulations are complied with when sourcing and utilising data in the development of AI.
The UK’s Information Commissioner’s Office (the data protection regulator in the UK) updated its guidance on AI earlier this year, adding new sections on: (i) what to consider when carrying out a data protection impact assessment in relation to AI; (ii) how and when to notify data subjects that their data is being used to train AI; and (iii) fairness in AI systems, a concept explored in more detail below. A key concern outlined in the guidance is the GDPR’s data minimisation principle: only the minimum amount of data needed to fulfil the purpose of processing should be used. As outlined above, this can often be at odds with the large volumes of data needed to train an AI system. To comply with this obligation, the risk management function of the AI project must implement practices designed to ensure that data minimisation is considered from the initial design phase (see the ICO’s guidance for techniques that can be used for data minimisation). Where an entity outsources AI production to a third party, this consideration should form part of the due diligence process.
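One way to build data minimisation into the design phase is a purpose-limited allow-list applied before any record reaches the training pipeline. The sketch below is illustrative only; the field names are hypothetical and not taken from the ICO's guidance.

```python
# Minimal data minimisation sketch: only fields approved for the stated
# processing purpose are retained; excess personal data is stripped.
# APPROVED_FIELDS is a hypothetical, purpose-limited allow-list.

APPROVED_FIELDS = {"age_band", "income_band", "product_type"}

def minimise(record: dict) -> dict:
    """Strip any field not approved for the processing purpose."""
    return {k: v for k, v in record.items() if k in APPROVED_FIELDS}

raw = {"age_band": "35-44", "income_band": "B", "product_type": "loan",
       "full_name": "J. Smith", "email": "j@example.com"}  # excess personal data
print(minimise(raw))
```

The allow-list approach forces the project team to justify each field against the processing purpose up front, rather than collecting broadly and filtering later.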
(c) Data architecture, infrastructure, and resilience – Firms must have strong data architecture and risk management infrastructure under the FCA and PRA’s current rules and guidelines.
2. Model Development
When it comes to building AI models, the joint discussion paper highlights that firms must ensure that the models are:
(a) Robust – AI models must be built to be robust to errors and to handle unforeseen circumstances. Firms must consider the possibility of model drift (i.e., where the model’s performance deteriorates over time as the data distribution changes). Models should be periodically retrained and validated to maintain their performance.
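The drift monitoring described above can be sketched as a rolling comparison of live performance against a validation baseline, triggering retraining when the gap grows too large. The window size, tolerance, and baseline figures here are illustrative assumptions, not regulatory thresholds.

```python
# Hedged sketch of model drift monitoring: flag a retrain when rolling
# accuracy falls below the validation baseline by more than a tolerance.

from collections import deque

class DriftMonitor:
    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05, window: int = 100):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct prediction, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def retrain_needed(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough recent data to judge
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance

monitor = DriftMonitor(baseline_accuracy=0.90)
for _ in range(100):
    monitor.record(correct=False)  # simulated deterioration in live performance
print(monitor.retrain_needed())  # True once the window fills with poor results
```

Real deployments would typically also monitor the input data distribution itself, so drift is caught before labelled outcomes become available.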
(b) Fair – AI models must be built to ensure that they are fair to all groups of people, regardless of their gender, race, or other characteristics. Firms should ensure that they do not use biased data or introduce biases unintentionally while building the models. They must also consider the possibility of unintended consequences and monitor the models’ performance to ensure that they are not perpetuating existing biases.
Diversity and inclusion continue to be a top regulatory priority: for example, the PRA and FCA have recently released a consultation paper on measures to boost diversity and inclusion in the financial services sector (CP23/20). Firms will need to ensure that AI models comply with current and upcoming regulatory rules and guidance in this area.
Firms must also ensure that they are delivering good customer outcomes when developing and using AI. See, for example, the detailed rules and guidance on the Consumer Duty contained in the FCA Policy Statement (PS22/9) and Finalised Guidance (FG22/5).
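One common statistical check for the bias monitoring described above is demographic parity: comparing outcome rates across groups and flagging large gaps. The sketch below is illustrative only; the group labels, sample figures, and any threshold a firm applies to the resulting gap are assumptions, not a regulatory standard.

```python
# Hedged sketch of a demographic parity check: compare approval rates
# across (hypothetical) groups and measure the largest gap.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions) -> float:
    """Difference between the highest and lowest group approval rates."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Simulated decisions: group A approved 80%, group B approved 60%.
sample = [("A", True)] * 80 + [("A", False)] * 20 + [("B", True)] * 60 + [("B", False)] * 40
print(round(parity_gap(sample), 2))  # 0.2
```

Demographic parity is only one of several competing fairness metrics; which measure is appropriate depends on the use case and should be a documented design decision.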
(c) Explainable – AI models must be explainable to stakeholders. Firms should ensure that transparent models are used and provide clear explanations for the decisions made by the models. This may be a challenge for firms, as AI models (particularly generative AI models) may be difficult to explain, both in terms of how they work and the reasons for their outputs.
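For a transparent model such as a linear scorecard, the explanation above can be as simple as decomposing a score into per-feature contributions. The weights and feature names below are hypothetical; more complex models generally require dedicated explainability techniques (for example, surrogate models) rather than this direct decomposition.

```python
# Minimal sketch of explaining one decision from a transparent (linear)
# model: each feature's signed contribution to the overall score.
# WEIGHTS and the applicant's features are hypothetical illustrations.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_at_address": 0.1}

def explain(features: dict) -> dict:
    """Return each feature's signed contribution to the model score."""
    return {name: WEIGHTS[name] * value for name, value in features.items()}

applicant = {"income": 2.0, "debt_ratio": 1.5, "years_at_address": 3.0}
contributions = explain(applicant)
score = sum(contributions.values())
print(contributions)  # e.g. debt_ratio contributes negatively to the score
```

A breakdown of this kind gives stakeholders a concrete answer to "why was this decision made", which opaque models cannot provide without additional tooling.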
3. Model Governance
The joint discussion paper also highlights that governance is a critical factor in ensuring the safe and responsible adoption of AI in the financial sector. The following are some of the key recommendations provided in the paper for AI governance:
(a) Accountability – Firms should ensure that they have clear accountability structures in place for the AI models they develop, and that the roles and responsibilities of the stakeholders involved in the AI lifecycle are clearly defined and communicated. The joint discussion paper highlights that the PRA’s and the FCA’s existing rules and guidance, in particular those implementing the Senior Managers and Certification Regime (SMCR), emphasise senior management accountability and responsibility, and are relevant to the use of AI. At present, there is no dedicated senior management function (SMF) for AI: technology systems are currently the responsibility of the SMF24 (Chief Operations function), and the overall management of a firm’s risk controls is the responsibility of the SMF4 (Chief Risk function). A key consideration for firms is therefore who should be responsible for the use of AI within the firm.
(b) Risk management – Firms should have a robust risk management framework in place for the AI models they develop. They must ensure that they identify and assess the risks associated with the models and implement appropriate risk mitigation measures. Whilst the joint discussion paper recognises that AI may, in some cases, be used to minimise risk and increase stability for a firm, it may also amplify many of the existing risks to financial stability in the financial services sector more generally. For example, the use of similar datasets and AI algorithms may create uniformity across models and approaches at multiple firms, which could amplify procyclical behaviour and lead to herding in certain use cases, such as algorithmic trading.
(c) Board composition, collective expertise, and engagement – The joint discussion paper notes that there may be a lack of understanding of the challenges and risks arising from the use of advanced technologies at firms’ senior management and board levels, both individually and collectively, leading to a skills and engagement gap. This could lead to a risk of ineffective governance. There are requirements and expectations on firms to address this skills gap, including: (i) the PRA’s expectation that boards should have the diversity of experience and capacity to provide effective challenge across the full range of the firm’s business, and should pay close attention to the skills of their members; and (ii) the FCA’s requirement for issuers to publish information on board diversity policies in their corporate governance statements.
Future Regulation of AI
The FCA, PRA and BoE recently published a feedback statement to their joint discussion paper. Whilst this feedback statement summarised the responses received in relation to the discussion paper, it did not include policy proposals, nor did it signal how the UK supervisory authorities are considering clarifying, designing, and/or implementing current or future regulatory proposals on this topic.
The development of AI models in the financial sector offers significant opportunities for financial institutions to reduce costs, increase efficiency, and improve customer experiences. However, there are risks associated with its adoption, for example: hallucinations and unpredictable behaviour, opaqueness, and a firm’s lack of control over the foundation models used in generative AI (such models typically being built and trained by third parties). As the use of generative models (such as ChatGPT, Llama, BERT and RoBERTa) increases, it is likely that UK regulators’ scrutiny of this technology will too.
At a minimum, firms must ensure that they comply with data protection regulations, have the necessary permissions for the data being fed to their AI systems, build robust, fair, and explainable AI models, and implement appropriate governance frameworks; they should also keep up to speed with further guidance released by the regulators in this area. By doing so, firms may be able to realise the full potential of AI while minimising the risks associated with its adoption.