Whilst we aren’t quite in the age of Skynet or the ED-209 from RoboCop, AI is playing an increasingly important role in the prevention of crime. The financial services sector is currently experiencing a boom in areas such as:

  1. Fraud detection and prevention
  2. Anomaly detection and behavioural biometrics
  3. Network analysis
  4. Regulatory compliance
  5. Cyber security

What regulatory compliance concerns should organisations think about if they want to use AI in these areas?

1. Compliance with financial services requirements

The starting point is to check whether the financial services regulator with oversight of the relevant activity or jurisdiction has introduced any regulatory requirements specific to AI. These may include certification or regulatory approval prior to deployment.

2. Reporting requirements

If you are using AI to monitor behaviour, assess applications or detect suspicious activity, you should ensure that any resulting detections or assessments are properly reported to the regulator where such obligations exist. This involves integrating the AI with your reporting software and channels, and may ultimately require human oversight, as sketched below.
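By way of illustration, the following is a minimal sketch of how detections from an AI model might be held for human review before any regulatory report is filed. Everything here (the Detection and ReviewQueue names, the 0.9 escalation threshold, the file_regulatory_report placeholder) is a hypothetical assumption; a real integration would depend on your reporting channel and internal approval process.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical escalation threshold: scores at or above this go to a human
# reviewer. Nothing is reported to the regulator without human sign-off.
AUTO_ESCALATE = 0.9

@dataclass
class Detection:
    transaction_id: str
    risk_score: float        # output of the (assumed) AI model, 0.0 to 1.0
    reviewed: bool = False
    report_filed: bool = False

def file_regulatory_report(detection: Detection) -> None:
    # Placeholder for integration with the actual reporting channel,
    # e.g. a suspicious activity report submission system.
    detection.report_filed = True
    print(f"Report filed for transaction {detection.transaction_id}")

@dataclass
class ReviewQueue:
    pending: List[Detection] = field(default_factory=list)

    def triage(self, detection: Detection) -> None:
        # Only high-risk detections are escalated for human review.
        if detection.risk_score >= AUTO_ESCALATE:
            self.pending.append(detection)

    def human_approve(self, detection: Detection) -> None:
        # A compliance officer confirms the detection before any filing.
        detection.reviewed = True
        self.pending.remove(detection)
        file_regulatory_report(detection)

queue = ReviewQueue()
queue.triage(Detection("TX-1001", risk_score=0.95))
queue.triage(Detection("TX-1002", risk_score=0.10))  # below threshold: not escalated
for detection in list(queue.pending):
    queue.human_approve(detection)  # human sign-off precedes the report
```

The design point is simply that the AI flags, triages and records, while the decision to report remains with a human, which supports both regulatory accountability and the oversight requirements discussed above.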

3. Data privacy

AI requires substantial amounts of data, both in its creation and in its refinement through use. Where financial services are provided (especially to retail customers), this will involve personal data, so any implementation of AI should comply with the GDPR. Updating privacy policies, carrying out data protection impact assessments and analysing cross-border transfers are only one part of data protection compliance for AI. This may prove challenging for DPOs where the technology is cutting-edge and expertise is still developing; bespoke training may be necessary to support a proper understanding and assessment.

4. Transparency

Following on from data privacy, where AI is used in crime detection or automated decision-making, it is important to make the use of AI transparent and clear to affected individuals. There may also be regulatory requirements to provide adequate explanations for AI-based outcomes or decisions, as illustrated below.
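As a simple illustration of what an "adequate explanation" might draw on, the sketch below trains an interpretable model on synthetic data and reports each feature's contribution to a given decision. The feature names, data and choice of model are all assumptions made for illustration, not a prescribed approach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features for a credit decision (names are assumptions).
FEATURES = ["income", "debt_ratio", "account_age_years"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Synthetic labels purely for illustration: approval driven by two features.
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(applicant):
    # Per-feature contribution to the log-odds of approval (coefficient x value).
    # Interpretable models make "adequate explanation" duties easier to meet.
    contributions = model.coef_[0] * applicant
    return sorted(zip(FEATURES, contributions), key=lambda kv: -abs(kv[1]))

print(explain(X[0]))  # most influential features first, signed by direction
```

The broader design choice this hints at: where explanations must be given to individuals, simpler and inherently interpretable models can be easier to defend than opaque ones.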

5. Unintentional bias and discrimination

Current iterations of AI can exhibit unwanted biases, arising from flaws or limitations in the data sets used to train them. This can be particularly problematic where AI is used for anomaly detection or application assessment. It is therefore important to put in place safeguards that prevent any disproportionate impact on individuals or groups based on protected characteristics such as race, gender or religion, especially where AI is used in lending and credit-related activities. Appropriate safeguards include audits and fairness assessments to identify and mitigate bias, such as the simple check sketched below.
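One common screening metric used in such fairness assessments is the disparate impact ratio, shown here in a minimal sketch on made-up audit data. The 0.8 threshold follows the US "four-fifths" rule of thumb; it is a heuristic for flagging a model for further review, not a legal test, and the decisions and group labels below are purely illustrative.

```python
import numpy as np

def disparate_impact_ratio(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of favourable-outcome rates between two groups.

    A ratio below 0.8 (the 'four-fifths' rule of thumb) is often treated
    as a signal of potential adverse impact warranting closer review.
    """
    rate_a = decisions[group == 0].mean()  # e.g. approval rate, group A
    rate_b = decisions[group == 1].mean()  # approval rate, group B
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical audit data: 1 = loan approved, 0 = declined.
decisions = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # protected-characteristic flag

ratio = disparate_impact_ratio(decisions, group)
print(f"Disparate impact ratio: {ratio:.2f}")  # flag for review if < 0.8
```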

6. Liability and accountability

AI is still prone to mistakes, and in the financial services sector this creates the possibility of errors with high-stakes consequences for those affected. Companies should maintain policies and contractual terms that are clear about where liability sits if something goes wrong.

7. Internal guidelines

It is good housekeeping to develop your own internal guidance. This would outline acceptable use cases, data handling procedures, and a set of ethical principles that should be followed throughout the AI lifecycle.

Although the rise of AI is happening at breakneck speed, there is increasing regulatory certainty around its use and oversight. A proactive approach is required to identify applicable regulatory concerns and put appropriate mitigating measures in place. This is also important for building trust with customers and with stakeholders such as regulators.

