Artificial intelligence (AI) has been around for a long time, but it is only fairly recently that its use has spread into our daily lives, often to streamline simple but time-consuming processes (according to the experts, we remain a long way from the “general”, human-like AI depicted in films). With the gradual uptake of AI, one might wonder what the GDPR has to say on the matter. The answer is: not very much. This is perhaps unsurprising given that the GDPR is technology-neutral and intended to regulate a broad range of data processing activities.

However, that is not to say that the GDPR has no consequences for AI. Indeed, the GDPR can prove very restrictive for those seeking to develop or implement AI solutions, and some commentators cite the regulation as one of the reasons Europe lags behind the US and China in AI adoption. We set out some of the key issues below.

GDPR Principles

The GDPR is a principles-based regulation, and organisations are required to keep these principles at the forefront of their minds when carrying out data processing activities. The problem is that these principles often sit uncomfortably with AI and the way it is developed. We address some of these tensions below:

  • Fairness – this forms part of the GDPR’s “lawfulness, fairness and transparency” principle and requires that personal data are not processed in a way that is detrimental, discriminatory, unexpected or misleading to the data subject. This is relevant to AI because AI systems are prone to having biases (whether conscious or unconscious) baked into them in the course of their development, and there have been a number of high-profile instances of this. To mitigate the risk, controllers using AI systems must consider the impact these will have on the individuals concerned and whether safeguards can be adopted to minimise those risks (normally by carrying out a data protection impact assessment).
  • Transparency – this also falls under the “lawfulness, fairness and transparency” principle and requires that you are open with individuals about what you do with their data. In the algorithm-heavy world of AI, this can be tricky. In extreme cases, it may not even be possible to explain why an algorithm has reached a particular decision, because the developers themselves do not know: some AI models are built using techniques such as “unsupervised learning”, in which the system finds patterns in data with little human oversight. In such circumstances, the model can reach decisions the developer simply cannot explain. This is known as the “black box” problem.
  • Data minimisation – this requires that personal data be “adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed”. Although not necessarily incompatible with this principle, AI systems generally need to be fed large volumes of data, especially during the development phase, in order to become viable, and it may not be possible for developers to ascertain at that stage whether the data being fed into the system is “necessary” to achieve the purposes of the processing (a simple illustration of minimising training data appears after this list).
  • Purpose limitation – tied to data minimisation is the principle of purpose limitation, which requires that personal data be collected only for “specified, explicit and legitimate purposes” and not used in a way incompatible with the original purpose. The tension here is that the output of an AI system can be unpredictable: a system may be developed for one purpose and end up being used for another altogether. This has led some commentators to call for the purpose limitation principle to be set aside during the development phase of AI solutions.
  • Accuracy – this requires that personal data be accurate and kept up to date. As with fairness, feeding an AI system inaccurate data can degrade the quality of its output, potentially resulting in erroneous decisions.
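
To make the data minimisation point concrete, the sketch below shows one common mitigation: stripping fields that are not needed for the model’s stated purpose before the data reaches any training pipeline. This is a minimal illustration only – the record structure and field names are hypothetical and do not reflect any particular system.

    # Minimal sketch of data minimisation before model training.
    # All field names are hypothetical.

    RAW_RECORDS = [
        {"name": "A. Smith", "email": "a@example.com", "age": 34, "income": 52000},
        {"name": "B. Jones", "email": "b@example.com", "age": 41, "income": 61000},
    ]

    # Only the fields actually necessary for the stated purpose
    # (e.g. a credit-scoring model) are retained; direct identifiers
    # such as name and email are dropped at the point of ingestion.
    NECESSARY_FIELDS = {"age", "income"}

    def minimise(record: dict) -> dict:
        """Keep only the fields needed for the processing purpose."""
        return {k: v for k, v in record.items() if k in NECESSARY_FIELDS}

    training_data = [minimise(r) for r in RAW_RECORDS]
    print(training_data)  # [{'age': 34, 'income': 52000}, {'age': 41, 'income': 61000}]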

Automated decision-making

One area where the GDPR does impose requirements specifically relevant to AI is automated decision-making. Automated decisions are those based solely on automated processing, i.e. made without any meaningful human involvement.

Article 22 of the GDPR generally prohibits controllers from making solely automated decisions which produce legal effects concerning the data subject or similarly significantly affect them. However, there are exceptions to this (e.g. where the decision is necessary for a contract, is authorised by law, or the individual has given explicit consent). Further restrictions apply if any special category data is to be processed as part of the decision.

Finally, controllers relying on automated decision-making must also provide the individuals affected with meaningful information about the logic involved (in accordance with the GDPR’s transparency requirements), and anyone subject to such a decision must be given the right to obtain human intervention, to express their point of view and to contest the decision.
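
By way of illustration, the sketch below shows how an Article 22-style check might be wired into a decision pipeline: purely automated decisions with significant effects are released only where an exception applies, and are otherwise routed to human review. The structure and names are hypothetical, and the consent flag stands in for just one of the Article 22(2) exceptions.

    # Minimal sketch of an Article 22-style gate. Automated decisions with
    # significant effects are released only if an exception applies;
    # otherwise they are queued for human review. All names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        subject_id: str
        outcome: str            # e.g. "loan_refused"
        significant_effect: bool
        explicit_consent: bool  # one Article 22(2) exception; others exist

    def release(decision: Decision) -> str:
        if decision.significant_effect and not decision.explicit_consent:
            # No exception applies: require human intervention before release.
            return f"queued for human review: {decision.subject_id}"
        return f"released automatically: {decision.subject_id}"

    print(release(Decision("ds-001", "loan_refused", True, False)))
    print(release(Decision("ds-002", "loan_refused", True, True)))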

Conclusion

As should be clear from the above, data protection law and AI do not always sit comfortably together. However, it is important to bear in mind that data protection law is not intended to prevent the development of AI, and there will often be a solution, provided those involved are willing to do the groundwork and take privacy issues seriously from the outset.

Contact us

If you have any questions about these issues in relation to your own organisation, please contact a member of the team or speak with your usual Fox Williams contact.
