This article is the second in our series exploring the impact of artificial intelligence (AI) in the workplace, focusing on the legal and regulatory framework.
At present, there is no AI-specific legislation or regulation in the UK that directly addresses human resources or workplace use, although there is a patchwork of existing laws which indirectly impact on the use of AI.
Employers have a duty under the Equality Act 2010 (EqA) not to discriminate, either directly or indirectly, on the basis of protected characteristics such as race, sex, age, disability, religion or belief, or sexual orientation.
‘Human being status’ is not currently listed as a protected characteristic for the purposes of the discrimination regime (although it is possible that arguments may be put forward to extend the coverage of race discrimination). This means there will be no recourse under discrimination law if an AI system discriminates against a human being — for example, on the assumption that their input is in some way inferior.
Under section 13 of the EqA, direct discrimination occurs when an individual receives less favourable treatment because of a protected characteristic. Direct discrimination may arise where, for example, an AI recruitment tool trained on historical data in which mostly men were hired learns to downgrade CVs that include terms associated with women, such as “women’s hockey club captain”. Another example would arise if AI is used during online interviews to assess facial expressions, tone, or speech patterns, and such assessments disadvantage neurodivergent candidates or those with speech impediments.
Under section 19 of the EqA, indirect discrimination arises where an apparently neutral practice places individuals with a protected characteristic at a particular disadvantage. For example, AI-based CV screening tools may automatically reject applications with gaps in employment history — a practice that disproportionately affects women and carers who are more likely to take career breaks.
Employers relying on AI systems may find it difficult to defend discrimination complaints if they cannot explain how a particular decision was made, particularly once an inference of discrimination has been established and the burden of proof has shifted to the employer.
It is therefore imperative that employers understand how each system operates, ensure that it is properly and regularly tested and monitored, and maintain an accessible evidence trail for each decision.
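By way of illustration only, the routine testing referred to above might include a simple disparate-impact check on a screening tool's outcomes. The sketch below is a minimal, hypothetical Python example: it assumes the employer can extract decisions by group from the tool's logs, and it applies the US EEOC "four-fifths" heuristic purely as an illustrative benchmark — it is not a test prescribed by the EqA, and all names are invented.

```python
# Minimal illustrative sketch: compare selection rates across groups using an
# AI screening tool's decision log. The 0.8 ("four-fifths") threshold is a
# US EEOC heuristic used here purely for illustration; it is not a legal
# test under the Equality Act 2010.

from collections import Counter

# Hypothetical decision log: (group, outcome) pairs exported from the tool.
decisions = [
    ("group_a", "selected"), ("group_a", "rejected"), ("group_a", "selected"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "selected"),
]

totals = Counter(group for group, _ in decisions)
selected = Counter(group for group, outcome in decisions if outcome == "selected")

# Selection rate per group, compared against the highest-scoring group.
rates = {group: selected[group] / totals[group] for group in totals}
benchmark = max(rates.values())

for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, {ratio:.2f} of highest rate [{flag}]")
```

A check of this kind does not establish or exclude discrimination by itself, but running it regularly and retaining the output is one practical way of building the evidence trail described above.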
Data protection laws in the UK will soon change in relation to the use of personal data to carry out automated decision-making (ADM).
Currently, the UK GDPR (Article 22) gives individuals the right not to be subject to decisions based solely on automated processing (that is, without any meaningful human intervention) that produce legal or similarly significant effects on them — such as deciding whether a candidate is accepted or rejected in the recruitment process. Employers who rely solely on AI output to, for example, assess or ‘filter’ CVs or analyse video interviews risk breaching the UK GDPR.
However, with the enactment of the Data (Use and Access) Act 2025, which reforms certain aspects of the UK GDPR and the Data Protection Act 2018, there will soon be a more permissive framework within which employers can make such solely automated (that is, AI-led) decisions. The relaxation is only partial. The restrictions under the existing law will continue to apply to special category data, and employers will still be required to incorporate certain safeguards for solely automated decision-making, including allowing individuals to challenge such decisions and request human intervention. The usual rules regarding lawfulness (that is, establishing a lawful basis under the UK GDPR), transparency (having a clear privacy policy) and the need for a data protection impact assessment, among other aspects of UK GDPR compliance, will also continue to apply.
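As a purely illustrative sketch of the safeguards described above, an employer's ADM pipeline might record each automated decision together with its reasons, and route any challenge to a human reviewer whose outcome supersedes the automated one. The Python below is hypothetical — the class, function and field names are invented for illustration and are not a statement of what the legislation requires in technical terms.

```python
# Hypothetical sketch of ADM safeguards: every solely automated decision is
# logged with its reasons, and a data subject can challenge it, which routes
# the case to a human reviewer whose decision replaces the automated one.
# All names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str                 # e.g. "rejected"
    reasons: list[str]           # stored so the decision can be explained later
    human_reviewed: bool = False

class ADMPipeline:
    def __init__(self):
        self.log: dict[str, Decision] = {}

    def decide(self, subject_id: str, outcome: str, reasons: list[str]) -> Decision:
        decision = Decision(subject_id, outcome, reasons)
        self.log[subject_id] = decision     # evidence trail for each decision
        return decision

    def challenge(self, subject_id: str, reviewer_outcome: str) -> Decision:
        # Safeguard: a challenge always results in human intervention, and the
        # human reviewer's outcome replaces the automated one.
        decision = self.log[subject_id]
        decision.outcome = reviewer_outcome
        decision.human_reviewed = True
        return decision

pipeline = ADMPipeline()
pipeline.decide("candidate-42", "rejected", ["CV score below threshold"])
print(pipeline.challenge("candidate-42", "progressed to interview"))
```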
Employees are increasingly submitting data subject access requests (DSARs) in order to find information to support potential grievances, often with the support of AI. This comes at a time when businesses’ use of AI is increasing the amount of data (including automated meeting transcripts and action points) that will need to be disclosed.
In response to the increase in DSARs, employers are themselves seeking to rely on AI to help with their responses, although it is important to tread carefully: responsibility for carrying out appropriate searches and responding appropriately to a DSAR lies with the controller (who cannot blame the AI). DSARs need to be handled on a case-by-case basis, and legal nuances often apply which AI may be unable to pick up.
An unfair dismissal occurs when an employer terminates an employee without a fair reason or fails to follow a fair procedure.
As AI-driven analytics become more common in performance monitoring, there is a risk that dismissal decisions could be based wholly or partly on automated assessments. Employment tribunals are likely to examine closely whether it was reasonable for an employer to rely on AI-generated data when deciding to dismiss an employee.
Situations may arise where staff effectively report to an AI system (as already seen in some ride-hailing platforms), which may lack the balance, patience, and tolerance expected in managing performance issues. This could lead to complaints of machine-led bullying and, if the conduct is serious enough, to constructive dismissal claims.
Widespread adoption of AI may result in cost savings and, consequently, redundancies. There will be no need to apply concepts such as pooling or bumping to the AI system that has replaced the employee.
It will be challenging for employees to argue that services outsourced to a new AI-driven service provider fall under the Transfer of Undertakings (Protection of Employment) Regulations 2006 (TUPE). This may leave the outgoing service provider with unexpected redundancy costs. Accordingly, service providers should ensure that contracts account for the possibility that TUPE may not apply when the contract ends.
The adoption of AI is likely to require changes to employee roles, and for those roles to evolve continually as the technology develops. This comes as the Government seeks to limit the use of the “fire and rehire” mechanism, which may result in employers resorting to other mechanisms to introduce changes (such as seeking consent in return for some form of consideration), or even to redundancies.
There is growing evidence that employees are submitting more grievances, often drafted with AI support. A grievance remains valid even if AI-assisted, and employers should meet employees in person to clarify issues, particularly given AI’s tendency to generate inaccuracies or “hallucinations”.
It is anticipated that employers may seek to introduce informal mediation or resolution processes so as to try to avoid an avalanche of time-consuming and potentially detrimental (from an industrial relations perspective) grievances.
If OpenAI’s ChatGPT or similar tools were used to draft a grievance, confidential information may have been shared externally. This could create separate disciplinary concerns, which must be handled carefully to avoid any perception of victimisation for raising the grievance.
If a grievance leads to a claim, any prompts used to draft or respond to it may be disclosable and could reveal the intent of the parties — with possible ramifications for both sides.
The precise impact of AI on employment litigation remains uncertain, though early signs suggest that claimants are using AI to draft claims — sometimes resulting in unclear, inaccurate, or poorly structured submissions.
Claimants relying heavily on AI may also struggle to understand their own cases as proceedings progress. Over time, however, AI tools are likely to become more sophisticated and capable of supporting litigation preparation and presentation.
There is also evidence of AI being used to generate excessive or repetitive correspondence, leading tribunals to consider applications to limit the extent of such communications.
Employers remain responsible for issues arising from the use of AI in HR contexts. However, it is anticipated that employers will increasingly attempt to pass liability to AI providers or to seek insurance to cover these risks. In the context of driverless cars, AI system providers and vehicle manufacturers have pledged to pick up liability for potential crashes, but there does not appear to be a similar willingness in the context of HR-related matters.
Although the EU AI Act is an EU Regulation and will therefore not form part of UK law, it has wide territorial scope, which means it can affect operators outside the EU as well as within it. It may, for example, apply to UK organisations that place or supply AI systems on the EU market, or that make the output of such systems available in the EU.
The Act imposes significant obligations in relation to high-risk AI systems, including those used for recruitment or performance management. How the Act applies will depend on the role an organisation plays as designated by the Act (e.g. provider, deployer, importer or distributor), and may mean that a range of requirements apply, including maintaining meaningful human oversight mechanisms and risk management protocols, keeping documentation up to date, and carrying out assessments.
The European Commission has recently announced plans to reform the AI Act (and the EU GDPR, among other laws), so the Act may change before it fully takes effect (the bulk of its provisions were set to take effect in August 2026, though this deadline may be extended).