AI tools such as ChatGPT, Gemini and Claude are increasingly being used by employees not only to carry out work tasks but also to raise grievances, submit data subject access requests (DSARs), and even prepare Tribunal claims.
Employers are already finding that AI is increasing the number of disputes and making them far more time-consuming and less predictable. However, there are effective ways to fight back against the machine.
Employees are now using AI to draft grievances, generate DSARs, produce large volumes of correspondence and prepare Tribunal claims. These developments can significantly increase the administrative burden on HR and legal teams. Yet they also create weaknesses that employers can, and should, expose.
Uploading internal correspondence, client information or personnel records into AI platforms may amount to a serious breach of confidentiality or even a data protection violation. Employers should make this clear in their policies and deal with any such breach under existing confidentiality and data protection procedures.
AI tools can generate emotive or aggressive correspondence. Employers should not hesitate to classify AI-generated abuse or unfounded allegations as harassment or bullying, applying existing policies.
Employers should also be prepared to push back against high volumes of AI-generated correspondence. The Employment Tribunals have shown that they are willing to criticise claimants who flood employers with correspondence, and such tactics may lead to costs awards against claimants on the basis that a claim has been pursued vexatiously or unreasonably.
AI systems frequently produce plausible-sounding but inaccurate information. Many employers have faced detailed complaints about legislation that simply does not apply to the dispute.
Employers should be alert to references to non-existent case law, misquoted legislation and legal tests that have no bearing on the dispute.
Employers should take legal advice and make sure the claims being threatened are credible – otherwise valuable time can be wasted responding to legal claims that do not exist or are irrelevant to the particular circumstances.
Employers should look to exploit AI’s errors. At grievance hearings or in Tribunal proceedings, focus on inaccurate legal references, fabricated authorities and inconsistencies within the employee’s written submissions.
If the employee pursues a Tribunal claim, the employer may be able to have the claim struck out as misconceived or even secure a costs award.
An employee who relies heavily on AI may struggle to explain or defend their written position at an internal hearing or in the Tribunal (since AI is not an eligible companion in a formal hearing).
Employers should therefore use in-person meetings to test the employee’s position. Inconsistencies between written and oral submissions are almost inevitable – and can be highly persuasive when assessing credibility, investigating complaints, managing disciplinary or grievance hearings, or defending Tribunal claims.
AI tools are not legal advisers. Employers can highlight that AI-generated submissions, including the employee’s prompts and questions, are not privileged and may be disclosable in Tribunal litigation.
AI has the potential to make employment disputes more complex, but it does not change the fundamentals. Employers remain entitled to protect confidential information, uphold standards of trust and conduct, and insist on accuracy and accountability.
With clear policies, careful management of grievances and DSARs, and strategic use of the evidential weaknesses that AI can create, employers can limit the disruption caused by this emerging trend – and show that human judgment still has the upper hand.