AI tools such as ChatGPT, Gemini and Claude are increasingly being used by employees not only to carry out work tasks but also to raise grievances, submit data subject access requests (DSARs), and even prepare Tribunal claims.

Employers are already finding that AI is increasing the number of disputes and making them more time-consuming and less predictable. However, there are effective ways to fight back against the machine.

The New Challenge for Employers

Employees are now using AI to:

  • Draft detailed grievances, often quoting legal principles or case law they do not understand.
  • Pursue complaints doggedly, responding immediately and comprehensively, and inundating the employer with correspondence, criticism and further complaints. Sending more than a dozen emails a week is not uncommon.
  • Submit DSARs covering vast categories of data.
  • Produce Tribunal pleadings that appear impressive but regularly rely on inaccurate facts or misunderstood law.

These developments can significantly increase the administrative burden on HR and legal teams. Yet they also present weaknesses that employers can, and should, expose.

How Can Employers Rise to the Challenge?

1. Confidentiality and Data Protection

Uploading internal correspondence, client information or personnel records into AI platforms may amount to a serious breach of confidentiality or even a data protection violation.

Employers should:

  • Introduce a clear AI usage policy, including rules on confidentiality, data security and misuse. Reiterate that company data must not be entered into public AI tools.
  • Train managers and HR teams to recognise AI-generated communications. (Ironically, AI can be used to assess the likelihood that an employee’s correspondence was generated by AI.)
  • Emphasise that uploading confidential data into AI systems could breach both contractual and data protection obligations.
  • Treat misuse of AI tools at work, or breach of confidentiality, as potential disciplinary matters.

2. Harassment and Misuse of Technology

AI tools can generate emotive or aggressive correspondence. Employers should not hesitate to classify AI-generated abuse or unfounded allegations as harassment or bullying, applying existing policies.

Employers should also be prepared to:

  • Explain that time is needed to respond to correspondence.
  • Consolidate multiple messages into a single response.
  • Keep a clear record if they are being bombarded by the complainant.

The Employment Tribunals have shown that they are willing to criticise claimants who flood employers with correspondence. Such tactics may lead to costs awards against claimants on the basis that a claim has been pursued vexatiously or unreasonably.

3. Inaccurate Information and AI “Hallucinations”

AI systems frequently produce plausible-sounding but inaccurate information. Many employers have faced detailed complaints about legislation that simply does not apply to the dispute.

Employers should be alert to:

  • Non-existent cases or legislation cited in grievances.
  • Factual “details” that cannot be substantiated.

Employers should take legal advice and check that the claims being threatened are credible – otherwise valuable time can be wasted responding to legal claims that do not actually exist or are not relevant to the particular circumstances.

Employers should look to exploit AI’s errors. At grievance hearings or in Tribunal proceedings, focus on:

  • Inconsistencies between written and oral evidence;
  • Factual inaccuracies arising from AI “hallucinations”; and
  • The employee’s inability to explain their own case.

If the employee pursues a Tribunal claim, the employer may be able to have the claim struck out as misconceived or even secure a costs award.

4. Employees Who Cannot Defend Their Own Submissions

An employee who relies heavily on AI may struggle to explain or defend their written position at an internal hearing or in the Tribunal (since AI is not an eligible companion in a formal hearing).

Employers should therefore use in-person meetings to test the employee’s position. Inconsistencies between written and oral submissions are almost inevitable – and can be highly persuasive when assessing credibility, investigating complaints, managing disciplinary or grievance hearings, or defending Tribunal claims.

5. Disclosure

AI tools are not legal advisers. Employers can highlight that AI-generated submissions, including the employee’s prompts and questions, are not privileged and may therefore be disclosable in Tribunal litigation.

Looking Ahead

AI has the potential to make employment disputes more complex, but it does not change the fundamentals. Employers remain entitled to protect confidential information, uphold standards of trust and conduct, and insist on accuracy and accountability.

With clear policies, careful management of grievances and DSARs, and strategic use of the evidential weaknesses that AI can create, employers can limit the disruption caused by this emerging trend – and show that human judgment still has the upper hand.
