Artificial intelligence is fundamentally changing the workplace, driven by rapid technological advances and widespread, albeit inconsistent, adoption across businesses. While some employers are still experimenting, often using AI to assist with research, draft emails, or manage calendars, others are already strategically embedding AI into end-to-end workflows, using enhanced service delivery and cost savings to steal a march on the competition.

The power of AI can be illustrated by driverless technology. If one vehicle narrowly avoids a crash, every vehicle in the network can learn from that event, unlike humans whose learning is typically limited to personal experience or small groups. Indeed, it has been suggested that a particular make of driverless car has only crashed when operated by a human. Extrapolated to the workplace, this ability for AI to “learn once, apply everywhere” suggests transformative potential for operational efficiency, compliance, and even behaviour management.

Predictions vary, but some suggest AI will materially impact over 70% of job roles, potentially displacing up to 30% of the workforce in the near term. Entry-level roles are already being affected, and this trend is likely to accelerate. While roles may change or disappear, it’s essential to remember that new opportunities will also emerge, often in ways we cannot yet predict. Historically, technological disruption has given rise to new industries and types of work. Moreover, AI cannot replicate human qualities such as strategic decision-making, intuition, and creativity, at least not yet. For now, many tasks still require human oversight or intervention.

On a more granular level, AI raises difficult questions around liability. What happens if an AI system discriminates against, harasses, or otherwise mistreats an employee? In the automotive world, manufacturers have often accepted responsibility for autonomous vehicle errors. But the employment context is more complex. Employers are likely to remain liable for the outcomes of decisions made by AI systems operating under their control. That risk is heightened by the current lack of transparency around how many AI systems reach decisions. If a decision leads to a discrimination claim, how can an employer rebut an inference if they can’t explain the rationale?

Meanwhile, employees are increasingly using AI tools themselves, often without their employer’s knowledge. Some may no longer be performing core elements of their role at all. With AI doing the heavy lifting, it becomes easier to work two jobs remotely, even from a beach in the Bahamas. These developments raise serious questions around data security, misuse of confidential information, and control over business-critical processes. Employers must ask: how is AI training itself, and is it learning from our proprietary data?

AI is already widely used in recruitment, from screening CVs to scoring application forms. This has created a technological arms race, with candidates using AI to craft tailored applications, even instructing systems to “rewrite this so it doesn’t look like AI.” Such dynamics risk turning the hiring process into a battle of prompts and algorithms. Moreover, AI systems used in recruitment may inadvertently perpetuate bias, particularly if trained on historical data that reflects existing inequalities.

Looking ahead, HR processes such as performance management are likely to become more data-driven. AI may assess employee objectives by analysing all communications and outputs, reducing subjectivity and removing room for obfuscation. But the trade-off may be a loss of context, nuance, and the human touch that makes performance conversations meaningful.

When it comes to career progression, the question may no longer be “Who is best for the role?” but “Would a machine be better suited?” And if junior tasks are automated, how will employees gain the foundational experience needed to progress? Employers may need to invest in simulators or structured development pathways to fill these gaps, a costly but necessary adaptation.

AI is also reshaping industrial relations. Employees now have access to relatively sophisticated legal tools, even if the advice currently provided is not always accurate or strategic. This has already led to an increase in both the number and sophistication of grievances. Some employees are using AI to draft claims and adopt litigation tactics previously reserved for those with legal representation. This, in turn, increases the cost and complexity of managing disputes.

At the same time, the growing use of AI to record, summarise, and analyse workplace interactions creates a digital paper trail that can be weaponised. Meeting notes or action points may unintentionally portray employees in an unfavourable light. This heightens the burden of responding to grievances and data subject access requests, with some employers now offering settlement payments just to avoid the administrative cost of compliance.

In some sectors, such as logistics or ride-hailing, we already see workers being managed by algorithms with little or no human oversight. This trend is likely to spread. Without proper regulation, there is a real risk of dystopian outcomes for certain segments of the workforce. How does one impress an AI manager that never sleeps, never holidays, and always remembers your performance metrics? Watercooler moments become irrelevant when your every communication is monitored. The soft skills traditionally used to build rapport and navigate organisational politics may lose their currency.

All of this can feel overwhelming. But it’s still early days, and no one knows exactly how this will play out. Some voices in the financial markets suggest AI-related stocks are overhyped and overvalued. The truth likely lies somewhere between the hype and the scepticism, with false starts, unintended consequences, and regulatory growing pains along the way.

One pressing concern is the skills gap. Many workers displaced by AI will lack the capabilities needed for emerging roles, which are likely to be more complex and technology-driven. This presents major societal challenges if large numbers are made redundant in a short time. It also creates a pressing need for retraining and reskilling, something employers and governments must prioritise as AI adoption accelerates.

The legal and regulatory implications will be addressed in part two of this article.

(This article was “mostly” drafted by a real human being).
