AI is used in the technology industry at all stages of the supply chain. At the design stage, brands are using it to predict trends and design the most saleable products.
At the manufacturing stage, AI machines and robots are used to reduce costs and waste (for example, by tracking store footfall and site visits and forecasting demand more accurately) and to provide quality assurance for the end product.
Within the supply chain, predictive logistics can be used to anticipate where bottlenecks may occur during delivery and to mitigate delays caused by, for example, adverse weather conditions.
At the front end, AI is being rolled out to enhance the provision of online services: for example, search optimisation, chatbots and the use of generative AI to create accurate product descriptions.
A number of potential legal issues arise from the use of AI in the above scenarios.
In June this year, MEPs adopted the European Parliament’s negotiating position on its new draft law regulating AI, known as the AI Act. The Act is intended to clarify the obligations and requirements placed on AI developers, deployers and users.
The Act sets out a new regulatory framework which classifies AI systems according to the level of risk they pose to the user:
(i) Unacceptable risk: systems which pose a clear threat to the safety, livelihoods, and rights of people (for example, social scoring by governments) will be banned outright.
(ii) High risk: technology used in specific high-risk sectors (for example, education and training, employment, law enforcement and the administration of justice) is subject to strict requirements (for example, adequate risk assessments and appropriate human oversight) before being placed on the market.
(iii) Limited risk: systems subject to specific transparency obligations (for example, generative AI chatbots like ChatGPT) must comply with those obligations, for example by making users aware that they are interacting with a machine so that they can make fully informed decisions.
(iv) Minimal or no risk: the vast majority of AI systems (for example, spam filters) pose minimal risk to the user and can be used freely.
The Parliament’s priority is to ensure that AI systems are safe, transparent, traceable, non-discriminatory, and environmentally friendly.
Earlier this year, the UK Department for Science, Innovation and Technology published its White Paper entitled “A pro-innovation approach to AI regulation”. This set out a framework based on five principles:
i. Safety, security, and robustness: AI should be safe, and risks should be identified, assessed, and managed;
ii. Appropriate transparency and explainability: the relevant person or entity should know that AI is being used and be able to understand the decisions it makes;
iii. Fairness: AI should not contravene the legal rights of individuals or businesses, for example by discriminating against them or creating unfair market outcomes;
iv. Accountability and governance: the use and supply of AI should be overseen and clearly accounted for; and
v. Contestability and redress: where an AI system makes a harmful decision, or one which creates a material risk, there should be a route to challenge that decision.
The focus of the White Paper appears to be on regulating the use of AI rather than the technology behind it, as the UK government seeks to establish the UK as a “science and technology superpower”.
The five-principle framework is complemented by a range of tools for AI deployers, including a regulatory sandbox, assurance techniques and technical standards.
The regulatory sandbox will provide a controlled testing environment in which companies can experiment with new AI technologies with greater flexibility, while remaining within safe and ethical guidelines.
Whilst the Paper did not include plans to appoint a dedicated AI regulator, the UK government announced last year the launch of the UK AI Standards Hub to help “advance trustworthy and responsible AI”, with a focus on “governance tools and innovation mechanisms”.
The Paper places the onus on existing UK regulators (such as the FCA, the CMA and the ICO) to assess the risks and produce guidance based on the framework, using their own sector-specific expertise. This non-statutory approach contrasts with proposals in both the US and the EU to legislate further on the use of AI tools and systems.
More recently, in light of the growing use of generative AI and the concerns voiced about its “existential risks”, Rishi Sunak announced that the UK will host a global summit on AI safety.
The technology sector will benefit from the UK’s light-touch, innovation-focussed regulatory approach, as companies can take advantage of the complementary tools being introduced, such as the regulatory sandbox, to test new AI software.
However, many technology companies operate across jurisdictions and may therefore face difficulties as the UK’s AI and data privacy regimes diverge from those of the US and the EU; as a result, they may not be able to reap the benefits of the UK’s light-touch regulatory approach.
For most technology companies, the priority in any use of AI is to improve the user experience in terms of efficiency, personalisation and ease of use.
To facilitate their use of AI within this evolving regulatory landscape, companies should consider: