AI use cases

AI is used in the technology industry at all stages of the supply chain. At the design stage, brands are using it to help predict trends and design the most saleable products.

At the manufacturing stage, AI machines and robots are used to reduce costs and waste (for example, by tracking store footfall and site visits to forecast demand more accurately) and to provide quality assurance in the end product.
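By way of illustration only, the sketch below shows the kind of simple demand forecast such a system might produce, using a moving average over hypothetical weekly sales figures. The numbers and the forecast_next_week function are invented for this example; real systems would draw on far richer signals such as footfall, promotions and seasonality.

```python
# Illustrative only: a naive demand forecast from hypothetical weekly sales data.
from statistics import mean

weekly_units_sold = [120, 135, 128, 150, 160, 155]  # invented history

def forecast_next_week(history, window=3):
    """Forecast next week's demand as the mean of the last `window` weeks."""
    return mean(history[-window:])

print(f"Forecast for next week: {forecast_next_week(weekly_units_sold):.0f} units")
```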

Within the supply chain, predictive logistics can be used to predict where bottlenecks may occur during delivery and mitigate delays caused by, for example, weather conditions.

At the front end, AI is being rolled out to enhance the provision of online services: for example, search optimisation, chatbots and the use of generative AI to create accurate product descriptions.

Potential issues

A number of potential legal issues arise with the AI used in the above scenarios.

  • Intellectual property: IP law has yet to catch up with developments in AI technology. Whilst the controlling algorithms and underlying data are capable of being protected by copyright, database and patent rights, the output of AI is harder to protect. The key question here (as yet unanswered) is who owns the IP in the output: the machine or the human behind it?
  • Data protection: Where AI processes personal data, businesses must ensure they have appropriate compliance mechanisms in place. Some AI use cases involve technology that processes special category data (for example, virtual fitting rooms whose body mapping technology processes individuals’ biometric data) and will therefore require a higher degree of compliance with data privacy legislation.
  • Bias: AI can only make decisions based on the data it has available and, if that data comes from only one source (whether that be one type of individual or a single business or type of business), the output of that AI is likely to be skewed. This problem can be mitigated by providing data from more diverse sources: for example, testing body mapping technology on a range of body types, genders and ethnicities to ensure the AI is trained to recognise and map a diverse range of bodies and can recreate them faithfully within the virtual fitting room (a simple illustration of this kind of subgroup testing follows this list).
  • Accuracy and reliability: AI cannot always be relied on to make accurate and reliable decisions. For example, an AI which predicts potential supply chain disruptions would be unlikely to foresee something as left-field as the COVID-19 pandemic. Similarly, generative AI has a propensity to create false information (known as hallucination) where it cannot generate a correct answer to the question being asked of it, so it invents one instead. This can have embarrassing consequences (see, by way of cautionary example, the US lawyers who were forced to admit their reliance on ChatGPT when the case law they cited was found not to exist) and potentially costly ones (for example, if an AI predicted huge demand for Crocs which never materialised, leading a company to spend money unnecessarily producing a glut of product which never sells).
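By way of illustration only, the subgroup testing described in the bias point above often amounts to comparing a model's accuracy across demographic groups rather than relying on a single aggregate figure. The sketch below uses invented evaluation results and labels; it is not drawn from any real body mapping system.

```python
# Illustrative only: check whether a model's accuracy is consistent across
# demographic subgroups instead of relying on a single aggregate figure.
from collections import defaultdict

# Hypothetical evaluation results: (subgroup label, prediction was correct?)
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

per_group = defaultdict(lambda: [0, 0])  # subgroup -> [correct, total]
for group, correct in results:
    per_group[group][1] += 1
    if correct:
        per_group[group][0] += 1

for group, (correct, total) in per_group.items():
    print(f"{group}: accuracy {correct / total:.0%} over {total} samples")

# A large gap between subgroups suggests the training data is skewed towards
# one group and should be rebalanced or augmented.
```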

The AI Act: latest EU regulation

In June this year, MEPs adopted the European Parliament’s negotiating position on its new draft law regulating AI, known as the AI Act. The intention of the Act is to provide clarity around the obligations and requirements on AI developers, deployers and users.

The Act sets out a new regulatory framework which classifies AI by the different levels of risk it poses to the user:

(i) Unacceptable risk: systems which pose a clear threat to the safety, livelihoods, and rights of people (for example, social scoring by governments) will be banned outright.

(ii) High risk: technology used in specific high-risk sectors (for example, education and vocational training, employment, law enforcement and the administration of justice) is subject to strict requirements (for example, adequate risk assessments and appropriate human oversight) before being placed on the market.

(iii) Limited risk: systems such as generative AI chatbots (for example, ChatGPT) are subject to specific transparency obligations, such as making users aware that they are interacting with a machine so that they can make fully informed decisions.

(iv) Minimal or no risk: the vast majority of technology can be freely used as it poses minimal risk to the user (for example, spam filters).

The Parliament’s priority is to ensure that AI systems are safe, transparent, traceable, non-discriminatory, and environmentally friendly.

The latest UK AI regulation

AI White Paper

Earlier this year, the UK Department for Science, Innovation and Technology published its White Paper entitled “A pro-innovation approach to AI regulation”. This set out a framework based on five principles:

i. Safety, security, and robustness: AI should be safe, and risks should be identified, assessed, and managed;

ii. Appropriate transparency and explainability: the person or entity affected should know when AI is being used and be able to understand the decisions it makes;

iii. Fairness: AI should not contravene the legal rights of individuals or businesses, e.g. by discriminating against them or creating unfair market outcomes;

iv. Accountability and governance: use and supply of AI should be overseen and clearly accounted for; and

v. Contestability and redress: where AI makes a harmful decision, or one which creates a material risk, there should be a route to challenge that decision.

The focus of the White Paper appears to be on regulating the use of AI rather than the technology behind it, as the UK government seeks to establish the UK as a “science and technology superpower”.

The five-principle framework also includes a range of tools for AI deployers to use, including a regulatory sandbox, assurance techniques, and technical standards.

The regulatory sandbox will provide a controlled testing area for companies to experiment with new AI technologies with greater flexibility but within safe and ethical guidelines.

Whilst the Paper did not include plans for appointing a dedicated AI regulator, the UK government announced last year the launch of the UK AI Standards Hub to help “advance trustworthy and responsible AI” with a focus on “governance tools and innovation mechanisms”.

The Paper places the onus on existing UK regulators (such as the FCA, the CMA and the ICO) to assess the risks and produce guidance based around the framework using their own sector-specific expertise. This non-statutory approach contrasts with proposals in both the US and EU to further legislate use of AI tools and systems.

Growing concerns about AI safety

More recently, in light of the growing usage of generative AI and the concerns voiced about its “existential risks”, Rishi Sunak announced that the UK will host a global summit on AI safety.

What is the impact?

The technology sector will benefit from the UK’s light-touch, innovation-focussed regulatory approach, as companies can take advantage of the complementary tools being introduced, such as the regulatory sandbox for testing new AI software.

However, many technology companies operate across jurisdictions and may therefore face issues as the UK’s AI and data privacy regulatory regimes diverge from those of the US and the EU; as a result, they may not be able to reap the full benefits of the UK’s light-touch approach.

Next steps for tech companies using AI

For most technology companies, the priority of any use of AI is to improve the user experience in terms of efficiency, personalisation and ease.

To facilitate their use of AI amid the evolving regulatory landscape in this area, companies should consider:

  • implementing policies (covering both internal and external use) governing how AI is developed, implemented and used by the company; and
  • establishing an AI governance body or individual to lead and track the company’s use of AI and its position in respect of current and proposed legislation.
