The Online Safety Bill is a new flagship piece of legislation intended to make the UK the “safest place in the world to be online”. The Bill is set to become law in the United Kingdom after it receives Royal Assent in the coming weeks.  

Senior executives and employees of tech companies cannot ignore this novel piece of legislation for much longer, especially given the civil and criminal liability the Bill imposes for non-compliance. What do tech firms need to know?

The purpose of the UK Online Safety Bill

The purpose of the Bill is to introduce a new regulatory regime to protect children and adults and tackle illegal content on the internet. 

The Bill imposes far-reaching obligations on tech companies by making them liable for users’ safety on their platforms. This is in recognition of the fact that online content and activity can cause serious harm to individuals, and online platforms have a role to play in mitigating this harm, especially to the most vulnerable in society.

The exact date on which the Bill’s provisions will become fully operational is still unknown. The Secretary of State will need to enact secondary legislation, and the Office of Communications (OFCOM), which has been appointed as the regulator in charge of overseeing the online safety regime, will need to publish codes of practice under the Bill.

Overview of the UK Online Safety Bill

Under the Bill, certain businesses will be required to implement systems and processes to improve user safety online. The following service providers will be caught by the new piece of legislation:

  1. providers of “user-to-user services”: this captures platforms (such as YouTube) where users are able to encounter content generated directly on the service, or uploaded to or shared on the service, by another user; 
  2. providers of “search services”: this captures platforms (such as Google) that are or include a search engine; and
  3. providers of internet services which publish or display pornographic content.

The Bill sets out an array of duties on tech companies, differentiating between various categories of service providers based on their risk profile.

For example, “Category 1” captures the biggest platforms with the highest numbers of users; these services will be subject to additional duties as compared with the lower tier of “Category 2” services.

Types of content affected by the UK Online Safety Bill

Illegal content

The first type of content that the Bill addresses is illegal content, such as that relating to child sexual abuse, terrorism, fraud and self-harm. A “triple shield” approach will be used to tackle this: all in-scope service providers must not only remove illegal content but also take steps to ensure it does not appear on their platforms in the first place.

Category 1 services are also under an obligation to remove content that is banned by their own terms and conditions and to empower adult users to have greater control over the content they see and who they engage with.

“Lawful but harmful content”

The second type of content that tech firms will need to address is content that is lawful but harmful or age-inappropriate for children. This includes content relating to pornography, online abuse, self-harm and eating disorders, content which promotes or provides instructions for suicide, and content that depicts or encourages serious violence or bullying.

Tech firms within the scope of the Bill will be required to impose age restrictions on their platforms through age assurance technologies and must be able to evidence their enforcement of these age limits.

Other duties

Tech companies will also be required to undertake and publish risk assessments in relation to illegal content and children (where services are likely to be accessed by children) as part of their new duties. This will include determining the risk of individuals encountering illegal content on a service, the risk of harm presented by the illegal content and how the operations and functionalities of a service may reduce or increase these risks.

Companies will then be under a duty to put in place proportionate measures to effectively mitigate and manage the risks of harm from online content.

Service providers will also need to put in place reporting systems that enable users to report illegal content, along with a complaints procedure allowing users to seek redress where they feel that companies are not complying with their duties. OFCOM has been tasked with overseeing the implementation of, and compliance with, the Bill and will in due course publish codes of practice fleshing out its finer workings.

The implications

Although penalties under the new Bill must be appropriate and proportionate to the provider’s failure, the regime has the potential to bring about huge financial consequences, with fines of up to £18 million or 10% of a company’s annual global turnover, whichever is greater.
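
As a purely illustrative sketch, the cap is a simple greater-of calculation; the helper function and turnover figures below are hypothetical, and any actual penalty would be set by OFCOM on the facts of the case:

    # Illustrative only: fines are capped at the greater of £18 million
    # or 10% of a company's annual global turnover.
    def maximum_fine_gbp(annual_global_turnover_gbp: float) -> float:
        """Return the statutory cap on a fine for a given turnover (hypothetical helper)."""
        return max(18_000_000.0, 0.10 * annual_global_turnover_gbp)

    print(maximum_fine_gbp(500_000_000))  # 50000000.0 (10% of turnover exceeds the £18m floor)
    print(maximum_fine_gbp(50_000_000))   # 18000000.0 (the fixed £18m floor applies)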

The costs associated with non-compliance are not the only concern. Companies, particularly smaller and medium-sized ones, will also face considerable upfront costs to bring their platforms in line with the new safety regime and to comply with their new duties.

For instance, legal teams will need to be consulted and compliance processes and personnel will need to be put in place to assist companies with meeting their regulatory obligations.

Perhaps most significantly, the Bill provides for criminal sanctions against “senior managers” (as defined in the Bill) and other officers and employees for breaches of a number of different duties, the main ones being the information-provision duties.

“Senior managers” are defined as individuals who play a significant role in (1) the making of decisions about how the entity’s relevant activities are to be managed or organised or (2) the actual managing or organising of the entity’s relevant activities, that is, those activities relating to a company’s compliance with its regulatory requirements.

Where OFCOM issues an information notice to a provider, it will require the provider to name a senior manager responsible for complying with the notice. Where that senior manager provides false information, encrypts information such that OFCOM cannot understand it, or suppresses, destroys or alters information, they risk being fined, imprisoned or both.

Liability for corporate officers (such as a director, manager, secretary or similar officer) also exists under the Bill. For example, where a company commits a false communications offence, and it can be proven that the offence was committed with the consent of (or even due to the negligence of) an officer of the company, both the officer and the company have committed a criminal offence.

OFCOM also has the power to require officers to attend interviews and answer questions where it is investigating the failure, or possible failure, of a provider of a service to comply with a regulatory requirement.

Extra-territorial reach

Although the new Bill will be a law made by the UK Parliament, it has wider implications for providers of in-scope services based outside the UK where the provider has links with the UK. This has been deemed essential to fulfilling the Bill’s aim of protecting users online, since it would otherwise be too easy to avoid the law by being based elsewhere.

The Bill therefore extends beyond UK-based businesses and also captures services:

  • which target the UK market;
  • which have a significant number of UK users; or
  • where there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the UK presented by the relevant service.

Controversy

Some children’s charities say the Bill does not go far enough. Tech companies, on the other hand, including Meta and Signal, have already threatened to leave the UK market if they are forced to break end-to-end message encryption (one of their unique selling points) where there are concerns over child safety.

Such firms are unwilling to compromise on security and may prefer to withdraw their services from the UK market altogether. The UK Government has, for now, conceded on the requirement for companies to scan encrypted messages, but has left the issue open for future consideration, retaining the option to require such scanning as a last resort in order to stop child abuse on their platforms.

Another concern centres on the potential impact of the Bill on user privacy, as some provisions in the Bill require companies to monitor and report on user activity. This has sparked much debate about where the right balance lies between protecting users from harmful online content and preserving privacy and freedom of expression. Critics also flag that some governments may seek to introduce similar, far-reaching legislation under the guise of online safety and use it to monitor their citizens.

Looking ahead

The Bill has been a long time coming: the green paper on internet safety was published back in 2017. Six years on, the Bill is still not in force.

Further legislation will be needed to realise the full spectrum of protections envisaged by the Bill, especially given the number of issues which still remain unresolved at this stage. What remains clear, though, is that the Bill is here to stay. The Labour Party has already indicated it plans to continue the work of the current government in its aim of transforming user safety online.

There will undoubtedly be a significant impact on the way that tech companies are run. The effects of the new regulatory regime will also be felt globally, as many jurisdictions are set to follow the UK with online safety regimes of their own.

Contact us

If you have any questions about these issues in relation to your own organisation, please contact a member of the team or speak with your usual Fox Williams contact.

