July 18, 2023

Bringing small high-harm platforms into the Online Safety Bill

by Professor Lorna Woods, Professor of Internet Law, University of Essex; William Perrin, Trustee, Carnegie UK; Maeve Walsh, Carnegie Associate

In our final blog to coincide with the Online Safety Bill’s Lords Report Stage, we reiterate the concerns we’ve had throughout the Bill’s passage on the unmitigated risks posed by small but high-harm platforms.

In its current form, the OSB splits regulated services into categories: Category 1, Category 2a and Category 2b. Category 1 services, the largest user-to-user services with certain ‘functionality’ (defined in cl 208), receive the toughest risk mitigation duties: they must provide user empowerment tools and effective terms and conditions in relation to content that would formerly have been referred to as content harmful to adults. Category 1 services are also under obligations with regard to fraudulent advertising (as are Category 2a search services), and more detailed obligations generally. (See this helpful comparison table, prepared by Dr Alex Antoniou, for the different requirements on services in each category.) The current definition in the OSB means that size is an essential criterion by which OFCOM will judge whether services fall into this category.

The threshold criteria for Category 1 services are to be defined in an SI (Schedule 11, para 1) based on the number of users of a service and the service ‘functionalities’ (i.e. what the service allows users to do) as well as ‘other characteristics…or factors…the Secretary of State considers relevant’. OFCOM has just published a call for evidence to inform its advice to Government on the thresholds for these categories.

The Government’s addition, at Commons Report stage, of “characteristics” to the criteria for categorisation was a partial acknowledgement of the gap in the regime. However, as argued by former DCMS Minister Jeremy Wright, this did not go far enough to stop size being the dominant criterion for categorisation of platforms.

Some small platforms, however, pose a very high risk to users but, because of their size, won’t meet the threshold for Category 1. These small high-harm services, which include dedicated hatred and harassment sites, will therefore not be subject to “Triple Shield” risk mitigation measures appropriate to the level of harm they pose to users. This opens a large gap in a risk-based regime, both in terms of protection for users and in OFCOM’s ability to intervene.

These small services may have evolved into high-harm sites organically, or they may – once the regime is in force – become so as a means of escaping the regulatory requirements that will otherwise be imposed only on larger sites.

The House of Lords attempted to address this issue at Committee stage, with two amendments put forward: amendment 192, tabled by Baroness Morgan, and amendment 192a, tabled by Lord Griffiths. Despite cross-party support in the committee debate, the Government stated it was ‘not taken by these amendments’, on the grounds that the ‘power of policy discourse online’ and the ‘highest reach’ are concentrated in the largest platforms. Amendment 192 was therefore withdrawn, and amendment 192a not moved.

Baroness Morgan of Cotes has now tabled a simple amendment (245) to Schedule 11, supported by Baroness Kidron, Lord Stevenson of Balmacara and Lord Clement-Jones. By changing the ‘and’ between the size and functionality thresholds in Schedule 11(1)(4) to an ‘or’, it would allow OFCOM to bring smaller types of sites into Category 1 and make the regime more risk-based. We support this amendment: it closes the gap and extends the protection of the Triple Shield to adult users of these services by requiring OFCOM to consider functionality independently of size when determining the categorisation of a service.