What are the exclusions?
We note that a few specific exclusions are listed in Section 46(8), where physical or psychological harm flows from the “content’s potential financial impact”; the “safety and quality of goods featured in the content”; or “the way in which a service featured in the content may be performed”. We presume that the concession signalled by the government in relation to online scams (where facilitated by user-generated content)[1] will lead to a revision of the first exemption, which we welcome, subject to understanding the extent of that concession and how it will be reflected on the face of the Bill. The government should also consider whether the experience of the pandemic suggests that ed-tech should in fact be within the scope of the regime, to support teachers and parents in using innovative tools.
However, mis/disinformation, wider consumer harms and all other categories of harmful and illegal content must, unless the Secretary of State designates otherwise, result in harm to an individual. This excludes broad swathes of societal harm that are either the result of co-ordinated campaigns or arise where the aggregation of individual harms is such that a distinct societal harm occurs. What, for example, would be the position for environmental or climate change related misinformation (bearing in mind that the WHO identifies the consequences of climate change as including significant health threats)? The draft Bill envisages that harms need not be the direct consequence of the speech but may be indirect. While it is important that this possibility remains in the regime, further guidance on how indirect such harms may be would be desirable. In his evidence to the DCMS Select Committee, the Secretary of State, Oliver Dowden, said specifically that misinformation involving an “aggregation of harm” would have to fit into the category of “physical or psychological harm” to be in scope[2]. While disinformation on specific issues which leads to physical harm to an individual (e.g. Coronavirus misinformation) might be covered, there is a significant gap in relation to national security and electoral interference.
We have set out in detail risk management measures for the high-risk category of people involved in democratic processes – candidates, office-holders and journalists[3]. While the draft Bill describes how the protections that already exist for such people as individuals would apply (in relation to criminal and psychologically harmful harassment and abuse), we judge that more work is required, given their central importance to the functioning of democracy, as set out in the Committee on Standards in Public Life’s 2017 report on Intimidation in Public Life.[4]
Concerns have also been raised about the ring-fencing of content of democratic importance and, relatedly, how news content is defined and whether that definition will include citizen journalism. If disinformation cannot be “de-amplified”, especially disinformation that challenges democratic integrity (e.g. from foreign actors), that is an issue. We do not come to a firm conclusion on all the issues arising from these provisions, but note that it will be important to ensure that they are not open to abuse and that it will be challenging to reflect fully the public interest in journalism and in democratic debate. These provisions will require close scrutiny.
[1] The Government’s press release (11/05/2021) set out that: “Measures to tackle user-generated fraud will be included in the Bill. It will mean online companies will, for the first time, have to take responsibility for tackling fraudulent user-generated content, such as posts on social media, on their platforms. This includes romance scams and fake investment opportunities posted by users on Facebook groups or sent via Snapchat. … Fraud via advertising, emails or cloned websites will not be in scope because the Bill focuses on harm committed through user-generated content. The Government is working closely with industry, regulators and consumer groups to consider additional legislative and non-legislative solutions. The Home Office will publish a Fraud Action Plan after the 2021 spending review and the Department for Digital, Culture, Media and Sport will consult on online advertising, including the role it can play in enabling online fraud, later this year.” (https://www.gov.uk/government/news/landmark-laws-to-keep-children-safe-stop-racial-hate-and-protect-democracy-online-published)
[2] https://committees.parliament.uk/oralevidence/2185/pdf/
[3] Online safety for people involved in the democratic process, 24 March 2021, https://www.carnegieuktrust.org.uk/blog/increased-online-safety-for-people-involved-in-the-democratic-process-in-the-uk/
[4] https://www.gov.uk/government/publications/intimidation-in-public-life-a-review-by-the-committee-on-standards-in-public-life