Climate change is a serious threat to the safety and security of United Kingdom citizens. Malicious actors spreading false information on social media could undermine collective action to combat these threats. Yet the draft Online Safety Bill is not designed to tackle a threat to society like climate change disinformation.
The Carnegie UK amendments to the Bill would do the radical work needed to bring climate change disinformation within scope in a proportionate manner. We describe here how they would work.
The Hazard
Prime Minister Boris Johnson, chairing a 2021 United Nations Security Council session, noted that “climate change is a threat to our security”; the WHO has identified climate change as one of the top 10 global health threats. Authoritative reports indicate the issue is more pressing than previously thought. In 2019, the House of Commons declared an environment and climate change emergency, described as “the most important issue of our time”. COP26 was seen as a critical opportunity to accelerate the global response in the light of an increasing threat. Yet the issue remains contentious, and some – perhaps those with economic interests at stake – seek to sow confusion about the existence of climate change, its causes and the ways to tackle it, confusion which “is likely to have disastrous consequences worldwide in the twenty-first century”. Against this background, the question of what information is made available to people to inform their choices (and perhaps to encourage behaviour change) is important, and the media, including social media, are central sources of that information.
Existing media regulation on climate change disinformation
On radio and TV, unchallenged assertions about climate that fly in the face of well-established scientific consensus are now unlikely to be aired. OFCOM has not banned climate disinformation but requires broadcasters to ensure the audience is not misled as to the veracity of information or its status in the scientific community. For example, OFCOM found the BBC in breach of the impartiality rules when, in 2018, it gave a platform to Lord Lawson without adequate challenge in the programme; the BBC had already found a breach of its own rules.
Given the scientific consensus around climate change, OFCOM could in future choose to deal with climate change denial by applying the rules on harmful content in its Broadcasting Code. OFCOM followed this route in tackling COVID misinformation, relying on pre-existing research into the harms caused by unsound health or financial advice (“health and wealth claims”). OFCOM’s approach to scientific misinformation in the context of the pandemic was challenged, but the High Court found that it complied with free speech requirements, including Article 10 ECHR.
Threat from climate change disinformation on social media
Social media is an important source of news and information for many people, and evidence is emerging of its role in climate change disinformation. Civil society groups have found that social media platforms are a route of amplification for key climate change deniers and a source of funding for them. Avaaz reported an estimated 25 million views of climate change and environment misinformation on Facebook in the US over just 60 days. Recent analysis of activity on Facebook during COP26 demonstrates the scale of the challenge in dealing with climate change mis/disinformation. That research compared the levels of engagement generated by reliable scientific organisations and by climate-sceptic actors, and found that posts from the latter frequently received more traction and reach than those from the former. For example, in the fortnight over which COP26 took place, sceptic content garnered 12 times the level of engagement of authoritative sources on the platform, and 60% of the “sceptic” posts analysed could be classified as actively and explicitly attacking efforts to curb climate change.
While some platforms have started to take action, climate change misinformation remains prevalent, and there have been suggestions that services should do more.
Online Safety regulation and climate change
As it stands, the draft Online Safety Bill does little to tackle climate change disinformation. Climate change disinformation is a good example of something that is not contrary to the criminal law but could be very harmful indeed to society at large; it falls into well-known gaps in the proposed regime.
Carnegie UK’s detailed recommendations for changes to the draft Bill would address climate disinformation, amongst other serious threats to UK public health, safety and national security. Carnegie UK’s approach would improve the systems and processes of social media and search companies so as to reduce the harm from climate dis- and misinformation. Carnegie UK suggests that:
1. A definition of harm is adopted that can encompass climate disinformation by addressing ‘harm to public safety, public health and national security’.
2. A duty is imposed on user-to-user services and search services to take reasonable steps to prevent reasonably foreseeable harm arising from the operation of the platform, applied in a proportionate manner and only where appropriate. This duty would include harms arising from climate disinformation circulating on the platform, flowing from the definition in 1 above. We emphasise that this is not about banning certain types of speech, but rather about looking at how the platform’s features and design, as well as its policies, contribute to any harms.
3. The duty of care in 2 implies a risk assessment. The companies’ respective risk assessments should be based on OFCOM’s market overview risk assessment; that assessment, and the resulting risk profiles that OFCOM will draw up, underpin the regime. In the context of climate change misinformation, OFCOM should focus in particular on the role of platforms in distributing and amplifying such content, and in so doing should work closely with expert civil society groups, bodies such as the Royal Society, and academic researchers.
4. Service operators within scope should also pay particular attention to how malicious actors exploit their services to disseminate climate disinformation. This focus would be in addition to concerns about services’ systems and processes that might arise from the unintended amplification of climate change misinformation as a consequence of design focused on high levels of user engagement. OFCOM should be able to make companies repeat the risk assessment process if their assessment is inadequate – a point that is not clear in the draft Bill.
5. Climate disinformation should then form part of companies’ plans to prevent and mitigate the risk of harm, and OFCOM will monitor their effectiveness. Where companies have already committed to voluntary action (see e.g. Facebook’s Climate Change Science Centre, Twitter’s ‘pre-bunking’ efforts, and Google’s ad policies), the regulator can help disseminate that practice. However, even the best corporate action may not have picked up all the weaknesses in a company’s product, processes, policies and business model. The regulator (working with civil society) can help ensure that commitments are followed through, that best practice is assessed and disseminated, and that measures are improved over time. The assessment should cover all points of the content distribution cycle: from user onboarding, content creation, dissemination and user tools for curating their information environment, through to moderation, labelling disinformation, engaging with trusted flaggers, respecting fact checkers’ decisions and dealing (transparently and effectively) with complaints.
6. Advertising should be brought into the scope of the online safety regime. The draft Bill excludes advertising, with the consequent risk that the systems and processes related to advertising delivery, and content monetisation arising from advertising, might also lie outwith the regime. The Carnegie proposals include advertising to ensure these processes fall within scope. Here, providers of advertising services on platforms should prevent monetisation of climate disinformation content and/or give advertisers the opportunity to opt out of appearing alongside such content. The regulator could also help assess the risks inherent in different definitions of climate change denial advertising. The study into Facebook’s approach to climate misinformation found 113 instances of adverts containing misinformation, from sources of disinformation that had already been flagged; according to this news report, the problem was still ongoing during COP26. We note that Google is moving in this direction, but regulatory oversight would help ensure that it delivers.
7. We suggest that, as part of its market risk assessment, OFCOM focus on areas where greater media literacy could prevent harms. This seems appropriate for tackling climate change disinformation – we note that Facebook currently provides considerable authoritative information to people searching for climate content. The regulator could check whether such approaches actually work, then disseminate and require best practice.
In our opinion, the Carnegie approach would be far more effective than trying to use the draft Bill as it stands. The draft Bill is not designed to tackle societal harm. If the government were to shoehorn in climate change disinformation as a ‘priority harm’, the structures of the draft Bill are not strong enough to respond proportionately to the seriousness of the issue.
Climate change and technology was a theme of the Future Tech Forum, a recent high-level multi-stakeholder event chaired by the Secretary of State, Nadine Dorries MP. If the government takes tackling climate change disinformation seriously, it could do worse than follow the Carnegie amendments.
Carnegie UK Trust is a signatory of the Association of Charitable Foundations’ commitment on climate change.