What governments and online platforms can do to combat COVID-19 disinformation

By Jeremy West

Senior Policy Analyst, OECD Directorate for Science, Technology and Innovation

Earlier this year, wireless towers in the United Kingdom were vandalised or set ablaze in response to a conspiracy theory – later debunked – linking 5G networks to COVID-19. More than 30 such acts were reported to UK police within the span of a few weeks, as were around 80 acts of harassment – including threats of extreme violence – against telecommunication technicians.

These incidents, and others like them, are harrowing reminders of how misinformation – the spread of false information, regardless of whether there is an intent to deceive – and disinformation – the deliberate spread of false or misleading information with an intent to deceive – online can have dire consequences offline. During a public health crisis, these consequences could even be deadly. While government officials around the world are rightly focused on “flattening the curve” of COVID-19, we must recognise the threat that dis/misinformation poses to the success of those efforts.

Disinformation has spread online like wildfire during the COVID-19 crisis, encompassing everything from conspiracy theories about its origin, to unproven coronavirus “cures”, to the sale of personal protective gear under false or deceptive pretences. Such disinformation has led some people to ingest dangerous, even fatal, home “cures” or to ignore social distancing rules. The results? Grave threats to public health and the economy, as well as dangerous scepticism towards communications from ordinarily trusted organisations. In fact, research has shown that COVID-19 disinformation has spread far more widely online than information from authoritative sources like the World Health Organization (WHO).

Online platforms such as Facebook, YouTube and Twitter are conduits for disinformation. While many platforms have taken bold steps to counteract or limit the spread of disinformation, they cannot do it alone. As we explain in a recent OECD policy brief, co-operation and co-ordination among governments, Internet companies, public health authorities and other stakeholders is the only way forward.

Online platforms and public health authorities already collaborate on several fronts. Platforms such as Facebook, Instagram and TikTok have begun redirecting COVID-19-related search queries to information from the WHO, while Google and Twitter give WHO information similar prominence on dedicated COVID-19 pages. Others, including Facebook and Google, support or work with third-party fact-checkers to debunk false rumours and more clearly label such content as false. Facebook, Google and Twitter also offer free advertising credits to the WHO and national health authorities to help disseminate critical information.

Although these initiatives are important and contribute to creating a safer environment, they are no panacea. While permanent eradication of COVID-19 disinformation is unrealistic, the following options can help stakeholders to fight it more effectively while maintaining respect for users’ freedom of speech and privacy.

  • Support a multiplicity of independent fact-checking organisations. Because disinformation often damages the credibility of doctors, scientists or government officials, it can be difficult for them to directly refute or debunk it. Leveraging a variety of independent fact-checkers can, however, provide unbiased analysis of information while helping online platforms identify misleading and false content; governments and international authorities can help by supporting and relying on their analyses to restore public trust.
  • Ensure human moderators are in place to complement technological solutions. Although automated systems are an important tool in the battle against COVID-19 disinformation, human intervention is also necessary – particularly in cases that call for more nuanced decisions. This is a difficult dilemma during a pandemic: sending content moderators to the office presents unacceptable public health risks, while asking them to work from home raises concerns around privacy or confidentiality. For online platforms with adequate resources, hiring more moderators as full-time staff may help.
  • Voluntarily issue transparency reports about COVID-19 disinformation. Regular transparency reports would allow researchers, policymakers, and the platforms themselves to identify ways to improve the moderation of COVID-19 disinformation; and a common approach to transparency reporting across countries and companies would facilitate global, cross-platform comparisons. (The OECD is already leading a similar effort with regard to voluntary transparency reporting on terrorist and violent extremist content online.)
  • Improve users’ media, digital and health literacy skills. People need the skills to navigate and make sense of what they see online safely and competently, and to understand why it is shown to them. This includes knowing how to verify the accuracy and reliability of the content they access and how to distinguish actual news from opinions or rumours. To this end, collaboration between platforms, media organisations, governments and educators is critical, as are efforts to improve health literacy.

The importance of getting online disinformation about COVID-19 under control will grow in the coming months. Across the world, scientists, governments and researchers are working together to develop ways to immunise people against COVID-19. We will need a similar collaborative approach to successfully “flatten the curve” of disinformation about their efforts. Otherwise, the pandemic and its harmful effects may continue well beyond the advent of a safe and effective vaccine.
