By Angel Petrov
Angel Petrov is a Sofia-based journalist working at the Foreign News Desk of Dnevnik.bg, an online news platform. Previously, he ran an English-language news agency covering Bulgaria and the Balkans. His areas of interest also include the Middle East, the Balkans, and Africa.
In the summer of 2021, global leaders and governments sharply condemned the spread of vaccine misinformation on social media. Yet in Berlin, Brussels, and Washington alike, tech giants seem omnipotent and largely uninterested in getting verified vaccine information to everyone.
Is that really the case? Are we overestimating the effects of online misinformation? Can fighting misinformation defeat the resistance to vaccination of millions of people throughout the world? To what extent does the issue of vaccine resistance overlap with the lack (or overabundance) of information?
“Killing” accusations and calls for “algorithm accountability”
In July 2021, US President Joe Biden said that platforms like Facebook are “killing people” by allowing misinformation to spread via their services. The social network responded with a statement saying that it is working with authorities and experts on the issue, and that a growing number of Facebook users have a positive attitude toward vaccines.
The European Parliament has been discussing the so-called “algorithmic accountability” of social networks – opening them to scrutiny and making them accountable for violations of fundamental rights. The European Commission issued a draft report on its proposal for a Digital Services Act – a legislative step designed to protect users against “political disinformation, hoaxes and manipulation during pandemics, (or) harms to vulnerable groups”, and to keep platforms up to standard on regulation, moderation, and the use of algorithms.
The response of Facebook, Twitter, TikTok, Microsoft and Google to the EU has been a series of reports on the measures they have taken against coronavirus disinformation (the latest were issued in June this year). The EU Commissioner for the Internal Market, Thierry Breton, urged them to “assume their responsibility to beat vaccine hesitancy spurred by disinformation”. According to the published reports, a TikTok vaccination campaign has been viewed 1 million times and has over 20,000 likes. Google works with health authorities in different countries; Twitter (and to some extent Facebook) uses automated systems to spot posts that breach its disinformation policy, and tweets with misleading or harmful content are labelled under a “strike” system that “determines when further enforcement action is necessary”. Brussels expects new reports in September, which will cover a new partnership between Twitter, Reuters, and the Associated Press against disinformation.
Two sides of the same coin
Hidden behind these examples of the Internet giants’ actions is what remains undone. Their algorithms reward negative vaccine content, because users tend to engage with it more. The reader is caught in a trap. A recent study by Avaaz, a non-profit organisation campaigning for human rights, shows how conspiracy content is recommended to new users. Within two days, two newly created profiles received primarily anti-vax content (109 out of 180 recommendations in total), despite Facebook’s pledge to stop recommending pages and groups with health-related content.
In May, a study by the London-based NGO Center for Countering Digital Hate demonstrated that 65% of the misleading and false information about vaccines on Facebook originates from just 12 accounts belonging to anti-vaxxers and proponents of alternative medicine. The infodemic, like the pandemic, seems to rely on its own super-spreaders.
The abundance of false or misleading information, and its effect on vaccination uptake around the world, however, are not always directly related to the social networks’ algorithms and their (lack of) regulatory policy. Upcoming EU regulation aims to remove as much disinformation as possible; the US wants much the same thing. The spread of misinformation, however, is only a symptom – no matter how hard social networks try to handle it, online content will remain a cocktail of facts, opinions, misconceptions, outright lies, and inaccuracies with sometimes fatal consequences.
Hesitancy and disinformation
Why do many people around the world refuse to get vaccinated even when they have the chance to do so?
The biggest contributor to hesitancy is neither disinformation nor anti-vaccine propaganda (anti-vaxxers are an ideologically motivated minority). The key factor is the lack of information that fully dispels people’s fears. Not everyone is ready to trust science: millions of people are torn by the perceived uncertainties surrounding side effects and vaccine safety. Science has assessed the risks as extremely low, but in the eyes of the vaccine-hesitant, only time can test that claim.
The factors underlying this hesitancy are known as the 5 Cs: confidence (in vaccines); complacency (the extent to which a disease is seen as personally threatening); convenience (vaccine availability); calculation (risk assessment); and collective responsibility (the desire to keep others safe). A study spanning low-, middle- and high-income countries also links hesitancy to uncertainty and to social media, and to a lack of balance between reporting what is known and admitting what is unknown. A contributing problem may also be “the intensive media coverage” of vaccine side effects.
As David Robinson, a journalist and researcher, puts it, those who come across misleading or false information often accept it because of confirmation bias: the internet only hardens people’s preconceived views. A vaccine-hesitant person may share misleading claims – for example, that vaccines change your DNA or damage the reproductive system. They may also share reports of people who did, in fact, die after vaccination (which is not necessarily disinformation).
In the UK, hesitancy also has ethnic dimensions. In the US, the choice whether to vaccinate seems “predetermined” along partisan lines – close to half of all Republicans (and just under 10% of Democrats) are unvaccinated. This division, together with media choice (e.g., vaccine-sceptical Fox News versus the New York Times and CNN), is decisive. A Gallup study points to another contributing factor: the pandemic metrics themselves, especially new cases, hospitalisations, and deaths.
The issue with taking down information
If false or misleading information increases hesitancy, would taking down disinformation and the profiles that spread it solve the problem? Not at all. Terms like “false” and “misleading” cannot capture the complexity of an issue like COVID-19, which until recently was unknown to mankind. Two examples illustrate this: the misinterpretation of facts (a problem as old as the world) and the abundance of honest mistakes on the internet.
Cherry-picked information is not necessarily misinformation. The New York Times recently reported the story of a super-spreader of misleading information who (correctly) shared the results of a recent Israeli study showing the Pfizer vaccine to be only 39% effective against the Delta variant. What was omitted was that the vaccine is still 91% effective in preventing severe illness. If cherry-picking data (dangerous as it is during a pandemic) is classed as “misleading information”, this would set a free-speech precedent and place excessive responsibility on social media to determine what is misleading and what is not. Interpreting facts is a key component of decision-making in today’s society and politics. Should an online user who spontaneously shared their worry about that 39% efficacy rate face consequences?
And how should we classify flawed scientific studies whose mistakes, according to experts, add fuel to the infodemic fire? Even when subpar studies are retracted by scientific journals, the damage has already been done. One later-retracted June study, for example, claimed that for every three people saved by vaccines, two are killed by them. There are many less dramatic examples. A study in the medical journal The Lancet examined the stability of the coronavirus on different surfaces. At the time it was published, the study was legitimate and contained no false information, but it fed the spreading panic and the fear of touching the surrounding world (and boosted disinfectant sales). A year later, American and foreign regulators assessed the risk of contracting COVID-19 from contaminated surfaces at 1 in 10,000. Some of the responsibility for blowing the story out of proportion lay with the media, which failed to highlight that, according to scientists, viruses rarely spread through contact with surfaces. Should this be retroactively labelled misinformation, or simply part of the long and winding road of studying and understanding COVID-19 (and, accordingly, of adapting the media narrative about it)?
In less developed democracies, blurring the line between these concepts can be used to limit or manipulate public debate. Can governments or supranational bodies decide what counts as misleading information without crossing the boundaries that shape contemporary (democratic) culture?
The responsibility of politicians
According to the World Health Organisation (WHO), mankind is in the grip of an infodemic – an excess of information (including false and misleading claims) during a pandemic. The WHO guidelines assume that responsibility during such crises rests with governments. In many countries, however, they have failed to set an example. In the US and Brazil, leaders like Donald Trump and Jair Bolsonaro displayed a dismissive attitude towards the virus and vaccination instead of setting a personal example (the White House now wants Trump to push his supporters to get vaccinated). In Europe, far-right politicians have done their best to turn a health issue like immunisation into an ideological one: not only in Italy but also in Spain, they refuse to personally back the claim that jabs can save many lives. In Bulgaria, politicians, alongside many leading physicians, set a sceptical tone toward both the virus and the vaccines.
In fact, according to a recent study commissioned by Dnevnik, it is precisely in Bulgarian society – one of the least vaccinated in Europe – that those who remain unvaccinated say the government needs to mount an information campaign explaining the benefits of vaccination. The authorities have a communication problem – at the very least because citizens do not think they have communicated at all.
The issue of hesitancy has many possible solutions (from mobile vaccination teams to making vaccines more accessible and taking stricter measures), none of which is a direct function of social media. The convenience component of the 5 Cs also comes into play here: in Germany, for example, there were delays caused by strict rules on which population groups could get the jab and when. As for dealing with disinformation, Peter Cunliffe-Jones, a researcher at the University of Westminster, recommends changing how media literacy is taught and holding national governments (and members of the European Parliament) accountable for false claims about vaccines.
Attitudes towards COVID-19 vaccines are shaped by many factors: the focus on removing content should not come at the expense of other governmental action, nor be used to shift the blame for communication failures. False and misleading information on social media is an issue, but it is only one of the hydra’s heads. Even if we cut it off, others will keep growing back and keep the problem alive.