How does disinformation work?

By Konstantina Vasileva

Konstantina Vasileva is a Bulgarian media analyst with experience in local and foreign companies and research projects. Currently, she’s a PhD candidate in New Zealand and is witnessing first-hand how the country has been tackling the coronavirus pandemic.

Behind the complex plot of the movie Inception, we can find a simple but impactful idea. Once you plant the seed of doubt (or a new idea), it can take over minds and change people’s views and behaviour. To make it work, you only need one thing: the targeted person has to think they came up with the idea on their own. The current infodemic, which the World Health Organisation defines as “too much information including false or misleading information in digital and physical environments during a disease outbreak”, works in a very similar way. That is why it is so dangerous.

How do we perceive, remember, and trust information? Let us follow each step in this process through the lens of science and media research.

The Pareto Principle

In the late stages of a successful disinformation campaign, unreliable claims proliferate on a massive scale. The primary goal is to make people believe that each claim is a "fact" already widely known to many others. This illusion of "common knowledge" efficiently conceals the reality behind it: disinformation is created and distributed strategically by a small circle of people.

Vilfredo Pareto was a 19th-century Italian philosopher and economist who observed that 20% of the Italian population owned 80% of the available land. His idea became popular in management studies in the 1940s and remains so today in many fields, now known as the 80/20 Principle. The ratio is indicative rather than precise, since the split in any particular case is rarely exactly 80/20. The key idea is that most observed outcomes can be traced to a small number of underlying causes, resources, or socio-economic agents. This principle applies to generating and spreading false information as well.
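For readers who want to see the 80/20 pattern in action, here is a minimal sketch. It is purely illustrative: the number of accounts, the posting figures, and the distribution parameter are assumptions for the sake of the example, not data from any study cited in this article. It simulates heavy-tailed posting activity and measures what share of all posts comes from the most active fifth of accounts.

```python
# Illustrative sketch only: simulated data, not real platform figures.
import random

random.seed(42)

# Hypothetical posting volumes for 10,000 accounts, drawn from a
# heavy-tailed Pareto distribution (a few accounts post far more than most).
posts_per_account = [random.paretovariate(1.16) for _ in range(10_000)]

# Share of all posts produced by the most active 20% of accounts.
posts_per_account.sort(reverse=True)
top_fifth = posts_per_account[: len(posts_per_account) // 5]
share = sum(top_fifth) / sum(posts_per_account)

print(f"The most active 20% of accounts produce {share:.0%} of all posts")
# The printed share hovers around 80% but varies from run to run -
# the point being that 80/20 is an indicative pattern, not an exact law.
```

Running the sketch repeatedly gives shares that cluster around 80% without ever landing on it exactly, which echoes the point above: the ratio is indicative, not precise.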

According to a study by the Center for Countering Digital Hate in Washington, only 12 people are responsible for spreading almost two-thirds of the anti-vax content on Facebook. The so-called "Disinformation Dozen" features the osteopath Dr Joseph Mercola, who has made millions through alternative medicine, and Robert F. Kennedy Jr, who earns $255,000 a year as the chairman of one of the most influential American anti-vax activist groups. According to studies cited by the Washington Post, the New York Times and the esteemed journal Nature Medicine, the creation and syndication of disinformation is not just financed by a small circle of people: it is a strategically developed and well-coordinated process. And its messages reach more than 59 million online followers.

The key target group is the vaccine-hesitant: people who are unsure whether they can trust official information about vaccines. There are different reasons for this lack of trust, ranging from difficulties with understanding specialised scientific content to a general distrust of authorities. The most concerted targeting efforts focus on users who display high levels of fear and anxiety, little trust in science and authorities, and an interest in alternative medicine and conspiracy theories like QAnon.

Accusations of censorship against Facebook and other online platforms are rising due to their takedowns of false and misleading content. Ironically, social media – and the excessive trust some people place in it – plays a key role not in censoring the infodemic but in spreading it. The Pareto Principle works seamlessly in this ecosystem because just a handful of social media sources can generate popular content that reaches a vastly larger audience. The "Disinformation Dozen" invests in online ads for its content, and tech companies have to balance their desire to look like ethical players with the fact that their otherwise free services earn most of their income from ads.

Alternative thinking vs the scientific consensus

The circulation of false information is not enough to spark an infodemic. Since the dawn of mass media, there have been outlets that spread conspiracy theories and publish sensational content based on half-truths or outright lies. The main difference is that before the internet and the rise of social media, this content could only reach a limited, almost entirely local audience. Views that until recently were considered marginal can now reach an audience of millions with only a minimal investment of resources. Democratising access to information, unfortunately, also removes the barriers to accessing disinformation.

Gunther Eysenbach, a physician and adjunct professor at the University of Victoria in Canada, created the discipline of infodemiology (the epidemiology of disinformation) almost 20 years ago (Eysenbach, 2002). He describes the infodemic as an "unfiltered stream of information", and this is key to understanding the phenomenon. Access to an abundance of information is of little use if one cannot vet its veracity and separate the wheat from the chaff. Even highly educated people are vulnerable to disinformation when it comes to content in unfamiliar fields. There is a reason why scientific and medical education takes years: it is not about memorising lists of facts, but about understanding and applying the scientific method and learning how to analyse data and interpret the results.

The key criteria for high-quality scientific research revolve around rigour in applying the scientific method, the precision of the research process, and the proper analysis and interpretation of the collected data. In addition to that, research quality depends on adhering to ethical and professional standards. Some of the "expert" heroes of the infodemic, like Andrew Wakefield, Dolores Cahill, Judy Mikovits and Charles Hoffe, face backlash in the scientific community not because they are a fearless scientific David fighting Goliath. They have been discredited as a result of serious accusations – data theft, a lack of compliance with research quality standards, and a track record of false or unsubstantiated claims.

Science is reliable precisely because its methods help scientists discover such misconduct. If an individual scientist makes a mistake, applying the scientific method allows others to catch the error, correct it, and address it in new research. Another thing to keep in mind is the complexity of the topics science studies; this complexity often leads to contradictory results. When pandemic guidelines are changed or updated, it is not because scientists have no idea what they are doing, but because, as time goes by, they gain a better understanding and calibrate their advice to new findings.

Those unfamiliar with the scientific process are highly susceptible to disinformation because they base their opinions mainly on two fallacies. The first is the appeal to authority: judging a claim by whether the expert is famous or has prestigious credentials (in online comments, unscientific claims are often defended with lines like "Mikovits is a famous scientist", "Andrew Wakefield is a world-renowned specialist", "Then why would world-famous doctors speak against these measures?"…). The second fallacy is endorsing opinions that match the audience's preconceived views (known in cognitive science as confirmation bias).

Trusting experts based on these fallacies is misleading: science is not about the prestige of individuals or specific studies but about the accumulation of evidence. Individual scientists (even those working for prestigious institutions) can and do make mistakes: that is why science relies on replication (whether other scientists can reproduce the same result) and the accumulation of data and evidence across multiple studies done by different (teams of) scientists. That is why quality is determined by the rigour of the applied methods, not by fame and prestige.

Disinformation campaigns also use fake news about real experts. For example, they plant the rumour that "Prof Ugur Sahin, the creator of the Pfizer 'medication' against covid, said he is not vaccinated yet. Why is that if the vaccine is safe and the virus is so deadly?". According to Poynter, this rumour started on Instagram and was probably based on a poor understanding of Sahin's own statement in January 2021, when he said it was not yet his turn to get vaccinated. Under German regulations, even the vaccine's creators had no privileges and had to wait their turn to get vaccinated after people in the highest-risk groups. As early as March 2021, BILD and other German media reported that he had received his jab along with the other Biontech employees.

We can find information supporting absolutely everything online. To identify which claims are reliable, we need to be aware of the quality of the media sources covering the issue, and we also need domain knowledge.

From a snowball to an avalanche of disinformation

We already know that we can trace disinformation content to a small number of sources. We also know that those sources target specialised topics, which makes people without domain knowledge easy to manipulate. Now it is time to look at the mechanism through which these ideas spread.

It is now widely known that online trolls can be weaponised and used to shape political opinions. The graver question is why people who are not paid to share false or misleading information choose to do so. Why are intelligent and educated people susceptible to the infodemic? Those who share fake news are sometimes people close to us, or influential public figures, not just trolls or bots. How do they fall into this trap?

The key to understanding this lies in the seeds of doubt we mentioned at the beginning. We know from scientific studies that direct confrontation does not change people's views: even if we support our claims with reliable, fact-checked information, people are unlikely to change their opinion. That is precisely why disinformation campaigns target (and successfully lure) the hesitant, who are still seeking information to form an opinion.

It is all about good marketing, and those who spread disinformation market to us our favourite product – our own ego. Disinformation is "branded" as a commodity that grants special status to those who buy into it. Dog-whistle phrases like "get informed", "open your eyes", and "read about it" seek to plant seeds of doubt about official information. Instead of worrying about sharing marginal views, those who believe in conspiracy theories and disinformation feel like they belong to a special club of "enlightened" people ("authorities are sellouts, only a few remain untarnished", "only a handful of physicians dare to speak the truth", "I have opened my own eyes, go ahead and believe science jabber"). As a result, those who oppose disinformation are dehumanised with collective pejorative labels ("sheep", "covidiots", "hysterics").

It all begins with planting the seeds of doubt through links to and mentions of various pieces of misinformation. After that, the mechanism of human memory takes over. Studies show that we are prone to believing falsehoods if they come from people we trust (Shaw, 2016). When true and false information are mixed together, we tend to remember a combination of both and have a hard time telling which is which (Loftus and Greenspan, 2021). In addition to that, we tend to remember the gist of the content, but not its primary source; and recurrent exposure to a piece of information increases our susceptibility to it and our tendency to trust it (Arndt, 2012). Our memory is susceptible not only to externally generated information – sometimes people fail to spot manipulation and changes even in texts they themselves wrote earlier (Cochran et al., 2016).

In other words, a single article or comment probably will not have much of an effect, but the accumulation of disinformation (especially when repeated by people we trust) consolidates its claims into our memory. Once this happens, it becomes very difficult to separate fact from fiction. If we have no recollection of how and why we learned a piece of information, we internalise it, and this interferes with our ability to judge whether it is true or not. In addition to that, even when we think we are reacting rationally, we are susceptible to various cognitive biases. We are much more likely to notice and remember information we agree with and to disregard information that does not fit our preconceptions – the confirmation bias mentioned above (Kahneman, 2011).

It is no accident that fake news carries headlines like "Shock!" and "Horror!" and an alarmist tone ("Dr Charles Hoffe: Experimental (vaccines) are guaranteed to kill those who got jabbed in a few years!"). Our memory and attention prioritise negative information. In cases of real danger, this negativity bias helps us quickly scan our environment and react to threats. But in everyday life, it can play a trick on us and make us more susceptible to negative content (Soroka et al., 2019). And the more emotionally engaged and affected we are, the harder it is for us to engage in rational discussion.

Imagine a Facebook group. You have joined along with some friends and have been following a specific topic for a while. The mainstream news on the subject focuses on fact A and, to a lesser extent, on fact B. You hear these facts all the time on TV, on the radio, and in the foreign press. Recently, however, there have been some mentions of fact C: a claim very different from A and B.

You do not know whether fact C is true, but it sounds interesting and important (you are in the Doubt phase). You form a stronger memory of the claim when you hear it repeated by some TV pundits, though not by all of them (Memory consolidation). Over the span of a few weeks, you keep noticing mentions of fact C in comments and articles (Repetition). Then you see news on the topic in another Facebook group, but nobody seems to mention fact C. You start to think that if others do not know about it, it might be a good idea to tell them (Internalising). Once you share your own comments on fact C, you are even more likely to spot conversations about it. You even start to get annoyed if people forget to mention it. If somebody contradicts it, you seek information online to disprove them. Before you know it, fact C has ceased to be a piece of external information: it has become part of your own opinion.

If fact C is true, that is not really an issue. You have discovered something interesting and want to share it. But what happens if fact C is a piece of disinformation? It no longer matters: the memory of its primary source has been blurred by repeated exposure. If your friends or relatives see what you wrote, they are likely to treat it as true because it comes from somebody they trust, not an anonymous online profile. And this is how the vicious circle begins.

To summarise, disinformation originates from a small number of highly active online profiles (which often syndicate a limited number of talking points). Social media makes it easy for the infodemic to spread because sharing obscures the primary source of information. It is hard to spot disinformation, especially on highly specialised topics where we lack domain knowledge. We might be sceptical toward sketchy online profiles, but we tend to trust the people close to us, even if we do not know whether they have fact-checked a post before sharing it. We consolidate into memory the things we read repeatedly and unwittingly take unchecked information for fact. Those who deliberately share disinformation are a minority – the infodemic is dangerous because it spreads unconsciously. Many people share false claims with the best of intentions, without realising the harm they are doing.

We can stop disinformation from snowballing into an avalanche. The key steps are simple: never form an opinion on important issues based on social media interactions alone, and always fact-check everything – especially claims that go against the scientific consensus.
