This is what you’ll learn from this article:
- The less plausible (the ‘crazier’) a conspiracy theory is intrinsically, the less probable it is.
- The less common the evidence cited for a conspiracy theory is, the more probable the theory is.
- The more accurately a conspiracy theory predicts future evidence, the more probable it is.
‘Conspiracy theory’ has never been a friendly tag on a view, but these days its negative connotation has grown to be somewhere in the neighborhood of ‘Nazi’ or ‘Holocaust denier’.
It has no doubt become a battle cry to denigrate unpleasant views.
Battle cry or not, labeling something a ‘conspiracy theory’ tells us nothing about its truth or falsity. Surely one should ‘check the facts’, as German-speaking media in particular claim to do in various broadcasts? Understandably, though, this is beyond the ken of ordinary people and presumably leaves them overwhelmed.
At least as long as they don’t receive help. And help is what I seek to offer here: I want to give readers a ‘device’ for roughly estimating how plausible a theory is. This device is called the Bayes theorem. Don’t worry: the worst part of it is the symbolism (which I keep for the sake of brevity); the math involved is absolutely basic and accessible to a sixth-grader.
Here it is:
The Bayes theorem explained
P (t/e&k) = [P (e/t&k) × P (t/k)] / P (e/k)
P (t/e&k) is what we want to get at. It is the probability that the (conspiracy) theory t in question is true. More technically, it is the probability of a theory being true given evidence e and background knowledge k. Allow me to draw your attention to three important points here:
- The Bayes theorem works with probabilities. There is no such thing as a ‘strict proof’ or ‘absolute certainty’ as regards theories about human actions (which is what conspiracy theories basically are).
- Applying the Bayes theorem to a given theory presupposes the existence of evidence. If the proponent does not give evidence for his theory, no verification or falsification is possible. No credit should be given to theories in favor of which no evidence is presented.
- One could in principle fill in numerical values (between 0 and 1) for the probabilities and calculate the fraction. However, in our present context, the assignment of numbers does not make sense. We will content ourselves with rough estimates as to whether a probability is rather high (>> 0.5) or rather low (<< 0.5).
- The Bayes theorem is well-confirmed as a means of judging the viability of a theory. I will not justify this further; interested readers should refer to Epistemic Justification by Richard Swinburne (Oxford University Press, 2003), chs. 3 and 4.
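Although, as noted above, plugging in exact numbers rarely makes sense in our context, the arithmetic behind the theorem is trivial. Here is a minimal sketch with invented illustrative values (none of them real estimates):

```python
def posterior(p_e_given_t_and_k, p_t_given_k, p_e_given_k):
    """Bayes' theorem: P(t/e&k) = P(e/t&k) * P(t/k) / P(e/k)."""
    return p_e_given_t_and_k * p_t_given_k / p_e_given_k

# Invented values: the theory predicts the evidence well (0.9),
# sounds moderately plausible a priori (0.3), and the evidence
# itself is fairly uncommon (0.4).
print(posterior(0.9, 0.3, 0.4))  # ≈ 0.675
```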
Now, what do the other terms mean?
P (e/t&k) is the probability of evidence e given t and background knowledge k. In other words, if t were true, to what extent would we expect e to obtain? Or, to what extent does t predict e?
Example 1: Jones is suspected of having robbed the safe of his company (t) (for example, because money had been missing and he is one of only a handful of people with access to the safe). If now his fingerprints were found on the safe (e), this would be in full accord with t. The fingerprint evidence e is to be expected on t. Consequently, P (e/t&k) is high. Background knowledge here consists in, for example, knowledge that finding a fingerprint on an object means that the person bearing that fingerprint has touched that object, and that fingerprints are unique.
Example 2: Let t be the theory that a certain government deliberately forbears to protect its citizens, for example by being notoriously lax with border controls. On this assumption, a sudden hermetic closure of the borders is a piece of evidence out of line with t, because t does not predict it. P (e/t&k) will be low in this case.
P (t/k) is the probability of t given background knowledge alone, or the prior probability of t. In other words, how plausible is t intrinsically, irrespective of the evidence? Or, to put it more colloquially, how ‘crazy’ or ‘reasonable’, respectively, does the theory sound by itself?
Example 1: Claims are circulating that viruses don’t exist but are in fact just inventions of the pharmaceutical industry to boost its business. The prior probability of this claim is low, mostly because the prior probability of its negation (“Viruses do exist”) is high. There is nothing biologically strange about nucleic acids enveloped in a protein shell using the reproductive apparatus of host cells: viruses share the nucleic acid and protein language of eukaryotic and bacterial cells, parasites are a well-known phenomenon in biology, and we have electron-microscopic images of viruses. Also, the pharma lobby would have to be exceedingly powerful to suppress all critical voices among virologists if viruses really didn’t exist.
Example 2: Another speculation is that the Chinese deliberately spread the virus in Europe and America in order to bring down the European and U.S. economies. The prior probability of this claim is relatively high, given the fact (background knowledge!) that China and the U.S. have long been engaged in a kind of economic war.
P (e/k) is the probability of e given background knowledge. How likely is e intrinsically, given what we know about the world? Is e a rare or a common thing? (Note that the higher this probability, the lower the probability of the theory in question, because very common events do not support a theory about uncommon human activities.)
Example 1: A politician notoriously known for advocating mandatory vaccines suddenly speaks about the dangers of vaccinations in public (e). The occurrence of this piece of evidence is highly improbable, given our background knowledge: that person endorsed the opposite view for a long time; and human psychology teaches us that it is often hard to change deeply entrenched beliefs, and even harder to state them publicly, especially if one has to fear consequences.
Example 2: An upsurge in excess mortality during the winter months (e) is nothing unusual, given our background knowledge of human physiology, seasons and statistics. P (e&k) is accordingly high.
The higher the two probabilities in the numerator are, the higher the overall probability of the theory in question. However, the term in the denominator ‘counterbalances’ the numerator: if the evidence cited in favor of a theory is very common (i.e., P (e/k) is high), this actually weakens the theory.
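This counterbalancing effect can be seen numerically (all values invented for illustration): with the same numerator, very common evidence yields a much lower posterior than rare evidence.

```python
def posterior(p_e_given_t_and_k, p_t_given_k, p_e_given_k):
    # Bayes' theorem with the article's three terms
    return p_e_given_t_and_k * p_t_given_k / p_e_given_k

# Same numerator (invented values 0.8 and 0.4) ...
common = posterior(0.8, 0.4, 0.9)   # ... with very common evidence
rare = posterior(0.8, 0.4, 0.35)    # ... with rare evidence
print(round(common, 2), round(rare, 2))  # ≈ 0.36 and ≈ 0.91
```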
What to keep in mind when using the Bayes theorem
For applying the theorem to an actual conspiracy theory, some points need to be heeded:
Corona conspiracy theories are designed to explain, not predict
In the current situation, theories are often designed to explain already available evidence. The setting is generally not that we have a theory and only later some piece of evidence emerges that is (or is not) predicted by it. Therefore, P (e/t&k) will in general be high, because Corona conspiracy theories are usually constructed after the evidence has occurred. For example, the evidence that Bill Gates recently praised the German government (which has vaccines as a top priority) certainly supports the theory that the German government created a ‘Corona panic’ among the people to pave the way for vaccines indirectly funded by Gates; but that theory was constructed after the occurrence of the evidence, and so could not predict it. The same holds for other theories that emerged during the Corona situation. P (e/t&k) is therefore not a good selection device for Corona conspiracy theories.
Assess the ‘craziness’ first
It is certainly a good idea to start with P (t/k). If the theory ‘sounds too crazy’, it is rational to set it aside, at least as long as no strong evidence is presented in its favor. After all, mathematically, the probability of t cannot be high if the prior probability of t is low, even if P (e/t&k) is high (which, as noted above, is not surprising in the present context). Conversely, a high prior probability lends a theory some credence.
Is the evidence common or uncommon?
The second important factor is the prior probability of e, P (e/k). Is e a very common phenomenon, or something rare? To assess this, one may have to compare present data with past data, because some phenomena are so remote from our daily experience that personal impressions may be an unreliable guide. For example, we may not be sufficiently informed about how politicians usually act or have acted during crises, and we may not be familiar with death statistics or virology.
The crucial epistemic mistakes are perhaps committed here. At the same time, this prior probability seems to be a fruitful target for propaganda.
For example, it is certainly a drastic experience to have a Corona-positive, 83-year-old family member with several pre-existing conditions die of pneumonia. But it is not good evidence for the theory that SARS-CoV-2 is a highly dangerous virus, because our background knowledge tells us that old and weak people are prone to fall severely ill or die from a number of pathogens, including those that do no harm to the vast majority of people.
By the same token, media pictures of rows of coffins in Italy create the impression that COVID-19 is an exceptional pandemic, but studying mortality rates may reveal that excess mortality is not higher than usual; or, even if excess mortality is elevated, the investigation of factors other than SARS-CoV-2 may show that deaths cannot solely be attributed to the Coronavirus.
How could P (e/k) be low? Some scenarios:
- Excess mortality in a country might be significantly higher than in most past decades. For example, the current level of excess mortality might last have been reached when a huge natural catastrophe claimed thousands of lives. By contrast, if excess mortality had been at the current level in 10 out of the last 20 years, P (e/k) would be relatively high, because the evidence would be a quite common phenomenon.
- Very many people, even young and healthy people, show severe (and similar) symptoms or die. This certainly does not happen regularly, at least not in Western countries, so P (e/k) is accordingly low. Such evidence warrants searching for an explanation, perhaps in terms of a conspiracy theory.
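The comparison with past data suggested above can be sketched as a tiny computation. All figures below are invented purely for illustration:

```python
# Hypothetical excess-mortality ratios for the last ten years
# (1.0 = the long-term average; the numbers are made up).
past_years = [1.02, 0.98, 1.05, 1.01, 0.97, 1.04, 1.00, 0.99, 1.03, 1.06]
current = 1.05

# Fraction of past years in which excess mortality was at least
# as high as it is now: a rough stand-in for P(e/k).
frequency = sum(y >= current for y in past_years) / len(past_years)
print(frequency)  # 0.2: the evidence is not particularly rare
```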
A quick Bayes theorem user manual
So, how should one proceed in estimating the viability of a conspiracy theory? Here’s a ‘Bayes theorem quick user manual’:
- First, check whether the theory is buttressed by evidence. If not, forget about it.
- Second, check the prior probability of the theory, i.e. whether it sounds ‘crazy’ or ‘reasonable’. You should normally not have to do background research for that. ‘Crazy’ theories ought to be treated with special caution.
- Third, assess the probability of the evidence, i.e. how rare or common it is. You may have to do some research on this. Remember that the more common the evidence is, the less plausible a conspiracy theory that draws on it will be.
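The three steps above can be condensed into a toy decision procedure (the verdict strings are mine, not part of any standard method):

```python
def assess(has_evidence, prior_is_low, evidence_is_common):
    """Toy version of the quick user manual: returns a verbal
    verdict rather than a number."""
    if not has_evidence:
        return "dismiss: no evidence presented"
    if prior_is_low:
        return "set aside unless strong evidence appears"
    if evidence_is_common:
        return "weak: the cited evidence is too common"
    return "worth investigating further"

print(assess(True, prior_is_low=False, evidence_is_common=True))
```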
You might now complain that I’ve broken my promise: after all, assessing P (e/k) in particular does require some specific background knowledge or research. Granted, but this shouldn’t apply to all cases; and anyway, why not learn something new?
And even if you won’t do research yourself, I hope to have convinced you that it is more reasonable to weaken the battle cry “Conspiracy theory!” to a “This sounds odd, but I’ll suspend judgment until I’ve checked the facts”.
Image: iQoncept / shutterstock.com