The "Fire Alarm Bias": How Our Cognitive Quirks Blind Us to Low-Probability Threats
I was hesitant to write this earlier because COVID was very politicized, but I think things have "calmed down" a bit now that a few years have passed.
I think it's widely accepted that a pandemic did indeed happen, that COVID was "bad", that a lot of people died from COVID, and that if we had acted sooner we could have prevented a lot of these deaths. (If you disagree with some of these axioms, you might not find this blog post compelling).
That said, back in the early days of COVID, in 2020, a lot of the media downplayed its threats. Some examples:
“Don’t Worry About the Coronavirus, Worry About The Flu” - Buzzfeed
“Get a grippe, America. The flu is a much bigger threat than coronavirus, for now.” - Washington Post
“Coronavirus, with zero American fatalities, is dominating headlines, while the flu is the real threat.” - The Daily Beast
“So if you’re freaked out at all about the coronavirus, you should be more concerned about the flu.” - CNN's Anderson Cooper
Prediction is hard. I’m not saying this ironically or sarcastically. Tetlock's celebrated "superforecasters" assigned a mere 3% probability to there being over 200,000 coronavirus cases within a month from February 20th, 2020 — when the actual number ended up vastly exceeding that. And the stock market, that vaunted barometer of economic expectations, reached an all-time high on February 12th, suggesting analysts anticipated smooth sailing in the months ahead. These are not idle speculators, but the most powerful predictive institutions at our disposal. And they were catastrophically wrong.
So the lesson I want to emphasize here is not to just "git gud" at predicting. Instead, I want to emphasize that we need to get better at probabilistic reasoning.
Even as the severity of the COVID-19 crisis grew, perplexing guidance emerged from the highest public health authorities. The WHO, the CDC, and other leading organizations recommended that the public need not bother with face masks. This was not, as some believed, merely a ploy to preserve scarce supplies for frontline healthcare workers. Their position had been consistent for years: through the Swine Flu pandemic and the SARS outbreak, they maintained that masks were unnecessary for the general population.
Scott Alexander of SlateStarCodex provides a persuasive explanation for how this happened:
If you really want to understand what happened, don’t read any studies about face masks or pandemics. Read Smith & Pell (2003), Parachute Use To Prevent Death And Major Trauma Related To Gravitational Challenge: Systematic Review Of Randomized Controlled Trials. It’s an article in the British Journal Of Medicine pointing out that there have never been any good studies proving that parachutes are helpful when jumping out of a plane, so they fail to meet the normal standards of evidence-based medicine.
[…]
Doctors will not admit any treatment could possibly be good until it has a lot of randomized controlled trials behind it, common sense be damned. This didn’t come out of nowhere. They’ve been burned lots of times before by thinking they were applying common sense and getting things really wrong. And after your mistakes kill a few thousand people you start getting really paranoid and careful. And there are so many quacks who can spout off some “common sense” explanation for why their vitamin-infused bleach or colloidal silver should work that doctors have just become immune to that kind of bullshit. Multiple good RCTs or it didn’t happen.
[…]
The real medical consensus on face masks came from pretty much the same process as the fake medical consensus on parachutes. Common sense said that they worked. But there weren’t many good RCTs. We couldn’t do more, because it would have been unethical to deliberately expose face-mask-less people to disease. In the end, all we had were some mediocre trials of slightly different things that we had to extrapolate out of range.
Just like the legal term for “not proven guilty beyond a reasonable doubt” is “not guilty”, the medical term for “not proven to work in several gold-standard randomized controlled trials” is “it doesn’t work” (and don’t get me started on “no evidence”). So the CDC said masks didn’t work.
Again, going back to probabilistic reasoning: it seems plausible that masks may help with slowing the spread of COVID, in the sense that you could visualize the mechanism through which that prevention would work. And even if your visualization or prediction were wrong, the cost of wearing a mask was low enough that the benefit of a reduction in the pandemic risk would surely be worth it for most individuals. In other words, whether or not masks work, the “smart move” would have been to wear the mask anyway, just in case.
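To make that "smart move" reasoning concrete, here is a toy expected-value comparison. Every number in it is an invented assumption for illustration, not a measured quantity; the point is only that when the intervention is cheap, masking wins even if you think it's a coin flip whether masks work at all.

```python
# Back-of-the-envelope expected-value comparison for mask-wearing.
# ALL numbers below are illustrative assumptions, not real estimates.

P_MASKS_HELP = 0.5        # assumed probability that masks reduce transmission at all
P_INFECTION = 0.10        # assumed baseline chance of catching COVID this year
RISK_REDUCTION = 0.3      # assumed relative risk reduction, IF masks help
COST_OF_INFECTION = 5000  # assumed cost of an infection (illness, lost work), in dollars
COST_OF_MASKING = 50      # assumed cost of masking (purchase, discomfort), in dollars

# Expected cost without a mask: baseline infection risk times infection cost.
ev_no_mask = P_INFECTION * COST_OF_INFECTION

# Expected cost with a mask: the risk reduction only applies in the world
# where masks actually work, and we always pay the (small) cost of masking.
p_infection_masked = P_INFECTION * (1 - P_MASKS_HELP * RISK_REDUCTION)
ev_mask = p_infection_masked * COST_OF_INFECTION + COST_OF_MASKING

print(f"Expected cost, no mask: ${ev_no_mask:.0f}")   # $500
print(f"Expected cost, mask:    ${ev_mask:.0f}")      # $475
```

Under these made-up numbers, masking is the better bet despite genuine uncertainty about whether masks work; and the cheaper the intervention (or the worse the disease), the more lopsided the comparison becomes.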
Correspondingly, if the most accurate and powerful predictors available to humans put only a 3% chance on COVID exploding within a month, the takeaway the media should have been emphasizing is not "Don't worry about COVID" but rather "There is a 3% chance that the global economy will be devastated, hundreds of thousands will die, and you will be forced to self-quarantine at home for months at a time."
Unfortunately, I think there are systematic biases that will lead the media to err towards downplaying novel threats like COVID.
The tendency to discount low-probability, high-impact risks is a well-documented cognitive bias. Psychologically, we are often more averse to the social and personal costs of raising unsubstantiated alarms than we are to the potentially catastrophic consequences of inaction. Consider Eliezer Yudkowsky’s essay on the purpose of a fire alarm:
What is the function of a fire alarm?
One might think that the function of a fire alarm is to provide you with important evidence about a fire existing, allowing you to change your policy accordingly and exit the building.
In the classic experiment by Latane and Darley in 1968, eight groups of three students each were asked to fill out a questionnaire in a room that shortly after began filling up with smoke. Five out of the eight groups didn’t react or report the smoke, even as it became dense enough to make them start coughing. Subsequent manipulations showed that a lone student will respond 75% of the time; while a student accompanied by two actors told to feign apathy will respond only 10% of the time. This and other experiments seemed to pin down that what’s happening is pluralistic ignorance. We don’t want to look panicky by being afraid of what isn’t an emergency, so we try to look calm while glancing out of the corners of our eyes to see how others are reacting, but of course they are also trying to look calm.
[…]
A fire alarm creates common knowledge, in the you-know-I-know sense, that there is a fire; after which it is socially safe to react. When the fire alarm goes off, you know that everyone else knows there is a fire, you know you won’t lose face if you proceed to exit the building.
The fire alarm doesn’t tell us with certainty that a fire is there. In fact, I can’t recall one time in my life when, exiting a building on a fire alarm, there was an actual fire. Really, a fire alarm is weaker evidence of fire than smoke coming from under a door.
But the fire alarm tells us that it’s socially okay to react to the fire. It promises us with certainty that we won’t be embarrassed if we now proceed to exit in an orderly fashion.
This "fire alarm bias" can manifest both in the risk assessments of individual decision-makers and in the reporting decisions of media outlets.
Journalists may be hesitant to sound the alarm on scenarios that have a high chance of not materializing, fearing they'll be ridiculed for spreading undue panic. And the public, in turn, may be inclined to dismiss warnings about tail risks as alarmist, preferring to cling to a sense of normalcy until the threat is overwhelming. This mutually reinforcing dynamic can create a powerful force toward complacency, even in the face of potentially civilization-altering risks.
Understanding this underlying bias is crucial, because it suggests the media's COVID-19 coverage was not simply a one-off aberration, but rather symptomatic of a deeper cognitive quirk that could lead to the systematic downplaying of other catastrophic threats - like the risks associated with advanced artificial intelligence.
Here are some examples of today’s media coverage of superintelligent AI existential risk:
“The AI doomers are trying to scare us. Here is what we should really be worried about.” - The Atlantic
“AI doomsday warnings a distraction from the danger it already poses, warns expert” - The Guardian
Hopefully the parallels between “Don’t worry about super AI, worry about regular AI” and “Don’t worry about COVID (Super-Flu?), worry about regular flu” are obvious: In the beginning, COVID had killed 0 people, whereas the flu had already killed millions. The claim isn’t that today’s AI problems (e.g. with surveillance, bias, misinformation, etc.) are nonexistent, just like we weren’t claiming that nobody has ever died from the flu. Similarly, the claim isn’t that Super AI is doing more damage today than regular AI, just like we weren’t claiming in 2020 that more people were dying from COVID than were dying from the flu. The claim back in 2020 was that COVID had a 3% chance of becoming much, much worse than the flu within the near future, and that we should take that 3% chance seriously.
Now consider this quote from PC Magazine:
a survey of hundreds of AI experts predicts the chances of artificial intelligence destroying humanity is actually quite small, at about 5%.
(emphasis added)
It doesn’t really matter whether the odds are 5% or 1% or 0.1%. Even a 0.1% chance of destroying humanity is catastrophic and needs to be taken seriously. The potential devastation would be absolute and irreversible.
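The arithmetic here is stark even at the low end. Using a rough world-population figure of 8 billion (an approximation, and one that still understates the loss, since extinction also forecloses every future generation):

```python
# Expected death toll of a "small" probability of human extinction.
# WORLD_POPULATION is a rough approximation for illustration.

WORLD_POPULATION = 8_000_000_000  # roughly 8 billion people

for p in (0.05, 0.01, 0.001):
    expected_deaths = p * WORLD_POPULATION
    print(f"P(doom) = {p:>5.1%} -> expected deaths: {expected_deaths:,.0f}")
# Even at 0.1%, the expected toll is 8 million lives.
```

No ordinary policy question carries stakes where the 0.1% branch of the probability tree contains 8 million expected deaths; that is why "the odds are actually quite small" is not a reassuring headline.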
The tendency to downplay novel, low-probability risks - what I’ve been calling the "fire alarm bias" - has distorted both public and media perceptions of crises like the COVID-19 pandemic. This bias toward complacency arises from our natural aversion to the personal and social costs of raising unsubstantiated alarms, combined with a false sense of security derived from the absence of definitive proof. It is a bias deeply embedded in how we as individuals reason about risk, and how our key institutions - from the media to public health authorities - assess and communicate emerging threats.
If we are to avoid repeating the tragic failures of the COVID-19 response, we must find ways to overcome this pernicious bias. That means developing a far more nuanced and probabilistic approach to risk assessment. It means demanding that experts quantify their levels of confidence, rather than relying on qualitative assurances. And it means cultivating a media culture that is willing to sound the alarm when the potential stakes are high enough.