The COVID-19 pandemic that swept across the globe this year is the defining event of a generation. To say that the disease has taken a toll would be a ludicrous understatement; with medical care stretched thin, entire economies in lockdown, and a mental health crisis raging quietly beneath it all, the virus’s collateral damage alone has exacted a devastating cost only halfway through 2020. Looking forward, it’s hard to say what the long-term consequences of the pandemic will be—we don’t know what to expect from the next two weeks, let alone the rest of the decade.
Still, it’s easy to forecast one upsetting phenomenon gaining ground in the disease’s wake. It’s an issue we’ve already battled for decades, and especially in recent years: the loss of trust in science.
It’s no wonder that the general public has begun to question scientific credibility. Projections and social distancing rules are pushed into the public’s face one minute, then cast aside and replaced the next—it’s exhausting to keep up with. The World Health Organization (WHO) has been particularly guilty of this “flip-flopping.” In March, the WHO advised the public against wearing masks, only to reverse course weeks later. That month, they also reported a 3.4% fatality rate for COVID-19—but even today, we can’t be sure of the true number, as estimates vary from country to country and week to week. Just this week, a WHO statement claiming that asymptomatic transmission was “very rare” was hastily walked back—in reality, no one yet knows how asymptomatic carriers are affecting viral spread.
Conflicting information generally isn’t a consequence of bad intent or some hidden agenda. Epidemiology is difficult at the best of times, and more so in a public health crisis. Modelling disease transmission dynamics, projecting case numbers, investigating the effects of different lockdown measures—each of these practices is indispensable to emergency policymaking, but the data needed to ground them are hard to come by during a pandemic.
We can’t exactly infect humans with the coronavirus to measure its fatality rate directly. We can’t isolate the effect of closing schools on the infection rate when schools are shutting down at the same time as workplaces, restaurants, and recreation centres. We can’t know for sure whether our models are missing parameters. Science is limited; we don’t know everything, or even close to everything.
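To make that concrete, consider a toy sketch of the kind of transmission model that underlies case projections: a basic SIR (Susceptible-Infected-Recovered) simulation. The code below is purely illustrative, with hypothetical parameter values that are not fitted to real COVID-19 data, but it shows how a modest shift in a single uncertain input (the transmission rate) changes a projection dramatically.

```python
# A minimal SIR (Susceptible-Infected-Recovered) simulation.
# All parameter values are hypothetical, chosen for illustration only;
# they are not fitted to real COVID-19 data.

def simulate_sir(beta, gamma=0.1, population=1_000_000,
                 initial_infected=100, days=150):
    """Run a discrete-time SIR model and return the daily infected counts.

    beta  -- transmission rate (new infections per infected person per day)
    gamma -- recovery rate (1 / average infectious period in days)
    """
    s = population - initial_infected  # susceptible
    i = initial_infected               # currently infected
    r = 0                              # recovered
    daily_infected = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        daily_infected.append(i)
    return daily_infected

# Two plausible-looking guesses at the transmission rate yield
# very different projections of the epidemic's peak.
for beta in (0.25, 0.30):
    peak = max(simulate_sir(beta))
    print(f"beta = {beta}: projected peak of ~{peak:,.0f} concurrent infections")
```

Here, nudging the assumed transmission rate up by just 20% inflates the projected peak by tens of thousands of cases. A real epidemiological model juggles dozens of such uncertain inputs, and the uncertainty compounds.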
But you’ll seldom hear about these epistemic difficulties. What simplifications are researchers making when they predict future case numbers? What populations are being examined, and are the study samples representative of real-life demographics in the regions where the results are applied? What model organisms are being tested—can the immune system of a rhesus macaque adequately represent that of a human?
The answers to these questions and many more (when they exist at all) are behind the scenes, locked away in the “Discussion” sections of papers that can cost upwards of $30 just to read. Many journals are (rightly) tearing down paywalls in the interest of public knowledge during COVID-19, but convoluted language can make science just as inaccessible to those without the epidemiological background that many articles assume of their readers.
Even putting these barriers aside, it’s hard to know where to look for reliable science. The peer-review process employed by reputable scientific journals consists of experts picking apart every problem and bias in a paper and sending it back for rounds of editing; only then can an objective and scientifically robust piece of work be published. Preprints are articles that haven’t undergone these months of peer review but can circulate online nonetheless. Preprints are essential in a rapidly evolving health crisis; they make information available that normally wouldn’t be until after the fact, and they invite community discussion of a topic without the slow step of formal publication. On the other side of the coin, these “first drafts” can mislead the public when it isn’t made clear that they haven’t been rigorously reviewed. A result that hasn’t been carefully looked over—say, a proposed fatality rate or a social distancing recommendation—can cause undue stress or premature relaxation.
Besides, a pandemic puts pressure on reviewers to speed things up, and faults can slip through the cracks when papers are rushed through the peer-review process. Even worse, in a world where productivity is the measure of a scientist, so-called “predatory journals” employ shady peer-review practices to publish questionable papers. They profit from desperate authors willing to pay publication fees to pad their reputations, but the cost to science and public health is not worth it. Ultimately, we can—and do—end up with papers full of flawed methods and biased recommendations circulating among policymakers and members of the public who won’t question their validity.
When news and social media sensationalism take such results out of context through a game of broken telephone, the problem is compounded and an overly simplistic picture is painted for the public. Even when papers include lines like “our findings require further verification” or “this result might only apply to a certain demographic,” these caveats are often dropped entirely, leaving behind absolutes that no longer communicate the original message.
It’s easy to see how distrust in science can follow from all this. A reasonable prediction based on limited data is published, circulated, and taken out of context; a week later, it’s proven wrong by actual events. A measure is recommended based on the best science available at the time; the next month, as with the WHO’s mask advisories, the recommendation is reversed. And even if an observer is willing to accept that scientific forecasting isn’t perfect, the problematic data, bad assumptions, and biased results they might come across under the guise of objective work can solidify their disillusionment.
But even when our science is imperfect, it’s the best we have. A policy made with limited information is better than a shot in the dark. Science is not infallible, but we need it. The fundamental problem is a disconnect between the public and the process: when people don’t understand the constraints inherent in epidemiology, they won’t understand why its predictions sometimes go wrong.
And this isn’t something that happens only in a pandemic. Whether the subject is vaccination or climate change, the fact that science makes mistakes doesn’t mean we should cast it aside entirely. Skepticism is important, but total disillusionment is dangerous.
Everyone has a role to play in countering this apparent separation between people and science. Scientists must clearly and openly disclose the methods used in their papers and follow the best practices available to them, instead of trying to sneak shaky science into journals for the sake of their academic reputations. Journalists and newscasters must report responsibly on interesting or unexpected results and resist the temptation to present tentative figures or complicated findings as neat facts. Politicians need to make clear in their addresses exactly what information their policies are based on, and why those policies could change as more information becomes available.
Perhaps most of all, we the public need to inform ourselves. We need to do our homework—look into those methods, look into the claims we hear on TV or Twitter, look into the reasoning behind new bylaws. Above all, we need to understand, at least on a basic level, that a scientific paper isn’t a crystal ball: it will sometimes be wrong, especially in a rapidly evolving pandemic, and those mistakes can’t always be avoided. Experts have been working exceptionally hard, doing indispensable work to battle this disease, and dismissing that work because it’s not always perfect, or because a few “bad apples” produce dodgy papers, is simply unfair. Engage with science and have conversations about it with your family and friends. It’s on everyone to bridge the gap between science and wider society—and what better time to start than now?