“Many models are not intended as predictors but as tools to help decisions. So when you see something presented on the news, notice that it is usually a statement of what ‘could’ happen, and listen for what else ‘could’ happen if people react to the epidemic,” Lessler told me. “A lot of people have a tendency to focus on the worst case, but if the model is successful in informing policy then that dire prediction that is getting all the press will be wrong.” 

These feedback loops are further complicated by the asymmetry in how we, as individuals, view information and incorporate it into our behavior. Optimists may incorporate new information with an optimistic update bias, tilting toward taking more risks; pessimists may remain risk averse even when presented with an “optimistic” model. This is not dissimilar from confirmation bias. Our behavior also depends on epistemic trust: whether we trust one expert forecast over another enough to change our minds and behavior. This recently arose with the pushback against a controversial article in The Atlantic, written by an economist, about the risks of Covid-19 transmission in children.

Science, and specifically epidemiology, is concerned with measurement and truth. Accurate models are important. But at time point A, if a group of individuals listens to the worst-case/pessimistic/precautionary-principle model, the likelihood of the worst case actually occurring may decrease, because the group shifts its behavior to minimize risk. The opposite is also true: at the same point, if a group of individuals listens to the “dynamic causal”/optimistic model and loosens its behavior, the outcome shifts toward the worst case.
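This self-defeating dynamic can be sketched in a few lines of code. The toy model below is not any published forecasting model, just a minimal illustration under assumed parameters: a simple discrete SIR epidemic in which, when behavior reacts to rising case counts, the transmission rate falls. The heeded “worst case” then never materializes.

```python
# Toy illustration (not a real forecasting model): a discrete SIR epidemic
# where the public cuts contacts as infections rise, a crude stand-in for
# people reacting to a dire projection. All parameters are assumptions.

def simulate(feedback: bool, days: int = 200) -> float:
    """Return the peak infected fraction for a toy SIR epidemic."""
    s, i, r = 0.999, 0.001, 0.0   # susceptible, infected, recovered fractions
    beta0, gamma = 0.3, 0.1       # baseline transmission and recovery rates
    peak = i
    for _ in range(days):
        # With feedback, transmission drops as current infections rise.
        beta = beta0 / (1 + 50 * i) if feedback else beta0
        new_inf = beta * s * i
        new_rec = gamma * i
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak

print(f"peak with no behavior change: {simulate(feedback=False):.1%}")
print(f"peak with behavior change:    {simulate(feedback=True):.1%}")
```

Run both ways, the epidemic with behavior change peaks far lower than the one without: the pessimistic projection, by changing behavior, helps falsify itself.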

“Pandemic forecasting is similar to weather forecasts, which are good for a 10-day outlook, but I couldn’t tell you what the weather will be in the third week of July,” Lessler told me. With infectious diseases, “we can’t say what will happen in three months from now, since we have feedback loops with policy and behavior and uncertainty in the underlying data.”

Let’s come back to J: In Situation 1 he may decide to take that pessimistic model as a nudge to quit smoking. The reverse may happen in Situation 2. Ideally, his doctor would share both projections, and it would be up to J to weigh both options.

Public health is trickier, because decisions made by the individual ripple out to affect their community. Arguably, when millions of lives are at risk, it’s better to be overprepared and overcautious than underprepared, though the externalities to individual liberties and to the economy are also important and shape how we choose and evaluate risk.

Here’s the good news: Over time, the forecasting models of the optimists and the pessimists could appear to converge. So both the scenario and dynamic causal models are, in a sense, correct: overall and gradually, we tend to make more accurate predictions together. This suggests that once case numbers dwindle, the models will resemble one another, which either signals the end of the pandemic or simply reflects it. Lessler later shared in an email: “All models get to a destination of very low cases. It is just a matter of how long and what happens along the way.”