
Understanding Epidemiology Models - signa11
https://arstechnica.com/science/2020/06/understanding-epidemiology-models/
======
compumike
The "models" being tossed around are mostly valid as short-term forecasts.
Because public policy (both government policy and aggregate human behavior)
are part of the system, this is actually a feedback loop with a time constant
measured in weeks, making forecasts beyond that point invalid quite quickly.

In contrast, I've recently modeled the pandemic response as a feedback control
system, where policy/public behavior affects infection rate, which affects
policy. This is a classic feedback engineering problem. I'm able to vary
suppression vs. mitigation along a continuous curve and look at the long-term
course of the pandemic. Counterintuitively, the policy frontier (economic vs
human damage) is concave and non-monotonic, and has three distinct regions.
[https://www.circuitlab.com/blog/2020/05/28/surprising-covid-...](https://www.circuitlab.com/blog/2020/05/28/surprising-covid-19-strategy-how-to-reduce-economic-damage/)
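
If you want to play with the basic idea without the circuit, here's a minimal
closed-loop SIR sketch in plain Python (a toy illustration of the idea, not the
actual CircuitLab model; beta0, gamma, and the suppression gain k are made-up,
illustrative values):

    # Closed-loop SIR sketch: policy/behavior is modeled as feedback that lowers
    # the effective contact rate beta as the infected fraction rises.
    # Parameter values are illustrative, not calibrated.
    def simulate(k, beta0=0.30, gamma=0.10, i0=1e-4, days=730, dt=1.0):
        s, i, r = 1.0 - i0, i0, 0.0
        for _ in range(int(days / dt)):
            beta = beta0 / (1.0 + k * i)   # stronger response when infections are high
            ds = -beta * s * i
            di = beta * s * i - gamma * i
            s, i, r = s + ds * dt, i + di * dt, r + gamma * i * dt
        return r  # cumulative fraction ever infected

    for k in (0, 10, 100, 1000):
        print(f"suppression gain {k:4d}: total infected fraction ~ {simulate(k):.3f}")

Sweeping k, and attaching some cost to how long and how hard suppression is
applied, is one crude way to trace out the kind of economic-vs-human-damage
frontier described above.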

We need engineering-driven long-term thinking, so that we can agree on an
optimal strategy, before we choose individual tactics to implement that
strategy.

~~~
iantrt
That's a very interesting approach -- I really enjoyed reading your
transcript, and your implementation of the system as a circuit is really quite
unique!

I think you're onto something with your motivation: most epidemiological
models basically consider the infection/mortality/recovery rate to be fixed,
or at most modifiable over many years by national investments in health. It
seems that, before COVID-19, nobody had really thought that the trajectory of
an ongoing pandemic could be changed, so I haven't seen any models which
incorporate that.

But I think it's unfair to use scare-quotes around "models". The truth is,
there is a tremendous amount of insight to be gained from those "models" (SIS,
SIR, SEIR, etc.). I mean, your own proposed model is just a small variation on
one of the classical models, and you don't even show that it makes better
long-term predictions, which is what you criticise the other models for.

Sorry, that was a long-winded way of basically saying: Great work, but don't
discount the classic models entirely!

~~~
compumike
Thank you! Oh I absolutely agree -- the core of my model is a 100%-standard
compartmental SIR model.
[https://en.wikipedia.org/wiki/Compartmental_models_in_epidem...](https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology)

But, the differential equation models shown on the Wikipedia page have no
policy/response variables, and basically assume an unintelligent, static null
response. That was what was feared initially (early March), but of course fear
drives behavior, so there's a closed-loop system here. That's my small
variation and I think it's probably an important one.
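
For reference, the standard SIR equations on that page (with S, I, and R the
number of susceptible, infected, and recovered people in a population of size
N) are:

    dS/dt = -beta * S * I / N
    dI/dt =  beta * S * I / N - gamma * I
    dR/dt =  gamma * I

with beta (the transmission rate) and gamma (the recovery rate) held constant.
The small variation above amounts to letting beta depend on the current state
-- e.g., beta falling as I rises -- which is what closes the loop between
infections and behavior.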

~~~
zwaps
This is the same reason such models have been out of favor in the social
sciences since the 1970s: responses to policy changes are complex, and one
usually tries (with more or less success, which is another topic) to model the
micro behavior, i.e., the incentives of actors, rather than to find fixed
systemic parameters. Cf. the Lucas critique.

Ironically, models without such complexities are often described as
engineering- or physics-inspired, from the observation that human behavior is
more complex and interdependent than the objects under consideration in
physical systems. Natural scientists and engineers delving into social systems
frequently fall prey to these issues.

------
ChrisCinelli
There are a lot of parameters.

Assuming the models are good, the inputs make all the difference, and the
input numbers have been largely off, at least until recently.

I would be interested to see how the outputs change as specific parameters are
varied.

It would also be good to see a Monte Carlo simulation, after defining
realistic distributions for the parameters, to find how the numbers of
infected and deaths change.
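
As a rough sketch of what that could look like (the distributions below are
made up for illustration, not calibrated to anything real):

    import random

    # Toy Monte Carlo over SIR parameters: sample R0 and the infection fatality
    # rate from assumed distributions, run a simple SIR, and collect the spread
    # of outcomes. Distributions and values are illustrative only.
    def final_death_fraction(r0, ifr, gamma=0.10, i0=1e-4, days=730):
        beta = r0 * gamma
        s, i, r = 1.0 - i0, i0, 0.0
        for _ in range(days):
            ds = -beta * s * i
            di = beta * s * i - gamma * i
            s, i, r = s + ds, i + di, r + gamma * i
        return r * ifr  # fraction of the population that dies

    samples = sorted(
        final_death_fraction(random.lognormvariate(0.9, 0.25),   # R0 centered near ~2.5
                             random.uniform(0.002, 0.02))        # IFR between 0.2% and 2%
        for _ in range(10_000)
    )
    n = len(samples)
    print("5th / 50th / 95th percentile death fraction:",
          samples[n // 20], samples[n // 2], samples[19 * n // 20])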

I toyed with a simple (probably simplistic) model early on, and I was finding
that, because of the exponential nature of the relations, small changes in the
inputs were producing huge changes in the output. That was telling me that,
unless we could get more realistic data, we could go from a number of deaths
smaller than the figure usually given for the flu to numbers 1-2 orders of
magnitude larger...
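
That sensitivity is easy to see with made-up numbers: under simple exponential
growth, a 20% vs. 25% daily growth rate looks like a small input difference,
but over 60 days it compounds to (1.25/1.20)^60, roughly a 12x difference in
cases -- so modest errors in the estimated parameters become order-of-magnitude
errors in the projections.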

------
xhkkffbf
I know I'm being cynical, but I'm not sure anyone understands these things. If
the top scientists from well-funded teams came up with estimates that were so
wildly off, I'm not sure why we have any hope of getting these things right.

We need a better philosophical explanation. If you ask me, the biggest issue
is that they assume it just spreads kind of randomly. That doesn't seem to be
what happened with COVID-19. It flourished in some places and limped along in
others. We have theories about subways or air pollution, but none correlates
very well.

And then there's Farr's law, which doesn't match these models very well.

~~~
btrettel
> If the top scientists from well-funded teams came up with estimates that
> were so wildly off, I'm not sure why we have any hope of getting these
> things right.

A "top" scientist produces a ton of publications and brings in a lot of grant
money. You don't need to be right to be a "top" scientist. Change the
incentives and you'll change the outcomes.

The failure of blind predictions (i.e., without the opportunity to calibrate
the model to the data) says more about the culture than whether this is
possible. Weather forecasting does a fairly good job:

[https://journals.ametsoc.org/doi/full/10.1175/2011MWR3525.1](https://journals.ametsoc.org/doi/full/10.1175/2011MWR3525.1)

It's instructive to figure out why. I'd say it's because weather forecasters
have a ton of data, forecasts can quickly be proven right or wrong (so they
get a lot of feedback), and they have a culture emphasizing accuracy. It also
helps that they're modeling physics, though various submodels (turbulence,
etc.) may not be any more credible than the models in epidemiology.

Blind predictions have been tried in a variety of domains. I'm most familiar
with computational fluid dynamics, e.g.:

[https://www.sciencedirect.com/science/article/abs/pii/S03797...](https://www.sciencedirect.com/science/article/abs/pii/S0379711209000034)

[https://ntrs.nasa.gov/search.jsp?R=19970027380](https://ntrs.nasa.gov/search.jsp?R=19970027380)

(There are many more examples.)

As I recall, it's not uncommon for different groups even using the same
software to come up with radically different predictions. I'd say scientists
and engineers need much more practice making blind predictions. Blind
prediction exercises should occur regularly.

There's a yearly competition to predict the heat release rate of a Christmas
tree: [https://fpe.umd.edu/events/christmas-fire-safety-demo](https://fpe.umd.edu/events/christmas-fire-safety-demo)

I entered one year. Rather than using a fancy computer model, I merely
smoothed last year's data and scaled it by the ratio of the mass of this
year's tree to the mass of last year's tree. As I recall, I did better than
most!
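
In code, that approach is something like the following (the numbers and the
smoothing window are hypothetical, not the actual competition data):

    # "Smooth last year's curve, then scale by the mass ratio" -- a sketch of
    # the approach described above. All values here are made up for illustration.
    def moving_average(values, window=5):
        half = window // 2
        out = []
        for j in range(len(values)):
            chunk = values[max(0, j - half):j + half + 1]
            out.append(sum(chunk) / len(chunk))
        return out

    last_year_hrr = [0, 5, 40, 300, 900, 1400, 1100, 600, 250, 80, 20, 5]  # kW
    last_year_mass, this_year_mass = 8.2, 9.7  # kg

    prediction = [h * this_year_mass / last_year_mass
                  for h in moving_average(last_year_hrr)]
    print([round(p, 1) for p in prediction])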

------
ChrisCinelli
A discussion about modeling:

[https://www.youtube.com/watch?v=gxAaO2rsdIs](https://www.youtube.com/watch?v=gxAaO2rsdIs)

------
ChrisCinelli
Any open source models worth looking into?

------
mistrial9
I pulled an epidemiology textbook from the 1970s, from a reputable author. The
math is very dense, but it's basically an extension of calculus-type curves
from the 1950s or so. The overall effect seems overbearing and self-
referential (to my untrained eye). The serious gravity of the subject, with so
many lives affected in an event, leads to a combination of erudition and also
solemn, unapproachable pronouncements, it seems. Maybe there are useful
aspects to this curve approach, in hindsight... but, as mentioned in other
posts here, data-driven feedback is now a thing, where it just was not a thing
then.

Lessons learned -- Question Authority!? Use good tools, look for data and use
it well, ask the right questions.

~~~
avs733
you picked up a source from 50 years ago in a field you (seemingly) know
little about and your takeaway was 'question authority'?

Some reflection on your own epistemic commitments and beliefs may be in
order...

Honestly, the ongoing discussion of COVID treatment and modeling on HN scares
the pants off me, given how uninformed but self-assured it is.

~~~
mistrial9
I took some time to review something that was "authoritative" from 50 years
ago, and yes, the carry-over to modern times is not great. Does it make me
weak to question it?

I am purposefully not "self-assured" but rather looking into the topic. As a
learner, why would I not question the content I found?

The textbook leaves no room for alternatives -- in fact, the tone and
presentation of the textbook suggest that the topic is settled and the math is
"best".

~~~
mattkrause
That's the nature of a textbook though, not the field.

Most textbooks present a pretty definitive take on a topic, if only for
pedagogical reasons. You _might_ get a few well-defined controversies, like
various theories of matter in chemistry, or frequentists vs. Bayesians in a
stats book. Even there, though, that debate is often resolved, one way or the
other, by the end of the first chapter.

The actual back-and-forth in most fields happens in journal articles and
conference presentations instead.

