
Imperial College London have released their Covid-19 epidemic simulation - bencollier49
https://github.com/mrc-ide/covid-sim
======
abainbridge
This code is great fun. Start here to explore the horror:
[https://github.com/mrc-ide/covid-sim/blob/master/src/CovidSi...](https://github.com/mrc-ide/covid-sim/blob/master/src/CovidSim.cpp)

I like the way InitModel() crashes (I think) if a global pointer called bmh
(short for bitmap header) isn't first initialized by calling InitBMHead() from
Bitmap.cpp. I guess it's obvious to academics with giant brains that
InitModel() depends on a bitmap existing.

But it gets worse - the pointer is declared in CovidSim.cpp but Bitmap.cpp
reaches across the module boundary to initialize it. Sweet.
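To spell out the pattern I mean, here's a minimal sketch (hypothetical names and values, not the real declarations - just an illustration of the antipattern):

```cpp
#include <cassert>

// "CovidSim.cpp" side: declares the global and depends on it.
struct BitmapHeader { int width; int height; };
BitmapHeader* bmh = nullptr;  // stays null until another module fills it in

// "Bitmap.cpp" side: reaches across the module boundary to initialize
// a global it does not own.
void InitBMHead() {
    static BitmapHeader header{640, 480};  // illustrative dimensions
    bmh = &header;
}

// If InitBMHead() is never called first, this dereferences a null
// pointer: undefined behaviour, in practice a crash.
int InitModel() {
    return bmh->width * bmh->height;
}
```

Passing the header into InitModel() as a parameter, or hiding it behind an accessor that initializes on first use, would make the dependency explicit instead of an ordering trap.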

Edit: I feel bad now. It's great that this code was made public. I should be
encouraging that. I think academics writing code for things like this should
probably take an undergrad course on software engineering. I think the one
that taught me about Abstract Data Types would be most beneficial to the author(s) of this code.

~~~
spatulon
I was part of the GitHub team that helped get the code ready for public
release. We fixed a few bugs, reduced memory consumption, made it portable
across operating systems, etc., but the code you see is largely what was
written by Neil Ferguson and his team.

Given your concerns, I would like to mention two things:

1) the code was originally a single source file, so I'm not surprised the
module boundaries are imperfect - the code is many years old and the module
boundaries have existed for only a few weeks.

2) this was not written by professional software engineers, but
epidemiologists, and they were working with limited budget, time, and
programming experience.

~~~
thu2111
Epidemiologists _are_ professional software engineers, in the sense that they
are paid full time to write code and summarise the output in papers. They
might not be "professional" in the sense of the quality of their work but
let's not kid ourselves that these people spend half their days in the lab or
taking swabs from children in Beijing. Other types of researchers do that -
these guys just run sims.

~~~
mping
Academia is geared towards paper publishing, so why would they bother with code quality? Even if in the long run it would benefit them, it's at best a nice-to-have. Most of academia doesn't share code or data anyway.

It's like saying that programmers are professional English writers, so
comments and such should be written in perfect English.

IMHO the only valid criticism is whether the simulation modelling is sound and implemented accurately.

~~~
abainbridge
Papers should not be accepted unless they are accompanied by high quality
source code.

> IMHO the only valid criticism is whether the simulation modelling is sound
> and implemented accurately.

I think the best way to tell whether it is sound and implemented accurately is first to require that it be implemented as simply as possible to understand. Even then, it is hard to reason about code. But without simplicity, the problem is ten times worse.

I'm tempted to go further and say that source code is formalized thought. I've
written a number of simulators in the past, sometimes purely to help me
understand a system more fully. There's something beautiful about describing
the behaviour of a system in elegant code. When there's nothing left to remove
I get much greater confidence that I've fully grokked the system. I'd say this
is the area where the "computers are like a bicycle for the mind" quote is the
most true.

------
djhworld
Interesting tidbit for HN: John Carmack apparently helped a little with this project
[https://twitter.com/ID_AA_Carmack/status/1254872368763277313](https://twitter.com/ID_AA_Carmack/status/1254872368763277313)

~~~
antupis
Yeah, and he said the code is fine, so I'm trusting him.

------
lbeltrame
Some people have raised issues with the model[1][2], or rather, the software
implementation of it.

There's also a (flagged) submission on HN discussing this[3] referencing [1].

[1] (warning: possibly partisan link) [https://lockdownsceptics.org/code-review-of-fergusons-model/](https://lockdownsceptics.org/code-review-of-fergusons-model/)

[2] [https://github.com/mrc-ide/covid-sim/issues/165](https://github.com/mrc-ide/covid-sim/issues/165)

[3] [https://news.ycombinator.com/item?id=23099212](https://news.ycombinator.com/item?id=23099212)

~~~
te_chris
That issue is unhelpful. Yes, no good tests, but how about suggesting ways it
can actually be improved? Why are devs such bores when it comes to things like
this? They have released the model, review it and suggest improvements! Don't
grandstand "We, the undersigned etc etc" as if that's going to help improve
the codebase in the slightest.

Carmack is OK with it, he's put his name to reviewing it. That's not to excuse
the bad testing, but he hasn't thrown his hands up and run away. He worked
with them constructively to make it possible for us to even see it! Be more
like Carmack.

EDIT: Also be like this person: [https://github.com/mrc-ide/covid-sim/issues/161](https://github.com/mrc-ide/covid-sim/issues/161) Helpful, constructive, and the developers engaged with them.

~~~
JauntyHatAngle
I fear that this area is too divisive currently to allow for normal discourse.
The downvoting in these kinds of topics has been atrocious on Hacker News the last few weeks.

~~~
Gibbon1
HN is one of the increasingly few places online where covid19 deniers can
congregate.

~~~
lbeltrame
I personally don't much like the term "deniers", because there are a lot of unknowns about this virus and what it does, and there is no agreement on many fronts.

Also, these generalizations lump together people with very questionable theories ("It's the 5G") and others with more nuanced criticism.

Personally (and yes, I am a scientist), I try to look up whatever is said in the media, whether by journalists or experts, regardless of whether the results end up matching 100% of what was said (often they match less, and in some cases not at all).

I think there should be fairly high standards of scientific rigor even in
published code, especially if this might impact public policy actions, like we
should expect high rigor in biological and epidemiological studies.

~~~
Gibbon1
I started using the term deniers because all of them are studiously ignoring something that doesn't pencil out. It doesn't matter if they are loony deniers, sciency deniers, or "I have a big brain" tech-bro deniers. The result is all the same. Garbage.

Dangerous garbage.

------
beshrkayali
Honestly this is kind of sad. I can't understand how someone can have the gall
to use any results from this spaghetti code in a scientific paper, let alone
one that changes a country's entire strategy.

~~~
jfnixon
I'll be "that guy" and point out we know the climate simulations have similar
issues.

------
notkaiho
I'm no expert in C++ so not sure if it's meant to look like that, but that
code is making my brain hurt.

~~~
mshook
Just argument parsing makes my eyes bleed...

[https://github.com/mrc-ide/covid-sim/blob/master/src/CovidSi...](https://github.com/mrc-ide/covid-sim/blob/master/src/CovidSim.cpp#L146)

That stuff is priceless:

    else if (argv[i][1] == 'C' && argv[i][2] == 'L' && argv[i][3] == 'P' && argv[i][4] == '1' && argv[i][5] == ':')

I guess string comparisons are complicated. I also fail to see why they used
':' as the separator and why they didn't use a proper library to parse argv...
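For comparison, the same check could be written with strncmp - a sketch assuming the leading switch character at argv[i][0] has already been consumed, with a made-up helper name (not anything from the covid-sim sources):

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Illustrative alternative to the quoted character-by-character test:
// strncmp matches the whole "CLP1:" tag in one call, and the payload
// after the ':' separator can be sliced off directly.
bool ParseCLP1(const char* arg, std::string& value) {
    const char* tag = "CLP1:";
    const std::size_t len = std::strlen(tag);
    if (std::strncmp(arg, tag, len) != 0) return false;
    value = arg + len;  // everything after the ':' is the payload
    return true;
}
```

A table of tag names mapped to handlers would then replace the 130-line else-if chain entirely.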

~~~
tasogare
Because when you have only a very limited number of arguments it's easier to do it that way than selecting a lib, adding it to the project, reading the doc, trying it, and integrating it for real. I had the same issue in C#, and after trying 2 or 3 libs (including one by Microsoft) I just gave up.

~~~
mshook
I hear you but in their case they have more than 130 LOC to parse ARGV. Doing
all that manually hurts both code readability and maintainability.

~~~
notkaiho
Having seen a discussion about this on a different forum, someone who used to work in academia mentioned that repeatability and consistency are key for academic code, which means readability and convention often get sacrificed - it works for the author, and if someone wants to repeat the experiment it should get the same results without wrangling with many external packages or modules that may change or become obsolete.

Still hurts to read though :P

~~~
tasogare
Yep. Also, speaking from an academic point of view, code won't make you advance in your career by itself (only papers based on it will), so the minimum is done until the thing is usable for its goal. So forget about CI/CD, proper test suites, documentation (I hate this point personally and document my projects a bit more than the average researcher) and engineering best practices.

~~~
jfnixon
Then public policy should heavily discount results based on academic code if it is shown to be poorly engineered. I'd go further and say you can't trust the papers based on the results of badly engineered simulations. As a poster said
earlier "I think there should be fairly high standards of scientific rigor
even in published code, especially if this might impact public policy actions,
like we should expect high rigor in biological and epidemiological studies."

~~~
notkaiho
...what do you think those biological and epidemiological studies hinge on? ;)

------
te_chris
If this has been flagged off the front page I despair for HN. This should be
an apex story for us: open source release of a computer model that has
incredible relevance to loads of people globally. Quite sad.

~~~
bencollier49
I'm pretty sure it was. It was at number four and rising then suddenly dropped
to the bottom of the front page before vanishing.

------
asplake
Alumnus here, it’s Imperial College, London. It is a university (in its own
right), but always goes by that name.

~~~
howd00
Only since 2007. Uni of London before that for 100 years. Used to play them at
sports as part of the ULU league.

~~~
OJFord
No, Imperial's royal charter (what's needed to be 'a university' and award
degrees in your own right) dates to 1907. I believe that's true of most of
UoL's constituents, just not the smaller ones perhaps like SOAS or other
field-focused colleges.

------
kidintech
I guess confidence really is key when making code public...

------
jarym
For a long time academics would only publish formulas and results as part of
their papers - omitting source code and arguing that ‘the formulas are the
only thing that matter’.

Perhaps the real reason is the code typically smells of Swiss cheese.

From now on I vow to trust NO academic results of computer models unless
source code is published along with instructions on how to reproduce outputs
(or at least similar output!)

------
buboard
.

~~~
toyg
_> Its results is as useful as simcity is for mayors._

I bet the quality of local administration would go through the roof if all
mayoral candidates were forced to be proficient at SimCity.

------
devit
Is this thing actually being used for policy decisions?

Given the complexity, poor language/framework choice (should have used either
Rust or Tensorflow), bad code design, and unclear determination of the
parameters (especially the fact they don't seem to be estimated from real data
with Bayesian inference, or if they are they didn't release that), as well as
the ludicrous CPU and RAM usage making it impossible for most people to run it
and thus check it, it doesn't seem like it has any chance of actually being a
good model.

If this is the state of the art for the most important statistical modelling
project in the world, we are in the dark ages.

~~~
bencollier49
It's 13+ years old. There was no Rust or Tensorflow when it was originally
produced.

~~~
drtillberg
13+ years ago the same academic modeler predicted 200 million deaths from a bird flu outbreak that killed 282 people in total.[1] That's a pretty enormous error - alarmist, publicity-seeking behavior then and now, but the damage was felt more acutely now. If this modeler really wanted a competent simulation, it would have been better constructed. That it was not well constructed is another datapoint indicating how egregious the error was on the part of the government in failing to properly vet it.

[1] [https://www.express.co.uk/comment/columnists/frederick-forsy...](https://www.express.co.uk/comment/columnists/frederick-forsyth/1270245/coronavirus-neil-ferguson-sars-bird-flu-deaths)

~~~
zimpenfish
That's an amazingly disingenuous comment.

[https://www.theguardian.com/world/2005/sep/30/birdflu.jamess...](https://www.theguardian.com/world/2005/sep/30/birdflu.jamessturcke)

He said that if it was the same as the 1918 flu, you could probably scale it
up to 200M people. He didn't _predict_ 200M, he speculated that it could be
that bad.

And he wasn't alone - "A global influenza pandemic is imminent and will kill
up to 150 million people" said "David Nabarro, one of the most senior public
health experts at the World Health Organisation"

"A Department of Health contingency plan states that there could be anywhere between 21,500 and 709,000 deaths in Britain."

An unnamed WHO spokeswoman said "best case scenario" would be 7.4 million
deaths globally.

[https://www.newscientist.com/article/dn7787-flu-pandemic-let...](https://www.newscientist.com/article/dn7787-flu-pandemic-lethal-yet-preventable/)

"And yet, the models show, if targeted action is taken within a critical
three-week window, an outbreak could be limited to fewer than 100 individuals
within two months."

Oh, he also showed how to react and keep the number of casualties to a
minimum.

Don't believe what you read in dirt rags.

