Show HN: Covid-19 Interactive Model (neherlab.org)
291 points by nbnoll on March 19, 2020 | 100 comments

Hi everyone, my name is Nicholas Noll; I'm a computational biologist and co-developer of the above tool. As of now there are two other people actively working on this project with me: Richard Neher and Ivan Aksamentov. Richard and I have backgrounds in theoretical physics but have been working on quantitative biology and epidemiology. Richard is also one of the founders of nextstrain.org, which leverages phylogenetic analyses to track the global spread of many pathogens -- including SARS-CoV-2. Ivan is a super talented software engineer who recently joined our group.

We started working on this a few weeks ago, in collaboration with Jan Albert and Robert Dyrdak, as a model to help public health officials and hospitals predict the load on local community hospitals and explore the utility of, and immediate need for, quarantine measures.

We are very much still actively developing the tool, now with an eye toward helping global communities at large. If anyone is interested in helping, we are organizing a road map and putting it together on our GitHub page. I'll be around to answer questions and listen to feedback.

Thank you for putting this together!

A suggestion: add thousands separators to the numbers in the "Cases through time" chart for better readability.

The linear plot is a lot more relatable (and scarier) than a log-scale plot, but there is no way to hide the "susceptible" and "recovered" curves, which makes the linear-scale view impossible to read. Being able to filter them out would be nice.

Would be nice if the parameters could be encoded in a shareable link.

Yes, I agree. We are working on this, albeit slowly. Realistically, I think we'll have an import-parameters feature (reading the file produced by the export) sooner.

I'd be happy to help you work on this feature

This is a wonderful tool; I am very much looking forward to seeing further development & more countries.

Thank you very much!

One observation I had while playing with the numbers was: The most accurately shaped curves were achieved by adjusting the "seasonal forcing" to values between 0.5 and 0.6 [1]

I extended the time range to one year and lo and behold there was the gigantic hump in all curves.

I hope that testing kits will be amply available within the next few months so that we can emulate the success of Vò [2]

[1] I believe best-fitting curves are a misrepresentation, as many cases are not recorded; this is why I optimized for a shape that kept "Infectious" consistently above the "confirmed cases" by a factor of 2-4. I am curious to learn how well my assumptions hold up.

[2] https://www.ft.com/content/0dba7ea8-6713-11ea-800d-da70cff6e...

Your modelling - how well does it match the actual data? Does it correctly reproduce the data from the countries furthest along in the epidemic (China, Iran, Italy)?

What do the terms weak, moderate, and strong mitigation mean? What measures are required to go from none to weak, or from weak to moderate mitigation?

This is the next push for our modeling. We are trying to integrate live case count data to provide more accurate scenarios for people. Once we have that data we can try to fit a few parameters to improve predictions. I should note we are actively looking for people to help us curate data sources if you are interested!

I think the most intuitive reading of the mitigation number is as a reduction in the number of contacts you have per day - i.e. 0.6 corresponds to a 60% reduction in social contacts on a given day. This is not exact, but it should give you rough handrails for the mitigation numbers.
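A minimal sketch of that interpretation (my own illustration; the function name and the linear scaling are assumptions, not the tool's actual code):

```typescript
// Hypothetical sketch: treat the mitigation value as the fractional
// reduction in daily social contacts, which scales the effective
// reproduction number linearly.
function effectiveR(r0: number, mitigation: number): number {
  // mitigation = 0.6 -> 60% fewer contacts -> 40% of baseline transmission
  return r0 * (1 - mitigation);
}

const r0 = 2.7;
const rMitigated = effectiveR(r0, 0.6); // ~1.08: still above 1, still growing
```

Under this reading, even "strong" mitigation of an R0 of 2.7 leaves the effective R above 1, which is why the curves still grow.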

1) Parameters that don't fit the already-observed data should be visibly marked, with an indication of how far they are from the observations. That's the most important use of the tool: showing which parameter values simply don't match. For each parameter, assume the others stay fixed, then show what value that parameter would need to take for the output to match the observations. If no value of the parameter can reproduce the observed output with the rest unchanged, mark it as n/a or similar; otherwise, show the difference from the nearest possible value.

2) You should allow a longer time frame for the simulation (it should be possible to run it for, e.g., 12 months).

2) is already there: there's an input field where you select day-month-year for both start and end.

Thank you very much for this, it's really great work.

Do you have a sense, or any benchmarks, for where the current responses across the world lie along the weak - strong mitigation scale? Or is that something we'll need to estimate using the model as the situation evolves more?

Not enough that I would feel confident stating publicly. I think our best hope for accurate calibration is to fit the data as it comes out (hence why we are focusing our efforts here), or to do a post-hoc analysis of a few examples, e.g. Wuhan vs. South Korea. I'll note that mitigation effects will be best measured as a reduction in exponential growth.

That makes sense, thank you!

> I should note we are actively looking for people to help us curate data sources if you are interested!

Would be very interested to help. I sent you an email yesterday with the subject title:

    Interested in helping with data for `COVID-19 Scenarios`

We’ve all seen the time series of diagnosed cases in Italy versus the U.S. in which the latter appears to track closely but with a time delay. Is it relevant that the U.S. population is some 5 to 6 times Italy’s? Is it meaningful to compare populations by cases per capita?

In theory, the exponential growth rate of the virus doesn't depend on the population size. It more strongly depends on the population structure - i.e. the average number of social contacts made in a day.
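A small simulation illustrating that point (my own sketch of a standard SIR model with made-up parameter values, not the site's code): over the first days of an outbreak the infected count grows at the same rate whether the population is one million or sixty million, because the susceptible fraction S/N is still essentially 1 in both cases.

```typescript
// Early SIR growth: dI/dt = beta*(S/N)*I - gamma*I ~ (beta - gamma)*I
// while S/N ~ 1, so the growth rate is independent of N.
function simulateInfected(n: number, days: number): number {
  const beta = 0.5;  // transmissions per infected per day (illustrative)
  const gamma = 0.2; // recoveries per infected per day (illustrative)
  const dt = 0.01;   // Euler time step in days
  let s = n - 1;
  let i = 1;
  for (let step = 0; step < days / dt; step++) {
    const newInfections = beta * (s / n) * i * dt;
    const newRecoveries = gamma * i * dt;
    s -= newInfections;
    i += newInfections - newRecoveries;
  }
  return i;
}

// Nearly identical infected counts after 10 days despite a 60x population gap:
const smallCountry = simulateInfected(1e6, 10);
const largeCountry = simulateInfected(60e6, 10);
```

Population size only starts to matter once a meaningful fraction of S has been depleted, i.e. further up the curve.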

It also depends on the growth rate of the number of tests administered. If you exponentially grow the number of tests given (which one could argue is the case in many areas), the absolute number of “confirmed cases” will also look exponential.

If you start with a hypothesis that this virus is in wide global circulation already, all this testing is doing is confirming “yup, people have the virus!”.

Only way to prove or disprove the hypothesis that the virus is widespread is large scale testing of a truly random subset of the entire population. Last I checked that isn’t happening anywhere in the United States and possibly elsewhere.

Why is proving that hypothesis important? Because if true it means the severity of this virus is dramatically less than our limited data suggests. If true, most of the rather draconian measures we are taking are pointless.

If false, god help us all I guess...

Theoretically, unbridled transmission will follow a logistic curve, which is exponential at the start. Test-kit production and testing capacity would be linear, with spikes when capacity is ordered to be increased or production facilities come online.
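A sketch of that shape (illustrative parameters of my own): the logistic curve tracks pure exponential growth almost exactly while infections are far below the saturation level, then bends away from it.

```typescript
// Logistic curve I(t) = K / (1 + A*e^(-r*t)) vs pure exponential i0*e^(r*t).
const K = 1e6;  // saturation level (illustrative)
const r = 0.3;  // growth rate per day (illustrative)
const i0 = 10;  // initial infected
const A = (K - i0) / i0;

const logistic = (t: number): number => K / (1 + A * Math.exp(-r * t));
const exponential = (t: number): number => i0 * Math.exp(r * t);

// At day 10 the two are nearly indistinguishable...
const earlyRatio = logistic(10) / exponential(10); // ~1.0
// ...by day 40 the logistic curve has bent well below the exponential.
const lateRatio = logistic(40) / exponential(40);  // well under 0.5
```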

The growth rate of deaths among the infected, and the measurable mortality displacement, make that hypothesis extremely unlikely.

As does (albeit in a weaker form) the testing of the full population of Vò.

As does the testing in at least a few European countries, where the growth rate of tests outpaces that of confirmed infections. And/or where the number of tests administered vastly exceeds the number of confirmed infections.

Sure, once we have anti-body tests widely available, the data will become better.

While growth is exponential, yes; but further up the logistic curve it makes a difference (you can't kill more people than a country's population).

Yes I agree. However, I don't think we're close to the inflection point where this is relevant yet.

You do get finite-size effects with non-mean-field models.

I just want to say thank you for putting this together as a free resource. Clearly a lot of effort went into this (and you all are making it better over time).

Interested in the backend. What is this coded in? Python/dash?

It's all coded in TypeScript and JavaScript. I've been toying with a more in-depth model in WebAssembly but, as I alluded to elsewhere in this thread, the coding is slow going.

Hey Nicholas, love the tool - great job! I'm working on a new modelling tool (https://causal.app) and built a COVID-19 model myself: https://my.causal.app/models/1432. This lets you easily run Monte-Carlo simulations. Might be useful for you. But maybe your model is too advanced for Causal.

There is a link to the GitHub repo in the About section: https://github.com/neherlab/covid19_scenarios

Hi, the assumptions for mitigation (strong, weak, ...) aren't described anywhere (I think?).

There is a bit of info on the About page, where it says the most aggressive measures taken in Wuhan give a mitigation factor of 0.1 (with a source for that data).

It would be really nice if the uncertainty in the fatality and severity information we have right now could be incorporated and visualized. Current reports by the CDC and info from France and Spain suggest that the age distribution of severe/critical cases might look very different from what the data from China led us to believe. On the other hand, we don't know enough about the prevalence of asymptomatic cases, which potentially has huge impacts on spread and severity.

It would be really nice to model the probability distributions for each of these and see the resulting probabilities of simulation outcomes.

Also, the huge uncertainty regarding the ratio of symptomless/unreported cases should be visualized. Guesses are in the range from 30 to 90%.

I'm working on a tool that lets you work with uncertainty very easily. The model I came up with is very trivial compared to Nicholas' though: https://my.causal.app/models/1432 Would be awesome if someone could build a more legit model in Causal - happy to help :)

Shouldn't this tool allow for modifying the numbers in the "Severity assumptions" section, too? This uses the same estimate from the Imperial College model, which is basically an educated guess at this point:

> A total of 72,314 patient records—44,672 (61.8%) confirmed cases, 16,186 (22.4%) suspected cases, 10,567 (14.6%) clinically diagnosed cases (Hubei Province only), and 889 asymptomatic cases (1.2%)—contributed data for the analysis.

Another study guesses that 86% of cases are asymptomatic and/or otherwise undocumented:


All numbers in the table should be editable by the user -- apart from the fatality percentages which are computed from the values given. We've been working on the design to make this more obvious at a glance.

Aha, thank you! Nice work putting this together, it's much easier to understand than a paper.

No worries. I think it's also important to note that the Imperial College study that's gotten a lot of attention recently is an agent-based model - i.e. they simulate actual individuals - which allows them to look at specific quarantine measures like school closures. You can accomplish an approximate version of this in our model by controlling the isolation column of this table, which lets you selectively quarantine age groups quantitatively.

Cool - again, nice work!

Someone noticed in another thread that in this model the death count does not depend on the number of beds or ICUs.

We are working on this! It originally started out as just a visual representation, but now it factors into the model. Check out the branch feat/hospital-bed-model.

Nice to hear, because one of the important factors is whether the health system gets oversaturated.

Edit: For anyone else interested, the link to the GitHub repo is in the "About" section at the top.

Edit 2: The About section has a lot of info. Perhaps you should copy part of that info to the main page.

Edit 3: When the site loads, it should show the graph of the simulation with the default parameters.

I just wanted to reply that the finite ICU capacity was merged last night and is now on the main webpage.

This can, in a sense, be captured in the virulence rate factor: a lower rate factor corresponds to more care being available.

The main problem is the number of people with very bad symptoms (for example, those who need an ICU). If there are enough ICU beds, the mortality rate is something like 0.1%; if there are not enough, it is more like 5%. (The numbers are difficult to measure, so these are only estimates.)
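A toy sketch of that effect (illustrative numbers and function names of my own, not the model's code): once critical cases exceed ICU capacity, the overall fatality rate among critical patients jumps, because the overflow dies at the untreated rate.

```typescript
// Blended fatality rate among critical cases: patients who get an ICU bed
// die at cfrInIcu, patients who overflow die at the higher cfrOverflow.
function overallFatality(
  criticalCases: number,
  icuBeds: number,
  cfrInIcu: number,    // fatality with ICU care (illustrative)
  cfrOverflow: number, // fatality without ICU care (illustrative)
): number {
  const treated = Math.min(criticalCases, icuBeds);
  const untreated = criticalCases - treated;
  return (treated * cfrInIcu + untreated * cfrOverflow) / criticalCases;
}

// Under capacity: everyone gets a bed, fatality stays at the treated rate.
const underCapacity = overallFatality(500, 1000, 0.2, 0.8);  // ~0.2
// Over capacity: half the critical patients go untreated.
const overCapacity = overallFatality(2000, 1000, 0.2, 0.8);  // ~0.5
```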

Now they have added the ICU overflow, which is very important for the mortality rate.

Does it scare anyone that the North/Fast setting for the USA puts the death count at ~1/2 of reality for the past few days? Does this indicate that it has spread faster than "fast"?

This is going to be really really bad isn't it?

Consider that in the US testing has barely started, and even then there are a ton of pre-conditions to get tested. Any "scary growth curve" of any absolute number you see is almost certainly a reflection of how many tests are being done. More tests mean more confirmed results, which means a nice scary-looking curve.

If you start with a hypothesis that this virus is already in widespread circulation and possibly has been for a while (which is the simplest explanation, mind you), odds are good many people already had it or have it right now.

Only way to prove/disprove if it is widespread is by testing a random sample of the entire population. Something we aren’t doing in the US.

What's really disturbing and hard to doubt is the growing daily death count in a country such as Italy [1].

[1] https://www.worldometers.info/coronavirus/country/italy/#gra...

It looks like the number of deaths per day hasn't really been growing in the past ~5 days. That's somewhat comforting.

Just a suggestion, but some general guidance on which activities correspond to which levels of mitigation would help - e.g. are schools closing at 80%, restaurants at 70%? Which activities result in which percentages?

This site isn't working for me: I grant location access, and it still says I need to turn on location access to see the map.

Fails on Chromium and Firefox.

Thank you for making such a great tool. Do you have any plans to allow the death rate to be adjusted once ICU overflow kicks in? There is going to be a much higher death rate once we run out of ICU beds, and it would be interesting to see how that changes things.


I’d be interested in helping out with the front end. Having some issues with display on my iPhone SE. Thanks for the awesome tool!

Why is it when I reduce the infectious period, the steepness of the infection curve increases? That seems wrong.

Possibly because you forgot to reduce the R factor. If you infect people over 1/2 the time, perhaps you need R to be 1/2 what it was before.

I assume that the model is spreading the people you infect across the time you are infectious.
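One plausible explanation, assuming the model derives the transmission rate as beta = R0 / t_infectious (a standard SIR parameterisation; my own sketch, not the site's code): holding R0 fixed while shortening the infectious period raises both beta and the recovery rate gamma, and the early exponential growth rate goes up.

```typescript
// Early SIR growth rate r = beta - gamma = (R0 - 1) / tInfectious:
// a shorter infectious period with fixed R0 means a steeper curve.
function earlyGrowthRate(r0: number, tInfectious: number): number {
  const beta = r0 / tInfectious; // transmissions per infected per day
  const gamma = 1 / tInfectious; // recoveries per infected per day
  return beta - gamma;
}

const slow = earlyGrowthRate(2.2, 6); // ~0.2 per day
const fast = earlyGrowthRate(2.2, 3); // ~0.4 per day: twice as steep
```

So the behavior is consistent: the same number of secondary infections packed into half the time is a faster epidemic, unless you reduce R0 along with it.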

The link is not working right now

Thank you for providing a very nice tool that is of use for all of us. Much appreciated!

After reading about the implementation of this model and experimenting with it for a while, I've made some observations about its behavior, and especially about the values that have been chosen for its parameters, which I feel are important to consider when interpreting its output.

TL;DR: Many of the critical inputs to this simulation are based off of almost entirely unknown values, and relatively small changes to these inputs can swing the results of the simulation by orders of magnitude. The defaults that the website creators have selected seem pessimistic on average compared to even the source research which can have the effect of misleading users as to what the expected outcome of the pandemic is.

Since under-reacting and over-reacting to this situation both have huge real-world ramifications, making assessments based off of even a fully valid and well-designed model can lead to incorrect decisions being made if its parameters are set unrealistically.


First of all, there are a TON of variables and many of them pull values from extremely thin data. Although it looks like the researchers didn't just make up any of their values, there are several instances of very impactful parameters being hand-waved:


The "Mitigation" ratio curve, an unsurprisingly critical variable for the simulation, seems too high even for the "strong" mitigation preset. Even after looking extensively, I couldn't find any explanation or justification from the simulation's creators as to how they created their default curve. As this is possibly the most impactful parameter in the entire simulation, that just makes it more critical to correctly pick its value.

Judging by the degree of dramatic measures that society has taken so far and the actions of people that I've observed, I'd personally estimate that ratio to be much lower than even the one included in the "strong" mitigation curve. Of course this is random guesstimation of my own here, but it goes to illustrate the massive changes in results that can stem from small changes to the input values they've chosen.

After starting at default settings, setting the population to USA and the mitigation preset to "strong", further adjusting the March 15 mitigation value down 10 percentage points from 60% to 50% (a value that seems extremely reasonable, even generous, to me) causes the total number of cases to drop by nearly 66%, holding all other values constant. Adjusting it to 40% reduces total infections by nearly another 50% from there.

Another paper that is cited by the simulation's authors writes that "[...] by taking drastic social distancing measures and policies of controlling the source of infection, with the tremendous joint efforts from the government, healthcare workers, and the people (Fig. 1), Rt was substantially reduced [from 3.58] to 0.32 in Wuhan after February 2, which was encouraging for the global efforts fighting against the Covid-19 outbreak using traditional non-pharmaceutical measures [...]" [4] This would seem to represent a mitigation ratio of 0.1 after just around a month. Clearly there are differences in the way that the pandemic was handled in China compared to how it's being handled now in the rest of the world, but China's ability to combat it that effectively seems to lend credence to the idea that mitigation ratios of less than 0.5 are possible if not already in place in the US and across the rest of the world.

In any case, claiming that nearly 80% of this virus's transmissibility remains at this point in time, and capping the "moderate" case at 60%, seems dangerously exaggerated to the upside and potentially skews the results of the whole simulation, given how critical this one variable is in determining the output.


Another foundational variable is R0: the number of additional individuals infected per infection. The linked research paper states "the early human-to-human transmission of 2019-nCoV was characterized by values of R0 around 2.2 (median value, with 90% high density interval: 1.4–3.8)" [2]. Although the distribution from the research paper has a long tail towards higher values, setting a value of 2.7 for "Moderate/North" feels somewhat disingenuous given that the median is 2.2.

Of course other research papers list a wide range of different values for this variable, so perhaps expanding the range of the presets would be a better option. In any case, this variable is dominated by the mitigation factor in cases of high mitigation, so its precise value may not matter as much in those situations.


The "seasonal forcing" factor varies from 0 to 0.2 in all of their "Epidemiology" presets. On their about page, the example they provide in their graph seems to have a value of ~0.6, but that may just be an illustration using non-realistic values for visual effect. [1]

One other thing to note is that their implementation uses the selected R0 as a mean value for their seasonal function, meaning that in the peak month the true R0 is `(1 + <seasonal forcing>) * R0`. This doesn't agree with the research paper, which estimated its R0 value using data from the most infectious period (winter). Assuming a seasonal forcing factor of 0.2, that means the R0 values they provide are actually inflated 20% in January on top of their already-high values.

Their very choice of a cosine wave to model that impact also seems largely unfounded, though given the lack of data pointing to a more accurate option it's as good as any. That being said, even a small change to that function could have massive impacts on the simulation's results.
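A sketch of the seasonality scheme as I understand it from the description above (the function shape and names are my guesses at what was described, not the site's code): the entered R0 acts as an annual mean, and the winter peak reaches (1 + eps) * R0.

```typescript
// Cosine seasonality around a mean R0, with a one-year period.
// t and tPeak are in days; eps is the seasonal forcing amplitude.
function seasonalR0(r0Mean: number, eps: number, t: number, tPeak: number): number {
  return r0Mean * (1 + eps * Math.cos((2 * Math.PI * (t - tPeak)) / 365));
}

const r0 = 2.2;
const eps = 0.2;
const atPeak = seasonalR0(r0, eps, 15, 15);           // ~2.64 in mid-January
const atTrough = seasonalR0(r0, eps, 15 + 182.5, 15); // ~1.76 half a year later
```

Under these assumptions, an R0 fitted on winter data and then entered as the annual mean gets inflated by a further factor of (1 + eps) at the peak, which is the 20% overstatement described above.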


In the most pessimistic scenarios, their implementation of additional fatalities caused by ICU overflow fails to take into account potential measures such as emergency hospitals, the hospital ship the Navy is sending to New York [3], and temporary increases to hospital capacity. In most cases, the peak of ICU overflow doesn't occur until months down the line, leaving a lot of time to scrape up resources and increase capacity.

Perhaps providing an additional value to adjust the hospital capacity over time would be useful for accurately representing the impact of ICU overflow.


- "Imports" is held constant through the entire simulation. It seems unlikely that it would remain this way; setting this as a curve instead would make more sense in my opinion. Also, I couldn't find any justification or explanation for the values they picked for that parameter.

- Improvements to treatment causing ICU/hospital stay time to go down aren't accounted for. Of course there's no guarantee that it will change significantly as the pandemic progresses (or even that it will not get worse due to some mutation of the virus or other circumstance), but recent research and experimentation seems to already be making progress in creating more effective treatments for the virus.

- Because the system this model represents is so incredibly vast, there is an uncountable number of possible external events that could, in some situations, invalidate the whole thing entirely. At least mentioning possibilities such as a vaccine being released in the coming months, or mutations that develop antiviral drug resistance, seems prudent to me.


NOTE: I'm not a trained statistician or epidemiologist - I just have experience working with data and computational models. I'd appreciate any feedback on the accuracy of my comments here, or expansions/refutations of my thinking.

[1] Link from about page: https://neherlab.org/covid19/assets/seasonal_illustration.15... Screenshot at time of writing: https://ameo.link/u/7pk.png
[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7001239/
[3] https://www.nbcnewyork.com/news/local/navy-medical-ship-comi...
[4] https://www.medrxiv.org/content/10.1101/2020.03.03.20030593v...

The biggest thing I've been trying to get friends to understand (from two weeks ago on the "we have to shut things down" train, and now on the opposite "not interacting with any other human being for two months might not be the best course of action" train) is the non-linearity of how the infection rate responds to decreasing the number of social contacts.

I.e., in most models you can get about a 50% gain from just a 25% reduction in social contact; after that it's diminishing returns, all the way down to trying to wring small population-level effects out of reducing contacts from "very little" to "very, very little".
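One common back-of-the-envelope way to see that non-linearity (my own illustration with made-up numbers, not the thread's or the site's model): if both sides of every contact cut their contacts by a fraction c, transmission under random mixing scales roughly like (1 - c)^2, so the first cut buys much more than each subsequent one.

```typescript
// Fraction of baseline transmission remaining after both parties
// reduce contacts by a fraction c (random-mixing approximation).
const transmissionFactor = (c: number): number => (1 - c) ** 2;

// The first 25% cut removes ~44% of transmission...
const firstCut = 1 - transmissionFactor(0.25);
// ...the next 25% (going from 25% to 50% reduction) removes only ~31% more.
const nextCut = transmissionFactor(0.25) - transmissionFactor(0.5);
```

This is only a heuristic - real contact networks are not randomly mixed - but it captures the diminishing-returns shape described above.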

In my mind, people are still underestimating the effects of large gatherings (and by large here I even mean 100 people in a nightclub) and overestimating the effects of seeing one friend in your house that sees another one friend the next day etc.

I don't want to get into an overall debate here about what level of distancing is appropriate or acceptable. I would like to see an interactive model that does a good job of showing how different types of distancing interventions have very different effects, and that once you get past a certain point it gets very hard to squeeze much more overall connection out of the system...

I agree that having a more "realistic" model that simulates people's social interactions is important, especially as we look to predicting longer-term effects. This is on the long-term roadmap for our model here, although as of now it's on the back burner while we deal with more immediate concerns like hospital allocation.

Yeah, it's hard to get that out of a stock/flow differential model (that's what this is under the hood, right?), but if we had some sense of the effect of different measures on R we could at least build in different "presets". Thanks for your work here! I wasn't meaning to disparage it, just calling out the lack of anything clear I could offer friends to help them understand that point...

Yes, under the hood it's a coupled set of ODEs with the ability to be stochastically sampled. But this means the entire susceptible population can interact with the entire infected population every time step with some small probability - hence no structure. And no worries, no offense taken!
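For anyone curious what "a coupled set of ODEs" looks like in practice, here is a minimal deterministic SEIR sketch in TypeScript (my own illustration with made-up parameters; the real model adds age structure, severity compartments, seasonality, and stochastic sampling):

```typescript
// Forward-Euler step of a mean-field SEIR model: every susceptible can
// interact with every infected each time step, weighted by I/N.
interface State { s: number; e: number; i: number; r: number }

function stepSEIR(st: State, beta: number, sigma: number, gamma: number, dt: number): State {
  const n = st.s + st.e + st.i + st.r;
  const infections = beta * st.s * (st.i / n) * dt; // S -> E, mean-field mixing
  const onsets = sigma * st.e * dt;                 // E -> I after latency
  const recoveries = gamma * st.i * dt;             // I -> R
  return {
    s: st.s - infections,
    e: st.e + infections - onsets,
    i: st.i + onsets - recoveries,
    r: st.r + recoveries,
  };
}

// 200 simulated days with dt = 0.01; illustrative rates (R0 = beta/gamma = 2).
let st: State = { s: 999_990, e: 0, i: 10, r: 0 };
for (let k = 0; k < 200 * 100; k++) {
  st = stepSEIR(st, 0.5, 1 / 3, 1 / 4, 0.01);
}
const totalPopulation = st.s + st.e + st.i + st.r; // conserved by construction
```

The lack of structure is visible in the `st.i / n` term: infection pressure depends only on the total infected count, not on who knows whom.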

The Washington Post has a nice visualization. Not aiming at realism but easy to understand: https://www.washingtonpost.com/graphics/2020/world/corona-si...

I think Japan has shown us one thing: the absolutely dominant effect of closing schools. 1-5,000 people in close proximity, with high levels of social mixing and a very low rate of symptomatic infection - it's absolutely perfect for incubating the virus.

The tough part being: our teachers are already vastly underpaid.

Luckily, every teacher I know is still being paid. Even those who are completely off and don't have to teach virtual classes.

Except they can work from home, and now that they're actually required to, their commute cost is way cheaper.

Some teachers CAN work from home (not PE, Home-Econ teachers, etc.), but it's a less-than-ideal situation.

Are children self-motivated enough to work from home? I know many adults who aren't. Without adults to keep children accountable, total work-from-home schooling is likely to create gaps in kids' education. And which children are least likely to have an adult around keeping them accountable? Impoverished children.

A counter argument is that keeping kids physically proximate often exposes them to violence which makes school a lot harder and tends to really mess people up.

Lots of modern jobs worth doing will end up remote (at least occasionally and often permanently) so it’s probably a good idea to start training them when they’re younger anyway.

Nassim Nicholas Taleb discussed how superspreaders have a big effect[1]

[1]: https://www.youtube.com/watch?v=LX_bqMQfWlw

Do you have a source that you could share on the topic? I have been trying to inform myself and balance my behavior and think it would be very helpful.

There is an official health institute report from Italy: https://www.bloomberg.com/news/articles/2020-03-18/99-of-tho...

This is just a panic created on social media.

> This is just a panic created on social media.

Don't be ridiculous. You can discuss the distribution of deaths, but the deaths are real.

A 2% death rate is still 2%. That's huge. It still has the potential to overwhelm any country's health system.

The deaths were always real. Putting the whole planet on hold because some 80-year-olds with three associated illnesses died is a bit borderline SF and a bit borderline stupid.

It's not going to be 80-year-olds dying if the hospitals are overwhelmed. 40% of people in ICU in Italy are 19-40, basically, those who make up the majority of the workforce.

19-40 makes up 0.25% of the deaths (4 in total) [0]

In the US, of people aged 20-44, "only" 2.0-4.2% are admitted into ICU, significant, but making up only 12% of the ICU admissions, and 20% of the hospitalizations. [1]

0: https://jamanetwork.com/journals/jama/fullarticle/2763401 1: https://www.cdc.gov/mmwr/volumes/69/wr/mm6912e2.htm?s_cid=mm...

I think you're confusing "dead" with "in the ICU". If you listen to the various interviews with Italian doctors, you'll learn that recovering from the full-blown infection requires spending weeks on ventilation. So looking at the death count does not accurately capture the real problem, which is that you accumulate a pile of people who are severely sick but will probably make some sort of recovery eventually. In the meantime, your health system is offline.

> 40% of people in ICU in Italy are 19-40, basically, those who make up the majority of the workforce.

Correction: 11.9% of people in ICU in Italy are 19-50; 51.2% are 51-70; and 36.9% are > 70 [1].

[1] https://www.epicentro.iss.it/coronavirus/bollettino/Bolletti...

I don't believe that could possibly be true. The current ICU makeup in the US is 2-4% for ages 20-44. That's a 10-20x increase you're proposing. And yes, you can say that they're triaging, but for this to be possible you'd need truly enormous numbers of infected individuals AND tons of triaging going on, which doesn't feel accurate based on what the news says.

I highly doubt that number. Do you have a reliable source for that?

It looks startling, but could be true. Reportedly, the health workers in Italy had to do some fairly brutal prioritisation in triage. If you have a lot more patients requiring ICU than you have stations, you end up with a high proportion of younger patients in ICU even if they are a low proportion overall.

The classic fake news model: post some shit with no evidence that is immediately validated by some other worried internet person.

Basically the current state of the internet in a nutshell right now. People assume any valid question of any data means that person is a denialist nutter or something. Fact is, we are working with very biased data and people, even the smart folks here, are misinterpreting it. The virus exists, but there is no data that tells us how widespread it is in the population. Odds are very good this sucker has been adrift for weeks or months and many of us already got it. But without good, random sampling we cannot prove or disprove that hypothesis.

About the only place I’ve seen where rational talk is allowed without getting flame to death is /r/covid19.

Hacker news, unfortunately, appears to have devolved into yet another place full of panic stricken people.

Your aggression is unwanted and unnecessary. I haven't validated anything; I speculated, with appropriate qualifiers.

I'm happy to defer to actual stats if you have any.

Page 5 of the recent well-respected Imperial paper on modelling approaches to managing the outbreak has statistics on hospitalisation that I imagine are the current best estimates.


For example, 3.2% of cases aged 30-39 require hospitalisation. Could that rise to 40% after triage? I'm not sure, it seems high, but it depends on what pressures the system is under.

Can't find the Italy link, but here's an article talking about hospitalizations and young people including alarming ICU cases: www.washingtonpost.com/health/2020/03/19/younger-adults-are-large-percentage-coronavirus-hospitalizations-united-states-according-new-cdc-data/

Quote from the movie The Big Short: "Every 1 percent unemployment goes up, 40,000 people die, did you know that?". That is probably an exaggeration, but I really hope people in power know what they are doing at the moment.

>The deaths were always real. Putting the whole planet on hold because some 80-year-olds with three associated illnesses died is a bit borderline SF and a bit borderline stupid.

Unfortunately this approach doesn't work unless you somehow deal with the relatives of the millions of 80-year-olds who are now pissed off that the government let them die and their bodies rot in the streets, and you arrange for the army to force medical providers not to treat some proportion of the dying.

"More than 75% had high blood pressure, about 35% had diabetes and a third suffered from heart disease."

I feel like a huge number of Americans have high blood pressure and heart disease. Seems like it will still be very serious for a large number of people.

>705 were aged 20 to 44, according to the Centers for Disease Control and Prevention. Between 15% and 20% eventually ended up in the hospital, including as many as 4% who needed intensive care.

That's not a shocking number to you? 15% - 20% of people aged 20-44 ended up hospitalized? Even if no deaths were involved, that many young people ending up in the hospital is concerning.

It really comes back to the testing. There could be a million people 20-44 who didn't get tested and didn't need to visit a hospital.

If you only test people who arrive at a hospital with serious symptoms, it's not surprising that many of them go on to be admitted.

maybe examine the prevalence of all those conditions in the US?

"More than 99% of Italy’s coronavirus fatalities were people who suffered from previous medical conditions"

What were the most common pre-existing conditions?

"More than 75% had high blood pressure, about 35% had diabetes and a third suffered from heart disease."

How common are those conditions in the United States?

High Blood Pressure - 1 in 3

Diabetes - 9 in 100

Heart Disease - 1 in 10

I would be careful not to confuse correlation with causation here. Most old people have medical conditions. For example, about 75% of older people have high blood pressure [1] so seeing a 75% rate in the virus fatalities should not mean much.

I suspect age and immune function drive mortality, and the other factors are merely along for the ride.

[1] https://www.uptodate.com/contents/treatment-of-hypertension-...
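The base-rate point above can be made concrete with a little arithmetic. This is only a sketch using the rough percentages quoted in this thread; the numbers are illustrative, not a proper epidemiological analysis:

```python
# Illustrative only: if ~75% of older people already have high blood
# pressure, then seeing ~75% of (mostly elderly) fatalities with it is
# exactly what the base rate predicts -- no extra signal from the
# condition itself.
base_rate_in_elderly = 0.75   # ~75% of older people have hypertension [1]
rate_in_fatalities = 0.75     # "More than 75% had high blood pressure"

lift = rate_in_fatalities / base_rate_in_elderly
print(f"enrichment over base rate: {lift:.2f}")  # 1.00 -> no enrichment
```

Only a lift well above 1.0 (the condition markedly more common among fatalities than among a comparable age group) would suggest the condition itself is driving mortality.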

When I studied medicine 15 years ago, 130-139 mm Hg was not considered high blood pressure. We were young students and we toyed around every day measuring our blood pressure. Most males had over 130.

It seems times are changing: https://www.cdc.gov/bloodpressure/facts.htm

We have "evolved"!

Your initial proposal was that most of the Covid-related deaths were due to underlying conditions.

Now you propose that the underlying condition most present isn't really a condition.

You made me curious about it, so I did a quick internet search for what is considered high blood pressure in Italy.

I found https://www.epicentro.iss.it/ben/2002/settembre02/2_en (the same health institute that provided the study I was talking about above). I quote:

"The prevalence of borderline hypertension was calculated by determining the number of persons who had systolic pressures between 140 and 160 mm Hg or who had diastolic pressures between 90 and 95 mm Hg."

This is more in line with what I was taught in med school in my time.

I closed this after "based on data from China". That's almost as bad as basing it on US data.

Please don't post unsubstantive and/or flamebait comments to HN.

The interface is not great. I wanted to compare the US model to what's actually happening (I've been graphing the US curves with public data).

The top dropdown is Scenario; the default is 'Custom'. I pick 'country - no mitigation'. The box below, labelled 'Population' (eh?), now says 'Germany'. I pick 'United States' from it. The box below that is also labelled 'Population', but it shows the value with no digit separation to make it readable (330000000, which I initially misread as an order of magnitude smaller), yet it has up/down arrows beside it to increment/decrement it by 1 (double eh?).
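For what it's worth, adding digit separators is a one-liner in most languages. A sketch in Python (the tool itself is a web app, where JavaScript's locale-aware formatting would be the equivalent):

```python
# Sketch: thousands separators make large population figures readable.
def with_separators(n: int) -> str:
    """Format an integer with comma digit separators."""
    return f"{n:,}"

print(with_separators(330000000))  # 330,000,000
```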

The labels are not clearly associated with the dropdowns, so it's easy to get confused over whether they refer to the box above or below.

The help buttons (the blue ones) show rather than hide text, I think. The text they do have is often unhelpful, e.g. ask for help on "ICU/ICMO (est.)" and you get "Number of ICU/ICMO available in health care system". Could be worse, could be better.

The text boxes that pop up when you hover over the produced graphs are hard to read - light text on a white background. The larger numbers are also hard to read without digit separators. And don't get clever with the box following the pointer; it's not slick, it jumps around and is distracting.

The graph's logarithmic axis is hard to get a feel for. Also, the top figure shows for me as 0000000 because the leading '1' is hidden.

I guess it's for professionals in this area, which I'm not, so I'll step back now. HTH anyway. Not intended as dismissive if it sounds that way.
