We started working on this a few weeks ago, in collaboration with Jan Albert and Robert Dyrdak, as a model to help public health officials and hospitals predict the load on local community hospitals and explore the utility and immediate need for quarantine measures.
We are very much still actively developing the tool, now with an eye to helping global communities at large. If anyone is interested in helping, we are trying to organize a road map and put it together on our GitHub page. I'll be around to answer questions and listen to feedback.
A suggestion: adding decimal separators to numbers in the "Cases through time" chart for better readability.
Thank you very much!
One observation I had while playing with the numbers: the most accurately shaped curves were achieved by adjusting the "seasonal forcing" to values between 0.5 and 0.6.
I extended the time range to one year and lo and behold there was the gigantic hump in all curves.
I hope that testing kits will be amply available within the next few months so that we can emulate the success of Vò.
I believe best-fitting curves are a misrepresentation, as many cases are not recorded; this is why I optimized for a shape that kept "Infectious" consistently above "confirmed cases" by a factor of 2-4. I am curious to learn how well my assumptions hold up.
What do the terms weak mitigation, moderate mitigation, and strong mitigation mean? What measures are required to go from none to weak, and from weak to moderate mitigation?
I think the most intuitive explanation for your second question is a reduction in the number of contacts you have per day - i.e. 0.6 corresponds to a 60% reduction in social contacts on a given day. This is not exact, but it should provide rough handrails for the mitigation numbers.
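A minimal sketch of that interpretation (my assumption about how the slider maps onto transmission, not necessarily the tool's exact implementation): treating the mitigation value as the fraction by which contacts are reduced scales the effective reproduction number linearly.

```python
# Sketch: interpret a mitigation value m as an m-fraction reduction in
# daily contacts, so the effective reproduction number scales linearly.
# This is an assumption, not taken from the tool's code.
R0 = 2.2  # illustrative baseline reproduction number

def effective_r(r0: float, mitigation: float) -> float:
    """Reproduction number after reducing contacts by `mitigation` (0..1)."""
    return (1.0 - mitigation) * r0

for m in (0.0, 0.2, 0.4, 0.6):
    print(f"mitigation {m:.0%} -> R_eff = {effective_r(R0, m):.2f}")
```

Note that under this reading, a 60% contact reduction pushes R_eff to 0.88, below the threshold of 1 at which the epidemic stops growing.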
2) You should allow a longer time frame for the simulation (it should be possible to run it for, e.g., 12 months).
Do you have a sense, or any benchmarks, for where the current responses across the world lie along the weak - strong mitigation scale? Or is that something we'll need to estimate using the model as the situation evolves more?
Would be very interested to help. I sent you an email yesterday with the subject title:
Interested in helping with data for `COVID-19 Scenarios`
If you start with a hypothesis that this virus is in wide global circulation already, all this testing is doing is confirming “yup, people have the virus!”.
Only way to prove or disprove the hypothesis that the virus is widespread is large scale testing of a truly random subset of the entire population. Last I checked that isn’t happening anywhere in the United States and possibly elsewhere.
Why is proving that hypothesis important? Because if true it means the severity of this virus is dramatically less than our limited data suggests. If true, most of the rather draconian measures we are taking are pointless.
If false, god help us all I guess...
As does (albeit in a weaker form) the testing of the full population of Vò.
As does the testing in at least a few European countries, where the growth rate of tests outpaces that of confirmed infections. And/or where the number of tests administered vastly exceeds the number of confirmed infections.
Sure, once we have antibody tests widely available, the data will become better.
It would be really nice to model the probability distributions for each of these and see the resulting probabilities of simulation outcomes.
> A total of 72,314 patient records—44,672 (61.8%) confirmed cases, 16,186 (22.4%) suspected cases, 10,567 (14.6%) clinically diagnosed cases (Hubei Province only), and 889 asymptomatic cases (1.2%)—contributed data for the analysis.
Another study estimates that 86% of cases are asymptomatic and/or otherwise undocumented:
Edit: For anyone else who is interested, the link to the GitHub repo is in the "About" section at the top.
Edit 2: The About section has a lot of info. Perhaps you should copy part of that info to the main page.
Edit 3: When the site loads, it should show the graph of the simulation with the default parameters.
Now they have added the ICU overflow, which is very important for the mortality rate.
This is going to be really really bad isn't it?
If you start with a hypothesis that this virus is already in widespread circulation and possibly has been for a while (which is the simplest explanation, mind you), odds are good many people already had it or have it right now.
Only way to prove/disprove if it is widespread is by testing a random sample of the entire population. Something we aren’t doing in the US.
Fails on Chromium and Firefox.
I assume that the model is spreading the people you infect across the time you are infectious.
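If that's right, the usual way to encode it (a standard SEIR-style bookkeeping assumption on my part, not confirmed from their code) is to divide R0 by the mean infectious period, giving a per-day transmission rate:

```python
# Standard SEIR-style assumption (mine, not taken from the tool's code):
# the R0 secondary infections a case causes are spread evenly over an
# infectious period of D_inf days.
R0 = 2.2      # illustrative total secondary infections per case
D_inf = 3.0   # illustrative mean infectious period, in days

beta = R0 / D_inf  # new infections per infectious person per day
print(f"per-day transmission rate beta = {beta:.2f}")
```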
TL;DR: Many of the critical inputs to this simulation are based on almost entirely unknown values, and relatively small changes to these inputs can swing the results of the simulation by orders of magnitude. The defaults that the website's creators have selected seem pessimistic on average compared even to the source research, which can mislead users as to the expected outcome of the pandemic.
Since under-reacting and over-reacting to this situation both have huge real-world ramifications, making assessments based on even a fully valid and well-designed model can lead to incorrect decisions if its parameters are set unrealistically.
First of all, there are a TON of variables, and many of them pull values from extremely thin data. Although it looks like the researchers didn't just make up any of their values, there are several instances of very impactful parameters being hand-waved:
## MITIGATION CURVE
The "Mitigation" ratio curve, an unsurprisingly critical variable for the simulation, seems too high even for the "strong" mitigation preset. Even after looking extensively, I couldn't find any explanation or justification from the simulation's creators as to how they created their default curve. As this is possibly the most impactful parameter in the entire simulation, that just makes it more critical to correctly pick its value.
Judging by the degree of dramatic measures that society has taken so far and the actions of people that I've observed, I'd personally estimate that ratio to be much lower than even the one included in the "strong" mitigation curve. Of course this is random guesstimation of my own here, but it goes to illustrate the massive changes in results that can stem from small changes to the input values they've chosen.
After starting at default settings, setting the population to USA, and setting the mitigation factor to "strong", further adjusting the March 15 mitigation factor down 10 percentage points from 60% to 50% (a value that seems extremely reasonable and even generous to me) causes the total number of cases to drop by nearly 66%, holding all other values constant. Adjusting it to 40% reduces the total infections by another nearly 50% from there.
Another paper that is cited by the simulation's authors writes that "[...] by taking drastic social distancing measures and policies of controlling the source of infection, with the tremendous joint efforts from the government, healthcare workers, and the people (Fig. 1), Rt was substantially reduced [from 3.58] to 0.32 in Wuhan after February 2, which was encouraging for the global efforts fighting against the Covid-19 outbreak using traditional non-pharmaceutical measures [...]"  This would seem to represent a mitigation ratio of 0.1 after just around a month. Clearly there are differences in the way that the pandemic was handled in China compared to how it's being handled now in the rest of the world, but China's ability to combat it that effectively seems to lend credence to the idea that mitigation ratios of less than 0.5 are possible if not already in place in the US and across the rest of the world.
In any case, claiming nearly 80% of the transmissibility of this virus remaining at this point in time and maxing out at 60% as the "moderate" case seems dangerously exaggerated to the upside and potentially skews the results of the whole simulation due to how critical this one variable is in determining the output.
## ANNUAL AVERAGE R0
Another foundational variable is R0: the number of additional infections per infected individual. The linked research paper states "the early human-to-human transmission of 2019-nCoV was characterized by values of R0 around 2.2 (median value, with 90% high density interval: 1.4-3.8)". Although the distribution from the research paper has a long tail towards higher values, setting a value of 2.7 for "Moderate/North" feels somewhat disingenuous given that the median is 2.2.
Of course other research papers list a wide range of different values for this variable, so perhaps expanding the range of the presets would be a better option. In any case, this variable is dominated by the mitigation factor in cases of high mitigation, so its precise value may not matter as much in those situations.
## SEASONAL FORCING
The "seasonal forcing" factor varies from 0 to 0.2 in all of their "Epidemiology" presets. On their about page, the example they provide in their graph seems to have a value of ~0.6, but that may just be an illustration using non-realistic values for visual effect. 
One other thing to note is that their implementation uses the selected R0 as the mean value of their function, meaning that in the peak month the true R0 is `(1 + <seasonal forcing>) * R0`. This doesn't agree with the data from the research paper, which estimated its R0 value using data from the most infectious period (winter). Assuming a seasonal forcing factor of 0.2, the R0 values they provide are actually inflated by 20% in January, on top of their already high values.
Their very choice of a cosine wave to model that impact seems largely unfounded, but given the lack of data pointing to a more accurate option it's as good as any. That said, even a small change to that function could have massive impacts on the simulation's results.
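To make the inflation concrete, here is the cosine modulation as I understand it (the mid-January peak date and exact functional form are my assumptions from their illustration, not confirmed from their code):

```python
import math

def seasonal_r0(r0_mean: float, forcing: float, day: float,
                peak_day: float = 15.0) -> float:
    """Cosine-modulated reproduction number:
    R0(t) = r0_mean * (1 + forcing * cos(2*pi*(t - peak_day)/365)).
    The mid-January peak_day is an assumption based on their illustration."""
    return r0_mean * (1.0 + forcing
                      * math.cos(2.0 * math.pi * (day - peak_day) / 365.0))

r0, eps = 2.2, 0.2
print(f"peak   (mid-Jan): {seasonal_r0(r0, eps, 15.0):.2f}")   # 2.64, 20% above the mean
print(f"trough (mid-Jul): {seasonal_r0(r0, eps, 197.5):.2f}")  # 1.76, 20% below
```

So if the research estimated R0 = 2.2 from winter data, using 2.2 as the *annual mean* double-counts the seasonal boost in January.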
## ICU OVERFLOW
In the most pessimistic scenarios, their implementation of additional fatalities caused by ICU overflow fails to take into account potential measures such as emergency hospitals, the hospital ship being sent to New York by the Navy, and temporary increases to hospital capacity. In most cases, the peak of ICU overflow doesn't occur until months down the line, giving a lot of time to scrape up resources to increase capacity.
Perhaps providing an additional value to adjust the hospital capacity over time would be useful for accurately representing the impact of ICU overflow.
## OTHER VARIABLES + NOTES
- "Imports" is held constant through the entire simulation. It seems unlikely that it would remain this way; setting this as a curve instead would make more sense in my opinion. Also, I couldn't find any justification or explanation for the values they picked for that parameter.
- Improvements to treatment causing ICU/hospital stay time to go down aren't accounted for. Of course there's no guarantee that it will change significantly as the pandemic progresses (or even that it will not get worse due to some mutation of the virus or other circumstance), but recent research and experimentation seems to already be making progress in creating more effective treatments for the virus.
- Because the system this model represents is so incredibly vast, there are an uncountable number of possible external events that could, in some situations, invalidate the whole thing entirely. At least mentioning possibilities such as a vaccine being released in the coming months, mutations that develop antiviral drug resistance, etc. seems prudent to me.
NOTE: I'm not a trained statistician or epidemiologist - I just have experience working with data and computational models. I'd appreciate any feedback about the accuracy of my comments here, or expansions/refutations of my thinking.
 Link from about page: https://neherlab.org/covid19/assets/seasonal_illustration.15... Screenshot at time of writing: https://ameo.link/u/7pk.png
i.e., in most models you can get about a 50% gain with just a 25% reduction in social contact; after that it is diminishing returns all the way down to trying to wring small population-level effects out of reducing contacts from "very little" to "very very little".
In my mind, people are still underestimating the effects of large gatherings (and by large here I even mean 100 people in a nightclub) and overestimating the effects of seeing one friend in your house who sees another friend the next day, etc.
I don't want to get into an overall debate here about what level of distancing is appropriate or acceptable, but I would like to see an interactive model that does a good job of showing how different types of distancing interventions have very different effects, and that once you get to a certain point it gets very hard to remove much more overall connection from the system...
Are children self-motivated enough to work from home? I know many adults who aren't. Without adults to keep children accountable, total work-from-home schooling is likely to create gaps in kids' education. And which children are the least likely to have an adult around keeping them accountable? Impoverished children.
Lots of modern jobs worth doing will end up remote (at least occasionally and often permanently) so it’s probably a good idea to start training them when they’re younger anyway.
This is just a panic created on social media.
Don't be ridiculous. You can discuss the distribution of deaths, but the deaths are real.
A 2% death rate is still 2%. That's huge. It still has the potential to overwhelm any country's health system.
In the US, of people aged 20-44, "only" 2.0-4.2% are admitted to the ICU - significant, but making up only 12% of ICU admissions and 20% of hospitalizations.
Correction: 11.9% of people in ICUs in Italy are 19-50; 51.2% are 51-70; and 36.9% are over 70.
About the only place I’ve seen where rational talk is allowed without getting flamed to death is /r/covid19.
Hacker News, unfortunately, appears to have devolved into yet another place full of panic-stricken people.
I'm happy to defer to actual stats if you have any.
Page 5 of the recent well-respected Imperial paper on modelling approaches to managing the outbreak has statistics on hospitalisation that I imagine are the current best estimates.
For example, 3.2% of cases aged 30-39 require hospitalisation. Could that rise to 40% after triage? I'm not sure, it seems high, but it depends on what pressures the system is under.
Unfortunately this approach doesn't work unless you somehow deal with the relatives of the millions of 80-year-olds who are now pissed off that the government let them die and their bodies rot in the streets, and you arrange for the army to force medical providers not to try to treat some proportion of the dying.
I feel like a huge number of Americans have high blood pressure and heart disease. Seems like it will still be very serious for a large number of people.
That's not a shocking number to you? 15% - 20% of people aged 20-44 ended up hospitalized? Even if no deaths were involved, that many young people ending up in the hospital is concerning.
If you only test people who arrive at a hospital with serious symptoms, it's not surprising that many of them go on to be admitted.
What were the most common pre-existing conditions?
"More than 75% had high blood pressure, about 35% had diabetes and a third suffered from heart disease."
How common are those conditions in the United States?
High Blood Pressure - 1 in 3
Diabetes - 9 in 100
Heart Disease - 1 in 10
I suspect age and immune function drive mortality, and the other factors are merely along for the ride.
It seems times are changing: https://www.cdc.gov/bloodpressure/facts.htm
We have "evolved"!
Now you propose that the most prevalent underlying condition isn't really a condition.
I found https://www.epicentro.iss.it/ben/2002/settembre02/2_en (the same health institute that provided the study I was talking about above). I quote:
"The prevalence of borderline hypertension was calculated by determining the number of persons who had systolic pressures between 140 and 160 mm Hg or who had diastolic pressures between 90 and 95 mm Hg."
This is more in line with what I was taught in med school in my time.
The top dropdown is Scenario; the default is Custom. I pick 'Country - no mitigation'. The box below, labelled 'Population' (eh?), now says 'Germany'. I pick 'United States' from it. The box below that is also labelled 'Population', but holds a value with no digit separators to make it readable (it shows 330000000, which I initially misread by an order of magnitude) - yet it does have up/down arrows beside it to increment/decrement it by 1 (double eh?).
The labels are not clearly associated with the dropdowns, so it's easy to get confused over whether they refer to the box above or below.
The help buttons (the blue ones) show rather than hide text, I think. The text they do have is often unhelpful, e.g. "ICU/ICMO (est.)": ask for help on that and you get "Number of ICU/ICMO available in health care system". Could be worse, could be better.
The text box that pops up when you hover over the produced graphs is hard to read - you've got light text on a white background. Also, the larger numbers are hard to read without digit separators.
Also, don't get clever with the box following the pointer; it's not slick - it jumps around and is distracting.
The graph's axis is logarithmic, which is hard to get a feel for. Also, the top figure shows for me as 0000000 because the leading '1' is hidden.
I guess it's for professionals in this area, which I'm not, so I'll step back now. HTH anyway. Not intended as dismissive if it sounds that way.