At the time of its original release (1972), the Limits to Growth model (a set of coupled differential equations that purported to adequately describe environmental, demographic, and economic trends over a 50+ year horizon) hit the Zeitgeist right on the nose: catastrophism coupled with the all-knowing, infallible digital computer, with MIT branding to boot. Mass media jumped all over the results.
Engineering professors at the time pointed to its many failings, chief among which were the lack of validation of the underlying model structure and the lack of adequate parametric/uncertainty analysis of its many, many dubious simplifying assumptions. The study severely underestimated the uncertainty intervals/ranges around its point forecasts, to the extent uncertainty was presented at all.
Underestimating model uncertainty is a common problem with forecasting models, especially "life, the Universe, and everything" models. Consider the epidemiology models early in the pandemic, for example. Groupthink abounds, and the modeler is tempted to make too strong a case rather than provide an accurate, objective assessment of what is known/knowable and what isn't. The temptation is even greater when the time horizon extends past the expected lifetime of the modelers: they aren't going to be around to pick up the pieces, but they will get their kids' college educations paid for.
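To make the complaint concrete: the parametric/uncertainty analysis the critics wanted is straightforward in principle. Here is a minimal sketch in Python — not World3, just a toy logistic-growth model with made-up parameter ranges — showing how a Monte Carlo sweep over uncertain parameters turns a single point forecast into an honest interval:

```python
import random

def simulate(r, K, p0=1.0, years=50):
    """Toy logistic-growth model, annual Euler steps: dp = r*p*(1 - p/K).
    All parameter values here are illustrative, not from the original study."""
    p = p0
    for _ in range(years):
        p += r * p * (1 - p / K)
    return p

random.seed(0)

# The "point forecast": a single run at nominal (hypothetical) parameters.
point = simulate(r=0.04, K=10.0)

# Parametric uncertainty analysis: resample the uncertain parameters from
# assumed ranges and look at the spread of outcomes, not just one number.
finals = sorted(simulate(random.uniform(0.02, 0.06), random.uniform(8.0, 12.0))
                for _ in range(1000))
lo, hi = finals[25], finals[974]  # empirical ~95% interval

print(f"point forecast after 50 years: {point:.2f}")
print(f"~95% interval: ({lo:.2f}, {hi:.2f})")
```

Even with only two uncertain parameters in a one-equation model, the interval is wide; with dozens of coupled equations and dozens of guessed parameters, honest intervals would have been wider still — which is exactly the objection.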
The study lost its luster in the 1980s and 1990s. I remember talking to a modeler in the late 1970s who, whenever Limits to Growth came up, made a point of mentioning that he had been part of the study, so that I wouldn't (as presumably so many others had) unintentionally offend by calling out the steaming pile of misleading results it produced. It has been the focus of apologetics by the study authors ever since…even beyond the grave, it appears. This thesis is perhaps the most recent.
You can play with the model yourself here. (Disclaimer: it might be right, who knows.)
https://donellameadows.org/the-limits-to-growth-now-availabl...
http://bit-player.org/2012/world3-the-public-beta
Also consider
https://debunkingdoomsday.quora.com/Comparison-of-1970s-MIT-...
Blessedly, I got out of that morass early on. There but for the grace of G-d…