The forecasting fallacy (2020) (alexmurrell.co.uk)
91 points by rognjen 8 months ago | 67 comments



Much of the discussion here, including the linked article, fails to make an important distinction between domains. Prediction can be done quite effectively on thin-tailed processes. A lot of the counterexamples listed in the comments here are physical systems, which are thin-tailed. I see aircraft autopilot, collision detection, ozone depletion. These are all well-understood physical phenomena in which large deviations do not occur — your car doesn’t get teleported elsewhere in the middle of avoiding a collision. If a large deviation did occur, say a meteor striking between your car and the object it is attempting to avoid, the collision avoidance system would almost certainly fail. These events occur so infrequently that the system can just assume they won’t and boast a high success rate.

Meanwhile the examples from the linked article are fat-tailed processes. Recessions, GDP, interest rates, exchange rates. These are all subject to large discontinuous jumps. Anyone doing a 5-year rate prediction in July 2019 would have needed to predict the pandemic in order to forecast accurately. This is a single example, but predictions in this domain are regularly blown out by being teleported to a completely different world. Unlike in the thin-tailed domain, these events happen frequently enough that they’re the only thing that matters for the forecast.

Knowing which class your generating process belongs to is critical to understanding whether forecasting will be effective or not. I’ll take collision detection and leave economic forecasts at the door any day.
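
A quick way to see the difference (a toy sketch, not from the article; the Gaussian vs. Pareto shocks and the 5-year horizon are made-up assumptions):

    import numpy as np

    # Compare how much of an outcome is driven by the single largest shock
    # under a thin-tailed (Gaussian) vs a fat-tailed (Pareto) generating process.
    rng = np.random.default_rng(0)
    n_paths, horizon = 10_000, 60  # hypothetical 5-year monthly paths

    thin = rng.normal(0, 1, size=(n_paths, horizon))   # thin-tailed shocks
    fat = rng.pareto(1.5, size=(n_paths, horizon))     # fat-tailed shocks (infinite variance)

    for name, shocks in [("thin-tailed", thin), ("fat-tailed", fat)]:
        totals = np.abs(shocks).sum(axis=1)
        largest = np.abs(shocks).max(axis=1)
        print(name, "- average share of the path explained by its single biggest shock:",
              round(float((largest / totals).mean()), 2))

In the thin-tailed case the biggest shock is a small slice of the total; in the fat-tailed case it routinely dominates, which is why a forecast that misses that one jump misses everything.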


It's not that we can't forecast heavy tailed processes -- it's just that the forecasts are used wrong.

The appropriate layman's forecast of a recession within the next year is something like a constant 11 %. I'm willing to bet this outperforms most "predictions" out there.

But! When people see that number they go, "right, so it's vastly more likely it does not happen" and then completely ignore the possibility. The problem is not in the probability, but in the failure to adequately assign a cost function to the less likely outcomes.
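
A toy version of both points (the 11% base rate and the cost numbers below are just assumptions for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    p_recession = 0.11
    recession = rng.random(1000) < p_recession   # 1000 simulated years, ~11% with a recession

    def brier(forecast, outcomes):
        # Mean squared error between a stated probability and what happened (0/1).
        return float(np.mean((forecast - outcomes) ** 2))

    print("constant 11% forecast:", brier(0.11, recession))
    print("'it basically won't happen' (0%):", brier(0.0, recession))

    # The decision layer: assume hedging costs 2 a year, an unhedged recession costs 30.
    cost_hedge, cost_hit = 2.0, 30.0
    print("expected cost if you hedge every year:", cost_hedge)
    print("expected cost if you ignore the 11%:", p_recession * cost_hit)

The constant 11% scores a bit better than rounding it down to zero, but the real action is in the last two lines: with those (made-up) costs, ignoring the "unlikely" outcome is the expensive choice.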


> The appropriate layman's forecast of a recession within the next year is something like a constant 11 %. I'm willing to bet this outperforms most "predictions" out there.

I think we agree here but the unspoken measure is the amount of uncertainty in each forecast. This is still a very inaccurate prediction compared to the collision detection system which nearly always gets it right.

You hit the nail on the head in your last paragraph. Doing something useful doesn’t require an accurate prediction. I agree entirely that assigning an appropriate cost function and responding accordingly guides you to useful actions.

Edit: Just discovered your blog through your profile. The topics look super relevant to my interests. Thank you for sharing your thoughts online, I’m looking forward to reading them!


De Finetti used a different word to separate "prediction" (object-level, concrete outcome) from "prevision" (probabilistic statement). The first is often nonsensical, the latter useful.

Alas, these are not widely understood words.


Thank you for putting it eloquently. The other subtle problem is that people tend to ask silly questions. Maybe I've just been lucky so far, but the focus on recessions seems like a relic of the pre-1970s time when GDP correlated with wages. I'm not seeing why it matters so much these days.

Interest rates are centrally planned, so it is a bit of a philosophical question to ask what it means to "forecast" them. Really it is more a question of politics than any reliable process, and the answer is "lower than makes sense". For the last 10 or 20 years the economy has looked like it is carrying a lot of value-destructive companies, almost certainly sustained by cheap credit, that are just waiting for an opportunity to collapse. Uber and WeWork spring to mind. US debt has been accumulating in a way that should have been making people nervous for decades.

The problem here isn't forecasting, it is money being corralled to purposes that consume more resources than they generate. Predicting what the perturbation against trend will be in 12 months' time appears to be impossible, doesn't seem useful, and it is weird that people love to focus on it. There are much more serious things to keep an eye on.


The article was not that good overall. People who do this stuff for a living do not take such a naive or reductionist approach to forecasting.


That you're talking about both collision detection and economics is very telling for the issues a lot of people have with things like these.

When an economist predicts a heightened probability of a recession, people believe the economist and adjust their investments, the recession is avoided, and this is seen as a failure in prediction.

Yet when a collision detection system predicts a collision and the car brakes to avoid it, this is also a false positive, as no collision happened!

It's not clear why people see these as different. When the system takes the prediction into account to avoid the predicted calamity, we shouldn't see that as a failure in prediction.


There's a germ of insight here that could use some development and nuancing.

> The future is uncertain. You cannot predict it. But you can create it.

For millions of years, prediction has been the engine by which humans have created the future. We don't always call it that, but prediction is the engine.

Let's start from the most basic facts of life. We know from experience (our own and others') what plants will sustain us and what will kill us, so we can predict what present-day choices of food will create a positive future. We create our positive future by making choices in accordance with those predictions.

It seems that we need to figure out what separates the kind of prediction that is the engine of human life and progress from the kind that is just useless blather.


Also to your point, it makes no sense to say, "You cannot predict it. But you can create it." The reason a person or entity creates something is that they've predicted a desired outcome for that creation.

> It seems that we need to figure out what separates the kind of prediction that is the engine of human life and progress from the kind that is just useless blather.

For sure. The author treats content marketing by management consultancies — blather — as serious efforts to predict outcomes. But these are stories about potential outcomes created to lure customers to the rim of their sales funnel. In other words, their actual prediction is that publishing thousands of "thought leadership" pieces will improve SEO and sales engagement, which is probably true.


While true, it's quite an underappreciated factor and rarely considered seriously by people.

You are correct that the human ability to manifest our imagination is remarkable.

We dream of flying like a bird, eventually we build machines to do it, and then it's taken for granted as part of our society.

In the 2nd century Lucian thinks up impossible things, comes up with a ship of men flying up to the moon, and less than two millennia later the impossible is increasingly old history.

Often, our invention even outpaces imagination. I can remember watching GATTACA long before watching its primary plot point become obsolete with CRISPR.

But despite a longstanding trend of imagination and prediction preceding invention, I regularly see those things dismissed as fantasy when they haven't happened yet. Much like the op-ed saying humans wouldn't fly with a million more years of effort, published just a year shy of the Wright brothers, I often see disbelief in human advancement over the status quo as the more predominant perspective than the opposite, especially as the imagined thing gets more fantastical.

So as an example, I've been talking privately a lot since GPT-3 about how digital resurrection is going to be an increasing trend to the point nearly everyone alive and online today will have a digital twin that outlives them. But because it's considered weird and "Black Mirror" in the present it's often dismissed as fantasy, even though the foundational concept of resurrection is at least as old as the Sumerians and there's already a patent granted to a trillion dollar company for an early technical vision of it. The Wright brothers' plane wasn't going to change the world as it existed that year, but the improvements over the future did.

It's seeing the trend of incremental improvements to early versions of a thing that I think people struggle with, and more so now than a generation before, as we've become so conditioned to shorter windows of consideration. "AI can't do my taxes only three years after allegedly doing impossible things that alarmed the most cited scientists? It must be a fad like crypto."

Long term vision of sufficiently advanced technology looks more and more like magic, and we're used to dismissing magic as fantasy. So when the things we imagined or predicted in fantasy begin to take shape with emerging technology, we ignore the past trends of realizing fantasy with technology and overemphasize the present state and limitations in predicting the future of it.


Really? I think rather than prediction as our future creating engine, it's been critical thinking and problem solving.

Most inventions are solutions to problems, not solutions in search of problems. Industrialization was in response to a need to scale up production. The internet was in response to a discoverability problem. Smartphones were in response to a need to do personal computing on the go.

Monetizing those things was the inflection point for success in all those cases, but even prior to monetization most human ingenuity has been based in problem solving.

Which is looking backwards. Not forwards.


How do critical thinking and problem solving work?

How do we evaluate potential courses of action, if not by predicting their consequences?

I showed how prediction is involved in a basic example from everyday life. Do you dispute that?


I think this article has two shortcomings that make its sweeping conclusions shaky.

First, it identifies forecasting with point forecasting. There are other ways to put forecasting questions, e.g. a lower and upper bound with a certain probability.

Also, it mentions Tetlock, but only his negative findings, not his positive ones that led to the Good Judgment Project, which suggest the opposite of this article's conclusion [1].

Thus I think it is not up to date with the latest research results.

See you over at gjopen.com, if you are interested and have lots of time to waste...

[1] https://en.m.wikipedia.org/wiki/The_Good_Judgment_Project


Tetlock paints a different picture of his positive findings than you do.

Specifically, Tetlock's project opens with key issues of scope about what to even try to forecast. Based on his previous work in expert prediction, he concluded that geopolitics is sufficiently chaotic to be impossible to predict 10 years out. So while he did a lot of work on forecasting, it is generally focused on the next year or two.

Which means that Tetlock agrees that we can't predict 10 years out.


I agree with the assessment that there are not many systems we can predict 10 years out with great confidence, specifically geopolitics.

But I do not think I painted much of a picture of Tetlock's results.

I read the article as concluding: let's stop predicting, it does not work. Let's start building. (After stating we cannot predict this, and we cannot predict that.)

And I think Tetlock's results contradict that, as I said. Sometimes and under certain circumstances we can predict quite well.


I have a graph from "Expert Political Judgement" that I've kept on a cork board for over a decade. It's from page 55 in my edition. It charts "Objective Frequency" vs "Subjective Probability". It has three curves: Experts (people in government, paid to make political assessments), Dilettantes (people who are well read, read the NYT, WSJ and the like), and College Undergrads. The Expert and Dilettante lines are more or less on top of each other. The undergrads are observably much worse and farther from the "Perfect Calibration" line, a 45-degree line between objective frequency and subjective probability.

So it's not the case that there is no difference in people's ability to predict political events; it's that so-called "experts" are no better than people who follow current events closely. For me the main takeaway from the book is that nobody can predict political events very well, but some groups are measurably worse than others.

Tetlock has a brief section that somewhat mirrors your argument on page 186, "Misunderstanding what game is being played", where one expert tells him making predictions is all about getting your sound bite out, not about being correct. In this game, stronger, incorrect predictions might be advantageous in that they can change the narrative.
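
For anyone who wants to reproduce that kind of chart from their own forecasts, the calculation behind it is just binning. A sketch with simulated, deliberately overconfident forecasts (the bin count and the data are assumptions):

    import numpy as np

    def calibration_curve(stated_probs, outcomes, n_bins=10):
        # Bin the stated probabilities and compare each bin's average stated
        # probability (subjective) with the observed frequency (objective).
        stated_probs, outcomes = np.asarray(stated_probs), np.asarray(outcomes)
        bins = np.linspace(0, 1, n_bins + 1)
        idx = np.clip(np.digitize(stated_probs, bins) - 1, 0, n_bins - 1)
        return [(stated_probs[idx == b].mean(), outcomes[idx == b].mean())
                for b in range(n_bins) if (idx == b).any()]

    rng = np.random.default_rng(2)
    stated = rng.random(5000)
    true_p = 0.25 + 0.5 * stated            # real chances are pulled toward 50%
    happened = rng.random(5000) < true_p
    for subjective, objective in calibration_curve(stated, happened):
        print(f"said {subjective:.2f}, happened {objective:.2f}")

Perfect calibration prints matching pairs (the 45-degree line); this simulated forecaster, like the undergrads, says 0.9 when the real frequency is closer to 0.7.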


In October there was this chaotic election of the Speaker of the House. Indeed, a few people following the news closely were quite good at predicting the outcome: https://marketwise.substack.com/p/speaker-election-interview...


Right, and then he followed this up with Superforecasting, which is all about the people who are on that 45-degree line. They exist! They just aren't popular.


He's not saying you can't predict 10 years out, just that the appropriate prediction is the base rate.

In other words, you cannot use information from today to improve predictions beyond long-term statistical generalities.

But that doesn't mean the prediction is useless, only that it has great uncertainty.


A forecasting system in an aircraft autopilot that can accurately forecast when the plane would hit the mountain is always wrong.

Forecasting when the forecast depends on the actions of agents that can be informed by the forecast changes the game. If the Fed model forecasts recession and the Fed takes action to prevent it from happening, it changes everything. Only a forecasting model that is not observed/believed by policy makers can predict without intervention.

Layman's idea of forecasting: Predict what happens in the future.

Economic forecasting: the forecast is an input for actions. Predict what happens in the future, using this model and these variables, with everything else staying the same. You can check afterward whether the model is an accurate forecaster by removing the changes caused by variables outside the model.


It is always harder to accurately forecast an actual recession than it is to forecast the predictions of the Fed model. You don't need an information edge there, just information parity.

When the Fed takes action, it is usually a very rational action, with a clearly defined goal of long-term economic health. This makes their actions easier to predict than those of other market participants.

So you went the hard route, forecasting the highly complex system directly, but then "variables outside the model" caused the "accurate" model to not perform well? That doesn't buy you anything, since you live in a world with outside variables which mess up your predictions. The solution is to make your model actually accurate by incorporating these "variables outside the model": predict what others will predict.


Yeah, if forecasts did work, people would change behavior accordingly, eventually rendering the model useless


If you want a counterexample, go and investigate algo trading hedge funds - you'll find they do a pretty solid job of predicting the future. Sure, some of them predict only a few ms into the future, some a few minutes (the one I worked for was in that category), and others will do interday strategies.

I'm pretty sure there are examples which have a track record of decent returns above the markets they trade in with longer term strategies.

So, I'd say there are examples of forecasting working, but generally the people who are good at it don't write about it, and instead use their knowledge and ideas to quietly make money from their insights :)


I don’t have a background in this but I was under the impression that much of algorithmic trading is that there are trillions of pennies lying around and if you have an algorithm that picks up those pennies faster than anyone else, you make a lot of money. So it’s capitalizing on tiny market inefficiencies rather than directional predictions.


There's a wide variety of strategies available. The type you mention of picking up small inefficiencies certainly exists but there are plenty of other strategies that involve having some sort of informational edge. Some hedge fund managers just read a lot of earnings releases, but there are also more sophisticated approaches: a famous example would be the fund that paid for satellite imagery of the parking lots of certain shops, so that they could count how many cars there were and extrapolate that into whether the chain was growing or not.

Another straightforward example would involve using proprietary weather forecasting software to try and predict the global grain/cocoa/coffee/whatever harvest, so that you can then trade accordingly if you can see a bumper crop coming up.


> There's a wide variety of strategies available. The type you mention of picking up small inefficiencies certainly exists but there are plenty of other strategies that involve having some sort of informational edge.

There are many such strategies. It's not all HFT either. For example, a strategy that shorts BTC and goes long NDX/QQQ at the open and closes both positions at the close (four trades total), allocating half of capital to each pair, posted a double-digit gain for 2023 despite BTC rising. https://greyenlightenment.com/2023/12/31/2023-bitcoin-method...

There are many other things like this. Gotta keep your eyes peeled, but they exist.


Both of those examples have been exploited to death and are no longer profitable.


If the parent had discovered a viable and profitable trading strategy, do you think they would share it here?


I've been out of the loop for a number of years, but I believe there's a serious amount of ML being thrown at predicting time series, so this probably gives you an idea of how money is being made.

I've no idea what the current models look like. I imagine it was all RNNs but maybe transformers have taken over?


If a feature is used by many and has a predictable impact on their behavior it becomes profitable again.

If you act faster on the same feature as everyone else, or you predict the feature accurately, you can anticipate what the market will do in response.

The market often overreacts to new data. So if satellite imagery shows a steep decline in parked cars, the stock will be predictably oversold. You can then take a contrarian position (buy the stock before it reverts to the mean).

Some commonly used features by popular public trading bots create predictable market movements, no matter if the feature itself is long-term informative/profitable.


True, but they were just meant as easy examples of non-HFT hedge fund strategies.


Yeah, but his point is that hedge funds do non-obvious things to extract alpha.


Another field that meticulously tracks its forecasting performance is meteorology. Jokes about meteorologists aside, they do a smashing job of something really hard.

Also, some of us hobbyist predictors try out our abilities on all sorts of questions at metaculus.com. Highly recommended for getting a sense of how good some people are at prediction.


At this point algo trading is a race to exploit the patterns other algos create, and is less about predicting inherent market inefficiencies than about exploiting fellow algo traders' inefficiencies.

Self-fulfilling prophecy is a thing, even when the prophecy is algorithmic.


Any forecasts that don't involve probabilities or confidence intervals are useless. Also, if any forecasters were really serious, they would register their past forecasts and show how good they have been. But I think most forecasters are probably afraid of showing their true track record.
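
The track-record part doesn't take much: if a forecaster publishes intervals, the minimum they could report is how often reality actually landed inside them. A sketch (the forecasts and outcomes below are hypothetical):

    def interval_track_record(forecasts, actuals):
        # forecasts: list of (low, high, claimed_coverage); actuals: what happened.
        hits = sum(low <= x <= high for (low, high, _), x in zip(forecasts, actuals))
        claimed = sum(cov for _, _, cov in forecasts) / len(forecasts)
        print(f"claimed coverage: {claimed:.0%}, achieved: {hits / len(forecasts):.0%}")

    forecasts = [(1.0, 2.0, 0.8), (0.5, 1.5, 0.8), (2.0, 4.0, 0.8), (1.5, 2.5, 0.8)]
    actuals = [1.7, 2.1, 3.0, 1.9]
    interval_track_record(forecasts, actuals)   # 3 of 4 inside -> 75% vs the claimed 80%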


I agree. The article succumbs to a False Binary Fallacy, where forecasts are either correct or wrong. The real question about forecasts is how certain they are.

https://en.wikipedia.org/wiki/False_dilemma


First rule of forecasts in supply chain management: the forecast is always wrong. And still people ignore that cardinal rule, and a lot of smaller ones, all the time.


Right or wrong is not the measure by which to evaluate forecasts, just as it's not the appropriate yardstick for models.


I like the article but not the final conclusion that "We can't predict much at all". I think we can predict more than we think; it is similar to asking whether the glass is half empty or half full.

For example, sci-fi writers, scientists, and some economists have predicted a lot of things before, just without an accurate time of happening: it is one thing to predict an event for next year and another to see a trend that will play out over the next 50 years. There is even a science of futurism.

Regarding AI, and forgetting "AI cults", it is incredible that the neural networks we are using now are similar to the ones studied decades ago; the breakthroughs came in other aspects such as computing capacity and techniques [1].

[1] https://www.nature.com/articles/323533a0


Randomly, Fred Wilson's "What Happened In 2023" post today [1] includes the following: "...The second is the emergence of a new tech megatrend, AI, which has been developing in front of our very eyes for as long as I have been in tech, so that is over forty years now."

[1] https://news.ycombinator.com/item?id=38825073


Language is important. We can predict anything and everything. There are some things we can't _reliably_ predict.

This article ironically suffers from its own thesis. It assumes that because we haven't predicted some things successfully in the past, we will never predict anything in the future.

A simple counterexample should dispel this silly notion. We used to consider the weather completely unpredictable. Now we have elaborate systems and theories that allow us to predict the weather with at least some accuracy.

A more reasonable thesis might be, we can't reliably predict human behavior, because much like the uncertainty principle, each prediction that is published, which it must be to be meaningful, affects the behavior it is trying to predict.


One should ask economists what a recession is, not how to predict one. Good modelers do not necessarily need (or want) to know what they are predicting and can still beat "domain experts".

Authority without a clear track record is a net negative to getting good results. It is better to stick to anonymity and let only the track record do the talking/weighting. Without a clear track record it does not even matter if the prediction-maker has skin in the game. If you do have skin in the game, there is no reason to sell your hide cheaply, or even give it away. You instead take the profit others say does not and cannot exist beyond "luck": if you can't even beat a random walk, you have no business evaluating the limitations of predictive modeling.

The big consultancy companies making bold predictions don't even need to be right. Customers read the predictions these consultancy companies peddle, because these customers are not bold enough to make their own predictions. And nobody ever got fired for buying the predictions from big consultancy companies and incorporating them into a business strategy.


Consultancies predicting something isn't forecasting, it is marketing.

And there are only a rare few things I disagree with more strongly than the statement that good modellers / data scientists / whatever only need knowledge about how to model stuff to beat domain experts. It takes domain experts to judge whether or not a model is correct, to identify the known and unknown unknowns and limitations of these models. Claiming otherwise is deeply arrogant, and it ended in disaster every time I saw it tried. Good modellers need enough domain knowledge to properly work with, and understand, domain experts. And domain experts need sufficient knowledge about modelling to do the same. Both need the willingness to do so. And every modeller needs to accept that reality beats models, always.


"Every time I fire a linguist, the performance of the speech recognizer goes up."

> It takes domain experts to judge whether or not a model is correct, to identify the known and unknown unknowns and limitations of these models.

Arguably true, but I still claim the domain expert's test performance is below that of a modeling expert. No knowledge/preconceptions: try it all, let evaluation decide. Expert domain knowledge/preconceptions: this can't possibly work!

Domain experts need to focus on decision science (what policies to build on top of model output). Data scientists need to focus on providing model output to make the most accurate/informed decisions downstream.


I'll be blunt: every time I saw people try to model something they don't understand, it boiled down to throwing stuff at the wall and seeing what sticks. Very best case, whatever stuck solved one special case without people realizing it was a special case.

Worst case, the stuff sticking was sheer luck, could have been, and quite often was, identified prior to trying by domain experts, no lessons were drawn from the exercise, and the resulting models were ignored by everyone except the modellers.


> One should ask economists what a recession is, not how to predict one.

Most economists would agree. It's everyone else that says "well, if you know so much about how shocks and policy changes cause recessions, why can't you tell me if there will be a recession in $country in Q2 2025?". And in economics, "skin in the game" means policy responses to avoid dire forecast outcomes (or the lack of them when nobody expects oil prices to change or a major bank to collapse).

There's no shortage of opportunity to make money by beating everyone else at the prediction game, but the funds that have consistently profited from spotting the recessions ahead of everyone else don't exist any more than the always-right public expert forecasters.


Everyone who provides a forecast that others depend on should really be on the hook to report on the outcome, and to provide the forecast error distribution if the forecast is one they make regularly.


I remember actually importing all the WSJ economic forecast surveys and testing the accuracy of each forecaster to find out which one I should bet my money on in the future. The results were... unimpressive for all of them. What they call forecasts aren't really rigorously modeled predictions of the future. Pretty much all of them come down to a very simple formula of long-term mean reverting growth rates. The forecasters that got awards for best forecast were either always optimistic and got the award during good economic periods or always pessimistic and got it during an unprecedented recession. So disappointing. I really believed in it before I studied the numbers.
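
The exercise is easy to repeat on any survey you can download; here's roughly what it looks like with made-up numbers (the 2% long-run mean, the 20-year horizon, and the forecasters are all assumptions):

    import numpy as np

    rng = np.random.default_rng(3)
    long_run_mean = 2.0                               # assumed long-run growth rate, %
    actual = long_run_mean + rng.normal(0, 1.5, 20)   # 20 hypothetical years
    actual[7] -= 6.0                                  # one recession-style shock

    forecasters = {
        "always optimistic": np.full_like(actual, 3.0),
        "always pessimistic": np.full_like(actual, 1.0),
        "long-run mean baseline": np.full_like(actual, long_run_mean),
    }
    for name, preds in forecasters.items():
        print(name, "MAE:", round(float(np.mean(np.abs(preds - actual))), 2))

The pessimist "wins" the recession year and the optimist "wins" the boom years, but neither reliably beats the boring long-run-mean baseline.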


Isn't the inability to accurately predict some economic metrics a consequence of the efficient market hypothesis?

All available and some unavailable information is already reflected in the market. So the sum of reasonable guesses of next year's GDP is, more or less, today's market index. Anything over that is baseless speculation with no skin in the game.


Afaik the efficient market hypothesis says nothing about how long the market takes to optimize after new information is available (and I believe the market needs to solve an NP-hard optimization problem). So in principle you could beat the market by using a better algorithm or more compute.


We need a market uncertainty principle or something, haha.


Correct me if I'm wrong, but isn't the approximate cycle of boom/bust each decade or so a well-documented feature of capitalism? That it sort of has to "reinvent" itself each time in order to continue existing?

Couldn't one plan around this in broader strokes that don't involve the sorts of precision quantitative analysis that Wall Street seems so fond of?


The critical distinction is IMO that the examples he gives are all cases where the prediction forms part of its own outcome. Whether interest rates go up depends on whether people think interest rates will go up. Same with GDP and all the other examples.

Having this recursion is a massive problem, because there might not be a stable point where the prediction and its outcome are fixed.

This may be why it's hard to steer cars automatically. I don't know enough about the subject, but it would seem to fit in the category of "fields where what you think affects what happens".

By contrast, a lot of other phenomena are complicated but do not have this prediction-feedback effect. The weather, astronomical observations, a lot of engineering systems.


What?!

The article ends with an Alan Kay quote attributed to Cindy Gallop:

>Or as Cindy Gallup likes to say:

>

>“In order to predict the future, you have to invent it".

So she does like to say it (see quote below), but it seemed strange to end with a "second hand" quote.

From https://www.lbbonline.com/news/5-minutes-with-cindy-gallop

LBB > ‘In order to predict the future, you have to invent it’, Alan Kay, is reportedly your favourite quote. Why?

CG > Because I am all about inventing the future. Too many people feel that the future is something that happens without us, that we have no control over, that simply rolls us over in its wake. I believe in deciding what you want the future to be, and then inventing it.


You can predict what will occur quite easily, you can't necessarily predict when it will occur.

A lot of the "failed" predictions relate to markets...the reason why you can't predict this stuff is because humans are irrational and those irrational humans control outcomes in the short and medium term.

For example, you can see that a recession should have occurred. What people didn't expect is fiscal stimulus worth about 50% of GDP, tens of trillions in monetary stimulus, etc. Yes, if the government just deposits hundreds of billions into people's bank accounts then it is going to impact growth.

I remember back in 2007, Blackstone RE made insane leveraged bets at the very top of the market, it is very easy to point out rationally "these are absolutely terrible investments, the price is awful, these aren't economic"...today, all these bets got bailed out by the government (after a short period of bankruptcy/restructuring), the person responsible is probably going to be made head of Blackstone, that unit has hundreds of billions in AUM, etc.

The assumption that people make with forecasts has to be: the long-term is today. That is it. You will often be wrong but that does not mean that your model is wrong (indeed, the reason why this stuff is so predictable is because people believe that the models have stopped working repeatedly).

If you take something as apparently "unpredictable" as the market, you can predict returns to within 10bps very easily over the long-term because the fundamentals do not change (but, again, the current period has been the most unpredictable because of the level of government intervention, it is unprecedented...the government cannot hold back the waves forever though).

EDIT: referencing the 2005 interest rate prediction is quite humorous too, 2% against a predicted 5%...this was basically the start of it. Back then, no-one thought the Fed would cut rates to this level for, essentially, no reason...the Fed cut, the result was a financial crash. Turns out those predictions (which were essentially the long-term neutral rate) were right and the Fed was wrong...but the only account you hear about is: those damn forecasters, they failed to predict the Fed torching the economy, so stupid. Lol.


Some questions from an economic analysis standpoint.

If someone can predict the future reliably, why don't financial firms hire them? If a firm did hire them, how much money are they getting paid? Why so little? Is the economic value of correct predictions lower than you'd think? Does the market believe "past performance is no guarantee of future results"?


The author would also conclude:

* Collision avoidance systems are terrible at forecasting collisions because they almost never result in a collision. (The point of the system is to help you avoid an upcoming collision.)

* The prediction that Y2K would happen was a bad one since it didn't happen. (We spent billions of dollars to make sure it didn't.)

* The 1978 prediction that the ozone layer would be depleted by 2010 was a bad one since it didn't happen. (Humans took action to phase out CFCs and the ozone layer began to regenerate.)

When you make a forecast about an event wherein agents can change the course of the event, the correct evaluation of the forecast is not "did the event happen?" but "would the event have happened but for intervention?".

The author seems to miss this larger point.


> The author would also conclude:

> The prediction that Y2K would happen was a bad one since it didn't happen. (We spent billions of dollars to make sure it didn't.)

I would argue that we don't really know what would have happened had the world not spent all the money on upgrading systems. It appears a very large number of them would have continued to work as expected and it isn't immediately clear if the ones that were replaced would have resulted in a catastrophe.


The people who were working on Y2K did know what would happen in many cases. Their work avoided known huge messes in banking, infrastructure, aviation, and healthcare, among others.

What they didn’t do, much, is write or blog about their work. A lot of fixes were to commercial or government systems running on commercial or government hardware. Publicly disclosing problems and fixes was not part of those cultures.

So it is very hard, today, for members of the public to go back and reconstruct the problems and solutions to “prove” that there were real issues. Which has led some people to believe, incorrectly, that there were not real issues.


I was part of that remediation effort in the healthcare sector. And yes there were things that were fixed that prevented problems. However, given how many things were not fixed, it is amazing how few problems actually happened. (Someone got charged for 100 years of late library fees...)

Is that because we found and patched all the systems that would have actually had a problem? Maybe. I'm guessing it is because many of the things that were fixed wouldn't have actually caused any significant problem--at least not at the scale that was being predicted.

But as you point out, if there were any systems that would have failed in a catastrophic way, those were evidently fixed.


It's like saying that if you drive tomorrow you're going to get in a fatal accident: no one in their right mind would drive in that case.

The only way it can work is if you make the prediction and don't tell those that are affected. But generally in any larger market attempting to capitalize on the future state of the market changes the market and the predicted position.


Yes, systems are dynamic. Predictions are by definition based on things that already happened and cannot account for new information except what was already programmed into the model. Outcomes are affected by attempts to change outcomes.


Discussed at the time:

The Forecasting Fallacy - https://news.ycombinator.com/item?id=24521279 - Sept 2020 (9 comments)


Ugh. This annoys me.

Can we predict things super accurately? Often no. But you know what’s better than anecdotes about times predictions were bad? Training and testing sets to judge how good we expect models and predictions to be from the beginning. Because a lot of these are not high confidence predictions.

And no, it’s not “black swans”. Are those a thing? Sure. It’s ok that models can’t account for things that are not modeled or seen before. But if these things are common enough that they’re systemic and the mode is just not actually accommodating for the world of relevant factors, then it’s not going to have been a good model on the test set to begin with. And we would know that.


I just read Hari Seldon to find out what will happen in the future.


Does this matter if they cannot predict? Doctors cannot predict who gets heart attacks or cancer, but they are still needed anyway.


(2020)



