Hacker News
Model beats Wall Street analysts in forecasting business financials (mit.edu)
77 points by blopeur 67 days ago | 23 comments



Time series methods have been applied to economic data since time immemorial. There is nothing particularly newsworthy here beyond the fact that another wannabe mathematician is trying to find a way to make a quick buck off of unsuspecting investors in the markets. "Outperformance" is a hackneyed phrase that has long lost any real meaning when uttered by an MBA; there are just too many ways to cook the books in order to produce the desired outcome.

Surprised not to see Andrew Lo's name associated with this "groundbreaking research." It would be totally within his "working style" to trot out some fancy-sounding mathematics that pretends to solve some impossibly messy financial problem, and brag about it in some journal that real scientists (who sometimes collaborate with him in hopes of landing a job on Wall Street after their academic career starts to flounder) do not take all that seriously.

On a more serious note, the former deputy dean of the MIT Sloan School of Management (Gabriel Bitran) is currently serving time in a federal penitentiary with his son for a similar sort of chicanery (fancy mathematical pricing formulas that were, according to the SEC indictment, complete bullshit) used to screw his investors out of millions of dollars.

This was not long after Bitran narrowly escaped charges for sexually assaulting one of his secretaries at the same institution. In which case, I guess there may be some truth after all to the old saying about a man not "getting lucky" twice :)


Disappointed to hear this about Lo. Was just looking at a course MIT will be offering online that he's co-teaching!


Most folks these days are more interested in enrolling in one of Lo's classes for the future contacts, not the course content.

I remember walking by Lo one afternoon and overhearing a medical doctor trying to catch up with him in order to tell him what a "lifetime admirer" he has been of Lo's work. It could have been anything from Lo's latest antics claiming that he knows how to "harness the power of greed" to cure cancer (the so-called Cancer-X project) to this poor fellow thinking that he had discovered a new way of beating the stock market (perhaps after reading this classy title that Lo published with one of his more attractive acolytes several years ago: The Heretics of Finance.)


Well, I was more interested in the science side of the course, anyways! Taught by Harvey Lodish. Still, disappointing to hear that.

https://www.edx.org/course/the-science-and-business-of-biote...


> The model makes use of “alternative data” – such as credit card purchase data, smartphone location data, satellite imagery and so on

The problem with predicting markets is that they suck up information.

When these predictions become public or anyone acts on them, the market automatically adjusts. Then the cycle repeats with people looking for even more leading indicators because the old ones are already priced in.

So, while it's interesting that these "alternative" data points seem correlated with prices, I'm dubious that anyone will profit off them at above-market returns for any sustained period of time.


To build on what you've said, I've learned from experience that alternative data can be misleading in the presence of unusual tail events. It's the fundamental bias-variance tradeoff. I was benchmarking one of our simple models (based on known causal variables) with another from a third-party that claims to be augmented with alternative data. When things were normal, both models behaved similarly. When things were out of left field, the alternative data model tended to be more wrong, whereas the simple model tended to be more robust and less wrong.

This is a surprising but well-known phenomenon in mathematical modeling [1, 2] -- simple models tend to outperform complex models in complex situations.

Complex models can give you accuracy refinements during normal situations, but they need to be detuned or down-weighted during abnormal situations.
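
Here's a minimal sketch of that down-weighting idea (Python, entirely synthetic data and made-up models -- not the benchmark I described above): an overfit polynomial stands in for the alternative-data model, a linear fit stands in for the simple causal model, and a blend cuts the complex model's weight whenever the input looks like a tail event.

    # Illustrative only: synthetic data, made-up models and thresholds.
    import numpy as np

    rng = np.random.default_rng(0)

    def true_signal(x):
        return 2.0 * x  # the underlying causal relationship

    # Training data drawn from "normal" conditions only
    x_train = rng.normal(0, 1, 500)
    y_train = true_signal(x_train) + rng.normal(0, 0.5, 500)

    simple = np.polyfit(x_train, y_train, deg=1)    # robust causal model
    complex_ = np.polyfit(x_train, y_train, deg=7)  # stand-in for an overfit alt-data model

    def blended(x, threshold=3.0):
        # Down-weight the complex model when |x| looks like a tail event
        w = np.where(np.abs(x) > threshold, 0.1, 0.9)
        return w * np.polyval(complex_, x) + (1 - w) * np.polyval(simple, x)

    for label, x_test in [("normal", rng.normal(0, 1, 200)),
                          ("tail", rng.normal(4, 1, 200))]:
        y_test = true_signal(x_test)
        for name, y_hat in [("simple", np.polyval(simple, x_test)),
                            ("complex", np.polyval(complex_, x_test)),
                            ("blended", blended(x_test))]:
            rmse = np.sqrt(np.mean((y_hat - y_test) ** 2))
            print(f"{label:>6} {name:>8} RMSE: {rmse:.2f}")

Run as-is, the two models behave similarly in the normal regime, the overfit one falls apart in the tail regime, and the blend limits (but doesn't eliminate) the damage -- the same qualitative pattern I saw in the real benchmark.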

[1] https://www.johndcook.com/blog/2012/09/17/robustness-of-simp...

[2] https://sloanreview.mit.edu/article/why-forecasts-fail-what-...


This is precisely what all the quant-based investment firms have been doing for 25+ years.


Why would anyone say that? Why wouldn't you just make a few billion and prove it?

I'm going to guess it's because it doesn't do it at a rate that matters, or in a manner that's actually scalable.


> On the 34 companies tested, the MIT researchers' model beat an aggregate Wall Street analyst benchmark in 57.2 percent of quarterly predictions tested in the experiment.

Sorry but I'm not impressed by this number, for various reasons:

- aggregate benchmark means some average of predictions from Wall Street investors, which is not the "state of the art" to beat; you should beat the best-performing funds. Related: the best (and worst) funds are private and don't (all) report performance, so they are likely not included in the benchmark, which biases it.

- 57% doesn't seem like much (only slightly better than chance), and no variance figure is given

- if they 'win' $1 in 57% of the cases but 'lose' $2 in the remaining 43%, it's still a net loss (see the sketch after this list). No such numbers are given

- not clear if they are after-casting, i.e. whether they tuned the predictions after the outcomes were known. In other words: how well does the algorithm perform if you turn it on now and leave it for a year?
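
To put rough numbers on the middle two points (back-of-the-envelope Python; the article doesn't state the total number of predictions, so n here is just a guess):

    # Back-of-the-envelope only; n (number of predictions) is a guess,
    # since the article doesn't state it.
    import math

    # Asymmetric payoffs: winning more often than losing isn't enough
    # if the losses are bigger than the wins.
    p_win = 0.57
    ev = p_win * 1.0 - (1 - p_win) * 2.0
    print(f"Expected value per bet: {ev:+.2f}")      # -0.29, a net loss

    # Is 57.2% distinguishable from a coin flip? Normal approximation
    # to a binomial test against p = 0.5.
    n = 34 * 4                                       # hypothetical: 34 companies x 4 quarters
    z = (0.572 - 0.5) / math.sqrt(0.25 / n)
    print(f"z-score vs. chance (n={n}): {z:.2f}")    # ~1.7, marginal at best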


>- aggregate benchmark means some average of predictions from Wall Street investors, which is not the "state of the art" to beat; you should beat the best-performing funds.

They're not beating funds; they're more accurately predicting company earnings than 'Wall Street analysts' -- people who work for investment banks and write stock notes for clients, not people who actually invest.

And the experiment assumes that the job of an analyst is to accurately predict earnings, which is frequently not the case -- analysts often lowball their estimates so that the companies they follow can 'beat and raise'.


It seems from the outside that analysts jiggle the number that the company gives them and then the company jiggles their own mechanisms to hit the agreed-to number.

What tips this off is the absurdly narrow forecast intervals. If I know that Company X's deal size is $10 million, then in what universe is it fair and reasonable for a forecast to be +/- a million bucks?


If they don't report performance, how do they gain new investors?


Not all funds are looking for new investors. Some just grow their own money, and outside investors would limit their agility.


Any number of ways. For example pitching behind closed doors to potential customers.

Others don't have customers; they trade with their own money (not just small-time traders, by the way).


Eventually, there'd have to be a leaked anonymous source?

"Fund beats S&P by 20% for 20 years straight" is a good story?


Not at all. You've probably never heard of TGS Management, for example, a fund comparable to RenTec.


You're correct, I never have.

I am not understanding how a fund could remain open/transparent enough to build trust with potential investors, but still maintain secrecy over the years.


Because they don't trade outside money - TGS and others like it only trade their founders' and employees' money. They also enforce strict NDAs with all their employees and business counterparties (data vendors, prime brokers, etc).


MIT says a lot of things.

This basically amounts to "good data from other sources, provided granularly and processed intelligently, can predict asset movements."


This was discussed here just less than a month ago:

https://news.ycombinator.com/item?id=21894862


Predicting key metrics better than analysts with alternative data is very easy; the hard part is turning that into actionable trading insights. Metrics gleaned from alternative data are only a small part of the overall market price.


I've never seen a backtest that I didn't like.


Url changed from https://www.enterpriseai.news/2020/01/22/mit-says-its-foreca..., which points to this.



