Hacker News
The Mathematical Con of Hedge Funds and Financial Advisers (2014) (psmag.com)
22 points by jimsojim on Nov 16, 2015 | 16 comments



"When it comes to publishing in the Journal of Finance or the Journal of Financial Markets, the editors simply don’t have the mathematical knowledge necessary to vet some of the more complex and nuanced assertions."

Personally, I think everyone should stop reading at this point. Of course I went through to the paper itself.

What is the paper? Mathematicians citing Leontief from 1982 about the woes of economics as a science, running some simulations (no data for me, thanks!), and getting quite angry. For what? To remind us that backtesting is a flawed methodology? If you invest based on some random back-tested wavelet bullshit, you're doing it wrong; this is not news to anyone. Harvey, Liu and Zhu already made this point (note: not the Zhu who authored this paper), and nobody cares about filling in some dull derivations in a mathematical appendix and calling them "proofs." Everyone complains about economists doing too much of that already.
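The point being dismissed here as old news, that enough back-tested trials will produce a spuriously good strategy, is easy to demonstrate. The sketch below is an illustration of the multiple-testing effect, not the paper's own simulation: every strategy has zero edge by construction, yet the best of 1000 backtests looks excellent.

```python
# Illustration (not from the paper): why more trials inflate backtests.
# Simulate many zero-edge strategies; the best in-sample one looks great,
# but the same strategy reverts toward zero out of sample.
import numpy as np

rng = np.random.default_rng(0)
n_strategies, n_days = 1000, 252
# Daily returns with true mean 0: every strategy is worthless by construction.
in_sample = rng.normal(0.0, 0.01, size=(n_strategies, n_days))
out_sample = rng.normal(0.0, 0.01, size=(n_strategies, n_days))

# Annualized Sharpe ratio of each strategy on the in-sample period.
sharpe_in = in_sample.mean(axis=1) / in_sample.std(axis=1) * np.sqrt(252)
best = sharpe_in.argmax()
sharpe_out = out_sample[best].mean() / out_sample[best].std() * np.sqrt(252)

print(f"best in-sample Sharpe: {sharpe_in[best]:.2f}")  # typically around 3
print(f"same strategy out-of-sample: {sharpe_out:.2f}")  # typically near 0
```

The more strategies you try, the more impressive the winner's backtest, with no change at all in its real expected return.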


I find it entertaining that you simultaneously critique the lack of rigor and also complain about economists making the math rigorous.

The point of proofs, simulations, and the like is to make sure that your model is at least internally consistent and that your assumptions actually yield the results you think they do. A lot of the time, models specified with vague verbiage don't even do that.


The point put forth in this article is that a certain degree of scientific rigour is important in running a business, and it's a principle that generalises broadly. In software development in particular, one encounters the philosophy of "get it out the door quick" regardless of the cost to quality. Engineers are often viewed as a cost centre rather than the actual producers of value when the bottom line is evaluated. It should be clear that businesses that actively capture and nurture engineering talent (Google, Apple, Facebook, etc.) are doing better at the moment than those that focus on the "unit cost" of "resources" (e.g. IBM, HP), and many of the latter are now trying to make that transition, as we can see from the trend towards more engineer-friendly processes such as Scrum (away from the more punishing "waterfall" style approaches).


I'm with you, except that I think the "unit cost" thinkers are only shifting towards Agile/Scrum because the cool kids are doing it. The brand of Agile/Scrum those shops create is anything but engineer-friendly, and they absolutely still view engineering and software development as cost centers instead of value centers. In fact, in a lot of cases, Agile is used explicitly because it enforces an aggressive deadline culture in which sacrificing quality in favor of short release cycles is codified into the work culture and reinforced every day in every team interaction.

These situations also foster an extreme "surveillance culture": since the middle management sitting on top of the so-called cost center of engineering can't do much, technologically, to mitigate the realities of creating software value, they opt instead for draconian surveillance and progress-tracking environments, like stodgy Jira, and they take things like team burndown literally, drawing sweeping conclusions from an extremely small set of burndown data. This all typically goes hand in hand with open-plan offices, where everyone can literally be seen during their work and there is an implicit mandate that your status is partially related to how much you 'look the part' while in the office. Even if you're a developer needing quiet time to creatively solve a problem, and you're seated smack in the middle of a loud group of sales associates whose job requires them to be on the phone all day, the business doesn't care. Your actual productivity as a developer isn't as meaningful to them as your ability to look like a good piece of office furniture by fitting into their surveillance and micromanagement culture.

Typically, promotions and advancement are won in these environments not through technical skill or hard work, but through one's ability to sublimate one's natural desire for reasonable working conditions and "endure it with a smile" better than one's peers, a phenomenon Michael O. Church referred to as "macho subordination." In my working experience, at least, this has been precisely how Agile/Scrum are used, whether in a start-up, a finance firm, or a long-standing education tech company.


To borrow and misuse a phrase, "guns don't kill people". If you're working in this kind of environment you're on to a loser no matter what.


Anything anyone is trying to sell you in this space is a con. If it were profitable, they'd be doing it themselves. Exactly the same as gambling systems.


Investing other people's money is always more profitable than your own. If you found a way to get a risk free 30% IRR year on year, wouldn't you rather take 5% of the profit from $1,000,000 invested than 100% of the profit from $10,000?
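The arithmetic in this comment, spelled out as a quick sanity check (numbers taken from the comment itself):

```python
# Fee arithmetic from the comment above: at a risk-free 30% IRR,
# compare taking 5% of the profit on $1,000,000 of client money
# with taking 100% of the profit on $10,000 of your own.
irr = 0.30

own_capital = 10_000
own_take = 1.00 * (own_capital * irr)         # 100% of $3,000 profit

client_capital = 1_000_000
manager_take = 0.05 * (client_capital * irr)  # 5% of $300,000 profit

print(own_take)      # 3000.0
print(manager_take)  # 15000.0
```

Managing the larger external pool earns five times as much, despite the far smaller share of the profit.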


It's only more profitable if you don't have capital. Most of the very successful hedge funds stop taking other people's money and just invest their own once they have enough.


You're confusing the math with the reason they stop taking external funds. They always make more money if they add external funds just like 20% of $101 is greater than 20% of $100.

They stop taking external money because managing clients can be a pain. At some point, depending on the fund (e.g. a niche fund that can never grow beyond a certain size), a shrewd money manager can grow to dominate the niche. In that space, they might as well just kick everyone else out and save the headache because the reward of the bigger pie just isn't worth it.

Frankly, even though I manage money for a living, and I really like my clients, it's still difficult. If it were my 1 beeellion dollars (touch pinky finger to the lips), then I wouldn't manage other people's money either - the marginal profit to me personally just wouldn't be worth it (for me, after the first billion, it's pretty darn trivial).


Most strategies can't take unlimited capital. I've got $100k in one strategy right now, but if I put $200k in it I wouldn't make any more money.


I've worked in quant funds for over a decade. The article is right that there's a lack of rigour, and it's true the number of trials is an issue. However it doesn't go into why it happens:

- People with PhDs often carry an authority that is unwarranted. People with advanced degrees in similar-sounding fields may have spent their time doing very different things. PhDs also come from an environment where getting credit matters a lot, so you get political struggles that appear, to the uninformed manager, to be mathematical in nature.

- Programming skills tended to be poor in the places where I worked. Most researchers think of coding as a necessary evil, and their code looks like the kind of code you write when you're trying to take shortcuts. What effect does this have on overfitting? It becomes simply impossible to know how many tests are actually being performed. There are also a lot of informative errors: something gets coded up, you see a backtest, and you realise afterwards there was an error. Now you have information you can use to get a better result, without ever having incremented your number of tests.

- There's a lot of pressure to come up with something new. Pretty hard when there's loads of people attempting the same. The most natural way to try to make something "new" is to take something old, add a twist to it, and see how it goes. Of course when you add a twist, there's more parameters to fit. And because it's supposed to be complex, you can get away with an elaborate fitting mechanism. Often something that hides the fact you're doing more tests.

- Backtest fetish: it's natural to think about how a system would have done in the past. But unfortunately we all know about the past. Anyone coming up with a strategy is going to know volatility spiked in 2008, so he'll avoid anything that would have lost money then, or worse, just restrict the parameters so they are in a safe zone. Basically, there's a bunch of tests that aren't done, but should have been counted anyway.

- How to fix this: well, there's no magic bullet. Start by being wary of adding parameters. If you add one, it should work with a number of reaction functions: if it works with a step function, does it work with an s-shape or a ramp? Does it work when the parameters are perturbed? Does it work when the input data itself is perturbed? If you've built a model, does it work on simulated data? Something I rarely see: when someone says "the model is an ARMA(2,2)", they should generate an ARMA(2,2) series with the same parameters and check the strategy on that.
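The last check, generating data from the claimed model itself, might look like this in outline. This is a pure-NumPy sketch; the ARMA(2,2) coefficients are made up for illustration, not taken from any real fit.

```python
# Sketch of the check above: if the claim is "the model is an ARMA(2,2)",
# simulate an ARMA(2,2) series with the same parameters and confirm the
# strategy still works on data where the model is true by construction.
import numpy as np

def simulate_arma22(ar, ma, n, sigma=1.0, burn=500, seed=0):
    """Simulate x_t = ar1*x_{t-1} + ar2*x_{t-2}
                     + e_t + ma1*e_{t-1} + ma2*e_{t-2}."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sigma, n + burn)
    x = np.zeros(n + burn)
    for t in range(2, n + burn):
        x[t] = (ar[0] * x[t - 1] + ar[1] * x[t - 2]
                + e[t] + ma[0] * e[t - 1] + ma[1] * e[t - 2])
    return x[burn:]  # drop burn-in so the series is near-stationary

# Illustrative (stationary) coefficients, not from any real model.
sim = simulate_arma22(ar=[0.5, -0.2], ma=[0.3, 0.1], n=2000)
# Run the backtest on `sim`; if the strategy fails even here, the edge
# in the historical backtest was fitting noise, not the model.
print(sim.mean(), sim.std())
```

If the strategy only works on the one historical path and not on data generated by its own assumed model, that is strong evidence of overfitting.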


I develop a number of backtesting systems and algorithms, and the article is spot on, but it fails to address far more sophisticated backtesting regimes that include running exactly once on held-out sample data. Those produce the best algos.


TL;DR Over-fitting investment models is pseudo math.


the greatest overfitting of an investment scheme the world has ever known: markets must go up.


Er, no, that's basic economics. If you make an investment, you are going to expect some sort of profit, on average.


That's a different issue. I think what @lintiness is trying to say is that just because a market went up in the past does not mean it will in the future. Survivorship bias is absolutely huge here.

Most people forget this, but if you look at global markets since the early 1900s, several went down and never came back up. Germany's markets failed on at least one occasion and never recovered. A good paper that addresses part of this: http://www.researchgate.net/publication/4913017_Global_Stock...

Depression and war do many things that create discontinuities. Most people don't come close to thinking about those, because they aren't knowledgeable enough to even understand how strong the survivorship bias is in their current local market, like the US.



