
1. A question that has interested me for some time: what is the best quantitative measure of "risk"? Intuitively, we understand risk as a property of the distribution of investment returns: the more information we have about that distribution, the more confident our predictions and the lower our risk.

But pinning risk down mathematically is kind of tricky. There are several proxies for risk under modern portfolio theory, including variance, standard deviation, shortfall, and semivariance. Why are there so many different measures? If analytical tractability were not a concern, what should risk models look like?
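
To make a few of these concrete, here is a minimal sketch computing some common proxies from a single return series; the data is randomly generated and the 95% tail level is just an illustrative choice:

    import numpy as np

    rng = np.random.default_rng(0)
    returns = rng.normal(0.0005, 0.01, size=2520)  # ~10 years of fake daily returns

    variance = returns.var()
    std_dev = returns.std()

    # Semivariance: average squared shortfall below the mean (downside only).
    downside = np.minimum(returns - returns.mean(), 0.0)
    semivariance = (downside ** 2).mean()

    # Historical value-at-risk and expected shortfall at the 95% level.
    q = np.quantile(returns, 0.05)
    var_95 = -q                            # loss not exceeded on 95% of days
    es_95 = -returns[returns <= q].mean()  # average loss on the worst 5% of days

    print(variance, std_dev, semivariance, var_95, es_95)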

2. Multi-horizon portfolio optimization. Portfolios are typically optimized one time step (horizon) into the future. However, to optimally allocate a portfolio over an extremely long investment period (think Berkshire), one needs to subdivide that period into multiple horizons and optimize over all of them jointly. This is computationally expensive - is there a way to do it efficiently?




#1 I don't think there is a perfect measure. In practice, portfolio managers use multiple views of risk, including multi-factor risk models, stress testing, value at risk, and so on. They will use any tool money can buy.

#2 has already been solved. One formulation is here: https://stanford.edu/~boyd/papers/pdf/dyn_port_opt.pdf There are already optimizers on the market with these capabilities.
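
For a flavor of what such a formulation looks like, here is a toy multi-period mean-variance problem in cvxpy, loosely in the spirit of that paper; every input below (T, n, mu, Sigma, gamma, kappa) is a made-up placeholder, not something taken from the paper:

    import cvxpy as cp
    import numpy as np

    T, n = 12, 5                                  # periods, assets
    rng = np.random.default_rng(0)
    mu = rng.normal(0.01, 0.005, size=(T, n))     # per-period return forecasts
    A = rng.normal(size=(n, n))
    Sigma = A @ A.T / n + 0.01 * np.eye(n)        # random positive-definite covariance
    gamma, kappa = 5.0, 0.001                     # risk aversion, linear trade cost

    w = cp.Variable((T, n))                       # weights at each period
    w_prev = np.ones(n) / n                       # start from equal weights

    objective, constraints = 0, []
    for t in range(T):
        trade = w[t] - (w[t - 1] if t > 0 else w_prev)
        objective += (mu[t] @ w[t]
                      - gamma * cp.quad_form(w[t], Sigma)
                      - kappa * cp.norm1(trade))
        constraints += [cp.sum(w[t]) == 1, w[t] >= 0]  # fully invested, long-only

    cp.Problem(cp.Maximize(objective), constraints).solve()
    print(w.value[0])   # only the first-period weights are actually traded now

The reason for solving all T periods jointly is that transaction costs couple consecutive periods: the optimizer can spread a large reallocation across several steps instead of paying for it all at once.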


#1 - Whether or not there is a formal risk measure that is less ad hoc than the ones used in practice is still up in the air, no?


1: Risk, just like reward, is an umbrella term; it doesn't pick out one particular thing. Take reward for work: some of it is monetary, some comes as learning experience, some as friendships from work, and some is in working close to home or at home. Risk is similar. The variance you mentioned measures volatility; value-at-risk does not. It is also a risk measure, but it captures the most you could lose at some high confidence level. For different perspectives you have different measurements, and hence different models.
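
As a quick illustration of why these perspectives differ, here is a sketch (with entirely made-up numbers) of two hypothetical return streams with matched volatility but very different tails:

    import numpy as np

    rng = np.random.default_rng(1)
    symmetric = rng.normal(0.0, 0.01, size=100_000)

    # Skewed stream: small gains most days, occasional large losses,
    # rescaled so its standard deviation matches the symmetric one.
    raw = np.where(rng.random(100_000) < 0.97,
                   rng.normal(0.002, 0.004, size=100_000),
                   rng.normal(-0.05, 0.01, size=100_000))
    skewed = (raw - raw.mean()) / raw.std() * symmetric.std()

    for name, r in (("symmetric", symmetric), ("skewed", skewed)):
        print(name,
              "std:", round(r.std(), 4),
              "1% VaR:", round(-np.quantile(r, 0.01), 4))

On this data both streams have the same standard deviation, yet the skewed one's 1% VaR comes out several times larger: a variance-only model would call them equally risky, while a VaR model would not.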

2: My biggest concern here is that the data isn't really reliable for extremely long-term investments. MPT mostly runs on historical data, and picking stocks today for 10 years from now based on today's data is like picking today's stocks using data from 1996-2006, which is pretty silly. Such very long-term models are just not very feasible. Weather is a great analogy, I think: we're pretty good at forecasting an hour or a day into the future, but two weeks out is extremely hard. There are just too many small things that can explode into significant changes.

You mentioned Berkshire as an example, but they're pretty much the opposite of modern portfolio theory. They look first and foremost at fundamental business analysis, i.e., is it a good business, is the management solid, and so on. MPT disregards virtually all of that.

Every model has assumptions. MPT relies pretty heavily on the notion that all knowledge is already priced in, so analysing businesses and building portfolios the way Berkshire does is not supposed to get you anywhere. Instead, you're left looking at historical price data and building a portfolio from that, on the assumption that the data reflects all information in the market - it's price data, everything is priced in, so it gives the most accurate picture available. Lots of counterexamples show this isn't the case. It's still a nice theory, but in my opinion it does better in the short (or medium) term than in the long term, and not because of computational expense.


I see - thanks for the clarification! Looks like they were only "open problems" in my mind :)


The measure of risk is the full probability distribution of returns. Variance, standard deviation, etc. are just attempts to summarize that complete distribution in a single number.




