Nassim Taleb: We should retire the notion of standard deviation (edge.org)
248 points by pyduan 1260 days ago | 244 comments



I sometimes think that progress in the 21st century will be summed up as: "The realization that the normal distribution is not the only way to model data".

Taleb's favorite topic is the "black swan event", which is something that the normal distribution, and the idea of standard deviation, don't model well. In a normal distribution, very extreme events should happen only once in the lifetime of several universes. Of course, assuming variation in line with a Gaussian process is at the heart of how the Black-Scholes model calculates risk/volatility/etc.

Benoit Mandelbrot argued that financial markets follow a distribution much more similar to the Cauchy distribution (specifically the Lévy distribution) than a Gaussian. The problem, of course, is that the Cauchy distribution is pathological in that it doesn't have a mean or variance; you can calculate similar properties for it (location and scale), but it doesn't obey the central limit theorem, so in practice it can be very strange to work with.

The normal distribution is fantastic in that it does appear frequently in nature, is very well behaved, and has been extensively studied. However, a great deal of future progress is going to come from wrestling with more challenging distributions, and from paying more attention to when assumptions of normality need to be questioned. Of course, one of the challenges here is that the normal distribution is baked into a very large number of our existing statistical tools.
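The strangeness of the Cauchy distribution is easy to see numerically. A quick sketch (standard Cauchy drawn via the inverse CDF) comparing sample means as n grows: the Gaussian means settle down, while the Cauchy means never do, because the sample mean of Cauchy draws is itself Cauchy-distributed.

```python
import math
import random
import statistics

random.seed(0)

def cauchy():
    # Standard Cauchy via inverse CDF: tan(pi * (u - 1/2))
    return math.tan(math.pi * (random.random() - 0.5))

# Sample means of a Gaussian concentrate as n grows; Cauchy means do not.
for n in (100, 10_000):
    normal_mean = statistics.fmean(random.gauss(0, 1) for _ in range(n))
    cauchy_mean = statistics.fmean(cauchy() for _ in range(n))
    print(f"n={n:>6}  normal mean={normal_mean:+.3f}  cauchy mean={cauchy_mean:+.3f}")
```

With a different seed the Cauchy column can land anywhere: a single huge draw can dominate the whole average, which is exactly the failure of the law of large numbers the comment describes.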


This is actually what I expected to read: "The standard deviation is useful because with the average and the standard deviation, one can fully characterize a normal distribution. However, the standard deviation is a less useful statistical summary the farther away from 'normal' you get, and in reality, there is no such thing as a normal distribution, as a true normal distribution is defined on the entire real number line from negative infinity to positive infinity. Reality always provides some bound, and it's often quite distorted from Gaussian. For instance, a 'normal' distribution averaging 2 with a standard deviation of 1.4, bounded by 0, is quite non-Gaussian in many important ways! (Not least of which is that you're going to have to do something to replace the missing probability...)

"People rarely check how closely their data conform to the standard distribution; indeed, many people blindly apply the standard deviation to their data regardless of its distribution! The resulting number is often more obfuscatory than helpful, to the extent that it crowded out more useful summaries.

"It's a useful metric when treated carefully, but it is rare to encounter it treated carefully. Science courses would be well-served to stop teaching it in favor of a stronger emphasis on multiple distributions. (Multiple distributions are usually touched upon, but implicitly our curricula overfavor the Gaussian distribution and end up accidentally implicitly convincing students its the only one.)"

But that's just me.
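The "missing probability" in that bounded example is easy to quantify. A quick sketch, using the numbers above (mean 2, standard deviation 1.4, bounded at 0) and the Gaussian CDF via the error function:

```python
import math

def norm_cdf(x, mu, sigma):
    # Gaussian CDF expressed through the error function
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# A "normal" with mean 2 and sd 1.4 puts real probability below zero,
# which a quantity bounded at 0 can never actually produce.
p_negative = norm_cdf(0, 2, 1.4)
print(f"P(X < 0) = {p_negative:.3%}")  # roughly 7.7%
```

So the Gaussian model assigns nearly 8% of its mass to impossible negative values; whatever you do to reallocate it, the result is no longer Gaussian.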


>Reality always provides some bound

But... it doesn't. You ever hear about the hypothetical possibility of your atoms lining up and falling through the floor?

It's hypothetical in the sense that it's really ridiculously unlikely, but there is no bound preventing it.

Now the central point about different probability curves stands, but that's not what Taleb was talking about--he seems to think that it's the tool's fault if people are using it wrong--and it's also not what Homunculiheaded argued.


"But... it doesn't. You ever hear about the hypothetical possibility of your atoms lining up and falling through the floor?"

A bad example; that's a very, very large sample space, such that deviations from mathematical perfection are irrelevant. They do exist, if you're precise enough (for instance, the universe is not modeled by perfectly continuous space), but I'm not inclined to argue them, because it's too easy to argue that they're irrelevant. So instead consider something more human-sized: Match a normal distribution to the height of human beings.

It works very well, except in real life, the probability of a negative-height human being is zero. This is not what the Gaussian model predicts.

Unfortunately, rather more science takes place in the second domain than the first.

"that's not what Taleb was talking about"

I'm quite aware. The fact that I commented on how I got something other than what I expected rather suggested that, I thought... The fact that this isn't precisely what Homunculiheaded said is also why I posted, rather than just upvoting....


>The fact that this isn't precisely what Homunculiheaded said is also why I posted

Ah. I misread the following...

>>This is actually what I expected to read:

as agreement ("This is actually what I expected to read."). My mistake.

>Unfortunately, rather more science takes place in the second domain than the first.

As I said to Homunculiheaded, this is because of the relative utility of the models, which we understand--and even those that do not understand it do not make the tool's use invalid.

What are we bemoaning, here, but actual misunderstanding itself?

And really, what's the point of that?


>"The realization that the normal distribution is not the only way to model data".

Realization by who? If you understand the normal distribution you had damn well better know that there are other probability distributions.

>The problem of course is that the Cauchy distribution is pathological in that it doesn't have a mean or variance, you can calculate similar properties for it (location and scale), but it doesn't obey the central limit theorem so in practice it can be very strange to work with.

In other words, we're using the normal distribution as the workhorse because considering other distributions is, well, inefficient/unproductive.

> However a great amount of future progress is going to come from wrestling with more challenging distributions, and paying more attention to when assumptions of normality need to be questioned

What exactly is it that you think physicists have been doing for the past half century? The error accounting for CERN's experiments requires actual millions of PhD-hours.

This topic's conversation is at some bizarre intersection of good intentions, concrete knowledge, and woeful ignorance. I guess I tar myself with that brush.


"Realization by who? If you understand the normal distribution you had damn well better know that there are other probability distributions."

See Glyptodon's post: https://news.ycombinator.com/item?id=7065067

If you hang out with mathematicians, yeah, sure, everybody knows there's a ton of distributions. Try hanging out with, say, biologists. The undergrad statistics education is basically "mumble mumble gaussian mumble hideous equations mumble mumble YOU MUST DO THE CHI-SQUARED TEST mumble mumble poisson mumble hideous equations mumble YOU MUST DO THE CHI-SQUARED TEST mumble mumble CHI mumble calculus is hard mumble mumble NULL HYPOTHESIS mumble CHI CHI CHI WE WILL DRUM YOU OUT OF THIS DISCIPLINE IN DISGRACE IF YOU DON'T DO THE CHI-SQUARED TEST".

I'm hardly even exaggerating! I remember being asked by someone in their third semester of using the damn test what it actually meant.

Nominally, yes, the Poisson and probably a couple of others were mentioned, but believe me, the ALL CAPS part of the education does not mention them.


"A general science education needs far firmer statistical grounding" doesn't equate to "The standard deviation should be retired."


I was establishing that there are plenty of people who don't really realize there are other distributions.

You seem to be having some trouble reading what I'm writing, rather than what you think I should be writing.


>I was establishing that there are plenty of people who don't really realize there are other distributions.

Something I never contested.

From Glyptodon:

>>"This is std dev. This is how you compute it. Make sure you put it your tables and report."

>>it wasn't always a sensible thing to be asked to calculate but was instead just an instinctive requirement.

I hate to break it to you, but this is how rote mathematics is taught. You can't communicate concepts purely, and hammering instinctive math is better than no math at all.

To reiterate, there's nothing wrong with the normal distribution. We're not about to retire addition or subtraction just because there are "plenty of people who don't really realize" there's more to math.

>You seem to be having some trouble reading what I'm writing, rather than what you think I should be writing.

Sure whatever, same to you.


Taleb has a good point about people mistakenly interpreting standard deviation (sigma) as Mean Absolute Deviation (MAD). I like that he gives some conversions (sigma ~= 1.25 * MAD, for Normal distribution).

I think it's rather silly to talk about "retiring" standard deviation, but we can't blame Taleb - the publication itself posed the question "2014: What Scientific Idea is Ready for Retirement?" to various scientific personalities.

What Taleb failed to mention is that, once properly understood, standard deviation has distributional interpretations that can be much more useful than MAD. For example, if the data is approximately normally distributed, then there is approximately a 99.99% probability that the next observation will fall within 4 * sigma of the mean.

Not everything is approximately normally distributed, but a lot of phenomena ARE normally distributed. It's a well known fact that the phenomena which Taleb is most interested in (namely, financial return time-series) are not normally distributed. But I would like to know how Taleb proposes to "retire" volatility (sigma) from financial theory and replace it with MAD? Standard deviation is so central in finance that even the prices of some financial instruments (options) are quoted in terms of standard deviation (e.g. "That put option is currently selling at 30% vol"). How do we rewrite Black-Scholes option pricing theory and Markowitz portfolio theory in terms of MAD and remove all the sigmas everywhere? Surely Taleb has already written that paper for us so that we can retire standard deviation?
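Both numbers quoted above (sigma ~= 1.25 * MAD, and the 4-sigma coverage) follow from exact Gaussian identities; a quick check in Python:

```python
import math

# For a Gaussian, mean absolute deviation = sigma * sqrt(2/pi),
# so sigma = sqrt(pi/2) * MAD ~= 1.2533 * MAD.
ratio = math.sqrt(math.pi / 2)
print(f"sigma / MAD = {ratio:.4f}")

# Probability that |X - mu| <= 4 * sigma under a Gaussian:
p_within_4 = math.erf(4 / math.sqrt(2))
print(f"P(|X - mu| <= 4*sigma) = {p_within_4:.4%}")  # ~99.99%
```

The 1.25 factor is exact only for the normal distribution; for fat-tailed data the sigma/MAD ratio grows, which is essentially Taleb's diagnostic for non-normality.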


I think his point is that Black-Scholes et al are holed beneath the waterline precisely because they involve standard deviations. In his world, you're better off being unable to price an option than you would be with Black-Scholes. Your example of "That put option is currently selling at 30% vol" is actually an example of why the system is so completely broken: if volatility as standard deviation was valid, all options against the same underlying instrument would have the same implied volatility. The volatility smile shouldn't exist.

This wouldn't matter if the down-side wasn't so crippling.

I don't think Taleb has to be the one to propose a replacement for portfolio theory, and I think criticism of him for not doing so is pointless. You don't need to have a spare tire handy to point out that your neighbour's car has a flat, and you don't have to run an airline to tell people not to get on a plane with the engines visibly on fire.


I've never understood this part of Taleb's argument. Of course, constant vol Black-Scholes does not hold. BUT NO ONE USES THIS. Everyone in the financial industry is well aware of the volatility skew, and spends lots of time adjusting for it.

B-S vols are putting the "wrong number into the wrong equation to get the right price", as Rebonato famously said.


> For example, if the data is approximately normally distributed, then there is approximately a 99.99% probability that the next data observation will be <= 4 * sigma.

Could you sketch out why similar statements couldn't be made about MAD? My (possibly flawed) intuition is that the expected proportion of observations within n*MAD should be similarly independent of the parameters of the normal distribution.
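For the Gaussian, at least, that intuition checks out: since MAD = sigma * sqrt(2/pi) for any normal distribution, P(|X - mu| <= n*MAD) = erf(n / sqrt(pi)), which depends only on n and not on mu or sigma. A quick sketch:

```python
import math

# Coverage of n * MAD under any Gaussian: erf(n / sqrt(pi)),
# independent of the distribution's mean and scale.
for n in (1, 2, 3, 4, 5):
    coverage = math.erf(n / math.sqrt(math.pi))
    print(f"within {n}*MAD: {coverage:.4%}")
```

So MAD supports the same kind of coverage statements; 5*MAD plays roughly the role of 4*sigma. The distinction between the two only bites for non-Gaussian data.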


The central limit theorem shows us that unimodal data with lots of independent sources of error tends towards a normal distribution. That description is a good first-pass, descriptive model for lots and lots of contexts, and standard deviation speaks well to normally distributed data.

Squaring error isn't just a convenient way to remove sign, it's driven by a lot of data-sets' conformance to the central limit theorem.


Thank you. I don't think it is intellectually honest for Taleb to omit this fact.


This article is based on a paper Taleb published in 2007. If you want to test yourself, try the experiment on page 3: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=970480


   A stock (or a fund) has an average return of 0%. It moves 
   on average 1% a day in absolute value; the average up move 
   is 1% and the average down move is 1%.
How does that yield an average return of 0%?


The usual way to do this is to take the natural log, so an up 1% day followed by a down 1% day (or vice versa) will always net out to a 0% change.
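A two-line sketch of why log returns net out while simple percentage returns don't:

```python
import math

price = 100.0
# An "up 1%" day then a "down 1%" day in log-return terms:
price *= math.exp(0.01)
price *= math.exp(-0.01)
print(price)  # exactly back to 100

# The same moves as simple returns do NOT net out:
simple = 100.0 * 1.01 * 0.99
print(simple)  # 99.99
```

Log returns are additive across days, which is why they are the usual convention for this kind of statement.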


It goes up a little more often than it goes down?


Yes, start by writing a confusing question. One that starts talking about average up moves and average down moves and then switches to asking about the standard deviation of moves. Then publish a paper showing that people were confused by your question. Now you have "research" to back up your claim that everyone is confused by "mean deviation" and "standard deviation".


Thanks! Always nice to read the full research.


I think because it's called "standard deviation" that it sounds like the thing to use or look for. It sounds more correct because of the word standard.

I feel like it is the same kind of failing due to human perception of language that programmers have with the idea of exceptions and errors, especially the phrase "exceptions should only be used for exceptional behaviors". That's a cool phrase, but people latch on to it because of the word exception sounding like something extremely rare and out of the ordinary whereas we see errors as common, but they are in fact the same thing. Broke is broke, it doesn't matter what you call it, but thousands of programmers think differently because of the name we gave it.

We are human and language absolutely plays a role in our perception of things.


> I think because it's called "standard deviation" that it sounds like the thing to use or look for.

Yes! Because it's an awesome trick and lets you do good estimates on napkins.

The other day I was buying lunch at a food cart and thought about how much change the food carts had to carry, as a function of how many customers they have, under the assumption that they want to be able to provide correct change to 99% of their customers.

Let's say that the average amount of change a customer needs is $5, and a 99-th percentile customer needs $15 in change. If we pretend that the distribution is approximately Gaussian we can calculate that 1,000 food carts with 1 customer each would need $15,000 in change, but 1 food cart with 1,000 customers would need $5 x 1,000 + ($15 - $5) * sqrt(1,000) ≈ $5,320. That's math you can do in your head without a calculator (being a programmer, 1,000 ≈ 2^10 so sqrt(1,000) ≈ 2^5).
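The same napkin math as a sketch, using the toy numbers above (the $15 99th-percentile figure stands in for mean plus roughly 2.33 sigma):

```python
import math

mu = 5.0    # average change a customer needs, in dollars
p99 = 15.0  # 99th-percentile change a single customer needs
buffer_per_customer = p99 - mu

def change_needed(customers):
    # Under a rough Gaussian assumption, the buffer for the sum of n
    # customers grows like sqrt(n), not like n.
    return mu * customers + buffer_per_customer * math.sqrt(customers)

print(change_needed(1) * 1000)  # 1,000 carts, 1 customer each: 15,000.0
print(change_needed(1000))      # 1 cart, 1,000 customers: ~5,316 (the "$5,320" above)
```

The sqrt(n) scaling of the pooled buffer is the whole trick: aggregating customers makes the required safety margin per customer shrink.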

The standard deviation and assumptions of normality are so useful because of the central limit theorem. That is, if you have many iid variables which have finite standard deviation the sum will converge to a Gaussian distribution as the number of variables increases.

Then you say "Well, the standard deviation weighs the tail too heavily" and the response is "well, use higher order moments then; that's what they're for".


It's a neat math trick, but it seems more accurate to say this lets you calculate bad estimates on the back of a napkin. Unless you really think food carts carry $5000 in change.

The quantitative work I do has to do with measuring latency, where the minimum, median, 90%, and 99% values are more meaningful than the mean or standard deviation. Programs typically have a best-case scenario (everything cached) and a long one-sided tail.
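A toy sketch of that shape: a fixed best case (everything cached) plus an exponential one-sided tail, with percentiles reported alongside the mean and standard deviation. The numbers here are made up for illustration, not from any real system:

```python
import random
import statistics

random.seed(1)
# Toy latency model: 1ms cached floor plus a long one-sided tail (mean 5ms)
latencies = sorted(1.0 + random.expovariate(1 / 5.0) for _ in range(10_000))

def pct(p):
    # Nearest-rank percentile over the sorted sample
    return latencies[int(p / 100 * (len(latencies) - 1))]

print(f"min={latencies[0]:.2f} p50={pct(50):.2f} p90={pct(90):.2f} p99={pct(99):.2f}")
print(f"mean={statistics.fmean(latencies):.2f} stdev={statistics.stdev(latencies):.2f}")
```

For a skewed one-sided distribution like this, mean minus one standard deviation would predict impossible sub-floor latencies, while the percentiles describe what users actually experience.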


Saying that it's silly to think that food carts have $5,000 in change is unproductive because I was illustrating how the calculation works, not how the economics of food carts works, and the numbers I used for illustrative purposes were not intended to reflect reality. (1,000 customers in a day? Not likely, my guess is 200 for the busiest food carts.)

But it's good to have bad estimates, at least, it's better to have bad estimates than it is to have no estimates at all. I'm not saying that standard deviation is a substitute for more thorough analysis, just that standard deviation is an improvement over just talking about the mean.

Another example: We'd like to hire you, the mean number of hours per week you'd work is 40.

Versus:

We'd like to hire you, the mean number of hours per week you'd work is 40, and the standard deviation is 15. So your bad estimate is that you'd have two 70-hour weeks each year. But it's better than no estimate.


Sure, two points is better than one but what's special about two? I'd rather have a graph. We have computers so there's rarely a reason to compress the data so much.


We often have to compress the data down to a single decision or statistic: yes/no should I accept the job offer, how much money should I save before buying a house, or what's the probability that I'll die in the next 10 years.

I hate to quote XKCD, but it's like saying your favorite map projection is a globe (http://xkcd.com/977/). Yes, you've preserved all the data, but even with computers, your beloved graph will not make it all the way to the end.


Preserving all the data is the logical endpoint but that's not what I was suggesting. I'm just saying there's nothing special about keeping two points.

I'd rather not feed two points to my decision algorithm, whether it's machine learning or a human looking at the data. It makes more sense to make some attempt to preserve the shape of the graph unless you have strong reason to believe it's Gaussian, and even then the assumption should be checked.


I really tried to get through "The Black Swan" and Taleb's writing struck me as so pretentious and self-involved that it made it impossible for me to finish.

He strikes me as someone who is so desperate to be important and recognized that an assertion like this doesn't really surprise me.


His Facebook doesn't inspire a lot of confidence. It's full of vague availability-heuristic-targeting generalities.

"Virtue is when the income you wish to show the tax agency equals what you wish to show your neighbor"

"The problem is that academics really think that nonacademics find them more intelligent than themselves",

etc

I've read black swan as well, and there were parts I didn't quite grok at the time (You cannot predict a black swan! etc)

My take is that he's a Malcolm Gladwell with numbers and therefore less easily takedownable but has the same model of "hold prestigious academic position, appear wise, publish book with simple principle, be easily referenceable by people who want to sound well-read" except he has lots of math and charts to point at lest anyone call him out on it.


Taleb's problem is an intense narcissism, allied with strong intelligence, but not as strong as his narcissism would lead him to believe. So we have to listen carefully to find his useful insights while wading through the bombast.

Don't try to engage with him critically, but constructively, directly on the internets though, because he prefers acolytes to arguments!


I started reading that book and I thought he was a genius, but then, right about the point where he starts bagging on the Uncertainty Principle, I realized that he's actually kind of an idiot.

The book does make a good point though.


Maybe I'm the idiot, but I found the book's point to be pretty trivial. "Sometimes unexpected things happen, and people are bad at expecting the unexpected" seems pretty damn trivial to me.

Maybe it's because I think like a programmer rather than a finance guy, but a large portion of what I do on a daily basis is about mitigating risk from the unknown. A programmer who only guards against known risks is going to get his ass handed to him sooner rather than later (these are the sorts of people who use blacklists and regexes to sanitize SQL). A finance guy who only guards against known risks is just playing the averages. My experience with traders is that they tend to be pretty abstracted from reality and like to impose rules and patterns where there are none. I can see why those sorts of people would find Taleb's work groundbreaking.

Taleb's undoubtedly intelligent, but I feel like he woke up one day, decided that he wanted to be a philosopher who brings wisdom to the masses, and built his temple on a mind-searingly obvious principle, which he now proclaims to be his great gift to humanity, for which he should be praised, hallowed be his name. The impression I got off of him was that he considers himself a prophet, imparting a word to the masses that the rest of us are just too stupid to recognize. Gag me.


I agree with you except that you left out the only really important (in my opinion) part of his pretty trivial message: "Sometimes unexpected things happen, and people are bad at expecting the unexpected, and then they FAIL TO NOTICE that they are bad at expecting the unexpected"

It can be difficult for folks (myself included!) to separate their message from their irascible personalities.. ;-)


But if we were good at noticing that we're bad at expecting the unexpected, then we would expect the unexpected, making it no longer really all that unexpected, no? His whole argument just felt tautological to me.


If you really try, for a long time, you can be that rational. Please note, however, that 99% of people are not that rational, and if you haven't realized it, you're also not that rational. (Neither am I! But at least I realize I have a problem... b^)

If none of this makes sense, take a look at the LessWrong site.


Not really a surprise for someone who was so in thrall to Benoît Mandelbrot, another world-class narcissist.

Here's Taleb on Mandelbrot: "[He] had perhaps more cumulative influence than any other single scientist in history, with the only close second, Isaac Newton."

That quote really says it all.


fascinating.


While the mean deviation as presented is slightly nicer than sigma for intuitive purposes, it isn't as appropriate (iirc) for statistical tests on normal distributions and t-distributions.

More importantly, it doesn't fix the real problem, which is that the mean and standard deviation don't tell you everything you need to know about a data set, but often people like to pretend they do. It's not rare to read a paper in the soft sciences which might have been improved if the authors had reported the skewness, kurtosis, or similar data which could shed light on the phenomenon they're investigating. These latter statistics can reveal, for instance, a bimodal distribution, which could indicate a heterogeneous population of responders and non-responders to a drug, and that's just one example.

I'm not a statistician, so some of this might be a bit off.
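A sketch of the bimodal point: a 50/50 mixture of two tight modes can roughly match a unimodal sample's mean and standard deviation while its kurtosis gives the shape away (toy data, not from any real drug study):

```python
import random
import statistics

random.seed(2)
# Two samples with similar mean and stdev but very different shapes:
unimodal = [random.gauss(0, 1) for _ in range(10_000)]
bimodal = [random.gauss(-1, 0.2) if random.random() < 0.5 else random.gauss(1, 0.2)
           for _ in range(10_000)]

def kurtosis(xs):
    # Fourth standardized moment; ~3 for a Gaussian, lower for flat/bimodal data
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean(((x - m) / s) ** 4 for x in xs)

for name, xs in (("unimodal", unimodal), ("bimodal", bimodal)):
    print(f"{name}: mean={statistics.fmean(xs):+.2f} "
          f"stdev={statistics.pstdev(xs):.2f} kurtosis={kurtosis(xs):.2f}")
```

Mean and standard deviation alone cannot distinguish these two populations; the kurtosis (well below 3 for the mixture) immediately flags that something non-Gaussian is going on.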


Whenever you try to describe a large data set with a single number, you lose a lot of information, like you said. Having more measurements helps, but I think the larger point is that we don't have to do this anymore, we could publish the entire data set instead.

Without computers, this would be a waste of paper, but transmitting the data electronically is cheap.

So why argue over the measurements? Publish the data and my software can give me any measurement I'm interested in.


So first about the article:

>>The notion of standard deviation has confused hordes of scientists

What an assertion! It also proved to be very useful for hordes of scientists... what about some examples of confused scientists?

>>There is no scientific reason to use it in statistical investigations in the age of the computer

As someone who uses it daily I am eagerly awaiting his argument.

>>Say someone just asked you to measure the "average daily variations" for the temperature of your town (or for the stock price of a company, or the blood pressure of your uncle) over the past five days. The five changes are: (-23, 7, -3, 20, -1). How do you do it?

Ok... if I am to calculate the average, I calculate the average; if I need to know the standard deviation, I calculate the standard deviation...

>> It corresponds to "real life" much better than the first—and to reality.

What the flying fuck. What "real life"? Standard deviation tells you how volatile measurements are, not what the mean deviation is. Those are both very real-life things, just not the same thing.

>>It is all due to a historical accident: in 1893, the great Karl Pearson introduced the term "standard deviation" for what had been known as "root mean square error". The confusion started then: people thought it meant mean deviation.

I don't know how one can read it and not think: "is this guy high or just stupid?".

>>. The confusion started then: people thought it meant mean deviation.

I have yet to see anybody who thinks that standard deviation is mean deviation. It's Taleb, though. Baseless assertions insulting groups of people are his craft.

>>What is worse, Goldstein and I found that a high number of data scientists (many with PhDs) also get confused in real life.

One example, please? I can give hundreds where std dev is useful and mean deviation isn't. Anything where you decide what % of your bankroll to bet on a perceived edge, for example.

Ok, so he asserted that people should just use mean deviation instead of the mean of squares. Guess what, though: taking the squares has a purpose. It penalizes big deviations, so two situations which have the same mean deviation, but where one is more stable, have different standard deviations. This information is useful for many things: risk estimation, or calculating the sample size needed for a required confidence level (how many more experiments you need, how careful you should be with conclusions and predictions, etc.). He didn't mention how we are going to achieve those with his proposal. Meanwhile, he managed to throw insults at various groups without giving one single example of the misuse he describes.
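A minimal numeric illustration of that property, with two made-up data sets sharing the same mean deviation:

```python
import statistics

def mad(xs):
    # Mean absolute deviation about the mean
    m = statistics.fmean(xs)
    return statistics.fmean(abs(x - m) for x in xs)

# Both data sets have mean 0 and mean absolute deviation 2 ...
steady = [-2, -2, 2, 2]
spiky = [-4, 0, 0, 4]
print(mad(steady), mad(spiky))  # 2.0 2.0

# ... but the spiky one has a larger standard deviation,
# because squaring penalizes the big moves:
print(statistics.pstdev(steady), statistics.pstdev(spiky))  # 2.0 vs ~2.83
```

MAD sees the two as identical; the standard deviation flags the spiky series as riskier, which is exactly the property at issue.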

This is not the first time he has written something this way. His whole recent book is like that. It's anti-intellectual bullshit with many words and zero points. He doesn't give any arguments, he throws a lot of insults, he misuses words, and he makes up redundant terms which he then struggles to define. The guy is a vile idiot of the worst kind: ignorant and aggressive. Him gaining so much of a following by spewing nonsense like this article is, for sure, fascinating, but there is no place for him in any serious debate.


>> The notion of standard deviation has confused hordes of scientists

> What an assertion! It also proved to be very useful for hordes of scientists... what about some examples of confused scientists ?

He is exaggerating, for sure. But the point is valid: the mean absolute deviation (MAD) is often very different from the standard deviation (STD), and the MAD is more intuitive, with a natural geometrical interpretation; STD's squaring of the distance makes it more complex.

And yes, this confuses people in some cases, including scientists. Many scientists are not statistical experts, they use tools as they were taught, and they often assume MAD is approximately the STD, because it usually is, except in rare cases when it is not. I've seen examples of those people in grad school, he is not making this up.

The STD is far easier to analyze in a mathematical way. That is the huge value it brings: squaring is an operation you can take the derivative of, but absolute value you cannot. STD gives us nice properties, like the easily provable fact that the sum of variances is the variance of the sum for independent variables.

MAD, however, is nicer for reporting data since it is more intuitive. I think he makes a valid point that STD is used more frequently than it should be.
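A sketch of the additivity point above: for independent variables, sample variances add (up to noise), while MADs do not:

```python
import random
import statistics

random.seed(3)
# Two independent Gaussian samples and their elementwise sum
x = [random.gauss(0, 1) for _ in range(100_000)]
y = [random.gauss(0, 2) for _ in range(100_000)]
s = [a + b for a, b in zip(x, y)]

def mad(xs):
    m = statistics.fmean(xs)
    return statistics.fmean(abs(v - m) for v in xs)

# Variances add: Var(X) + Var(Y) ~= Var(X + Y) ~= 5
print(statistics.pvariance(x) + statistics.pvariance(y))
print(statistics.pvariance(s))

# MADs do not add: MAD(X) + MAD(Y) ~= 2.39, but MAD(X + Y) ~= 1.78
print(mad(x) + mad(y))
print(mad(s))
```

This additivity is what makes variance (and hence standard deviation) the natural bookkeeping unit when combining independent sources of error.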

> Ok so he asserted that people should just use mean deviation instead of mean of squares. Guess what though, taking the squares have a purpose: it penalizes big deviations so two situations which have the same mean deviation but one is more stable have different standard deviations.

His point is that many people are not aware of that property and do not want it.


I'm relieved to hear from others who didn't love his book; now I don't feel so left out. I checked out Black Swan from the library a while back. It seemed like Taleb's smug prose and mudslinging were writing a check his evidence couldn't cash, but I couldn't stand more than the first 50 pages, so I never got to find out. My girlfriend read all of it and sort of gave me the cliff notes; she thought I wasn't missing much.

If I ever met anyone in real life who was as impressed by his book as the inexplicably fawning blogs and reviews I have seen, I would give it another try, but I sort of feel like I've been had. At least it was a library book, so I didn't give him 12 bucks.

Anybody here love his stuff, wanna convince me to try again?


I couldn't make it through 50 pages of Black Swan. I haven't read his newest book.

Fooled by Randomness[1] was very good. It's more of a set of cautionary tales, mostly pulled from Wall Street, about how to think about chance and making sure you're judging all events against what is expected by chance.

The thing about Taleb is that all of his success hinges on a single, long-ago tail event that few people even remember. It defines his entire outlook, and he often assumes that people who interview him or read his articles know this. The article in question notwithstanding, if you mentally insert "when discussing tail events" before his claims, many of them make much more sense :)

[1] http://www.amazon.com/Fooled-Randomness-Hidden-Chance-Market...


What is hilarious is that I have been scratching my head over the exact opposite phenomenon!

I actually love Taleb's books (Black Swan, Antifragile). And I work with "data scientists", most with Ph.D.s, many in statistics.

I always talk about his books, but I cannot for the life of me find a single person who's even read one. I think: these are some of the ONLY books about stats on the NY Times best-seller lists (up until Nate Silver). And yet all these professionals have not only not read them, but barely even heard of them?

My hypothesis was that it is more popular on the East Coast than the West Coast (of the US). Are you from either of those places? I am originally from the east coast but work on the west coast. To hand wave a bit, I feel like west coast people are less into "ideas" and more into actions and experiences. Taleb's ideas do have somewhat of a nytimes-ish new yorker-ish east coast culture flavor. And a lot of people working in Silicon Valley are not really that interested in philosophy.

On the subject of his writing, I can totally see that people can be turned off by his writing. He can be arrogant and insulting. I find it kind of funny, but that's a matter of taste.

A few things I remember from his books that I really liked:

- The story of Nobel prize winner Myron Scholes, namesake of the Black-Scholes equation, which I learned about in computational finance in college. He started a company, "Long Term Capital Management", to monetize these ideas, and promptly lost billions of dollars. http://en.wikipedia.org/wiki/Myron_Scholes

That's not interesting? I think the difference between theory and practice is intensely interesting, and Taleb has a lot to say about it.

- I largely agree with his philosophy that people who claim to know things cause more harm than good. The downfall of Alan Greenspan and Bernanke is their arrogance. They think they can control the economy. But they can't and caused millions of people real harm.

- Respect for the old. For all its virtues, Silicon Valley does have a severe case of "neomania". Taleb's ideas about things that last apply to software too. Unix and C are going to be around a lot longer than say Hadoop or Puppet/Chef.

- The philosophy of fragility also applies to software in a straightforward way. Most people know this now, but you should continually expose your software to users and the market, not build up grand ideas in your head.

- actions over knowledge, i.e. people who know how to do things but can't explain them

I could list a half a dozen more important ideas but I'll stop there.

I did write in my comments about this book that he overreached on his "trilogy" idea for the Antifragile. But I do like how he draws together a lot of seemingly disparate ideas that are philosophically related.


Ok, you convinced me to try again, thanks.

FWIW, I'm from the west coast, but have spent the last several years in New York working with natural sciences students and post-docs. I might be what you would call an "ideas" person. Abstraction appeals to me. People in our department seem happier when their work is closer to physical observations. Measuring stuff with yard sticks and radar: good. Getting big piles of data from other people, and doing stats: OK. Fitting a parameterized model to someone else's data: healthy skepticism. Of course, healthy skepticism is generally cultivated.

It wouldn't be surprising if our friends' reading lists (and reactions) depend on what they do for a living. Did you meet a lot of people in finance or economics on the east coast? Maybe people's professions are spatially correlated.

My peers seem to treat models cautiously (including their own), and have tended to respond to abstract economic ideas with measured skepticism. I can't speak for them, but I sometimes perceive that economic arguments are suspected of being insufficiently empirical and subject to ulterior motives. Which, it seems, is at least a part of what Taleb is complaining about. Obviously these things can be true of any kind of argument; I'm just reporting my impression. Anyhow, it would be easier to listen to Taleb if his tone were more restrained.

Most of the economists I have paid attention to (not nearly as many as I would like) seemed inclined to provocation. Sowell in "Basic Economics", and Friedman in his speeches (I haven't read his papers), tend to poke fun at their fellow citizens, for example. I think they sometimes alienate those outside their discipline because of this. It makes reading fun though. I find Friedman very amusing, and I think so does he.


I am a programmer in Silicon Valley, but I was more interested in philosophy/mathematics when I was young (I was raised and educated on the east coast). I do feel there is a cultural difference -- not sure precisely what it is though.

I guess I share Taleb's problems with models. Even before I read Taleb I would say to myself "the map is not the territory", particularly with regard to software abstractions. I think the space between the model and reality is where you find a lot of interesting things (including the ability to make a lot of money).

I also share Taleb's skepticism about economics. The core problem is that it's not really a predictive science. It's a lot of people talking about stuff. Did those ideas help anyone? You can make a good case that they hurt a lot of people. If they are so smart, why aren't they rich? The Scholes case is a great example of that.

I'm currently reading "The Signal and the Noise" by Nate Silver, which is actually a fantastic complement to Taleb's books. They say very much the same things, in very different ways. The good part is that you will not be turned off by Silver's prose -- he's humble and very readable. I didn't follow 538 at all, and didn't pay all that much attention to the 2012 election, but I can tell that his writing skill was a big reason he became so popular.

To give an example, Taleb talks over and over about "negative knowledge" -- what not to do, what things don't work, etc. And Nate Silver says the same thing. To make accurate predictions and models, you have to be aware of known classes of mistakes, cognitive biases, etc. and not fall into those traps. People often think that they need to improve themselves by learning more. But for reasonably smart people, the bottleneck to your effectiveness is actually thinking that you know something you don't.

I am also an "ideas" person but I share the utilitarianism and empiricism of Taleb. There has to be "skin in the game", as he says. There are so many ideas out there, and generally most philosophical arguments (and journalism, advocacy, etc.) boil down to confused semantics. So the way to find truth is through actions and experiments. Economics fails these tests for truth.

EDIT: Nate Silver talks about this paper: http://www.plosmedicine.org/article/info:doi/10.1371/journal... This would resonate with Taleb quite a bit. Most ideas are false, including published ones. It would actually violate economic theory if that weren't the case -- if most science was true -- because scientists have bad incentives (something I know from direct experience).


I think his books are a good holiday read. The targets are people in finance (many of whom are charlatans) and economists.


> and the MAD is more intuitive, it has a natural geometrical interpretation

It's less intuitive, not more intuitive, at least for me. And the standard deviation definitely has a more geometric interpretation than MAD. If you measure a hundred samples, and you want to figure out how much they differ from the expected values, what could be more intuitive than Euclidean distance? But most people never bother to try and extend their intuition about ℝ^3 to ℝ^100 to realize how simple standard deviation truly is.

What is being advocated here is the use of the L_1 norm (MAD) over the familiar L_2 norm (standard deviation). Everybody knows and understands L_2, and L_2 has a lot of desirable properties.
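To make that concrete: the two measures are just differently scaled L_1 and L_2 norms of the same deviation vector. A quick sketch of this, assuming numpy (the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)   # 100 samples -> a point in R^100
d = x - x.mean()           # the deviation vector

# standard deviation: L2 norm of the deviations, scaled by sqrt(n)
std_via_norm = np.linalg.norm(d, ord=2) / np.sqrt(len(x))
# MAD: L1 norm of the deviations, scaled by n
mad_via_norm = np.linalg.norm(d, ord=1) / len(x)

# agree with the usual definitions, up to float rounding
assert np.isclose(std_via_norm, x.std())           # population std (ddof=0)
assert np.isclose(mad_via_norm, np.abs(d).mean())
```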


This makes no sense. If you want to know how much they differ from the expected value, you first define what you mean by difference (i.e. the L_1 or L_2 norm) and then measure it somehow. The standard deviation is an estimator for the square root of the expected value of the squared deviation (the squared L_1 distance, which is the same as the squared L_2 distance in 1-D). The MAD takes the mean of the L_1 distance.

While the standard deviation is proportional to the L_2 distance between the vector of the samples and an equal-sized vector with all coordinates equal to the mean, that's not an intuitive expression based on the problem statement.


> that's not an intuitive expression based on the problem statement

That's the danger I'm talking about when you start using the word "intuitive". Intuition is relative, and someone who works with mathematics or statistics will develop a mathematical intuition about things. Just like if you're an experienced driver you'll intuitively know when other drivers are about to change lanes, even before they signal.

I think of L_2 as more intuitive because physical space uses the L_2 norm.

The other thing that makes L_1 counterintuitive is that if you measure the absolute deviation from the mean, then you aren't minimizing the deviation—in order to do that, you have to choose the median.

In other words, you say "this is the center" and "this is the measure of how far away everything is from the center", but you could have picked a different center which has a lower distance from your data. Counterintuitive.
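This is easy to check by brute force: the minimizer of summed squared deviations is the mean, while the minimizer of summed absolute deviations is the median. A sketch assuming numpy, with a toy skewed sample of my own choosing:

```python
import numpy as np

data = np.array([1.0, 2.0, 2.0, 3.0, 10.0])   # toy skewed sample

# scan candidate "centers" and evaluate both loss functions
centers = np.linspace(0, 12, 100001)
sq_loss  = np.array([((data - c) ** 2).sum() for c in centers])
abs_loss = np.array([np.abs(data - c).sum() for c in centers])

best_sq  = centers[np.argmin(sq_loss)]    # ~3.6, the mean
best_abs = centers[np.argmin(abs_loss)]   # ~2.0, the median
```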


Maybe I misunderstood what you meant, but I would say that the L_2 norm amounts to MAD. L2 means

   sqrt(x²+y²+...)
and in 1D, that becomes

   sqrt(x²) = |x|


In 1D, L_2 = L_1. But if you are talking about 100 sample points, the data has 100 dimensions.


Ahhh, I see what you mean, I think. Or maybe not. If I have 3 sample points, [1,1,1], then the standard deviation is 0, but where do you take your Euclidean distance?

If I'm not mistaken, with 3 sample points a, b and c that have an average mu, then

  sigma = |(a, b, c) - (mu, mu, mu)| / sqrt(3)   (L2 norm in R3)
Is that the geometric interpretation you're referring to? It's neat, but the mu vector feels a bit artificial.
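For what it's worth, it checks out numerically once you put the 1/sqrt(n) scaling in (a quick sketch, assuming numpy; the sample values are arbitrary):

```python
import numpy as np

a, b, c = 2.0, 5.0, 11.0
v  = np.array([a, b, c])
mu = v.mean()

# L2 distance in R^3 between the sample vector and the "all-mu" vector
dist = np.linalg.norm(v - np.full(3, mu))
sigma = dist / np.sqrt(3)   # scale back down to the population std

assert np.isclose(sigma, v.std())
```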


the MAD is more intuitive

Which one, the mean average deviation or the median average deviation?


I think the A is for absolute (or it wouldn't make much sense as a measure of dispersion)


You're right, sorry.


I guess that's a matter of opinion. I personally meant the mean average deviation.


Huh? I... I don't know that. AAHAHAHAHHAHAHAhahaahahahhhhhhhhh.....


> The STD is far more easy to analyze in a mathematical way. That is the huge value it brings

This is also what I have read (I have essentially no experience with statistics).

> squaring is an operation you can take the derivative of, but absolute value you cannot

We can talk without wild, obviously false hyperbole. It's trivial to take the derivative of the absolute value function. Working with it is more difficult, but taking the derivative is as easy as anything in math:

    f'(x) = -1 (when x < 0); 1 (when x > 0)


"Easy as anything in math"? Now that's "obviously false hyperbole". You can't just brush away that discontinuity, specially when it's on the minimum (which is what people most often care about). Anyway, this is formalized in the notion of subderivative. http://en.wikipedia.org/wiki/Subderivative


You'll learn to take the derivative of |x| in high school calculus, shortly after learning how to take derivatives in general. It's not complex, and the fact that it's not defined at the origin doesn't mean there's a problem in taking it; it just means the function behaves "badly" there. I don't understand the focus on the discontinuity anyway; in working with the function, I'd be more concerned about the fact that it's piecewise defined.

Consider a variant on the absolute value function:

    f(x) = x (when x >= 0); sin x (when x <= 0)
It and its first and second derivatives are all continuous (it's the third derivative that comes apart at the origin), but if you were working with the derivative, even though it's differentiable everywhere, you'd have to constantly be aware of whether the origin was in your domain. The great advantage of working with x^2 instead is that it behaves the same way everywhere, not that its derivative is continuous.
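A quick numeric check of that claim, with the piecewise derivatives written out by hand (a sketch, assuming numpy):

```python
import numpy as np

f  = lambda x: np.where(x >= 0, x, np.sin(x))     # the variant function
f1 = lambda x: np.where(x >= 0, 1.0, np.cos(x))   # f'
f2 = lambda x: np.where(x >= 0, 0.0, -np.sin(x))  # f''
f3 = lambda x: np.where(x >= 0, 0.0, -np.cos(x))  # f'''

eps = 1e-9
# f, f', and f'' agree from both sides of the origin...
assert np.isclose(f(-eps), f(eps))
assert np.isclose(f1(-eps), f1(eps))
assert np.isclose(f2(-eps), f2(eps))
# ...but f''' jumps from -1 to 0 there
assert not np.isclose(f3(-eps), f3(eps))
```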


This occurred to me after the editing window for my other reply passed, but...

The tangent function has discontinuities at x = pi/2 and every interval of pi from it. When taking the derivative, is it more fruitful to say "you can't just brush away those discontinuities", or "sec^2 x"? And this is a derivative all calculus students are required to know!


That little discontinuity at the origin is kind of a pain though.


Oh, completely agreed. I was enlightened when I read that standard deviations were defined the way they were in order to make analysis easier; it seems more than reasonable to me. But there's no difficulty in just taking the derivative of |x|, and I hate seeing random misinformation out there.

http://xkcd.com/386/


> MAD, however is nicer for reporting data since it is more intuitive.

I have a hard time fathoming how anyone could think such a thing. This is math: we should be using things because they map to reality in some way, not because they are aesthetically pleasing.

The standard deviation maps to processes where the "importance" of changes is proportional to their square. For example, electrical power is proportional to the square of voltage, so AC power systems are conventionally rated by standard deviation. My 120 V power outlet has a standard deviation of its potential of 120 volts.

Standard deviation is also commonly used in situations where we cannot put a number on the importance of the deviation but we know it is big.

The mean absolute deviation is useful for numbers that are directly proportional to their importance. For example, if we have a hundred lamps and we measure their optical power outputs, the MAD would be a useful measure of their variation. Optical power is already in units of oomph.
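To illustrate the two conventions on the same signal: for a zero-mean sine wave, the standard deviation is exactly the familiar RMS rating, while the MAD comes out to a different, smaller number. A sketch, assuming numpy:

```python
import numpy as np

# one full cycle of a 120 V (RMS) mains waveform: peak = 120 * sqrt(2), ~170 V
t = np.linspace(0, 1, 1_000_000, endpoint=False)
v = 120 * np.sqrt(2) * np.sin(2 * np.pi * t)

rms = np.sqrt((v ** 2).mean())   # std about the zero mean -> 120 V
mad = np.abs(v).mean()           # mean absolute deviation -> peak * 2/pi, ~108 V
```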


> The standard deviation maps to processes where the "importance" of changes is proportional to their square.

Sure, but the point of the article is that STD is used very commonly, in places where that does not make sense. For example, it is common to see things like "the height of the test subjects was 170 cm (STD 5 cm)".


Why does this not make sense? Please explain.

Because of the central limit theorem, many distributions encountered in science are approximately Gaussian, which is parameterized by its mean and standard deviation. According to Wikipedia: "Height is sexually dimorphic and statistically it is more or less normally distributed, but with heavy tails."

On top of that, we have well-understood and easy to compute estimators for standard deviation. Using the sample variance is not a bad estimator at all, the only real disagreement is whether you divide by N or N-1.
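The N vs N-1 point in code (a sketch, assuming numpy, where the `ddof` parameter selects the convention; the heights are made up):

```python
import numpy as np

x = np.array([170.0, 165.0, 175.0, 172.0, 168.0])  # heights in cm

sd_pop    = np.std(x, ddof=0)  # divide by N (population convention)
sd_sample = np.std(x, ddof=1)  # divide by N-1 (Bessel's correction)

# same sum of squared deviations, different denominators
ss = ((x - x.mean()) ** 2).sum()
assert np.isclose(sd_pop ** 2, ss / len(x))
assert np.isclose(sd_sample ** 2, ss / (len(x) - 1))
assert sd_sample > sd_pop   # the N-1 version is always slightly larger
```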


It makes no sense in that it has no physical meaning. As you say, even without deeper meaning it is a fine tool for handling Gaussian distributions.


Indeed, why does it not make sense? I read that, and I immediately know that about 68% will be between 165 and 175 cm. http://en.wikipedia.org/wiki/68–95–99.7_rule


That rule only applies to the normal distribution. So you would have made an assumption.
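Exactly; a quick simulation shows both sides, the rule holding for a Gaussian and failing for a Laplace distribution with the very same mean and SD (a sketch, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(42)

# Gaussian heights: mean 170, SD 5 -> ~68.3% within one SD
gauss = rng.normal(loc=170, scale=5, size=1_000_000)
gauss_1sd = ((gauss > 165) & (gauss < 175)).mean()

# Laplace with the same mean and SD (its std is scale * sqrt(2))
lap = rng.laplace(loc=170, scale=5 / np.sqrt(2), size=1_000_000)
lap_1sd = ((lap > 165) & (lap < 175)).mean()

# ~0.683 for the Gaussian, ~0.757 for the Laplace: same SD, different rule
```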


This comment was great fun to read, and I suspect that I share some of your opinions about Taleb (although mine are decaying with each day that passes since I last finished reading any of his books), but it would have been stronger as an argument if you had left off the part about him being a "vile idiot".


I just can't let this pass. I gave my best at reading his book. The guy is unable to write one paragraph which doesn't contain some nonsense or serious misuse of established terms. He insults whole groups of people and institutions without giving any examples or research. I am pretty sure that if his stuff were posted without his name, the conclusion would be obvious: it's just nonsense, and very offensive at that.

I realize calling people idiots is frowned upon here, but I am ready to defend it. There isn't one other author who infuriates me with his ignorance and inability to make a specific point without throwing condescending remarks. If there is anybody who deserves the term, it's him. I mean, "we should retire standard deviation because everybody misuses it and it's not useful for real life" without any examples? He just called out thousands of scientists, risk managers and statisticians without giving one single argument. His suggestion to just use mean deviation is laughable and obviously doesn't work for many purposes, which he didn't address while being condescending in the process.

It's worse than saying: "we should retire Java and just use Python because it has dynamic typing and seemingly developers at even serious companies fail to see this!". A remark of this kind would earn me the "idiot" badge pretty quickly. This article is on that level (well, actually lower, as the remark about Python at least contains an argument) and his book is worse. If that doesn't qualify him as an idiot, I don't know what else could.


The guy is clearly not an idiot in the general sense; I bet he is of, at least, average intelligence. I also have my doubts that he is 'vile' for the majority of connotations and denotations of that word. Because of this, just shouting 'he's an idiot!' is vulnerable to being dismissed as hyperbole, and your baby gets thrown out with the bathwater.

You seem exasperated at his condescending attitude, but make no effort to hold higher ground. That said, even dismissing him as a 'crank' would add more substance to your argument than calling him a 'vile idiot.'

I'm not saying you have to play softball, and there are plenty of targets for your rage. Call his condescension vile, and his poorly-thought out conclusions laughable. Instead of saying he's an idiot, you can say that he's being lazy and should know better exactly because he's NOT an idiot.


I don't understand why calling him a "vile idiot" is so terrible. Vile is a subjective assessment in the eye of the beholder. Bluecalm began with a few paragraphs making the case that Taleb has pretensions of being an expert where he doesn't know what he is talking about, and that his statements are unsupported.

Calling him a crank is a more objective statement, and to me seems to impute an unknowable motive to what Taleb is doing in these types of articles. Is it an egotistical attempt to feel important, a strategy to sell books, or does he believe all the stuff he says?

What I personally find so unpleasant and objectionable about Taleb is probably unfair, but he seems to fit the archetype of an entire group of people who make loud statements that are wrong or unsupported, but take a long and nuanced discussion to explain why.

For example, he seems to be terrible at economic analysis, but portrays himself as an excellent trader, so he asserts that scientific study of the economy is bunk. Even if he really is an outlier in finance, operating on localized phenomena is different from having a comprehensive view of the economy and refining models.

I think his idea of Black Swans is incredibly useful, and it is interesting to explore the psychology of underestimating the likelihood of rare events. His contributions are less useful when he sounds like the people who say "economics is not a science", before making a bold assertion about the economy that has no rigorous models to support it. Such situations are analogous to saying meteorology is not a science when it comes to forecasting whether or not it will rain in ten days. "A science" is a field where everything is known? "A science" is not a field where models are refined through research and experimentation? The science part of meteorology is the process of understanding the mechanisms behind how air masses interact and improving models, not predicting weather at a future date when a good deal of what will determine whether or not it rains has not yet occurred.


>I don't understand calling him a "vile idiot" is so terrible.

Then you are a vile idiot.


Super helpful comment. Thanks. Definitely, the transition from calling public figures "vile idiots" to calling individual commenters on HN threads "vile idiots" is sure to produce a productive discussion.

I started this subthread by pointing out that 'bluecalm had weakened his argument by using the words "vile idiot". But 'bluecalm also made a bunch of testable and non-obvious observations in his comment; his writing had value. Whereas you just jumped in to call someone you disagreed with a name, and nothing else; your writing had no value.

Could you stop writing comments like that?


Did you really understand that comment to be addressed to the writer rather than what was written? Frankly I found it to be a rather more convincing condemnation of the phrase "vile idiot" than your own. FWIW, I don't think anyone here (nor NNT even) is a vile idiot.


>Super helpful comment. Thanks. Definitely, the transition from calling public figures "vile idiots" to calling individual commenters on HN threads "vile idiots" is sure to produce a productive discussion.

The point I wanted to make was that "vile idiot" is problematic in itself, and by my meta-use of it, I hoped the parent would obviously see why.

How is calling public figures "vile idiots" any better than calling individual HN commenters the same?

>Whereas you just jumped in to call someone you disagreed with a name, and nothing else; your writing had no value.

Did it really seem like I called him a name just for the fun of it? Wasn't the meta-context obvious?


Let's say it was worded instead: "... these are some reasons that he was wrong, and his style of argumentation leads me to feel actual antipathy toward him."

Maybe it is because "vile" doesn't sound like a very extreme word to me—almost like Daffy Duck saying "despicable". I suppose I would have had the same reaction, if instead bluecalm had denigrated him with a word that sounds more serious to me like "worthless".


>That said, even dismissing him as a 'crank' would add more substance to your argument than calling him a 'vile idiot.'

Neither "crank" or "vile idiot" would add substance to bluecalm's argument. "Vile idiot" isn't even part of the argument, it's part of the conclusion.

I disagree with bluecalm on the "vile idiot" conclusion - I think that he's either vile or an idiot. If he's really communicating how he sees the world, he's an idiot. If he's just cynically trying to come up with another Gladwell/Taleb style thesis that will catch fire with the NYT and TED crowd and bring in buckets of money, then he's vile.


Yeah a polyglot who writes classical Greek, Arabic, French, English and can do advanced statistical modeling is clearly an idiot right?


Well, it surely looks like it, seeing the level of argument he presents and the lack of logical consistency in his writing. Even in this article he claims SD doesn't model the real world as well as MAD, and then as an argument asks what you would do if you were asked to calculate... MAD; you would clearly see it's not SD :-) His latest book is worse than the article, and my conclusion about him being an idiot is based mainly on it. Here is one gem from his book:

>>True, while humans self-repair, they eventually wear out (hopefully leaving their genes, books, or some other information behind—another discussion). But the phenomenon of aging is misunderstood, largely fraught with mental biases and logical flaws. We observe old people and see them age, so we associate aging with their loss of muscle mass, bone weakness, loss of mental function, taste for Frank Sinatra music, and similar degenerative effects. But these failures to self-repair come largely from maladjustment—either too few stressors or too little time for recovery between them— and maladjustment for this author is the mismatch between one’s design and the structure of the randomness of the environment (what I call more technically its “distributional or statistical properties”). What we observe in “aging” is a combination of maladjustment and senescence, and it appears that the two are separable— senescence might not be avoidable, and should not be avoided (it would contradict the logic of life, as we will see in the next chapter); maladjustment is avoidable. Much of aging comes from a misunderstanding of the effect of comfort—a disease of civilization: make life longer and longer, while people are more and more sick. In a natural environment, people die without aging—or after a very short period of aging. For instance, some markers, such as blood pressure, that tend to worsen over time for moderns do not change over the life of hunter-gatherers until the very end. And this artificial aging comes from stifling internal antifragility.

In which he claims aging is misunderstood and proposes his new theory that it comes from too few stressors or too little recovery. Then he says that "Much of aging comes from a misunderstanding of the effect of comfort - a disease of civilization". Really? Aging comes from misunderstanding? This is just random babbling; there is no sense in it. Reasonable people don't write or talk like this, and those who do with such conviction are called... well, idiots.

>polyglot who writes classical Greek, Arabic, French, English and can do advanced statistical modeling is clearly an idiot right?

If he is in fact a polyglot and in fact can do advanced statistical modelling (the latter I very much doubt, the former I have no idea about), maybe he is not an idiot, but some mental illness is taking a toll on him which makes him write and talk like one. The thing is, there is no continuity; what he sees as arguments don't even address the point. It's just a stream of words without any essence or meaning. I mean, again, read the paragraph I quoted... it's not even cherry-picked. There are worse ones (like the one about depression or academia). The whole book is like that, and the article from the OP just continues the trend.


He does not say aging comes from too few stressors. He says maladjustment (which is just one component of aging) does.

>>> Aging comes from misunderstanding ? This is just random babbling,

You are trying very hard to not understand. The thought is simple - comfort has side effects (think obesity, bad nutrition, lack of movement, overuse of pharmaceuticals like antibiotics or mood adjusters, etc.) which are not properly appreciated (they are starting to be, but we are still far from a proper realization (understanding) of what and how much we pay for it, and from doing something about it). Not understanding those effects influences behavior in such ways that people harm themselves. These effects accumulate and contribute to what is called "aging" - you can eat random junk when you are 20 and be fine, but keep doing it till you're 50 and you'll be the best client of your local healthcare facilities for the rest of your life. And so on and so forth. I won't say it is the deepest of observations - actually, it's rapidly becoming commonplace and sometimes even a fashion - but it is definitely not "random babbling".

I get the impression that you came to a hard conclusion that Taleb literally writes nonsense, and you are set on not allowing any sense that is contained in his writing - and can be easily seen - to get through to you. That's your right, of course, but I personally fail to find any utility in such behavior.


This is what I get too. I've read all his books except the technical one (Dynamic Hedging) and this conclusion of 'nonsense' about a pretty clearly written paragraph is baffling. I too read the paragraph and came to the conclusion you did. Maybe the above commenter has a hard-on for bullet points and power point presentations but Taleb has repeatedly said that he writes essays for pleasurable consumption and not business books (regardless of how the publisher markets them).


I don't see a problem with the sentence "Much of aging comes from a misunderstanding of the effect of comfort - a disease of civilization". I read it like this: people think comfort is good and healthy and prolongs life, so they seek comfort and get fragile/sick - they misunderstand the effect of comfort.


Although it's unfair to nitpick one paragraph since it's out of context from the rest of the chapter/book, reading the aging paragraph makes me see where you are coming from with all this.

The guy can't help himself. It's like reading a stock ticker (except it's Taleb's stream of thought), flashing across with all the different thoughts that don't necessarily correlate with each other. You think there's some relevancy there, but it's hard to pick it out in the moment.

I will say though, Taleb readers are surely great Words with Friends or Scrabble players. You just can't help picking up a few new words.

I don't agree with your 'vile idiot' statement but I mostly concur with your thoughts. I hope that you don't let Taleb affect your senescence.


In fact, his goal is to alienate readers like the commenter above by writing in the old literary style. We are so accustomed to modern non-fiction following a Malcolm Gladwell-like structure for 10-year-olds that we've lost the art of appreciating the meandering, scattershot expression of ideas in a literary style. In short, his filtering works.


no, it's just bad writing. meandering nonsense is never good writing, though perhaps it boosts your ego to feel like you are part of a special club (of millions) that understands his ramblings.


Funny, I find your comments to be doing exactly what they are critiquing. And I think you are letting your hatred of him cloud your willingness to understand what he is trying to say.

I personally find his books mostly valid critiques of established "truths", probably because I agree with much of it. If that makes me a vile idiot I am fine with that. Each to their own.


Too bad there's no upvote button. I agree with you. I also find the comment a little too worked up, with little in the way of useful counterargument. I agree STD has its uses in some situations. Nassim is not saying to get rid of the concept, but rather that a lot of people are not using it correctly. I don't know why everyone is so worked up about that... quite a lot of pointless insults.


> Funny I find your comments to be doing exactly what they are critiquing.

I was about to write the exact same thing. It is very hard to take this criticism seriously when it is written that way.


Replying to you and parents:

> Funny I find your comments to be doing exactly what they are critiquing.

I am really not. I gave specific examples of what is wrong with his article and of the areas where standard deviation works. I am not defending a view that "standard deviation describes reality better" or anything like that. I am saying why his article is bad and in what areas his solution of just using MAD doesn't work. Those are quite specific things. How can you tell I am doing the same thing I am accusing him of? While I didn't mention any specific fragment of his book, I thought it was a useful view to add, as I've spent a lot of time developing it (I've read three of his books and listened to many talks). There is limited space in an internet forum post, and proving my point would require quoting the entire book, as I claimed there is barely a paragraph without nonsense or misuse of terms. Also, my claim is easily verifiable: just start reading "Antifragile" and see for yourself. That can't be said about his claims.

> Nassim is not saying to get rid of the concept, but more like saying a lot of people are not using it correctly.

Really? That's the vibe you get from the article? Let's see:

>it is time to retire it from common use and replace it with the more effective one of mean deviation

>Standard deviation, STD, should be left to mathematicians, physicists and mathematical statisticians deriving limit theorems. There is no scientific reason to use it in statistical investigations in the age of the computer

>as it does more harm than good

>It is all due to a historical accident: in 1893

He is not saying that some people in some situations misuse the concept. He is saying the concept is dangerous and should be disposed of. I get that he is easy to like as he picks on groups that are not generally liked, but let's stick to what he is actually claiming instead of softening it for him. Again, he doesn't give one simple example of a situation that could be handled better by using MAD instead of the currently used standard deviation.


"...common use..."

I don't think it gets much clearer than that.


>>> I am pretty sure that if his stuff were posted without his name the conclusion would be obvious: it's just nonsense and very offensive at that.

This is not true, at least for me. When I first read Taleb's ideas, I had no idea who he was or what his credentials were, but his ideas were interesting enough and challenging enough to make me look him up and find out what else he wrote. Yes, his style is abrasive and sometimes outright combative. So what. His ideas are interesting; that's what matters. I'd rather have an abrasive person who has interesting ideas than a polite one who has nothing.

And speaking of Python vs. Java things, I'm reading those kinds of things literally every day, frequently including here on HN. If idiot badges were distributed for each such occasion, there would be a very sizeable majority wearing them. But I am not convinced that calling each other idiots actually adds anything to the discussion.


Taleb's books are full of stories, random ideas, non-sequitur thoughts and quite a few rants about things that bother him. This essay was clearly an invitation to rant about something, and the more controversial the better.

Sure, he comes across as arrogant, confrontational and uncompromising, and he is unapologetic on top of that; yet dismissing the bulk of his writing based on his personality and style is a shame, as the recurring themes throughout all his books are worth the time spent reading. To paraphrase: we believe and accept a lot of things without question, and let fallacious thinking and cognitive biases lead us into decisions and actions that do us harm.

Also, his books are that rare type that you can pick up, read a few pages of, and have them set off a train of thought for the rest of the day.


...inability to make a specific point without throwing condescending remarks.

Hey, Kettle? It's Pot calling. He seems pretty pissed off about what color you are.



Is it telling that in his review of Black Swan he seriously cites Malcolm Gladwell of all people?


No. He draws no conclusions from Gladwell's analysis; he brings Gladwell into the discussion only to show a well-publicized comparison between what Gladwell believed to be two opposing views of how risk works in the markets. He then tries to demonstrate that Niederhoffer's approach outperformed Taleb's, not because Gladwell said so, but because the empirically available facts of how the funds operated appear to demonstrate it.

Gladwell plays into the comparison about as much as the editors of Cosmopolitan would play into it if Niederhoffer and Taleb had shared the cover of an issue of Cosmo.


Your reading comprehension is questionable if you consider that a citation.


Your first link is an excerpt from [1]. I'm not sure it's much of a takedown either?

I found the second and fourth links very interesting, thanks for the other perspective.

[1] http://www.dimensional.com/famafrench/2009/03/qa-confidence-...


I particularly like Noah Smith's take on him[0]:

"Nassim Taleb is a vulgar bombastic windbag, and I like him a lot. His books do a good job of explaining some deep, important finance ideas for a general audience. He has helped popularize the notion of "skin in the game". His trolling of economists is also good for some lulz (I particularly enjoyed his coinage of the term "macrobullshitters")."

[0]http://noahpinionblog.blogspot.com/2014/01/of-brains-and-bal...


Counterexample:

  |                           ..
  |                         .    .                 
  |    waste      |       .        .       |     stuff
  |     of        |      .          .      |       I
  |     my        |     .            .     |     don't
  |    time       |   .    stuff I'll  .   |     grok
  |               | .        read        . |       
  |(Nassim Taleb) |                        | .       
  |       .       |                        |       . 
  |______________________________________________________
          -3     -2     -1     0     1     2     3
                       standard deviations


Very nice! I did have his books on my reading list. But this whole discussion definitely puts me closer to the "waste of my time" side of the distribution.


That would be a mistake.

He's getting a lot of hate because he's insulted a lot of people while stretching some of his arguments and opinions too far, but his ideas are still better intellectual reading than most of what's available elsewhere.

If you care to read the comments again, you'll notice many of the disagreements with him are on the level of knee-jerk emotional reactions and not well reasoned criticism.

However, it's true that he's turned/is turning into some gigantic obnoxious ego whose ideas are beginning to look like wild gambles (speaking more often than he substantiates). He's definitely intelligent, I would even say a genius, but each book is like: 10% great ideas, 30% very interesting intuitions, 30% vitriolic anger at people/institutions/ideas, 30% broken ideas/intuitions. But all of it is sold as the truth (tm).

If you wish to increase the quality of what you read, you could use Taleb's own heuristic: when a book has existed for a thousand years, it will exist for a thousand more. Time is a great arbiter of importance.

My point is that you either hazard a guess on which of his ideas are powerful, or you wait until the dust settles and somebody else has done this work for you.


+1 for this. Your comment is definitely the most reasonable in this discussion. I'll likely read his Fooled by Randomness and Black Swan at some point.


Sigh. If you read his books or papers, or see a lecture or the technical companions, you will see that the case against him is vastly overstated. It's like how social justice people probably view PG right now. He is pretty reasonable, and I love seeing him accused of all sorts of things here when it's obviously vastly overstated. He lives for trolling the try-hard crew found here. Do you seriously think he is advocating the absolute abolition of standard deviation, inquisition-style?


You should be aware that HN and other online comment forums often have 'negative' posts. Comments that attempt to show up the author of a linked article are a kind of intellectual one-upmanship that is common on Reddit/HN/etc.

I haven't read Nassim Taleb's work either, but comment forums like HN are a poor place to get an idea of the quality of someone's work unless they have clear and substantial criticisms.

You're usually better off getting your reviews elsewhere.


You conveniently leave out the reasonable portions of his article. I read your comment before I read the article, and guess what, you have cherry picked specific parts of the article such that Taleb comes off as stupid.

It took me ten seconds to figure out what 'Goldstein and Taleb' refers to: http://www-stat.wharton.upenn.edu/~steele/Courses/434/434Con...

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=970480

He specifies that SD is reasonable as a mathematical tool, but not for inference about society, finance, etc.

Your comment appears to have an agenda or ideology underlying it.

Further you don't seem to have read the article fully.

>Ok so he asserted that people should just use mean deviation instead of mean of squares. Guess what though, taking the squares has a purpose: it penalizes big deviations, so two situations which have the same mean deviation but where one is more stable have different standard deviations. This information is useful for many things: risk estimation, or calculating the sample size needed for a required confidence (if you need more experiments, how careful should you be with conclusions and predictions, etc.). He didn't mention how we are going to achieve those with his proposal. Meanwhile he managed to throw insults at various groups without giving one single example of the misuse he describes.

That is pretty much what his entire article is about: taking squares may not be the best idea universally.


>He specifies that SD is reasonable as a mathematical tool, but not for inference about society, finance, etc.

Yes, and SD is widely used in those fields, so this is quite a bold statement to make, and Taleb doesn't give one example of things going wrong with SD and his proposal working better. The observation is also wrong, and I pointed out some areas where standard deviation is useful.

>Your comment appears to have an agenda or ideology underlying it.

Yes, it's the ideology of calling people out when they assert incorrect things in a condescending tone, without any arguments, while insulting whole groups of scientists and statisticians.

>That is pretty much what his entire article is about: taking squares may not be the best idea universally.

No, the article isn't about that; that was in my comment. Taleb doesn't mention how MAD works better than SD in some situations, or why, or why it should be substituted. Taleb doesn't point to one situation where MAD works better than SD (I pointed out some where SD works better than MAD), and he also doesn't address the obvious problems which such a substitution creates. Heck, he doesn't even discuss how the nature of SD differs from MAD; he just says they are different and the latter is more applicable to "real life". Also, he doesn't argue that MAD is sometimes more useful than SD. He argues that the latter should be disposed of outside some very narrow theoretical applications.


> Taleb doesn't point to one situation where MAD works better than SD (I pointed out some where SD works better than MAD), and he also doesn't address the obvious problems which such a substitution creates.

Ummm, are we both discussing the same article?

> Yes, it's the ideology of calling people out when they assert incorrect things in a condescending tone, without any arguments, while insulting whole groups of scientists and statisticians.

No. It is the ideology of dissing Taleb. Obviously I am not his paid spokesperson. But please argue the merits of his arguments.

To quote from Taleb's edge.org article:

>> 1) MAD is more accurate in sample measurements, and less volatile than STD since it is a natural weight whereas standard deviation uses the observation itself as its own weight, imparting large weights to large observations, thus overweighing tail events.

Also, he alludes to his paper with Goldstein. It is clear from Goldstein and Taleb's manuscript that Taleb is not just throwing out these arguments to talk trash about practitioners of statistics. They report findings from an experiment with multiple groups of applied statisticians. I'll quote from it [1]:

"We first posed this question to 97 portfolio managers, assistant portfolio managers, and analysts employed by investment management companies who were taking part in a professional seminar. The second group of participants comprised 13 Ivy League graduate students preparing for a career in financial engineering. The third group consisted of 16 investment professionals working for a major bank. The question was presented in writing and explained verbally to make sure definitions were clear."

That makes Taleb's edge.org claims far from unsubstantiated.

[1] Goldstein, Daniel G. and Taleb, Nassim Nicholas, We Don't Quite Know What We are Talking About When We Talk About Volatility (March 28, 2007). Journal of Portfolio Management, Vol. 33, No. 4, 2007. Available at SSRN: http://ssrn.com/abstract=970480
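Claim 1) is also easy to probe with a quick simulation. This is my own sketch, not a reproduction of their experiment: a crude two-sigma Gaussian mixture stands in for a fat-tailed process, and we compare how much the SD and MAD estimates themselves bounce around across repeated small samples.

```python
import random
import statistics

random.seed(42)

def mean_abs_dev(xs):
    m = statistics.fmean(xs)
    return statistics.fmean(abs(x - m) for x in xs)

def fat_tailed_sample(n):
    # 95% N(0,1), 5% N(0,10): a crude, made-up stand-in for fat tails
    return [random.gauss(0, 10 if random.random() < 0.05 else 1) for _ in range(n)]

sd_estimates, mad_estimates = [], []
for _ in range(2000):
    s = fat_tailed_sample(30)
    sd_estimates.append(statistics.pstdev(s))
    mad_estimates.append(mean_abs_dev(s))

def relative_spread(xs):
    # Coefficient of variation: how volatile the estimator itself is
    return statistics.pstdev(xs) / statistics.fmean(xs)

print("SD estimator spread: ", round(relative_spread(sd_estimates), 2))
print("MAD estimator spread:", round(relative_spread(mad_estimates), 2))
```

On runs like this the SD estimates are noticeably more volatile than the MAD estimates, which is the sense in which the quoted claim calls MAD "less volatile": squaring lets a single tail observation dominate a small-sample SD.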


>Ummm, are we both discussing the same article?

Yes. He doesn't mention the fact that MAD loses information about volatility. He ignores it because it would make his whole article look silly.

>But please argue the merits of his arguments.

It's kind of difficult when he isn't saying anything specific. When he says "look, those scientists don't grok basic math but fall for catchy names instead" without any facts, examples, etc., all I can do is call him out on this nonsense. When he argues for abandoning SD, I can give situations where it isn't going to work, and I did.

>1) MAD is more accurate in sample measurements, and less volatile than STD since it is a natural weight whereas standard deviation uses the observation itself as its own weight, imparting large weights to large observations, thus overweighing tail events.

But this is nonsense. It's like saying measuring temperature is better than measuring mass. They are just different things to measure, and saying one is less volatile isn't really meaningful. SD contains information about volatility; MAD doesn't. That's the reason SD is used for many things. When you want to substitute one for the other, you have to address how you handle that lost information.
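A quick sketch in plain Python of exactly that point (illustrative numbers of my own, not from the thread): two samples with identical mean absolute deviation but different standard deviations.

```python
import statistics

def mean_abs_dev(xs):
    """Mean absolute deviation around the arithmetic mean."""
    m = statistics.fmean(xs)
    return statistics.fmean(abs(x - m) for x in xs)

# Two toy samples, both with mean 0 and mean absolute deviation 1
steady = [-1, -1, 1, 1]   # every point deviates by exactly 1
spiky  = [-2, 0, 0, 2]    # same total deviation, concentrated in two points

print(mean_abs_dev(steady), mean_abs_dev(spiky))            # both 1.0
print(statistics.pstdev(steady), statistics.pstdev(spiky))  # 1.0 vs ~1.414
```

The squaring is what lets SD distinguish the spiky sample from the steady one; MAD treats them as identical.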

>Also, he alludes to his paper with Goldstein. It is clear from Goldstein and Taleb's manuscript that Taleb is not just throwing out these arguments to talk trash about practitioners of statistics. They

This paper wouldn't pass peer review. What was the methodology? How much time did they have? Did they have access to a computer? Even if it was a really serious experiment, it's just a toy example of people getting a trickily worded question wrong. What about actual mistakes they make in the real world because of using SD instead of MAD? That is what he claims is a problem. He claims SD should be abandoned in favor of MAD; what is one situation where people would do better if they did? I am mocking him because he is a master of using many words without making a point, and is somehow good at seducing readers into liking him.


He's inflammatory. Really, really inflammatory. He also tends to write about obvious technical points, so if you do pick through the morass and find the gem, you'll be a bit underwhelmed. In this instance, standard deviations are obviously important statistics to compute and have many, many useful properties, sure, but Taleb rightly points out that they can be misused, and I agree with him that their misuse is endemic in science and a downright travesty outside of it.

But to just say that some tool is both useful and misusable is boring and wouldn't cause people to talk about this nearly as much.

I don't really like Taleb's style, but I can't deny that it causes conversation about things that people often would not give a second thought. In that way, I can see the outlines of a really great point underlying his inflammatory rhetoric.

Don't hate the player? At least he's arguing that people ought to be smarter and worry about what data really means.


I think that's my problem. By the time I've waded through the insufferable smugness, the anecdotes and the random and meaningless attention to things I've never encountered in practice, what is left is interesting, but not nearly worth the effort.


What is worth the effort to you?

I have found his books more stimulating than Hacker News for example...


> Don't hate the player? At least he's arguing that people ought to be smarter and worry about what data really means.

Are you still talking about the inflammatory style, or about the contents of the article?


Probably more about the style than the substance, here.


Having a reputation for being 'provocative' can become more about being provocative than being on point.


Context, I think:

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=970480

And I think you should pay more attention to this:

"...as it does more harm than good—particularly with the growing class of people in social science mechanistically applying statistical tools to scientific problems..."

if you want to understand where he is coming from.


Definitely more context, but I don't think that paper really backs up the assertion. He asked finance professionals and students, not social scientists. And he asked them at a seminar where, I'm guessing, they didn't have access to a calculator or computer, so it looks like a lot of them just used a rule of thumb.

The question they asked reads like a trick question to me. And coming up with the right answer would have required working backwards through statistical formulas. It's the type of thing you do in an "Introduction to Statistics" class and then don't think about too often because it doesn't have a ton of practical applications. It seems a lot easier to just ballpark it.


His criticism aligns with my experiences. I don't have the scientific background to back those up, but I do think that reality proves many of his points correct if you try to understand him instead of misunderstanding him.

But as with Wolfram, I can understand why his way of formulating himself might get in the way of what he is saying.


>>"...as it does more harm than good—particularly with the growing class of people in social science mechanistically applying statistical tools to scientific problems..."

Well, for sure there are problems with people applying tools without fully understanding them. It's not a very interesting statement, though. Many things in nature form a normal distribution, and standard deviation describes those distributions well, while mean deviation loses information about volatility. If you are going to propose abandoning standard deviation, it would be nice to give some alternative ways of handling those distributions, and maybe show how using mean deviation where "many scientists" currently use standard deviation improves things. His study shows that people weren't able to answer his question. That could be for various reasons, like reading it too fast. It doesn't really matter. What matters are examples of decisions/measurements which could be better handled by "abandoning standard deviation", or maybe showing that those people actually make financial mistakes due to not understanding the difference.

Even if he is right that many traders misuse or don't understand the term, it's a problem with education, not with the tools being wrong. "We should abandon standard deviation" is a wider claim, and to defend it you need to show how problems currently handled using the measure could be handled in a better way.


It might not be interesting to you and I am sure you know how to apply them.

It is however important if large groups of disciplines are not able to apply it properly.

Instead of calling him a vile idiot and refusing to even look at what he is trying to say you could see this statement as a cry for help, a provocation to start a debate and so on.

The deeper issue here, I think, is that he is probably neither dead wrong nor dead right, and so it's really impossible to prove anything objectively; rather, different experiences yield different opinions.

I personally believe that sometimes you have to simplify in order to get a fundamental discussion started. In fact I find the aftermath of those provocations to be the most enlightening.


In other news, sharp knives have been banned because untrained people have cut their fingers with them. Shoemakers and cooks protest to no avail.


>It might not be interesting to you and I am sure you know how to apply them.

I am saying that pointing out that some people misuse some tools is not interesting without specific examples and the problems it causes.

>It is however important if large groups of disciplines are not able to apply it properly.

Of course it is, so let's try to identify such situations... or just try to find one problematic example.

>It might not be interesting to you and I am sure you know how to apply them. It is however important if large groups of disciplines are not able to apply it properly. Instead of calling him a vile idiot and refusing to even look at what he is trying to say you could see this statement as a cry for help, a provocation to start a debate and so on.

I've read his 3 books and listened to many talks. He just doesn't have anything specific to say. If he is crying for help, as you said, why not describe some situations where using standard deviation instead of MAD causes problems? I am not refusing to listen to him. The fact that he writes about those problems is the reason I invested a lot of time reading his books despite the terrible writing and lack of editing. I was willing to go through it all just to see what he has to say. It turns out there is nothing: not one example of a systematic error people make which could be corrected. It's just hate for anything and everything, often paired with a failure to understand the most basic statistical concepts.

>The deeper issue here, I think, is that he is probably neither dead wrong nor dead right, and so it's really impossible to prove anything objectively; rather, different experiences yield different opinions.

He is dead wrong about standard deviation not being useful for "real life". He is probably right about the "some people misuse it some of the time" thing, but that's really not interesting unless you give something specific, which he never does.

>I personally believe that sometimes you have to simplify in order to get a fundamental discussion started.

I agree. The problem is not the simplifying; he just doesn't have anything to say. And it's not just one article without examples or arguments: his whole book is like that, and then his talks.

> In fact I find the aftermath of those provocations to be the most enlightening.

And I see danger in it. The guy has a serious following. It can't be good if people read his stuff and start believing that those risk managers are just morons, that scientists misuse standard deviation because of a misunderstanding in 1893, and all the other ridiculous things he claims. You can really come away with the impression that everybody is a moron, academia is a waste, and math doesn't apply, if you don't know better and take his writing seriously.


If he doesn't have anything specific to say, one would think you wouldn't have anything specific to critique him for.

And you are not the only one who has read his books. But again, his criticism aligns with my experience, so what do I know.


> The guy is a vile idiot of the worst kind: ignorant and aggressive.

Your comment is a bit irritating. I don't know the author of this blog post but I think he has a couple of valid points. You seem to dislike him for some other reasons. Feel free to share them. The blog post is still perfectly reasonable.

Anyway, I agree with some of your comments as well, namely this:

> I can give hundreds when std dev is useful and mean deviation isn't. Anything where you decide what % of your bankroll to bet on a perceived edge, for example.

and this:

> Guess what though, taking the squares has a purpose: it penalizes big deviations, so two situations which have the same mean deviation but where one is more stable have different standard deviations.

But I don't see why this comment from N.T. is stupid:

>>>It is all due to a historical accident: in 1893, the great Karl Pearson introduced the term "standard deviation" for what had been known as "root mean square error". The confusion started then: people thought it meant mean deviation.

I actually agree that standard deviation is a confusing name, why do you think that's a stupid comment?


>Your comment is a bit irritating. I don't know the author of this blog post but I think he has a couple of valid points. You seem to dislike him for some other reasons. Feel free to share them. The blog post is still perfectly reasonable.

Really? You think a call for substituting one of the most commonly used measures in science, finance, gambling, etc. with some other measure which doesn't have the same crucial properties, without giving any examples of the one going wrong and the other being better, is reasonable? He also threw out some assertions about general confusion and scientists using it because of a misunderstanding in 1893, again without any example, while giving a condescending analogy of someone asking you to calculate an average and realizing it's not the standard deviation. It's not reasonable by any standard. The reason I have a very low opinion of his writing and of him is his books, especially Antifragile.

>I actually agree that standard deviation is a confusing name, why do you think that's a stupid comment?

He claimed that "the confusion", the things he described in previous paragraphs about scientists, PhDs and statisticians, is caused by the misleading name. I am not saying the name is good, but implying that so many people use it just because of the name given to it in 1893 is insulting. It's basically Taleb telling scientists: "hey guys, you use this standard deviation thing for a random-ish reason and not because you know what you are doing". It's a very strong, bold statement, one which undermines the credibility of whole groups of scientists, and which he again didn't even begin to argue for. He just asserts it and continues.


Why this attitude? Simply because it's 'insulting'? Is it true? Should you not be far more concerned if it's true? That lots of scientists are misusing statistical tools? What does it say about you that you are posting angry vitriolic comments online because someone called out scientists misusing statistical tools but that you are not as alarmed that such a thing could be happening?


>Simply because it's 'insulting'?

Because he makes a lot of bold/insulting claims and then fails to give a single argument most of the time. When he does provide an argument, it's very often nonsense (like his analogy of someone being asked to calculate MAD).

>Should you not be far more concerned if it's true?

I am. I am also concerned about discrediting scientists without providing reasons. Creationism, anti-vaccination and climate change denial are all based on people not trusting scientists and following guys who just tell them science is nonsense. Taleb is very close to that line.

> What does it say about you that you are posting angry vitriolic comments online because someone called out scientists misusing statistical tools but that you are not as alarmed that such a thing could be happening?

He didn't call out anybody. He just threw plain insults without reasons. When you call someone out, you say what is wrong with what they are doing, like this: "Hey, bankers, it's unfair that you make huge bets and don't pay up when you lose!", "Hey, investment firms, it's stupid that you treat a lucky few as celebrities when you can't have any confidence in their performance". Here he didn't call people out. He just asserted they are using one of the most used mathematical tools incorrectly, that it's because of a confusion in naming, and that it should be disposed of. He didn't give one example of mistakes happening because of it. Now, people who follow him and believe that he is some kind of authority may start distrusting science or academia (as he is actively hostile to both), exactly the same way anti-vaccination people do: just because some celebrity told them so. There is value in calling people like him out on their nonsense.


...people not trusting scientists...

I'm not sure why it took you so long to flip the bozo bit (if you were attempting to dance asymptotically along the edge it really has been a bravura performance!), but that did it.


So your argument is that if large groups of scientists are misunderstanding and misusing a scientific tool, nobody should assert or argue that this is happening without providing exceptional proof, since otherwise it might insult their credibility?

Alright...


> One example please? I can give hundreds when std dev is useful and mean deviation isn't.

It is ironic that you take Taleb to task for not providing an example, and then do exactly the thing you are excoriating him for. Claiming that you have hundreds of examples is not the same as actually presenting one.


The true case against the standard deviation is more or less impossible to make without going into the math. For a mathematically advanced Bayesian, the idea is straightforward if you've already been through the lawful derivation of Gaussians from their underlying assumptions: Using standard deviations in your thinking makes sense if you think that an increasingly large error has a decreasing log-probability that goes as the square of the size of that error. The distribution that you get from the sum of independent variables is the classic example. Basically, if you think in standard deviations, then all postulates of the form "The error was that size" will come with probability penalties that you trade off against each other in proportion to the square of the size of that error. Thinking in standard deviations similarly corresponds to models where exceptional events become unlikely in proportion to the square of the size of that exception, only by "unlikely" you have to read log probability rather than probability, and log probability is (always) the right way for Bayesians to think about it anyway for reasons I won't go into.

And then the critique of "standard deviation" is that people got taught SD as a statistical tool that you just pull out any time you feel like it, and they don't know what it means in underlying probability-theoretic terms as an assumption about either the world or our own uncertainty, and so it's misused horribly on all sorts of occasions.

I'd guess that SDs are appropriate 40-90% of the time depending on which field you work in, but without a lot of Bayesian background with fairly advanced math people will not be able to really appreciate what the other 60-10% of times look like. And the state of education is not anywhere near like that. It's just people being taught to calculate the standard deviation cause, like, that's something you do. They don't know what assumptions it corresponds to even in the cases where SD does apply.

Burning SDs to the ground and starting over would not be very much amiss in one of the fields where SDs only make sense 40% of the time, but the practitioners are using them all the time. (Machine learning is one of the fields where SDs make sense maybe 40% of the time, and if I found an ML practitioner who had been taught to think in terms of squared error I'd send them off to learn the underlying probability theory until their entire worldview had been translated into likelihood functions instead.)
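The "log-probability goes as the square of the error" point can be shown in a few lines. This is a sketch of the standard Gaussian-vs-Laplace contrast, not anything specific to the parent's fields; the parameters are arbitrary.

```python
import math

def gauss_nll(err, sigma=1.0):
    # Negative log-likelihood under a Gaussian: quadratic in the error
    return 0.5 * (err / sigma) ** 2 + math.log(sigma * math.sqrt(2 * math.pi))

def laplace_nll(err, b=1.0):
    # Negative log-likelihood under a Laplace: linear in the error
    return abs(err) / b + math.log(2 * b)

# Doubling an error quadruples the Gaussian penalty (beyond the constant term)
# but only doubles the Laplace penalty.
for e in (1.0, 2.0, 4.0):
    print(e, round(gauss_nll(e), 3), round(laplace_nll(e), 3))
```

Minimizing squared error is maximum likelihood under the Gaussian; minimizing absolute error is maximum likelihood under the Laplace. That is the assumption-level distinction the parent is pointing at.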


I am wondering if you have a lot invested in concepts and techniques that Mr Taleb is suggesting are broken. There is certainly a tone of defensiveness about your post.

I think it is still unexplained why the financial community is sticking to things that are known to be broken. It is like trying to use Newtonian mechanics to describe chaotic phenomena. Standard deviation does not properly describe fractal stuff.

It feels like the industry is looking for its lost keys on a dark street, not where they were last seen but under the lamppost, because that is where the light is.


I agree with some of what you wrote, but I think you should reread your reply to Taleb with as much scrutiny as you read Taleb.


Take this with a grain of salt. Taleb has made a lot of enemies with his writings about the bank bailouts and government inefficiency. There are a lot of reasons for people to attack him, especially anonymously on the internet.

I am not saying that the arguments are without merit, just to be cautious in forming an opinion on the man.


> There are a lot of reasons for people to attack him, especially anonymously on the internet.

On the positive side, if you're going to judge the guy by the insights of his critics, his supporters have included the likes of Benoit Mandelbrot and Jack Bogle.


The funny thing is I've never used absolute deviation before, so I'm slightly confused right now as I wrap my head around what it means relative to sigma (which I have used, lightly, for years) and why anyone would want to use it instead...

edit: apparently MAD can refer to either "mean absolute deviation" or "median absolute deviation". Yup, this sure isn't going to confuse anybody.
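For the equally confused, here's a tiny illustration (standard library only, toy data of my own) of why the overloaded acronym matters: the two MADs generally disagree on the same data, especially in the presence of outliers.

```python
import statistics

data = [1, 2, 3, 4, 100]  # toy sample with one outlier

# "Mean absolute deviation": average distance from the mean
mean_ad = statistics.fmean(abs(x - statistics.fmean(data)) for x in data)

# "Median absolute deviation": median distance from the median
med = statistics.median(data)
median_ad = statistics.median(abs(x - med) for x in data)

print(mean_ad)    # dragged up by the outlier
print(median_ad)  # barely notices it
```

The median-based MAD is the robust-statistics one; Taleb's essay is arguing for the mean-based one, which only adds to the naming confusion.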


MAD can also refer to Mutually Assured Destruction, which seems fitting given this thread.


>>>There is no scientific reason to use it in statistical investigations in the age of the computer

>> As someone who uses it daily I am eagerly awaiting his argument.

Compute-intensive statistical tests, which are data-driven and make fewer assumptions about the underlying distribution, can give tighter confidence intervals and detect statistical significance better.

For example, stratified shuffling.
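As a sketch of the kind of test meant here (a generic permutation test on made-up data, not the stratified variant specifically):

```python
import random
import statistics

random.seed(0)

def permutation_test(a, b, n_perms=10_000):
    """Two-sample permutation test on the difference of means.

    Returns the fraction of label shufflings whose |mean difference|
    is at least as extreme as the observed one. No normality assumed.
    """
    observed = abs(statistics.fmean(a) - statistics.fmean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perms):
        random.shuffle(pooled)
        diff = abs(statistics.fmean(pooled[:len(a)]) - statistics.fmean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_perms

# Toy data: a clearly shifted sample should yield a small p-value
a = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2]
b = [6.0, 6.2, 5.9, 6.1, 6.3, 5.8]
print(permutation_test(a, b))
```

The null distribution is built from the data itself by reshuffling labels, so no assumption about the underlying distribution (and no standard deviation) enters anywhere.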


You will find an example here: http://www-stat.wharton.upenn.edu/~steele/Courses/434/434Con...

There is a relatively simple yet illustrative problem described on the first page. The solution is on the second page. Try to solve it without looking at the solution.


> Standard deviation tells you how volatile measurements are, not what the mean deviation is.

Why would you use sd to convey the amount of volatility? Isn't the mean deviation much more easily understood?


His point has always been to cut off the tails. The MAD actually allows you to cut off the tails sooner, and moves outside your chosen confidence interval will be flagged as tail risk much sooner than if you use STD. For example, think about the following example of normal returns versus S&P returns:

http://managed-futures-blog.attaincapital.com/wp-content/upl...

A distribution built from MAD will more closely resemble the reality of the S&P returns, and will create more robust models. There are many examples of this.
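As a quick illustration of why the tails matter (my own toy numbers, not actual S&P data): a single crash-sized observation inflates the standard deviation far more than the mean absolute deviation, because squaring amplifies large deviations:

```python
import statistics

def mean_abs_dev(xs):
    """Mean absolute deviation around the mean."""
    m = statistics.fmean(xs)
    return statistics.fmean(abs(x - m) for x in xs)

# Mostly small daily moves, then one crash-sized outlier
# (made-up numbers, purely for illustration).
calm = [0.1, -0.2, 0.15, -0.1, 0.05, -0.05, 0.2, -0.15]
crash = calm + [-5.0]

# How much does the one outlier inflate each dispersion measure?
sd_ratio = statistics.pstdev(crash) / statistics.pstdev(calm)
mad_ratio = mean_abs_dev(crash) / mean_abs_dev(calm)
# Squaring amplifies the big move, so the SD blows up more than the MAD.
```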


>> There is no scientific reason to use it in statistical investigations in the age of the computer

> As someone who uses it daily I am eagerly awaiting his argument.

The use of root mean square error|deviation comes from the ease of calculation in the days before computers. Now that we have computers we can just use mean absolute error|deviation. Before computers, we needed calculus to find the "best fit line" -- the parameters that minimize the distance between predicted and actual values. Calculus plays well with parabolas but not so well with absolute value functions. Now that computers are around, we don't need calculus (as much).
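To see the calculus point concretely: the mean is the closed-form minimizer of squared error, while the minimizer of absolute error (the median) has no derivative to set to zero -- before computers you had to work much harder for it. A brute-force sketch with illustrative data of my own:

```python
import statistics

def sum_sq(xs, c):
    return sum((x - c) ** 2 for x in xs)

def sum_abs(xs, c):
    return sum(abs(x - c) for x in xs)

data = [1.0, 2.0, 3.0, 4.0, 100.0]  # one gross outlier

# Brute-force search over candidate centers, computer-age style.
grid = [i / 10 for i in range(0, 1001)]
best_sq = min(grid, key=lambda c: sum_sq(data, c))    # lands on the mean
best_abs = min(grid, key=lambda c: sum_abs(data, c))  # lands on the median

mean = statistics.fmean(data)      # closed form, calculus-friendly
median = statistics.median(data)   # no derivative to set to zero
```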


I understand your point, and I agree mostly with it

But I think the headline was sensationalised.

Taleb doesn't seem to be against "standard deviation" itself so much as against the name (he thinks it should be called something else), and he wants a more "user friendly" number reported instead.

The positive square root of the second central moment of the probability distribution will keep being just that.


I am so glad that this comment is at the top!

Some other points - std.dev shows up in nature and is easier to understand, and analyze, simply because variances from independent sources add up. That's why something called ANOVA exists.

In OLS regression, we minimize the sum of squared errors, not the MAD. (It is not because statisticians "thought" std. dev. is actually MAD; it's because minimizing squared error has such nice properties that we use it even though the solution to MAD minimization is also known.)

Lastly, I don't follow who he proposes should be using MAD - does he decree that statisticians and physicists should still use std. dev.?

Often I find articles like these and let them pass by. But sometimes they come back - someone quotes the article at me, and then I have to drain my energy making the point that it's not really worth reporting MAD from now on.


Yeah. Eliminating stdev so social scientists don't get confused is the tail wagging the dog.


maybe in an intellectual sense, but a lot more people read http://www.huffingtonpost.com/science/ than http://arxiv.org/


And a lot more people will contribute nothing of note to science. Why should we cater to the lowest common denominator again?


This guy sounds like a complete joke. Standard deviation is an immensely useful tool in stats. It's not a historical accident we use it everywhere, for example it naturally appears in the very useful Chebyshev inequality, and a million other contexts.
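For what it's worth, Chebyshev's bound P(|X - mu| >= k*sigma) <= 1/k^2 holds for any distribution with finite variance, which is easy to check empirically. A quick sketch of my own, with a deliberately non-normal sample:

```python
import random
import statistics

rng = random.Random(42)
# A deliberately skewed, non-normal sample: exponential draws.
xs = [rng.expovariate(1.0) for _ in range(100_000)]

mu = statistics.fmean(xs)
sigma = statistics.pstdev(xs)

k = 2.0
tail_frac = sum(1 for x in xs if abs(x - mu) >= k * sigma) / len(xs)
chebyshev_bound = 1 / k**2  # 0.25, for *any* finite-variance distribution
# The empirical tail fraction respects the bound (here by a wide margin).
```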


I know this might be shooting the argument in the wrong direction...

Correct me if I am wrong, but I thought that the term inverse square law was erroneous, and I have yet to come across a complaint from anyone.


bluecalm says:

"What an assertion! It also proved to be very useful for hordes of scientists... what about some examples of confused scientists?"

How about every single economist and financial analyst who failed to predict the financial crashes of the last 50 years? Did you realize how much money was lost over the last 50 years by the financial industry's reliance on models that improperly use the standard Gaussian bell curve? Taleb pointed out why the Gaussian is not usually an appropriate model for financial analysis and suggested a return to more conservative models. Yet today Modern Portfolio Theory and the Black-Scholes models (which use the Gaussian) dominate in financial schools and institutions despite the fact that they absolutely utterly failed, not badly, but catastrophically in every financial crisis when they were most needed.

You state that "Standard deviation tells you how volatile measurements are". But Taleb shows how financial markets are non-Gaussian and have "fat tails" and gives the data and the supporting arguments.

bluecalm says:

"As someone who uses it daily I am eagerly awaiting his argument."

You're late! Taleb began publishing his arguments years ago and continues to publish. You're way behind. Read his papers and read his books in the following order:

Fooled by Randomness http://www.amazon.com/Fooled-Randomness-Hidden-Chance-Market...

The Black Swan http://www.amazon.com/The-Black-Swan-Improbable-Robustness/d...

Antifragile, http://www.amazon.com/Antifragile-Things-That-Gain-Disorder/...

BTW simply because someone's writing style is different does not mean that he is wrong. To me, complaining about a writing style is a form of ad hominem argument. People are different and Taleb is one-of-a-kind. Initially I didn't see where he was going, but once I realized that he was presenting ideas that were absolutely, utterly novel and had real explanatory power I sat up and paid attention. If you read for the facts and the valid arguments you will see that Taleb delivers the goods.


This whole post is a perfect example of the danger of Taleb's work. Someone with no knowledge of finance or statistics has read his books and now feels able to make extremely strong statements about a link between financial theory and financial crises. :(


You remind me of NNT - hyperbole and extremes. "vile idiot"?


"Standard deviation tells you how volatile measurements are not what mean deviation is. Those are both very real life things just not the same thing."

While I am not a Taleb fan (he had his own black swan and became exactly the reviled expert his own books warn us about), your hostility in this case seems out of place.

The root of his seemingly casual contemplation is that standard deviation is simply misnamed, and that this misnaming causes a cognitive dissonance that confuses even knowledgeable practitioners into mentally conflating it with the mean deviation. Which -- as someone in the financial industry -- I can absolutely confirm. It seems like such a minor thing, but the name has tremendous influence on how we parse these things, and the shortcuts we take in understanding things.


Well, I read his article as "let's abandon standard deviation and use mean absolute deviation instead". I think that's what he claims. It's not: "some people are sometimes confused by the concept and in some places mean deviation would serve better, for example here and here". It has a way more aggressive tone and far wider assertions about people misusing standard deviation. Remarks about "real life" don't help either, especially if you don't provide even one example where standard deviation is commonly used but mean absolute deviation would serve better.

>> Which -- as someone in the financial industry -- I can absolutely confirm.

This is what he is doing: he takes a common sentiment and makes it into a war. "Abandon standard deviation", "the confusion started", "PhDs are confused". It's easy to like him and nod in agreement while omitting that he took it way too far, proposed things way too radical, and again: didn't give one single example of stuff not working because of it.


What is Taleb's own black swan?


Correctly predicting the 2008 collapse


After being assigned to a currency desk, he was sitting on a lot of Eurodollar options when Black Monday happened in October 1987.

http://en.wikipedia.org/wiki/Black_Monday_(1987)

That unpredicted event made him and his organization loads of money, and made him a sudden seer of markets. Which is humorous because his book Fooled By Randomness is primarily a bit of a bitter tirade about how we declare people savants because they happened to be in the right place at the right time.

Since then he gets the classic seer type treatment, for instance the other post notes that he "predicted" the financial collapse of 2008 (weird that someone would register just to post that). Yet almost everyone predicted 2008. No, seriously, the housing bubble and crisis, and the probable impact on the market, was absolutely common knowledge by the time Taleb made his pronouncements, and was the endless material of virtually every financial discussion. There was absolutely nothing of worth predicted in that, but that's what you get when you got lucky once.


He did make a lot of money in 2008 as well. That is completely aside from the fact that his points would be valid regardless of how successful his individual investments have been.

I am not sure what would convince you otherwise though, since it seems like you have made up your mind already. Please loosen up on the snark.


There is no snark in my post: I am outright calling Taleb a false prophet who is guilty of exactly the things he criticizes in Fooled by Randomness.

And yes, I've seen a lot of claims of various other successes of Taleb, with a lack of any actual citations or proof at all: His own fund folded after yielding mediocre returns, and then he started consulting for a fund that seems to demonstrate a tremendous amount of bluster, with some curious and cringe-inducing ways of reporting returns, the narrative reading like the story of a gambler, telling you with great pride about the 35:1 payout on their roulette bets, skipping over the 0:37 payout on all their other bets.

I've worked with a large number of hedge funds, many of whom made a lot of money in 2008. You haven't heard a word from them because they have no need to do PR or to pitch the one bet they made right. I become suspicious of any who does.

And the danger of Taleb's lucky bet was that he is listened to as an expert, at least by some. Virtually everything he had to say about government intervention after the financial crash has been proven fundamentally wrong -- actual reality demonstrated that. Thankfully a lot of people didn't listen to him.


I think that you may be missing the point of Fooled by Randomness. The point is knowing whether your success came from luck, skill, or both. It also shined a light on people who took high-probability, "sure bets" (selling out-of-the-money options) that paid off most of the time but would eventually bankrupt them if the stock price jumped.


Quite the contrary: the point of the book is that it is very difficult to differentiate between skill and luck when random odds influence events.

For instance if you asked 1000 people to guess coin flips, on average one half will be wrong on each flip. So on the first flip you'd drop to 500, then 250, then 125, then roughly 63, then 31, then 16, then 8, then 4, then 2, then you'd -- in a perfectly ideal scenario -- have one person left. A person that remarkably, "against all odds", guessed 10 coin flips in a row right! Surely they must be some sort of magician or seer, right? Yet their probability of guessing the next coin flip is no better than that of the people eliminated in the first round.
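The thought experiment is easy to simulate directly (a quick sketch of my own, using the 1000-person, 10-flip numbers from the example):

```python
import random

def perfect_guessers(n_people=1000, n_flips=10, seed=7):
    """Count how many purely random guessers call every flip right."""
    rng = random.Random(seed)
    survivors = n_people
    for _ in range(n_flips):
        flip = rng.random() < 0.5
        # Each remaining guesser picks heads/tails at random; on
        # average half of them survive each round.
        survivors = sum(1 for _ in range(survivors)
                        if (rng.random() < 0.5) == flip)
    return survivors

n_perfect = perfect_guessers()
expected = 1000 / 2**10  # about one perfect record, by luck alone
```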

But that is exactly what we do with things like exceptional events: We find the person who was right about a set of events, among a collection of people guessing almost everything, and assume they've cracked the code. Even outside of financial situations (like being a guy who happened to have a portfolio that did great during an extreme black swan on October 19th, 1987, even if that same portfolio generally did terribly), just look at what happened with 9/11: Of all of the millions of scenarios that people concocted for fiction or just postulating, anyone who talked about a plane hitting the WTC suddenly became prophetic.

It's actually a pretty good book, as an aside.


The key question is how you determine the difference between skill and luck. Is that difference statistically significant, and is there a different set of inputs that leads to this output?

For instance, Taleb's example of fund managers that beat the market for five years in a row. You start with a cohort of 10,000 and a 50% chance of beating the market each year. There will be a group of 300 or so fund managers that consistently beat the market 5 years in a row. This does not differentiate the lucky from the skillful. 1) Analyzing other cohorts that had higher success rates and 2) their investment strategies and decisions would be necessary to make the determination.

Well, a broken clock is right twice a day. Someone who always calls for a market correction will eventually be right. The real question is how the investment strategy will fare over decades or longer.

We may have different takeaways from the book. My takeaway was that one should invest in becoming skilled (learn strategies that work in the long run) rather than seeking or worshiping those who may just be lucky.


...weird that someone would register just to post that...

The implication is obvious. 'gggggggggggggg is either an employee of yours, or is in some other situation such that she doesn't want to get on your shit list. Nevertheless, the truth must come out! Hence, anonymity.


After reading your two above comments, I'm not sure what your point is. You've listed a bunch of facts about Taleb that he talks about. Just because people mis-attribute his success or butcher his ideas when applying them to him is not his fault.


Almost everyone predicted it yet almost nobody was ready when it came, and I still remember the panic and cries about "this is the end of western civilization" and "we have to abandon capitalism now because it failed completely". Maybe "almost everyone" was not as everyone as it seems.

>>> There was absolutely nothing of worth predicted in that,

I think it was quite worthy for those who were on the right side of the market. From what it looks like, though, a lot of people were on the wrong side. So presenting it as "everybody knew" is a bit misleading, it seems.


The housing and subprime issues were the fodder for every economic fear monger, rightly, for literally years (and honestly if you think otherwise, you have absolutely no knowledge of the financial markets). So why then didn't everyone just pull out of the market? Well, eventually they did, and that was what caused the crash. But preceding it was effectively a game of chicken where people still want to make money trading papers, and if they think they can buy something today that they can sell for more tomorrow, even if they think the next day it might be worthless, many will do that. In the same way that the world is pretty certain we're going to run out of oil...so we use more of it than ever. We'll deal with it running out when that actually happens.

There was zero specificity in Taleb's proclamations. It was just the vague "there are a lot of debts and an inflated housing market that is bound for a correction and is immensely interest rate dependent". Yeah, that's great, but is the same thing everyone else was saying.

GM went bankrupt during the financial crunch. This may blow your mind but the bankruptcy of GM was predestined for years -- their debts kept growing larger while their profits stayed static or shrank.

Everyone knew it was coming, it was only a question of when and what would precipitate it. Yet, people still traded in GM. Exactly the same concept -- people don't trade on what they think will happen tomorrow or next year, they trade on what they think other people think will happen tomorrow or next year.

So there is no misleading, and honestly only "rubes" fall for the "I predicted this" bit.


>>> were the fodder for every economic fear monger, rightly, for literally years

You realize "everyone" and "every economic fear monger" are very distinct groups? Before the crisis struck, those fear mongers were generally thought of as curiosities or cranks, not visionaries. It wasn't that long ago, too early yet to rewrite history.

Moreover, theories like these: http://www.wired.com/wired/archive/7.09/zeros.html http://bullnotbull.com/archive/predictions-2007.html http://useconomy.about.com/b/2007/01/03/2007-forecast-for-th... http://knowledge.wpcarey.asu.edu/article.cfm?articleid=1344 or even this one: http://www.amazon.com/gp/product/0385514344

were pretty popular. The last one got excellent reviews from top people at the Federal Reserve and Fannie Mae.

>>> There was zero specificity in Taleb's proclamations.

Huge surprise from a guy who talks about principally unpredictable events as the basis of his philosophy. You expect him to talk about black swans and then say "the market would go down X points at day Y"?

>>> In the same way that the world is pretty certain we're going to run out of oil.

Are you sure? http://cnsnews.com/news/article/gao-recoverable-oil-colorado...


Actually, Taleb has explicitly stated that the 2008 financial crisis was not a Black Swan nor does he claim to be able to predict any of them.


In fact, most of Taleb's ideas are about how not to rely on predictions of events that are unpredictable - so faulting him for predicting or not predicting this and that means one does not really understand what Taleb is talking about.


Yes, his Black Swan stuff is a completely separate issue from whether he has or hasn't predicted anything.


Fascinating, but how is that at all relevant? I didn't call 2008 a black swan, I called Black Monday in 1987 a "Black Swan", using Taleb's own descriptor for it. So what is the issue?

A different discussion concerns the fact that Taleb's possible luck in 1987 has made him out to be a prescient seer for things like the financial crisis. But he isn't, and a tremendous number of things he has said have simply been wrong (most notably, and importantly, everything he said about government intervention; he was 100% wrong in every way).

People ask why people get their hackles up about Taleb, and it isn't actually about Taleb at all. It's about the, for lack of a better phrase, Taleb "fan boys". Bizarre that such a thing exists, but people like having their prophet.


Your post was in reply to someone asking about Taleb's Black Swans so I assumed you were talking about 2008 in reference to that.


I understand that he made a lot of money in 2008 from betting on 'Black Swans'.


All I know is this reminds me a lot of high school where we had to always compute std dev in problems, homework, and sometimes labs, but nobody really ever explained how to interpret it. It was always like "This is std dev. This is how you compute it. Make sure you put it your tables and report."

Eventually someone (or something) did explain it, and once I understood it, it became clear that it wasn't always a sensible thing to be asked to calculate but was instead just a reflexive requirement.


Why? It is pretty good at describing probability distributions. What we should retire are the idiots who assume that it predicts the outcome of the next event.


Everyone is an idiot some of the time.


You gotta love the acronyms: STD versus MAD!

Taleb is definitely mad, but his use of the MAD acronym (mean absolute deviation) is actually correct. However the STD acronym (all caps) refers to "sexually transmitted disease" and is not generally used for "standard deviation". Most people use SD, Stdev, StDev or sigma.

Once again his ability to coin new terminology outstrips his ability to form coherent ideas that are anything more than trivial (eg. we have known about fat tails in stock returns for 50+ years). Like George Soros[1], Taleb's success says more about the state of the world of finance than their contributions to our knowledge.

[1]-See his book "The Alchemy of Finance"


STD is also an obsolete term. Epidemiology, medicine, etc. now most often refer to them as STIs, or Sexually Transmitted Infections.

The reasoning for this is that many sexually transmitted infections can be acquired, passed on to others, etc. without causing any clinical symptoms. See: HPV, among others.


Taleb has a textbook draft up which is more technical than his popular writings:

http://www.fooledbyrandomness.com/FatTails.html

There might be something there for the more rabid critics. At least it will keep them off the internet for a few days...


It's not that we should retire the notion of standard deviation. It's more that we should understand the tools that we are using and use the appropriate tool for the job.


NNT is my intellectual superhero but the amount of hate he gets is tremendous.

Please understand that NNT's biggest issues are not so much with the way statistical models are applied to economics and finance, but with how social scientists sometimes feel compelled to apply them to social fields as well, which is plainly unscientific, dumb, and mostly disastrous.

So when you bear down on his arguments, please keep this context in mind.


You're making too many gross generalisations here, as does Mr Taleb. He's right that in all fields there are statistically illiterate people making huge errors, and academic theorists have been making such - and more complete - criticisms for a while now. So, there's a real issue. However, there are a good number of competent social scientists too, beyond the mediocre morass, who understand just what variable construction and limitations to statistical operations are.


Among other things, I've seen way more appalling applications of statistics in finance and economics than I have in social science.

Also, the assertion in your post that the misapplication of statistical models in social science is "disastrous" but somehow giving finance a pass? You've got to be kidding me.


NNT, at least, doesn't give finance a pass. He excoriates them at every opportunity.


That's why I was so puzzled - NNT rather eagerly takes finance to task.


The notion of area has confused hordes of scientists; it is time to retire it from common use and replace it with the more effective one of circumference. Area should be left to mathematicians, topologists and developers selling real estate. There is no scientific reason to use it in statistical investigations in the age of the computer, as it does more harm than good.

Say someone just asked you to measure the area of a circle with radius pi. The area is exactly 31. But how do you do it?

scala> math.round(math.Pi * math.Pi * math.Pi).toInt

res1: Int = 31

Do you pack the circle with n people, count them up and verify n == 31 ? Or do you pour a red liquid into the circle and fill it up, then drain it and measure the amount of red ? For there are serious differences between the two methods.

If instead, you were asked to measure the circumference of a circle with radius pi.

scala> math.round(2 * math.Pi * math.Pi).toInt

res2: Int = 20

You just ask an able-bodied man, perhaps an unemployed migrant, to walk around this circle while another man, an upstanding Stanford sophomore, starts walking from Stanford to meet his maker, I mean VC, well its the same thing...

So by the time the migrant finishes walking around the circle, our upstanding Stanford entrepreneur is greeting the VC on the tarmac of the San Francisco International Airport. This leads one to rightfully believe that the circumference of the circle of radius pi is exactly the distance from Stanford to the SF Airport ie. 20 miles. It corresponds to "real life" much better than the first—and to reality. In fact, whenever people make decisions after being supplied with the area, they act as if it were the distance from their university to the airport.

It is all due to a historical accident: in 250BC, the Greek mathematician Archimedes introduced Prop 2, the Prevention of Farm Cruelty Act ( http://en.wikipedia.org/wiki/California_Proposition_2_(2008) ). No I believe this was a different Prop 2. This Prop 2 states that the area of a circle is to the square on its diameter as 11 to 14 (http://en.wikipedia.org/wiki/Measurement_of_a_Circle). The confusion started then: people thought it meant areas had to do with being cruel to farm animals. But it is not just journalists who fall for the mistake: I recall seeing official documents from the department of data scientists, which found that a high number of data scientists (many with PhDs) also get confused in real life.

It all comes from bad terminology for something non-intuitive. Despite this confusion, Archimedes persisted in the folly by drawing circles in the sand, an infantile persuasion, surely. When the Romans waged war, Archimedes was still computing the area of the circle. The Roman soldier asked him to step outside, but Archimedes exclaimed "Do not disturb my circles!" (http://en.wikipedia.org/wiki/Noli_turbare_circulos_meos)

He was rightfully executed by the soldier for this grievous offense. It is sad that such a minor mathematician can lead to so much confusion: our scientific tools are way too far ahead of our casual intuitions, which starts to be a problem with a mad Greek. So I close with a statement by famed rapper Sir Joey Bada$$, extolling the virtues of the circumference: "So I keep my circumference of deep fried friends like dumplings, But fuck that nigga we munching, we hungry." (http://rapgenius.com/1931938/Joey-bada-hilary-swank/So-i-kee...)


Possibly the best thing I've read all week. Thank you sir.


Except the difference here is that Taleb is generally right and has a point. You know the difference between the probabilistic Hacker News title generator and the actual titles found here: even though they sound the same and carry the same style... one is actually real.


I think we'd be better off if we recognized that there are statistical distributions in the world besides the plain old Gaussian. For example, wealth does not follow a Gaussian, so why the heck do we throw around ideas like "above average wealth"?

Is MAD any better? Definitely. But I'd like to see a visual demonstration of how well it models exponential-based distributions. How well does it describe their "shape", the skew of the tail?


In that specific case, I'd submit that the word "average" doesn't belong in a conversation attempting any level of rigor. It only encourages the confusion between median and mean.


Medians and means are interrelated. Averages are best thought of as being on a scale from meanlike to medianlike: a median requires taking the mean of the middle 2 values when there's an even number of them, and a mean can have outliers trimmed from each side before calculation. The word "average" abstracts away this detail. (I guess there'd be another scale from meanlike to medianlike absolute deviation, but that's another story.)


"In fact, whenever people make decisions after being supplied with the standard deviation number, they act as if it were the expected mean deviation."

Boy, is that statement useless without any kind of context, example or citation.


- Strips a statement of context, examples, and citations.

- Claims it is useless therewithout.


No. There are absolutely no citations or specific examples. There is the claim that "every time a newspaper has attempted to clarify the concept of market 'volatility', it defined it verbally as mean deviation yet produced the numerical measure of the (higher) standard deviation," which is not an example, it's a fact claim without a shred of evidence behind it (and as it is an absolute claim, you only need to find one example of a journalistic article on the stock market that properly defines standard deviation to prove it wrong).


The worst part about this is that it makes Taleb appear to be a complete idiot. Volatility under Black-Scholes is the standard deviation of the daily price changes (log-adjusted), not the standard deviation of the daily price. When the daily newspaper has a better understanding than the supposed expert, it is just time to stop listening to said expert.


The next two paragraphs have examples with some context.


Like this?

"But it is not just journalists who fall for the mistake: I recall seeing official documents from the department of commerce and the Federal Reserve partaking of the conflation, even regulators in statements on market volatility. What is worse, Goldstein and I found that a high number of data scientists (many with PhDs) also get confused in real life."

It doesn't tell us what happened, it just asserts that it did in certain contexts. It doesn't cite the paper he presumably wrote with Goldstein (which Goldstein?) about it. I feel like I'm getting a summary of an abstract with all the citations missing.


> which Goldstein?

From ClementM above, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=970480


Thank you.

From that paper:

"We first posed this question to 97 portfolio managers, assistant portfolio managers, and analysts employed by investment management companies who were taking part in a professional seminar. The second group of participants comprised 13 Ivy League graduate students preparing for a career in financial engineering. The third group consisted of 16 investment professionals working for a major bank."

From the article:

"What is worse, Goldstein and I found that a high number of data scientists (many with PhDs) also get confused in real life."

I get that "data scientist" is a really broad term at this point, but I don't think it's a very good description of the people quizzed in this paper, if this is the paper he was indeed referring to.


And all his examples come from the financial sector, it would seem, but in the article he refers to "hordes of scientists," "people in social science," and even "problems with social and biological science." So even the hard sciences get pulled into his indictment of standard deviation, even though he never once gives an example from them.


Wherein the author makes the suspect claim that newspapers discussing the stock market always define standard deviation using the definition for mean deviation.


Climate scientists--among others--have made similar recommendations to use the mean absolute error in place of the standard deviation, depending on the application. Taleb might have cited the extensive methodological literature--for example:

Cort J. Willmott, Kenji Matsuuraa, Scott M. Robeson. Ambiguities inherent in sums-of-squares-based error statistics. Atmospheric Environment 43 (2009) 749–752.

URL: http://climate.geog.udel.edu/~climate/publication_html/Pdf/W...


Nassim Taleb somehow likes to beat up on normals...

We Bayesians have similar notions, but we usually try not to overly bully frequentist methods, the poor things. Also, being familiar with Bayesian methods, a lot of what Taleb is saying sounds vaguely familiar...


Well, frequentist methods are taking the blame for the most recent financial crisis, such as assuming a normal distribution when the empirical one is fat-tailed.

Perhaps the Bayesian methods will take the blame in the next financial crisis. Such as the error in estimating a non-stationary distribution and quantifying the uncertainty.


Statistical economists and analysts have always said that the normal distribution is a simplification, and that this simplification has its own problems.

It's just that traders and bankers would fire statisticians who were too vocal about it as wasting their time with unnecessary explanations...

Bayesian methods in general can only take the blame if you can prove some other method is more reliable.

Also they have another advantage: Bayesian methods are so mind blowing and beautiful, that it is hard to blame them for anything!


Standard deviation and mean absolute deviation are both useful, but I think it's silly to suggest that we all adopt exactly one measure of variability to summarize data sets. When in doubt, make a fucking histogram.
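To illustrate the point with toy numbers of my own: two samples can have identical mean and standard deviation yet completely different shapes, which only the histogram reveals:

```python
import statistics
from collections import Counter

# Two (contrived) samples with identical mean and standard deviation
# but very different shapes.
unimodal = [-2, -1, 0, 0, 0, 0, 0, 0, 1, 2]
bimodal = [-1, -1, -1, -1, -1, 1, 1, 1, 1, 1]

same_mean = statistics.fmean(unimodal) == statistics.fmean(bimodal)
same_sd = statistics.pstdev(unimodal) == statistics.pstdev(bimodal)

def histogram(xs):
    """Crude text histogram: value -> bar of '#' characters."""
    counts = Counter(xs)
    return {v: "#" * counts[v] for v in sorted(counts)}

# Identical summary numbers, completely different pictures.
```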


Histograms can be misleading too, especially when the breaks are not set well.


Anything can be misleading, but in most cases a histogram conveys more information than a single number ever could.


Absolutely. They just aren't a panacea.


It is sad that Taleb does not see the value in the standard deviation; standard deviation is far more natural, and more useful, than MAD.

For example, if X and Y are independent with standard deviations s and t, then the standard deviation of X + Y is sqrt(s^2 + t^2). There is a geometry of statistics, and the standard deviation is the fundamental measure of length.

To retire the standard deviation is to ignore the wonderful geometry inherent in statistics. Covariance is one of the most important concepts in statistics, and it is a shame to hide it from those who use statistics.

Additionally, I will mention that we do not need normal distributions to make special the idea of standard deviations. In fact, it is the geometry of probability - the fact that independent random variables have standard deviations which "point" in orthogonal directions - which causes the normal distribution to be the resulting distribution of the central limit theorem.
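A quick numerical sketch of that geometry, with illustrative values I've made up: two independent variables with standard deviations 3 and 4 sum to one with standard deviation 5, the Pythagorean rule in action.

```python
import numpy as np

rng = np.random.default_rng(0)
# independent variables with standard deviations s = 3 and t = 4
X = rng.normal(0, 3, 1_000_000)
Y = rng.normal(0, 4, 1_000_000)

# standard deviations of independent variables add like orthogonal
# lengths: sqrt(3**2 + 4**2) = 5; no such rule holds for MAD
print(np.std(X + Y))  # ≈ 5.0
```

The same quadrature rule holds for any independent distributions, not just normals; that is precisely the "orthogonality" the parent comment describes.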


There is nothing wrong with STD or MAD. The real problem is a lot of people apply them without realizing the nature of their data and what kind of analysis they want to do.

In this case what matters in the end is the kind of impact deviation from mean has on the real world variable you have. I agree that in most Gaussian experiments MAD might be more useful than STD.

STD is more useful when the real-world impact of a deviation grows much faster than linearly with its magnitude, so it is a good idea to magnify the deviation (x - mean) by squaring it. In many cases the impact is linear, and there MAD clearly works better: in cricket, for example, n runs are n times better than 1 run. But in shooting, hitting 9 targets out of 10 might be 100 times better than hitting 1 out of 10, so there MAD would be misleading.


There is some argument that MAD is actually better than RMS for a lot of applications. Apparently it predates RMS, but one of the reasons for the switch is that RMS-minimizing (least-squares) linear regression is much, much simpler to calculate. Also consider comparing the robustness of RMS-based regression with MAD-based regression. See: http://matlabdatamining.blogspot.com/2007/10/l-1-linear-regr...
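A sketch of that robustness comparison with made-up data: an L2 (least-squares) line versus an L1 (least-absolute-deviation) line, the latter fitted here by iteratively reweighted least squares as one simple way to minimize the absolute loss.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.5, x.size)  # true slope 2, intercept 1
y[-1] += 50                                  # one gross outlier

A = np.column_stack([x, np.ones_like(x)])

# L2 fit: ordinary least squares
l2 = np.linalg.lstsq(A, y, rcond=None)[0]

# L1 fit: iteratively reweighted least squares; weights 1/|residual|
# turn the squared loss into an approximation of sum(|residual|)
l1 = l2.copy()
for _ in range(50):
    r = y - A @ l1
    sw = 1.0 / np.sqrt(np.maximum(np.abs(r), 1e-8))  # sqrt of IRLS weights
    l1 = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]

# The outlier drags the least-squares slope well away from 2;
# the L1 slope stays close.
print("L2 slope:", l2[0], " L1 slope:", l1[0])
```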


I had hoped this would be about the revolution occurring in statistics/econometrics where confidence intervals based on strong parametric assumptions (e.g. the confidence intervals you would obtain using the standard deviation) are being replaced by confidence intervals obtained using the bootstrap (and other non-parametric methods) that don't rely on such strong assumptions.

But no, it is just advocating using Mean absolute distance instead of the standard deviation. Which I guess is to be expected from someone whose work focuses mostly on long-tailed distributions.

Still, I think that non-parametric methods are much more valuable as a solution to dealing with non-normal data than what Taleb is proposing.
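For the curious, a minimal percentile-bootstrap sketch on synthetic heavy-tailed data (the distribution, sample size, and 5000 resamples are all arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic heavy-tailed sample, where normal-theory intervals can mislead
data = rng.lognormal(mean=0.0, sigma=1.0, size=200)

# percentile bootstrap: resample with replacement, recompute the statistic,
# and read the interval straight off the resampled distribution
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(5000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap CI for the mean: [{lo:.3f}, {hi:.3f}]")
```

No normality assumption enters anywhere; the same recipe works for medians, ratios, or any other statistic.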


He makes a good point about infinite MAD vs. STD.


I've found MAD a potentially useful measure for monitoring whether something gets out of whack; when using STD I needed to modify it to give less weighting to outliers.


The way I read it he's proposing two things:

1) Refer to the analysis of Root Mean Square Error always by that name. (RMS is already often used in certain jargon instead of stddev).

2) Stop treating RMS as the default measure of variability. Treat mean absolute deviation as the default instead, because the figure it provides is more consistent with people's psychological interpretation.

It's not really retiring RMS, just retiring the idea that it is a good default statistical analysis.


How often do "six sigma" events occur in financial markets? A hell of a lot more often than the 0.0000001973% of the time they would in a normally distributed system.
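That tail figure can be checked directly: the two-sided probability of a move beyond six standard deviations under a normal distribution is erfc(6/sqrt(2)).

```python
import math

# two-sided probability that a standard normal variable exceeds 6 sigma
p = math.erfc(6 / math.sqrt(2))
print(f"{p:.4g}")         # ≈ 1.973e-09
print(f"{100 * p:.4g}%")  # ≈ 1.973e-07 %, i.e. the 0.0000001973% above
```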


Exactly. I'm astounded that this basic fact - that financial modelers have been using the wrong distribution for decades - IS NOT BEING DENOUNCED. For example, the Capital Asset Pricing Model (CAPM) assumes that returns fall on a normal distribution.


If data is drawn from a Laplace distribution of the form p(x) = exp(-|x|), the mean absolute deviation is more informative than the standard deviation, but if its form is close to the normal, p(x) = exp(-x^2), the standard deviation is more important. So whether to use the mean absolute or standard deviation depends on the distribution of the data. There is a field called robust statistics that looks at this question.
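A small Monte Carlo sketch of that claim (sample size and repetition count are arbitrary choices): compare the relative sampling variability of the two scale estimators under each parent distribution. The more "informative" estimator is the one whose value fluctuates less from sample to sample.

```python
import numpy as np

rng = np.random.default_rng(0)
R, n = 4000, 200  # Monte Carlo repetitions, sample size per repetition

def cv(est):
    # relative sampling variability of an estimator across repetitions
    return est.std() / est.mean()

samples = {
    "normal": rng.standard_normal((R, n)),  # p(x) ~ exp(-x^2 / 2)
    "laplace": rng.laplace(size=(R, n)),    # p(x) ~ exp(-|x|)
}

results = {}
for name, data in samples.items():
    std_est = data.std(axis=1)
    mad_est = np.abs(data - data.mean(axis=1, keepdims=True)).mean(axis=1)
    results[name] = (cv(std_est), cv(mad_est))
    print(f"{name}: cv(std)={results[name][0]:.4f}  cv(mad)={results[name][1]:.4f}")
```

Under the normal the standard deviation is the steadier estimate; under the Laplace the mean absolute deviation is, exactly as the parent comment says.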


I've always thought his writings were more allegorical than scientific; you can't rely on the standard deviation to never go against you at the worst possible time. But like anything else, it can and it (probably) will.

Also, yes, his writing style is grating and he takes opportunistic character swipes at pretty much everyone.


I'm a physicist, so I'm one of the people this guy says standard deviation is still good for. However, despite some "oddities" (pointed out by others here) in his article, I'm more than willing to admit a simpler, easier to understand term would be helpful for explaining many things to the general public. Hell, it would be helpful for explaining things to journalists, who we then trust to explain things to the public!

Look at a reputable news site or paper. Odds are they post articles based on polls several times a day. How many report confidence intervals or anything of the sort? These are crucial for interpreting polls, but are left out more often than not. Worse yet, many stories make a big deal about a "huge" shift in support for some political policy, party or figure, when the previous month's figure is actually well within the confidence interval of the current month's poll!

Standard deviation, confidence intervals, etc. are all ways of expressing uncertainty, and it's become abundantly clear that the average journalist, to say nothing of the average person, has no clue about what the concept means. If the goal is to communicate with the public, then we really need to take a step back and appreciate the stupendously colossal wall of ignorance we're about to butt our heads against. When we talk about the general public, we should keep in mind that rather a lot of people know so little about the scientific method that they interpret the impossibility of proving theories as justification for giving religious fables equal footing in schools. This kind of ignorance isn't a nasty undercurrent lurking in the shadows. It's running the show, as evidenced by many state laws in the U.S.! There is absolutely no hope of explaining uncertainty to most of these people.

There is hope of explaining basic statistics to journalists, if only because they are relatively few in number and it's a fundamental part of their job to understand what they are reporting. Yes, I just said that every journalist who has reported a poll result, scientific figure, etc. without the associated uncertainty has failed to adequately perform their job. We need to make journalists understand why they are failing. If simplifying the way we report uncertainties will assist with this, then I'm all for it. Bad journalism is a root cause of a great deal of ignorance, but it's not an insurmountable task to fix it.

If you are a scientist who speaks to journalists about your work, make sure they include uncertainties. If you are an editor, slap your peons silly if they write a sensationalistic poll piece when the uncertainties say it's all a bunch of hot air. If you are a reader, please mercilessly mock bad articles and write numerous scornful letters to the editor until those editors pull out their beat-sticks and get slap-happy. We should not tolerate this kind of crap from people who are paid to get it right.


Maybe a solution would be to only communicate the bounds of a value rather than the value itself. E.g. Instead of “party A has 35% with sigma=2%” something like “we expect party A to have 33%–37%”. Providing a range is shorter than explaining the SD, and can be visualized easily, e.g. in a bar chart.


I was just wondering about a very related problem. I do 5 measurements of some random variable (let's say execution time) and average them. How should I report the variability of that average?

State the sample size and standard deviation?


If it were 10 or more measurements, I'd use average, standard deviation and sample size if we can expect that variable to be normally distributed.

I don't know how to make meaningful statistics with fewer data points.
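One conventional way to report the variability of an average, sketched with hypothetical timings (the t critical value for 4 degrees of freedom is hard-coded, since the standard library has no t quantile function):

```python
import math
import statistics

times = [1.21, 1.19, 1.25, 1.22, 1.18]  # hypothetical measurements (seconds)

n = len(times)
m = statistics.mean(times)
s = statistics.stdev(times)   # sample standard deviation (n-1 denominator)
sem = s / math.sqrt(n)        # standard error of the mean

t_crit = 2.776                # Student's t, 95%, 4 degrees of freedom
half = t_crit * sem
print(f"{m:.3f} s ± {half:.3f} s (95% CI, n={n})")
```

With only 5 points the t interval is wide, which is honest: it reflects how little the data pin down the average.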


Citing 10 measurements in a table or plot is probably easier than explaining how and why you are taking mean and standard deviation (or anything else for that matter...)


90% of the data fall between this point and this point. 50% of the data fall between this point and this point. etc.

MAD is more informative than standard deviation, if you ask me.
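That kind of distribution-free summary is easy to produce; a sketch with synthetic skewed data:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=1000)  # skewed, decidedly non-normal

# distribution-free summary: report coverage intervals, not mean ± sigma
p5, p25, p75, p95 = np.percentile(data, [5, 25, 75, 95])
print(f"50% of the data fall between {p25:.2f} and {p75:.2f}")
print(f"90% of the data fall between {p5:.2f} and {p95:.2f}")
```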


Shorter social scientists: "Gaussian distribution sez wut?"


I'm seriously questioning some people's reading comprehension - he NEVER said STD is not useful! He's only saying the name "Standard Deviation" is badly chosen.


Read the blog post carefully:

> it is time to retire it from common use and replace it with the more effective one of mean deviation

> Standard deviation, STD, should be left to mathematicians, physicists and mathematical statisticians deriving limit theorems

> There is no scientific reason to use it in statistical investigations in the age of the computer, as it does more harm than good

He is saying it's not useful for real-world things and people are just confused. What you wrote is a reasonable view. What Taleb writes isn't.


The minimum uncertainty wave equation is ~ e^(-x^2) ergo the standard measure is in terms of x^2. QED.


at least he's leaving us physicists alone with it...


I've been a long-time fan of Dr. Nassim Taleb. The first book of his I read was the one about his time as a trader and how, in the Black Monday market crash, he made a killing, cleared his desk, and never had to work again.

There are those who dislike his ideas because they threaten their existing assumptions about probability and statistics. He argues that experts and the majority of people do not account for the unpredictable but significant impact a single event can have, which often shatters commonly held beliefs. For example, all swans were assumed to be white until black swans were discovered in Australia, and "too big to fail" multinational corporations like Lehman Brothers went bankrupt.

He's not anti-academic, but he is against teaching in mainstream academia that rests on naive assumptions tailored to serve those who profit most from limited quantitative measures: market callers, hedge funds selling complicated quantitative trading algorithms, and academics seeking fame and fortune by writing the most logical-looking quantitative papers without questioning any of the tools they use. It is this hypocrisy and laziness that he points out, and some deny it to the point of making ad hominem remarks against a man who simply observes these things and decides to write them up in an entertaining manner (otherwise nobody would give a shit, because the topic would be dry without layman's lingo).

Keep an open mind. A lot of what he says I find interesting, and it has influenced my thinking quite a bit; in any case it is no grounds for cracking jokes or ridicule. In fact, when I read some of the comments here, it's a bit shameful. We should be embracing new ideas in order to explore them, regardless of the explosive nature of the claim, because the black swan event is very real and is not captured or understood completely by our current set of statistical tools and methodologies, which rest on questionable assumptions about how the real world operates. For example, a 1/2500 chance is not what we think it means in the real world, because black swan events are more common than we think: a percentage probability does not fully reflect an event's frequency or its magnitude.

Note the fall of crime rates in the United States following the decision to legalize abortion. Economists and experts would come on television and offer all sorts of theories, little realizing it was a chain effect from a court ruling passed decades earlier, until two economists came out with a paper that was ridiculed because it suggested that "killing babies from poor neighbourhoods = lower crime rate", where most poor neighbourhoods are occupied by African Americans. The idea was earthshakingly controversial and is still denied to this day. When Galileo claimed the earth revolved around the sun, he was tried by the Inquisition and confined to house arrest. This is simply the nature of our world: in almost every part of life there exists a hierarchy that people simply do not question, either out of blind trust or fear of reprisal.


I think I agree with most of this, but now aren't we supposed to say that the drop in crime is due more to reduced childhood lead exposures than to abortion? Not just because it's a feel-good, humanistic explanation rather than a horrific racist eugenics explanation, but also because recent studies show a pretty strong lagged correlation...


four-day returns of stock x: (-.3, .3, -.3, .3) -> MAD = 0; four-day returns of stock y: (-.5, .5, -.5, .5) -> MAD = 0.


The A in MAD stands for 'absolute', so no, the MAD for those two stocks is .3 and .5 respectively - versus sample standard deviations of 0.35 and 0.58.


No, it's the median absolute deviation. Absolute deviation is the absolute difference between an average and a data point. So (-.3, .3, -.3, .3) -> MAD = 0.3; (-.5, .5, -.5, .5) -> MAD = 0.5.


You have calculated the mean. This is only the first step in calculating the MAD (mean absolute deviation). You then need to take the absolute value of the difference (deviation) of each datapoint from this mean - for your x you get (.3, .3, .3, .3) and for your y (.5, .5, .5, .5). Finally you take the mean of these absolute deviations to get MAD(x)=0.3 and MAD(y)=0.5.
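A quick sanity check of both numbers in this sub-thread (the 0.35 and 0.58 upthread are sample standard deviations, with the n-1 denominator):

```python
from statistics import mean, stdev

def mad(xs):
    # mean absolute deviation from the mean
    m = mean(xs)
    return sum(abs(v - m) for v in xs) / len(xs)

x = [-0.3, 0.3, -0.3, 0.3]
y = [-0.5, 0.5, -0.5, 0.5]

print(mad(x), mad(y))      # 0.3 and 0.5
print(stdev(x), stdev(y))  # ≈ 0.346 and 0.577 (sample std, n-1 denominator)
```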


nope. look at what the "A" in MAD stands for.


he's really lost the plot. :(



