This is an extremely flawed initial assumption. There is no requirement for chance to be centered around zero. Consider rolling a die: sometimes you'll get more than the mean, sometimes less, but you'll never roll a negative number. You can certainly win on chance in the long run; that's the foundation of casinos and insurance companies. It's hard to imagine a scenario where trial and error could possibly lead to knowledge regressing.
Consider randomly digging holes in the ground: after enough holes you will eventually strike gold, and you will never lose physical gold in the process. However, you may lose significant time, wealth, and effort that could have been better converted to gold. The optimal way to strike gold is not to dig more, shallower holes, but to learn enough geology to understand where gold is likely to be found and concentrate your prospecting there.
No experiment could ever possibly hurt scientific knowledge. People tinkering will certainly make occasional discoveries. In a brand new field with a lot of low hanging fruit, these discoveries will be numerous and the cost will be low. But in a developed field where people have a good idea where the remaining discoveries are likely to be found and the effort to conduct such experiments is substantial, targeted approaches become optimal. Reducing the unit cost of experiments is always nice, but is not generally feasible. This strategy of "convexity" is a very poor substitute in the real world for understanding.
In simple terms, he's just saying that it needs to be safe to take chances in order for it to be worthwhile to take chances. As you say, science is an example of a system where it's often safe to take chances, because you don't risk losing any knowledge from a failed experiment. (But that doesn't mean there are no costs! You can lose time and money. And I'll also point out that some experiments can be dangerous.)
In any search, whether you can find something interesting is going to depend at least partly on the landscape, so understanding the landscape better will improve the search process, along with your estimates of whether it's worth doing at all. Calling this property of a desirable landscape "convexity" doesn't, in itself, help you understand the landscape, but it doesn't seem wrong?
I’m not sure it is worth my time to read something that is badly written to provoke controversy, even if it does make good clickbait.
He is not "fundamentally" wrong here though. The parent comment misread what TFA says (as I replied above).
There's nothing stopping a work with a clickbait title or opening from being the most important thing you read all year.
Just as whether an author is a "bad man" in their personal life tells us nothing about the worth of their work, the ways an author or editor tries to attract viewers do not mean the actual content is also of bad quality.
I don't know enough about geology to comment intelligently on your digging-holes-in-the-ground point. However, from my own background in biology research, a lot of what he writes rings true, with some caveats.
Biological systems are highly complex, and a lot of reductionist basic research does seem to be driven by understanding along the lines of "I have a mental model of this subsystem; if I do X, then I expect Y." However, a lot is also discovered via his "convexity" principles (which I agree most laypeople would label as "trial and error"). Biologists often discover functions by randomly mutating billions or trillions of individual microbes, screening for interesting phenotypes, and then sequencing to discover the supposedly causal mutations.
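A toy Monte Carlo makes that screen-then-sequence logic concrete. All the numbers here (population size, genome size, the causal gene) are invented for illustration; real screens involve billions of cells and far messier genetics:

```python
import random

random.seed(0)

N_MICROBES = 100_000  # toy population; real screens use billions
N_GENES = 5_000       # hypothetical genome size
CAUSAL_GENE = 42      # the gene that (unknown to us) drives the phenotype

# Mutagenize: give each microbe one random mutation.
mutants = [random.randrange(N_GENES) for _ in range(N_MICROBES)]

# Screen: keep only the microbes showing the interesting phenotype.
hits = [gene for gene in mutants if gene == CAUSAL_GENE]

# Sequence the hits: every one implicates the same candidate gene.
print(len(hits), "hits, all in gene", set(hits))
```

No mental model of the subsystem is needed up front; the convexity comes from the screen being cheap per microbe while a confirmed hit is very valuable.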
Where the understanding approach really breaks down is in engineering systems -- which I believe requires an even higher level of understanding than the qualitative mental models bandied about in biology. We simply don't understand enough to develop most drugs with that sort of rational approach, so reducing the cost per attempt (while keeping all else equal, which is often overlooked) would be beneficial. Unfortunately, the industry appears to be heading in the other direction.
> No experiment could ever possibly hurt scientific knowledge.
Sure could. An experiment might yield a false result. You might get a false negative and abandon a promising discovery. You might get a false positive and waste more experiments on an impossible setup. It's all about opportunity cost.
But overall I couldn't follow Taleb's writing. It's not accessible to me anymore.
That was not an initial assumption but (presumably) the subject of the essay. It's not clearly stated in the text, though.
> you can certainly win on chance in the long run
No you can't; both casinos and insurance companies make choices that give them an asymmetry in gains, and thus they benefit from the randomness of events. Chance is by definition centered around the mean.
The question of scientific research directions is interesting, but I'm not sure that research is a random walk. There are biases and "hunches" that guide scientists.
It's similar to why betting $1 to make $1 million at million to one odds is a smart bet, but betting $1 million to make $1 at the equivalent odds isn't, unless you have an unlimited bankroll.
If they use the Kelly criterion they won't bust hard, but they'll still lose most of their bankroll.
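A quick sketch with the standard Kelly formula, f* = p − (1−p)/b, makes the asymmetry concrete. This is one reading of "equivalent odds" (the mirrored bet wins $1 with probability 999,999/1,000,000 and loses $1M otherwise); the numbers are illustrative:

```python
def kelly_fraction(p, b):
    """Kelly criterion: optimal fraction of bankroll to stake on a bet
    won with probability p at net odds b (you win b units per unit staked)."""
    return p - (1 - p) / b

# Bet $1 to win $1,000,000 at million-to-one odds:
f_small = kelly_fraction(p=1e-6, b=1_000_000)

# Bet $1,000,000 to win $1 with the odds mirrored:
f_large = kelly_fraction(p=1 - 1e-6, b=1e-6)

print(f_small)  # tiny but positive: stake only a sliver of the bankroll
print(f_large)  # negative: Kelly says never take this bet
```

The first bet has a positive Kelly fraction but a minuscule one, which is the "won't bust hard but won't stake much" point; the second comes out negative, i.e. unplayable at any bankroll fraction.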
what about one with a falsified result?
fwiw I think convexity and understanding are relatively orthogonal, but how could one employ the former without the latter? However the author's position seems to be more that gaming systems works better than exploring their contexts. Sometimes you might make more money in less time, but the money is all you'll get out of it. In practice, maybe understanding and convex payoff functions are both useful at different scales.
From Oxford via https://google.com/search?q=falsify :
1. alter (information, a document, or evidence) so as to mislead. "a laboratory which was alleged to have falsified test results"
2. prove (a statement or theory) to be false. "the hypothesis is falsified by the evidence"
Nonsense. Falsification of data happens all the time. But more importantly, falsification as applied to hypotheses and falsification as applied to data are two completely different concepts.
Falsification in the sense "we tried this, and got unexpected results, disconfirming our hypothesis" is something you do to hypotheses. This is Popperian falsification.
In the sense of what happens to data, falsification is "we tried this, and got data that disconfirmed our hypothesis. But instead of recording that data, we recorded spurious data which confirms our hypothesis". (Or, of course, "we didn't try anything, but here are some numbers that we feel reflect what would have happened if we had".) This is falsification in the same sense you'd see it applied to, say, accounting records.
It has a bit of a meta role, I suppose - a system must be robust enough, with replication, that it shouldn't matter. Knowing bad actors are about can promote better verification practices than blind trust would.
All beside the point: his observations are not invalid, but his conclusions are.
“Males have more teeth than females in the case of men, sheep, goats, and swine; in the case of other animals _observations have not yet been made_.”
Casinos and insurance companies don't win on "chance". There is an expected value of the events in consideration here, and these firms price their services so that their return is higher than that expected value; very little is down to chance. It's as if we played a die-roll game where I paid you the value on the die each time you rolled it, and you paid me >3.5 units per roll to take your turn. That's the house edge.
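The arithmetic of that die-roll game is short enough to write out in exact fractions, which makes it clear the edge is structural rather than luck (the 3.6-unit stake is just an example of "more than 3.5"):

```python
from fractions import Fraction

# Expected value of one roll of a fair die.
ev_roll = sum(Fraction(face, 6) for face in range(1, 7))

# If the roller pays 3.6 units per turn, the house keeps the difference.
stake = Fraction(36, 10)
edge = stake - ev_roll

print(ev_roll)  # 7/2, i.e. 3.5
print(edge)     # 1/10 of a unit per roll, independent of any single outcome
```

Any single roll can pay out 6 against a 3.6 stake; the house only needs the average, which it controls by pricing.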
That's the convexity property Taleb argues must lie underneath research though.
So, you're saying the same thing.
What he means with the first paragraph is not that chance can't lead to gains -- it's that chance alone cannot lead to gains. There should be an additional property, and that's what the article is about.
Consider your counter-argument: "You can certainly win on chance in the long run, that's the foundation of casinos".
And yet, that's not the foundation of casinos. That's what the article speaks against. For if chance plus the long run alone were the foundation, it would work for the players too. But players face ruin in the long run (unless they have an infinite supply of money), while casinos do not.
The foundation of casinos is chance + resilience to chance events (a casino doesn't go under from this or that player winning) -- e.g. the exact convexity the author talks about.
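A small gambler's-ruin simulation illustrates that asymmetry. The parameters are invented, and the 47% win probability exaggerates a real casino's edge so the toy runs quickly:

```python
import random

random.seed(1)

def final_bankroll(bankroll, p_win=0.47, rounds=100_000):
    """Bet 1 unit per round against a house edge; stop at ruin or after `rounds`.
    The 47% win probability is exaggerated versus a real casino, for speed."""
    for _ in range(rounds):
        if bankroll == 0:
            break
        bankroll += 1 if random.random() < p_win else -1
    return bankroll

# Finite players face ruin in the long run; the house, which is not
# similarly constrained, just compounds its small edge.
busts = sum(final_bankroll(100) == 0 for _ in range(200))
print(busts, "of 200 players starting with 100 units went bust")
```

Each individual player sometimes wins for a long stretch, but with a finite bankroll against an effectively unbounded house, the absorbing barrier at zero does the rest.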
There are some cases where that model can itself seem pretty fragile (like estimating the opportunity cost of your hole digging), but others where it might feel reasonable. Pick your poison!
The thesis of the rest of the article contradicts the initial premise.
Your scenario of randomly digging holes is a perfect example of what the author is explaining: “Critically, convex payoffs benefit from uncertainty and disorder.”
Yes, the trial-and-error process and chance are important, but the convexity of the payoff function is exactly what makes them so rewarding.
To your mining example, that's where prospecting comes in. Use your knowledge to make checking candidate locations cheaper (increasing convexity) - knowledge of geology to approximate likelihood, improvements to technology to determine if gold exists, etc. Then go check as many candidate mines as possible. It's much more cost effective to have a lot of shallow mines than it is to extract every ounce all the way down to the crust from a single mine.
A lot of arguments are built on abstractions like "competition" and "chance", and having a short list of common exceptions to those heuristics on hand is pretty useful. Now when discussing federalism in the U.S., I'll not only wonder whether there are free-rider problems or economies of scale missed out on, I'll also think about whether local decisions are effectively "locked in" forever.
Maybe The Great Leap Forward when Mao killed a bunch of intellectuals?
If this were modified to “chance alone” then it might be correct. The way it's worded now makes it sound like chance cannot contribute to long-term gains, which is clearly false. Evolution depends on chance (generation of diversity) followed by a selection process, and that clearly works pretty well.
The point the author is trying to make is that the structure of the payoff function matters a lot. Specifically, you need it to be convex for a trial-and-error (or random walk) process to become very rewarding.
For example, think about fuzzing C programs, which has proven very productive for software security. But why is it so productive? Essentially because a bug in a C program can have quite significant implications (e.g. remote code execution), so its payoff function is extremely convex. If that property were absent, fuzzing just wouldn't be so rewarding. (This also explains why fuzzing is used less for programs written in memory-safe languages.)
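The underlying mechanism is Jensen's inequality: under a convex payoff, more variance at the same mean yields a higher expected payoff. Here's a toy simulation; the cubic payoff and the two uniform distributions are invented for illustration, not a model of real fuzzing:

```python
import random

random.seed(2)

def payoff(severity):
    # Toy convex payoff: impact grows faster than linearly with severity
    # (think: crash < memory corruption < remote code execution).
    return severity ** 3

def mean_payoff(draws):
    return sum(payoff(x) for x in draws) / len(draws)

# Two strategies that find bugs with the SAME average severity (0.5),
# but with different spread in what they turn up:
low_var  = [random.uniform(0.4, 0.6) for _ in range(100_000)]
high_var = [random.uniform(0.0, 1.0) for _ in range(100_000)]

print(mean_payoff(low_var))   # close to 0.13
print(mean_payoff(high_var))  # close to 0.25: variance pays when payoffs are convex
```

With a linear payoff the two strategies would tie; convexity is what makes the high-variance search worth running.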
The author believes this idea of "convexity" can explain a broad range of phenomena in the human world. I'm not so sure about its applicability, though.
And he's saying you can't just keep the model as is, but you need to make certain adjustments to the incentive structures.
It's becoming blindingly obvious that this is the case in psychology at least, with the replication crisis and the "publish or perish" mentality. We can see these things playing out.
Does the economics of the scientific machine need to be revisited and tweaked? I'd say there is a good conversation to be had about that. I can already see a little evidence of a minor self-correction, but given economics drives absolutely everything then yeah I'd say it's likely there are some changes that would produce different results that might be better than what the current system is producing. Though it's not easy to compute ahead of time whether changes themselves would have unintended consequences.
He probably needs to spend more time trying to explain things to 5 year olds to offset his "I am so smrt" persona.
So, I expected some more general point about theoretical understanding of a thing being distinct from the actual computation of that thing, and that theoretical understanding does not necessarily lead to optimal outcomes.
I wish he’d made that point instead.
> By definition chance cannot lead to long term gains (it would no longer be chance)
Heh. The whole universe might be made by "chance". It is, in fact, quite possible that the total energy of the universe is 0. Our existence is a fluctuation.
Define long term gains. Since infinity is out of our possible reach, it is possible (though unlikely) to make long-term gains just by chance. Especially if you have a large audience: some of them will get lucky.
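That "large audience" effect is easy to quantify with a toy example (the audience size and number of calls are invented):

```python
import random

random.seed(3)

PUNDITS = 10_000  # hypothetical audience of pure coin-flippers
CALLS = 10        # each makes 10 independent yes/no predictions

# Count the pundits whose every prediction came true by luck alone.
perfect = sum(
    all(random.random() < 0.5 for _ in range(CALLS))
    for _ in range(PUNDITS)
)
print(perfect, "pundits called all 10 right; about 10_000 / 2**10 expected")
```

On average about ten flawless "experts" emerge from pure noise, and those are the only ones anyone hears from afterwards.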
Isn't that Y Combinator in a nutshell?
Hey guys! Look at all the
big, big words I can use!
Don’t I sound smart???
Why do they need to sound smart? Is it really because they know they’ve got nothing to say? Is this an SAT reading comprehension test?
You could sum up the sentiment with an analogy to paraphrase the concept: “defensive programming is no replacement for accomplished programming skill” (to borrow a concept comparable to investing)
Big words, small mind.
Could you please not create accounts to break the guidelines with?
I might be taking bait here. I guess I take umbrage at the post I am replying to because it has insults in it, which is not so pleasant. Maybe it's just the hypocrisy, as its language is also verbose. To paraphrase:
> The writing style of this article strikes a tone that seems overly eager to place an eloquent vocabulary on display.
"This article uses too many big words"
> You could sum up the sentiment with an analogy to paraphrase the concept: “defensive programming is no replacement for accomplished programming skill” (to borrow a concept comparable to investing)
"You could sum up his article as: 'following best practices won't replace programming skill'"
You might argue that paraphrasing like I have detracts meaning but I'm sure the original author of the article would say that too.
Correct. We live in a world where sentient intelligence persists upon a substrate of inert, Newtonian determinism. Life itself stands bound by gravity, upon a rock floating in space. So where are the investment tactics?
Meanwhile, misogyny as a personality flaw of the author would not invalidate the merit of any ideas expressed into a vacuum devoid of misogyny. As adults, individuals should be capable of reading and digesting a non-inflammatory article, if its ideas have merit and it doesn't seek emotional agitation.
You should be able to read an article, and ignore the name of the author. Ask yourself: If it were the same article written under a pseudonym, could it still be found valuable?
Are there actually people who argue that brute trial and error with nothing else in play can yield structured knowledge? Note that biological evolution doesn't count because it is not mere trial and error.
Edit: by which I mean, what definitively demonstrates his misogyny as opposed to a gender neutral disdain for historians in general? He's definitely the abrasive ass I described earlier, but misogynist doesn't seem warranted, and it's a label that's thrown around far too freely these days.
I acknowledge that there is a chance I am remembering it incorrectly. Lots of people who joined in his appalling baiting were certainly openly misogynist in their language, and maybe I am accusing him of being guilty-by-association. However perhaps joint-enterprise is a fair stick to beat him with, he certainly and very deliberately tries to get his fan club to wade in on social media. Maybe people who do this bear some responsibility for what their followers inevitably say?
The further complication is that Dame Professor Mary Beard is an outspoken feminist, and this has caused her to receive more than her fair share of misogynist abuse both on social and mainstream media (incidentally, her book Women and Power is very readable, and I see it is now in paperback). Therefore it could be that I was showing my prejudice in assuming that a man attacking Mary Beard was automatically misogynist. Then I think to myself that a man who has no credentials at all in that field attacking a renowned person in that field (from one of the world's great institutions) in such a vile way must have some special reason to do so.
I guess the final thing is that he did indeed accuse her of having "used feminist cover" (damn, I said I wasn't going to look up quotes). What he seemed to be saying is that she was trying to be exempt from criticism because she was a feminist. I would actually say that that was a very misogynist thing to say. It reminds me of the people who try to perpetuate racial slurs and say things like, 'you can't even criticise them for (insert racist trope) for fear of being called racist'.
What is worse, I came out thinking that he was wrong, and badly wrong at that, but was trying to bully his view to the forefront. But as you can see from reading what I wrote, I am not a scholar of the humanities at all. It is still my personal opinion that he is a misogynist bully, and his ego is so big that it compromises his objectivity.
> Did Pagliucci simply not quote what you're referring to?
Before I finally posted I read the article you linked, I think you have missed this bit
"It isn’t because of the not-so-subtle sexist undertone (not just of Taleb’s, but of many of his Twitter-based supporters)"
So indeed Pagliucci does mention it.
BTW, I predicted the financial crisis of 2008 too. I say that as it seems to be Taleb's main claim to fame. I was wrong about what the trigger would be, but what I identified as a potential cause certainly helped it spread (mind you I could have changed that around when I wrote my book). But then so did my dad, and a plumber I remember talking to at the time. I heard mainstream radio shows about it. In fact it only seemed to be people who were in the financial game who couldn't see it. So what? We could just have easily been wrong.
Taleb is unsurprisingly obnoxious and inflammatory, and you're right that his ego is huge. I don't see anything specifically misogynist in that thread (although maybe that last one crossed the line; hard to say without context), but if you're right that he encourages his followers in that manner, that's skirting the line pretty damn closely.
At the very least, this is an interesting case study in historical scholarship and philosophy of language for sure (does Beard calling it "accurate" entail that it's "representative" and thus "typical"?). I would have to read a lot more to take a side, but I probably won't. Thanks for the info though!