A good case in point is small biotech companies focused on research.
Technically they trade on the capital markets as common stock (and, if there are enough adults in the room, without debt), but in reality the securities of such companies behave exactly like very convex options: whatever drugs the company is currently working on will either work or not. If it clicks, great: you get FDA approval and the stock is suddenly worth 5x or maybe even 50x more on the basis of patent royalties or outright drug sales. If not, the failed research operation is either recapitalized by patient investors or decomposed by liquidators, while the "real world resources" (researchers and equipment) find a new thing to do.
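To make the convexity concrete, here is a minimal back-of-the-envelope sketch with made-up numbers (a 10% chance of approval, a 20x payoff, the stake wiped out otherwise); even at a low hit rate the expected value comes out positive:

    # Toy numbers, not from the comment above: asymmetric payoff of a debt-free
    # research biotech viewed as a convex option on its drug pipeline.
    p_success = 0.10   # assumed probability the lead drug clears trials
    upside    = 20.0   # assumed gain, as a multiple of the stake, on approval
    downside  = -1.0   # worst case: the equity goes to roughly zero

    expected_return = p_success * upside + (1 - p_success) * downside
    print(expected_return)   # +1.1 stakes on average, despite a 90% failure rate

The downside is capped at the stake while the upside is a large multiple, which is the convexity being described.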
Dispersing the operation if things didn't pan out is actually the wrong approach and probably partly to blame for the poor results in Pharma these days. If you disband a research operation you lose all the informal knowledge, and if you sell off the assets separately you have to reassemble the whole instrument park to start another company. In the past, when there were industrial research labs, the scientists and equipment would stay and be reassigned to another project.
Yeah, nice example.
I guess the interesting aspect in that case is more that the option is way out of the money and almost binary, not so much its Gamma?
Like, you still need to have some kind of idea about the probabilities of the outcomes to see whether the option isn't crazy overpriced?
This is similar to the "lottery ticket" critique addressed at the end ... you wouldn't just invest in any biotech firm?
Taleb makes great points here. The only counterargument I am aware of is Nick Bostrom's "Vulnerable World" hypothesis[1]. Taleb assumes that negative outcomes may simply be abandoned at zero cost, and therefore reducing research costs (options contract prices) is the sole necessary step for increasing optionality. The VWH claims that certain technological advances, once they're out of Pandora's box, are impossible to abandon or ignore.
"What allows us to map a research funding and investment methodology is a collection of mathematical properties that we have known heuristically since at least the 1700s and explicitly since around 1900 (with the results of Johan Jensen and Louis Bachelier)."
Is this the earliest known document written by GPT-3?
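To be fair to that sentence, the mathematical property it gestures at is a real one: Jensen's inequality, which says that for a convex payoff f, E[f(X)] >= f(E[X]), i.e. uncertainty helps you. A quick numeric check with my own toy numbers (an option-like payoff on a noisy variable):

    # Hypothetical illustration of Jensen's inequality, not taken from the essay.
    import random

    random.seed(0)

    def f(x):
        return max(x - 1.0, 0.0)                 # convex, option-like payoff

    xs = [random.gauss(1.0, 0.5) for _ in range(100_000)]

    mean_of_f = sum(f(x) for x in xs) / len(xs)  # E[f(X)]
    f_of_mean = f(sum(xs) / len(xs))             # f(E[X])
    print(mean_of_f, f_of_mean)                  # ~0.2 vs ~0.0: the convex payoff gains from the noise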
Because Taleb writes not to be understood (which would invite scrutiny), it's hard to know exactly what he's saying. But take 2., for instance: it doesn't explain how to make this actionable.
If I understand it correctly, Paul Graham famously wrote about this a few months earlier in plain English - http://www.paulgraham.com/swan.html (ironically, using Taleb's 'Swan'). It's not great at answers either, but at least it's understandable.
How do you act on 3.? It's just a clever way to say you might have to pivot, I think?
Things have costs. Being open to pivoting has a cost. Of course no-cost pivoting is a no-brainer, like the numberless graphs. But in the real world, with costs, what do you do?
It's an essay against the common pattern of "architecting expensive highways to the destination" (teleological) as opposed to "cheap meandering towards payoffs" (convexity, optionality).
> Things have costs. Being open to pivoting has a cost. Of course no-cost pivoting is a no-brainer, like the numberless graphs. But in the real world, with costs, what do you do?
The first step is to avoid voluntary teleological costs ("I know where we're going and how we're going to get there, and here's the itemized budget for the next 3 years").
Reading between the lines, it sounds like what you should do is make it as easy as possible to pivot early in your research process, lowering the temperature and increasing the activation energy over time as you close in on the black swan. Concretely:
* Socialize that you plan to reevaluate frequently and that you won't get too attached to any one idea
* Don't spend time automating while you're just testing ideas out
* When you do automate, make sure the automation is there to drive the cost of switching to a new idea as close to 0 as possible (a toy sketch of why that cost matters follows below).
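Here is a rough toy model of that last point (all numbers are mine and purely illustrative): each idea is a cheap probe, failures are abandoned, and the only thing varied is the price of pivoting to the next idea.

    # Hypothetical sketch: how the cost of pivoting eats a trial-and-error strategy.
    import random

    def run_trials(pivot_cost, n_ideas=10, p_hit=0.1, payoff=50.0, seed=None):
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_ideas):
            total -= 1.0                  # cheap probe of one idea
            if rng.random() < p_hit:
                return total + payoff     # found the convex payoff, stop
            total -= pivot_cost           # price of abandoning it and re-tooling
        return total                      # every idea failed

    runs = 100_000
    for cost in (0.0, 2.0, 5.0):
        avg = sum(run_trials(cost, seed=s) for s in range(runs)) / runs
        print(f"pivot cost {cost}: average outcome {avg:+.1f}")

With a pivot cost of zero the meandering strategy keeps most of its expected payoff; as the cost grows it eats the optionality, which is why driving that cost toward zero is the actionable part.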
The first few paragraphs seem ignorant, almost designed to drive away the impatient; but the following paragraphs demonstrate they lead to deep results.
A key result for current society is that the system by which grants are apportioned by science foundations is absolutely counterproductive. The only mark in their favor is that they do issue grants, in smallish amounts to a large number of recipients. Demanding up front the expected outcome is the most harmful feature; they should instead favor the expectation of surprising results from new and poorly-understood phenomena.
The failure of grants committees to foster development of mRNA vaccines is telling. That a for-profit company proved able to develop the technology does not mean that for-profit companies are good at research. Rather, it means that the bar for improving on the current system is very low.
>The failure of grants committees to foster development of mRNA vaccines is telling. That a for-profit company proved able to develop the technology does not mean that for-profit companies are good at research.
Well, could it be that the actual mRNA vaccine development wasn't much about fundamental science, and that the level of knowledge there had already reached the "ready to be productized" state, and thus the grant committees were right?
This reads like someone who doesn't understand how evolution works. Even with a badly behaved utility function, gains can be made with a random process because less fit offspring die off, and more fit offspring survive and reproduce. Convexity is not required.
It reads like someone who understands perfectly how evolution works. The point of anti-fragility is to make the losses non-fatal, and wins by proxy can be convex. The idea is you can keep playing the game. You might be looking at it from a species perspective vs an individual.
Exactly. And there are ways to make the system have far less convexity, for example by dampening the diversity of the gene pool. This is exactly the case with bananas: all of the world's supply is one virus or bacterium away from being completely wrecked.
I believe Taleb sees the death of less fit offspring as a small pain vs. the increased survival of fitter offspring - a big gain, because it compounds across generations.
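A tiny selection sketch (parameters entirely my own, purely illustrative) makes that asymmetry visible: the loss each generation is capped at the culled half, while the fitness gained by the survivors is inherited and compounds.

    # Hypothetical toy model of selection compounding across generations.
    import random

    random.seed(2)
    pop = [1.0] * 100                       # starting fitness values

    for generation in range(20):
        pop.sort(reverse=True)
        survivors = pop[:50]                # bounded loss: the less fit half is culled
        # each survivor leaves two offspring carrying a small random mutation
        pop = [f * (1 + random.gauss(0.0, 0.05)) for f in survivors for _ in range(2)]
        print(generation, sum(pop) / len(pop))   # mean fitness ratchets upward by a few % per generation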
Strangely, it also means that with the big brain and the low number of offspring per pregnancy, human evolution itself turned away from high-mutation, high-trial-count convex exploration toward a more conservative exploitation of existing benefits.
Are we already that close to the best we genetically could be?
It is not as strange when you see information move from sexual/asexual genetic reproduction to markings on clay tablets, and to the mental diversity of human knowledge propagating with faster, more flexible fitness functions than the foundational survival/extinction binary.
DNA is an arbitrary (albeit important) replicator and not at all the only form of physical memory.
> human evolution itself turned away from high-mutation, high-trial-count convex exploration toward a more conservative exploitation of existing benefits.
> Are we already that close to the best we genetically could be?
Well, nature as a whole hasn't exactly halted the high-mutation, high-N strategy elsewhere in favor of humans (COVID-19?). It's just that cultural and technological evolution has outpaced it for humans, so nature was "right" to evolve the capacity for this additional vector of evolution.
It'd be like if we ever develop AGI: the silicon we initially run it on won't be ideal, and whatever algorithmic evolution the AI experiences will far outpace our technological progress. If the AI improves its physical computation platform, that would be like humans editing their genome.
Lain, rather; "laid" is the past participle of "lay", which has the same form as the simple past of "lie", because English is a logical, consistent language. (And this is why "lay vs. lie" is a staple of word-choice mnemonic lists.)
The argument made is seemingly harmless, but historically has been proven incompatible with morality.
For example, one could argue that those who are poor or uneducated are that way because of nature, their IQ, or their race or ethnicity, but we know this to be false and immoral.