
The financial cost of not knowing things - andyferris
https://medium.com/@andyferris/the-uncertainty-tax-the-financial-cost-of-not-knowing-things-2de05a5d21f6
======
danieltillett
Just to be a pedant, the FDA doesn't consider the value of a drug to society,
just whether it is safe and effective. It is up to the pharmaceutical company
to decide if it is worth developing and selling a new drug.

One thing the FDA doesn't consider that I think they should is the cost of not
approving a drug. Safety and effectiveness are not black and white when faced
with uncertainty.

~~~
andyferris
I agree, that's totally true. That "exercise" was just my thought experiment
about what might be for "the greater good" (whatever that might mean,
precisely).

------
tomxor
> However, the perverse situation that can (and does!) occur is when people
> _don’t know what they don’t know_. In being naive, in not doing the above
> analysis, in making a quick decision, and in choosing to “use your gut” and
> make a decision, you limit your options. You lose the opportunity to
> address weaknesses in your knowledge.

As an interesting aside: "Not knowing what you don't know" is known as second
order ignorance or the Dunning-Kruger effect [1]. The "orders of ignorance"
beyond this one stage are quite interesting when analysing how we learn. Andy
Hunt describes this in great detail in "Pragmatic Thinking and Learning".

I have always suspected it's possible to reduce your propensity for 2nd order
ignorance through more varied learning. For financial decision making I can
imagine this being a key meta-skill: you must identify unknowns quickly and
accurately, and be able to intuit their potential impact... as each decision
can appear to have very little in common with previously encountered ones,
such that knowledge accumulation alone does not result in significant
accumulation of skill.

[1]
[https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect](https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect)

~~~
joker3
That's not what the actual Dunning-Kruger effect is. See
[https://www.talyarkoni.org/blog/2010/07/07/what-the-dunning-kruger-effect-is-and-isnt/](https://www.talyarkoni.org/blog/2010/07/07/what-the-dunning-kruger-effect-is-and-isnt/)
for an explanation of why this is wrong and what
Dunning and Kruger actually found.

~~~
tomxor
I understand the difference, but I should have framed it better: it is not 2nd
order ignorance itself, but an extreme expression of it.

I suppose the point is that we all have 2nd order ignorance (just pick a
context until you find it), but we act inappropriately upon it to varying
degrees depending on... what might as well be called wisdom; you could call it
the anti-Dunning-Kruger effect, I suppose.

------
cm2012
This is why I think colleges should pay people who work in industry to give
students the scoop on things regularly. Unless you have close family that
works in a business, there's so much you don't know about the careers you want
to join when you're in college or just graduated.

~~~
chatmasta
Isn’t that exactly the point of internships?

~~~
TeMPOraL
Internships seem to be structured as yet another hoop to jump through on the
path to a career, and by that time it's already expected of you to know what
you want to do.

~~~
jaclaz
Actually - as it is often used - an internship is only a way to pay less for
the same (or sometimes more) not-fully-qualified job.

Internship (what was once called apprenticeship) made (and makes) a lot of
sense where the employer supplies knowledge (arts and crafts being a typical
example). Nowadays, to actually be considered for an internship you need a
full, recognized cycle of university-level study (which is the thing that _in
theory_ should have already provided you with all the knowledge - bar the
experience - needed for your profession).

One thing is having a (lower paid but with "full dignity") entry-level job,
and another is having "stages", "internships" and similar, where very often
young people are not actually taught anything and are just a replacement for
generic secretarial or similar activities.

~~~
leetcrew
> Actually - as it is often used - an internship is only a way to pay less
> for the same (or sometimes more) not-fully-qualified job.

are you speaking from experience here? i have had several internships, and at
each one they let me fix a couple trivial bugs in production before giving me
a toy project they have no intention of actually using. my peers have had very
similar experiences. i would actually have loved to be given a full engineer's
assignment.

~~~
jaclaz
>are you speaking from experience here?

No, not really personal experience, due to age I am way past any internship or
apprenticeship possible offer, but I have (much younger than me) cousins and
sons of friends that went into the stuff (a couple still are).

Also, you are speaking about a very definite field (I presume software
engineer or similar), while I was more broadly speaking, an internship at -
say - a legal firm or - still say - an accounting firm or other "generic"
office, where internships usually (not always) revolve around data entering,
making photocopies and similar.

~~~
leetcrew
ah, i assumed we were talking about SWE type internships. i could certainly
see interns doing entry-level work in other fields.

------
nilanp
Hey Andy — love this. But really you are developing a framework for figuring
out how much time to invest in research.

A couple of questions, prompted by your blog:

In what situations is there no value in research? Corollary — are there ways
you can systematically reduce your risk by increasing your gut feel for
probabilities without research? Final one — you make a great point on unknown
unknowns. Apart from systematically working through all possible states and
thinking through various environmental changes, are there any systematic
approaches to reducing this risk?

In working with startups, I would summarise an alternative approach that
covers all three with the following pieces of accepted startup wisdom. Launch
early — to validate in the market with real people. Talk to customers — to
build an empathy, or gut feel, for the relative priorities of stuff, and what
really matters. I think these two principles go some way to solving the
problems you are highlighting through a different approach.

I'm sharing as I'm sure you have considered this approach — and have a reason
for discarding it.

I wrote a bit about it here:
[https://medium.com/@nilanp/building-conviction-the-art-of-product-management-c65d481d653c](https://medium.com/@nilanp/building-conviction-the-art-of-product-management-c65d481d653c)

N

~~~
andyferris
Hi Nilan - thanks for your words. It’s true that, as someone who is ultimately
employed as a researcher in some role, I do wonder (a) when research is
valuable and (b) whether the people controlling the decisions of when/how much
research to undertake can justify this decision quantitatively. (But rather
than think of “research” I’d prefer to back that out into a more fundamental
concept like “information”.)

I think your question 1 is partially answered by the inequality — it provides
an upper bound for how much utility you could gain from the research. More
practically, perhaps a decision maker could use gut feeling to break it down
into quartiles — outcomes which are worst case (0th quartile), below average
(1st quartile), average (2nd quartile), above average (3rd quartile) and best
case (4th quartile). If the estimated cost of research is much less than
ROI(3rd quartile) minus ROI(1st quartile), then the research seems worthwhile
even for “typical” outcomes. If the estimated cost of research is greater than
ROI(4th quartile) minus ROI(0th quartile) then the research is obviously not
worthwhile (that’s the inequality).
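
The quartile check described above could be sketched as a quick helper (a
minimal sketch; the function name and the ROI figures in the example call are
hypothetical placeholders, not from the article):

```python
def research_decision(roi_quartiles, research_cost):
    # roi_quartiles: estimated ROI at the five quartile boundaries,
    # [Q0 worst case, Q1 below average, Q2 average, Q3 above average, Q4 best case]
    q0, q1, q2, q3, q4 = roi_quartiles
    if research_cost > q4 - q0:
        # costs more than even perfect information could be worth (the inequality)
        return "not worthwhile"
    if research_cost < q3 - q1:
        # cheap relative to the spread of typical outcomes
        return "worthwhile"
    return "judgment call"

print(research_decision([-100_000, 0, 50_000, 120_000, 300_000], 30_000))
# -> worthwhile (30,000 is well under ROI(Q3) - ROI(Q1) = 120,000)
```

The middle band, where the cost sits between the two bounds, is exactly where
gut feeling still has to carry the decision.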

I think your questions 2 and 3 are the questions I would also like to see
answers to :) I think that given decision makers come from a variety of (often
non-mathematical) backgrounds, tools should be simple and easy to
use/remember, which is why I’m focusing on using things like “worst case
scenario” and “above average scenario” and so on. That way we get to combine
quantitative analysis and gut feeling to get good outcomes from typical
decision makers.

And yes — I completely agree that the agile “release often, get feedback
early” and the startup “move fast and break things” philosophies are designed
for revealing information quickly so that you can make good decisions as early
as possible (thus also maximizing the utility of that information). As a
general work pattern, I put this under a kind-of “don’t be stupid” mentality —
even if this maybe wasn’t obvious 25 years ago. I think the topic of the
article is to be able to address specific, large decisions (like exercise 2).

------
ikeboy
1. You should be able to sell the camera for about $1500 if you don't like
it, therefore the cost of the wrong decision is about $500, or $100 in
expectation. Spend no more than 4 hours.

2. Regardless of the answer, worst case you'd be spending $250k this year to
make 500k over the next few years, which is a terrific return on capital. Buy
it, information is useless here.

3. Studies showing 95% confidence are wrong more than 50% of the time. The
answer is let private companies decide if they think the odds are good enough
to conduct expensive trials.

4. The current ROI for pharma companies is below cost of capital and will be
negative in 2 years. It's not a great business anymore.
[https://endpts.com/pharmas-broken-business-model-an-industry-on-the-brink-of-terminal-decline/](https://endpts.com/pharmas-broken-business-model-an-industry-on-the-brink-of-terminal-decline/)

I'm not sure if these are the answers the author is expecting, but they seem
to be the correct ones to me.

Edit: also, the cost of a drug is mostly development. Production is dirt cheap
once it's been developed. The model here of a cheaper drug is just wrong. They
might be cheaper, which would force A's cost down, etc., but that has nothing
to do with the cost of production.
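
The arithmetic behind answer 1 works out as follows (a sketch; the $1500
resale value is the estimate given above):

```python
# Answer 1: the downside is the purchase price minus the resale value,
# not the full price of the camera.
price = 2000
resale = 1500            # estimated resale value if you don't like it
p_dislike = 0.20         # chance the camera turns out useless to you
hourly = 25              # value of spare time, $/hour

loss_if_wrong = price - resale               # 500
expected_loss = p_dislike * loss_if_wrong    # 100
max_research_hours = expected_loss / hourly
print(max_research_hours)  # 4.0
```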

~~~
imtringued
>Buy it, information is useless here.

Information isn't useless in this example. For some unexplained reason you
have already obtained the information at no cost, which doesn't make any
sense.

The real scenario is: cost overrun of $250k to $1 million over x months,
profit increase between $50k and $100k per year. How much are you willing to
spend to obtain 100% certainty?

~~~
ikeboy
Well then you need to estimate the distribution of those results. If it's
uniform between those numbers then it's an expected loss, and whether it's
worth spending a significant amount of money on information depends on whether
the company can afford it. If, say, the value of information was 200k in
expectation, that doesn't imply that the company should spend anywhere near
that amount on information, because they should be somewhat risk averse if
they can't spread the risk over many such decisions.
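
Under the uniform assumption, the expected values work out like this (a
sketch, using the ranges from the hypothetical scenario above):

```python
# Expected values under a uniform distribution over the stated ranges:
# cost overrun $250k-$1M, profit increase $50k-$100k per year.
cost_low, cost_high = 250_000, 1_000_000
profit_low, profit_high = 50_000, 100_000

expected_cost = (cost_low + cost_high) / 2              # 625,000
expected_annual_profit = (profit_low + profit_high) / 2  # 75,000 per year

years_to_break_even = expected_cost / expected_annual_profit
print(round(years_to_break_even, 1))  # ~8.3 years, before any discounting
```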

------
jimnotgym
I thought this was rather obvious. Business is all about risk. Every step in
business is a balance of risk vs potential reward.

That is why we pay so much for insurance. That is why we pay so much for
hedging. That's why we do risk assessments and keep risk registers. All the
machine-learning age adds is the potential for better identifying
opportunities and for measuring past performance more accurately, in order
that we may extrapolate better. It isn't going to answer the big question that
CEOs are paid for, and that is "how much risk am I willing to take, and what
potential reward would I need to make that worthwhile?"

------
3pt14159
Look no further than Bitcoin. Without knowing a combination of cyber security,
computer science, economics, law, and incentive modelling, Bitcoin could have
been the thing that cost you tens of thousands of dollars, or it could have
made you hundreds of thousands or millions.

It's so obvious that information and understanding are directly tied to
wealth. The growing Gini coefficient within developed countries is partially
attributable to the information superhighway that's enabling the intelligent
to hyper-charge their mental models of the world, with a just-in-time
information delivery system to boot.

The whole system is speeding up for the top 5%. Focussing on the _cost_
shrouds the true takeaway. The money is all on the growth side and the
intellectually impoverished are getting left behind.

~~~
foxhop
Bitcoin is dumb.

------
bcaa7f3a8bbc
> 1. Both models I’m interested in cost $2000. I’m worried that there’s a 20%
> chance I’ll end up with a camera that I never use because I don’t like it,
> and therefore have a $2000 device which is essentially useless to me. I
> value my spare time at roughly $25/hour. On financial reasons alone, how
> much time could I reasonably spend on doing research?

> ~16 hours. Because 20% × $2000 / ($25/hour).

I'm not sure if I have understood. I think in this example the expected loss
is 0 × 80% + (−$2000) × 20% = −$400, thus the answer ($400 / $25/hour = 16
hours). Is my understanding correct?

~~~
gwern
He's not being clear about what he's talking about, and is mish-mashing stuff
from Bayesian decision theory
([https://www.reddit.com/r/DecisionTheory/](https://www.reddit.com/r/DecisionTheory/)).
The calculation he seems to be trying to do there is 'Expected Value of
Perfect Information' (EVPI): the value of omniscience. Which is an upper
bound, not an exact amount. He is also unclear about the choice involved (is
he choosing between buying a camera and not buying a camera, or buying camera
A versus buying camera B? If camera A fails, why expect camera B to succeed?
Couldn't they both fail?)

If you interpret it as buying a camera or not buying a camera, with a 20% risk
of the camera being useless, then the EVPI is 20% * $2000 = $400: you would be
willing to pay up to $400 for certainty about whether the camera would be good
or not. The EVPI serves as an upper bound on more realistic Value of
Information quantities like the Expected Value of Sample Information (EVSI;
the value from a reduction of posterior uncertainty by a sample of data, eg a
small survey). If, for example, there were some piece of information you could
buy for <$100 which reduced the risk to 10%, you would want to buy it; if you
could do that by doing research for <4 hours and you value your time at
$25/hour, then it would be profitable to do that research. Similarly, if you
could somehow reach certainty, you would need to do so in under 16 hours (not
exactly 16 hours!)
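
A quick sketch of that EVPI arithmetic:

```python
# EVPI for the camera example: the most that perfect information
# about "will I like it?" could be worth.
price = 2000
p_useless = 0.20     # prior probability the camera is useless to you
hourly = 25          # $/hour value of research time

evpi = p_useless * price
print(evpi)              # prints 400.0
print(evpi / hourly)     # prints 16.0 -> upper bound on research hours

# An imperfect piece of information that cuts the risk from 20% to 10%
# reduces the expected loss by at most:
reduction = (0.20 - 0.10) * price
print(reduction)         # ~200 -> buying it for under $100 is clearly profitable
```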

#2 is also wrong because it ignores discounting/opportunity cost (imagine the
integration cost could be $499,999 instead). #3 can't be answered because it
omits the side-effects and the magnitude of the gain, which affect both the
net benefit and the EVSI (smaller differences are less profitable and harder
to detect), and 95% 'confidence' does not mean a 95% probability that the drug
is better, which is the fallacy of inverting the p-value. (For a good paper on
how you'd actually do drug approval in a decision-theoretic way, see
[https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2641547](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2641547)
) Not to mention, again, discounting, and elasticity of demand to boot.

~~~
bcaa7f3a8bbc
So you think it's just some half-cooked Bayesianism? Perhaps I should add
"systematically indoctrinate myself with the LessWrong methodology" to my
agenda, seriously.

~~~
gwern
I think it definitely is, and he's either explaining it poorly or reinventing
the wheel. Try applying any of his explanations to something more complicated
than binary data with perfect information, such as a normal distribution with
small sample _n_, and it falls apart immediately - you need the full Bayesian
perspective in order to work with the posterior and apply a loss function
sensibly.

(I wouldn't call it 'LessWrong methodology', though. LWers pretty much never
do a formal true subjective Bayesian decision analysis, and to the extent they
do, you should be calling it LessWrong indoctrinating itself into the 'Raiffa-
Schlaifer-Savage methodology', as they preceded us by ~60 years.)

------
chiefalchemist
Interesting. I'd be curious to know if the author has read "The Influential
Mind" by Tali Sharot. Long story short, humans are wired to be anything but
rational and purely logical (even when necessary).

[https://mobile.nytimes.com/2017/09/22/books/review/the-influential-mind-tali-sharot-decision-making.html](https://mobile.nytimes.com/2017/09/22/books/review/the-influential-mind-tali-sharot-decision-making.html)

The book was on FT's short list for best of 2017.

~~~
andyferris
No, but it seems like an interesting read. Thanks.

------
lhuser123
> I think the take-home statement here is that it pays (literally) to know
> what you don’t know.

So true.

