
Bayesian Financial Models - toddmoses
http://toddmoses.com/articles/read/bayesian_financial_models
======
gjm11
Although the title is "Bayesian financial models", this article describes no
Bayesian financial models. It gives an example of Bayes' theorem in action
(the usual medical "probability of false positive" one), and then gives
another toy example in which the events we're looking at are labelled
"happy/sad market sentiment" and "stock price increase/decrease" \-- and
that's it.

I guess this is part of a series on probability or quantitative analysis or
something[1], and there will be more contentful stuff later in the series, but
I don't see how anyone's going to learn very much from this one.

[1] Looking at other articles on the site, it looks like they're all about
quantitative analysis. They all have rather the same character as this one:
they introduce some notion (binary options, "Greeks" (first partial
derivatives of derivative price w.r.t. various things), etc.), they state some
definitions and maybe give a toy example or a couple of formulae, and then
they stop.

I assume the _toddmoses_ who posted the link to HN is also the Todd Moses who
wrote the article. Todd, if you're reading this, would you like to say a few
words about the intended future direction of your articles? Is the idea that
they will form a kind of informal course on the elements of quantitative
analysis? Will they be getting "deeper" than the ones so far, or are you
intending that they will all be basically giving some definitions and a few
words of explanation?

------
Booktrope
I am troubled by the mammogram analysis. These sorts of health care examples
(mammograms, AIDS testing, etc.) are often used to explain Bayesian statistics
and obviously make an interesting point. But as someone below points out,
these tools are meant to be used in situations where we have incomplete
information. We have direct experimental evidence of the accuracy of a
positive mammogram in predicting cancer, and the accuracy is far, far higher
than 7.8%. From direct studies of patients who get positives, it looks like
about 80% of them actually have breast cancer. See, for example,
[http://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/breast-cancer/screening#heading-Two](http://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/breast-cancer/screening#heading-Two)

It may be that the author of this article simply got mixed up -- he reports
that 80% of the women who have cancer get positive mammograms, while from
what I can see, evidence actually shows that 80% of the women who have
positive mammograms have cancer. So the problem with the example might just
be a matter of garbage in, garbage out.
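For what it's worth, the article's 7.8% does follow mechanically from the
standard textbook inputs (1% prevalence, 80% sensitivity, 9.6% false-positive
rate) -- the real dispute is whether those inputs match actual screening data.
A quick sketch, using those illustrative (not clinical) numbers:

```python
# Posterior probability of cancer given a positive mammogram, using the
# textbook numbers the article appears to assume (illustrative only):
prior = 0.01            # P(cancer): 1% prevalence among women screened
sensitivity = 0.80      # P(positive | cancer)
false_positive = 0.096  # P(positive | no cancer)

# Total probability of a positive test, then Bayes' rule:
p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive

print(round(posterior, 3))  # → 0.078, i.e. ~7.8%
```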

Mammograms, incidentally, are still controversial, especially doing them on
an annual basis, because if you, for example, do a test with 20% false
positives 10 times, you're pretty likely to get a false positive in at least
one. But that probability is still not 92.2%, which is what the article
suggests is the false positive rate for a single mammogram.
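The repeated-testing arithmetic above works out like this (the 20% per-test
rate is the hypothetical one from the comment, not a clinical figure):

```python
# Chance of at least one false positive over repeated independent tests,
# with a hypothetical 20% per-test false-positive rate:
false_positive_rate = 0.20
n_tests = 10

# Complement of "no false positives in any of the n tests":
p_at_least_one = 1 - (1 - false_positive_rate) ** n_tests

print(round(p_at_least_one, 3))  # → 0.893
```

High, but indeed not the same quantity as 92.2%.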

But it's disturbing to see statisticians flippantly saying things like "only
7.8% of positive mammograms represent actual cancer" when evidence shows 80%
of positive mammograms represent actual cancer. Survival rates for breast
cancer have gone up, and most everyone agrees early detection of cancer plays
a role. I certainly hope that any woman who reads this article recognizes
that, if she has a positive mammogram, the chances are much, much higher than
7.8% that she has cancer.

I've seen the same arguments made about AIDS tests. Why is it that
statisticians like to use examples of life-threatening illnesses and present
a Bayesian model that vastly underestimates the effectiveness of tests that
could be crucial in saving lives?

~~~
bobcostas55
On the other hand this paper:
[http://www.ncbi.nlm.nih.gov/pubmed/21249649](http://www.ncbi.nlm.nih.gov/pubmed/21249649)
suggests that for every correct treatment there are 10 mistreatments and 200
initial misdiagnoses.

How could the numbers differ so much from study to study?

~~~
Booktrope
I think you're misreading that study. It compares the number of women
estimated to have their life prolonged because of a mammogram, to the number
of women estimated to have been treated unnecessarily because of a mammogram.
The issue with whether mammograms prolong life is different than the issue of
whether mammograms accurately predict cancer. Depending what research you look
at, other approaches are sometimes seen as just as effective as mammograms.
This is very controversial right now among people who do cancer treatment.

The point is, however, the article you cite does not say for every correct
treatment there are 10 unneeded treatments; it suggests that for every life
saved there may be 10 unneeded treatments. The summary doesn't say what they
mean by a life saved. Mammograms find a certain number of cancers. Other
diagnostic techniques also find a certain number of cancers. These sets
overlap. It may be that the authors consider the number of lives saved by
mammograms to be the delta between cancers detected/treated due to mammograms
and due to other diagnostic techniques. This seems to me to be a valid issue,
but it has nothing to do with how accurate mammograms are in detecting actual
cancers, which seems to be 80%. (So, for example, what if other diagnostic
techniques have a 70% chance of detecting cancer -- this might lead to the 10
unneeded treatments per one life saved statistic.)

The issue I have with the article posted above is that it claims the
probability of a woman having cancer after a positive mammogram is 7.8% when
actual results show this to be 80%.

I did contact a statistics site that had a similar statement about AIDS tests,
and their response was that Bayesian statistics are correct and they saw no
problem with telling people that a positive AIDS test only indicated a very
low chance of having AIDS.

------
vegabook
The analogy incorrectly compares a binary outcome (cancer / no cancer) with a
continuous variable (stock price returns, which may have skewed magnitudes
depending on upside or downside). So while it is nominally correct about the
formula for "stocks up" or "stocks down", it does not help in an actual
trading situation where "stocks down" could have a mean negative return much
larger in magnitude than the corresponding mean positive return of "stocks
up", for example.
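To make that concrete with made-up numbers (all hypothetical, not from the
article): even a high posterior probability of "stocks up" can coexist with a
negative expected return when downside moves are larger than upside moves.

```python
# Hypothetical illustration: 80% odds of an up day don't guarantee a
# positive expectation if down days are much bigger in magnitude.
p_up = 0.80        # assumed posterior P(increase | happy sentiment)
mean_up = 0.01     # assumed average return on up days: +1%
mean_down = -0.05  # assumed average return on down days: -5%

# Probability-weighted average of the two outcomes:
expected_return = p_up * mean_up + (1 - p_up) * mean_down

print(round(expected_return, 4))  # → -0.002, i.e. -0.2% despite 80% up odds
```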

~~~
clairity
then somebody should tell those guys who create the binomial options pricing
model (and by extension, black-scholes) how flawed their approach is! =P

[https://en.wikipedia.org/wiki/Binomial_options_pricing_model](https://en.wikipedia.org/wiki/Binomial_options_pricing_model)

seriously though, the example is just a model, and a simple one at that. it's
certainly useful to call out where and how the model breaks down, but it's
also useful to understand the conditions under which a model performs well.
like most models, this one is not designed for black swan events (to which you
allude).

also, the author acknowledges your point at the end: "...it is only part of a
financial modeling solution" and "one needs ... to learn more through careful
experimentation and study". you'd need to include some information about the
magnitude of losses and gains, to be sure. i do like that he used a relatable
example like sentiment analysis though.

~~~
zhte415
Black-Scholes is pretty flawed in assuming normal distributions, which
financial markets don't show. Following power laws or Omega metrics can be
informative. The more, the merrier.

------
Little_Peter
Shouldn't P(increase | happy) = P(increase) P(happy | increase) / P(increase)
be P(increase | happy) = P(increase) P(happy | increase) / P(happy)?

Also I am not getting why he cannot simply compute 950/1000 = 95%? How does
Bayes rule help us in his example? What am I missing?
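With the corrected denominator P(happy), Bayes' rule and the direct frequency
950/1000 do agree; the rule earns its keep when you only know the marginal
and conditional rates, not the joint counts. A sketch with hypothetical
counts (not the article's actual table):

```python
# Hypothetical counts: out of 2000 trading days, 1000 had happy sentiment,
# the stock rose on 1200 days overall, and on 950 of the happy days.
total_days = 2000
happy_days = 1000
up_days = 1200
happy_and_up = 950

# Direct conditional frequency:
direct = happy_and_up / happy_days  # 950/1000 = 0.95

# Bayes' rule with P(happy), not P(increase), in the denominator:
p_up = up_days / total_days                  # P(increase)
p_happy = happy_days / total_days            # P(happy)
p_happy_given_up = happy_and_up / up_days    # P(happy | increase)
bayes = p_up * p_happy_given_up / p_happy

print(direct, round(bayes, 2))  # both 0.95
```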

------
toddmoses
Thank you for your feedback. Yes there will be more articles coming on the
subject. This was just an intro to gauge interest before diving deeper.

------
perfTerm
I really wanted to read this but unfortunately it doesn't resize for mobile.
I'll have to bookmark it. It may also be my phone, which is not on its best
legs.

