
How to see into the future? - danboarder
http://www.ft.com/intl/cms/s/2/3950604a-33bc-11e4-ba62-00144feabdc0.html#axzz3ClFR6nj1
======
danboarder
An excerpt of key points on forecasting from the end of the article:

"How to be a superforecaster:

Some participants in the Good Judgment Project were given advice on how to
transform their knowledge about the world into a probabilistic forecast – and
this training, while brief, led to a sharp improvement in forecasting
performance. The advice, a few pages in total, was summarised with the acronym
CHAMP:

● Comparisons are important: use relevant comparisons as a starting point;

● Historical trends can help: look at history unless you have a strong reason
to expect change;

● Average opinions: experts disagree, so find out what they think and pick a
midpoint;

● Mathematical models: when model-based predictions are available, you should
take them into account;

● Predictable biases exist and can be allowed for. Don’t let your hopes
influence your forecasts, for example; don’t stubbornly cling to old forecasts
in the face of news."
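
To make the "average opinions" point concrete, here is a minimal sketch of
pooling several experts' probability estimates by taking a midpoint (the
estimates below are hypothetical, purely for illustration):

    # Pool several experts' probability estimates by simple averaging.
    # The estimates are hypothetical, for illustration only.
    def pool_forecasts(probabilities):
        """Return the simple average of a list of probability estimates."""
        return sum(probabilities) / len(probabilities)

    expert_estimates = [0.60, 0.75, 0.45]  # hypothetical expert opinions
    print(pool_forecasts(expert_estimates))  # ~0.60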

As a participant in the Good Judgment Project I recall this training
(basically an interactive PowerPoint), and I agree that it helped my
forecasting performance last year.

~~~
hywel
It's an interesting article, but those conclusions seem so weak to me; little
more than formalised common sense. Maybe the training you had did a better job
of making them tangible.

~~~
danboarder
I think you nailed it: it's formalized common sense (the article hints at this
conclusion too). However, what makes this project work is the addition of a
way to track and improve forecasting performance over time. This was
accomplished through a few attempts at custom software over the last few
years; this year they are using Inkling Markets, which is so far the best
platform I've experienced for tracking and improving prediction performance.
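
For context on what "tracking and improving prediction performance" can look
like: one standard scoring rule for probabilistic forecasts is the Brier
score. A minimal sketch follows (the numbers are hypothetical, and this is
not Inkling's or the project's actual scoring code):

    # Brier score: the mean squared error between stated probabilities and
    # actual outcomes (0 or 1). Lower is better. Illustrative sketch only.
    def brier_score(forecasts, outcomes):
        """forecasts: probabilities in [0, 1]; outcomes: 0 or 1 results."""
        return sum((f - o) ** 2
                   for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # Hypothetical track record: three forecasts and what actually happened.
    print(brier_score([0.9, 0.7, 0.2], [1, 1, 0]))  # about 0.047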

------
Osmium
This whole concept reminds me very much of a character in an Iain M Banks
novel (I forget which) who had historically shown an incredible knack for
predicting/analysing future events, but there was always doubt whether they
were just a statistical anomaly (given enough people, _someone_ has to get it
right all the time--until, suddenly, they don't) or a real effect. Interesting
idea. Presumably they try and control for this.

~~~
lutusp
> Presumably they try and control for this.

How could they? They're using a retrospective design, which means they wait
until someone's past forecast turns out to be correct, then they tally that as
a data point in favor of their thesis. In actual science, one must use a
prospective design, to avoid the perversities and statistical pitfalls of a
retrospective one, and one must craft and then test a falsifiable theory -- an
explanation -- about the forecasting ability.

In a prospective design, one being run by people with even a tiny bit of
common sense, on the first day of the study the experimenters would say,
"Okay, if there were really an ability to forecast the future, anyone
possessing the gift would be busy making a billion dollars in equities, and
coincidentally they would refuse to talk to us."

In a retrospective study, the falsifiability criterion on which science
depends is easy to circumvent -- all one needs to do is tally successful
predictions and discard failed ones. The originally huge experimental group
will keep getting smaller, but until everyone has made a bad prediction, the
theory still appears to have merit.

Locating an astonishing record of predictions is trivial -- for a group of X
participants, after Y binary predictions, there will likely remain X / 2^Y
subjects with perfect prediction records. That means for a million
participants making binary predictions (the stock market will rise/fall in the
coming month), after a run of 16 predictions, 15 subjects are very likely to
remain standing _with perfect records_.
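
That arithmetic is easy to verify; a quick check using the numbers from the
paragraph above:

    # Expected number of perfect records surviving by chance alone:
    # X participants, each making Y independent binary predictions.
    X = 1_000_000  # participants
    Y = 16         # consecutive binary predictions
    print(X / 2 ** Y)  # about 15.26 flawless "forecasters" expected by luck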

How the designers of this bogus study don't understand this basic statistical
fact is beyond me.

It's pseudoscience. If it were not pseudoscience, someone would become richer
than Bill Gates or Carlos Slim by making equities calls from his penthouse
overlooking the drab world occupied by those who understand logic.

~~~
Houshalter
You are completely wrong. I don't know why everyone seems to think the
researchers are incompetent. From
[here](http://www.economist.com/news/21589145-how-sort-best-rest-whos-good-forecasts):

> The big surprise has been the support for the unabashedly elitist “super-
> forecaster” hypothesis. The top 2% of forecasters in Year 1 showed that
> there is more than luck at play. If it were just luck, the “supers” would
> regress to the mean: yesterday’s champs would be today’s chumps. But they
> actually got better. When we randomly assigned “supers” into elite teams,
> they blew the lid off IARPA’s performance goals. They beat the unweighted
> average (wisdom-of-overall-crowd) by 65%; beat the best algorithms of four
> competitor institutions by 35-60%; and beat two prediction markets by 20-35%.

~~~
lutusp
> You are completely wrong.

Citation needed. There has never been a study of forecasting ability that has
stood the test of time -- ever, anywhere.

> I don't know why everyone seems to think the researchers are incompetent.

The null hypothesis, the gold standard of statistical science, offers this as
the default explanation, and the researchers bear the burden of evidence to
contradict it. They will fail; indeed, they don't seem to realize they have
that burden.

Have you even considered that the positive results arise from chance? What is
the p-value for their result? Do you know what I am referring to?

Your quotation doesn't have a single reliable word in it, or any content
recognizable as science. If it were a scientific paper, it would contain the
standard phrase, "Here is our p-value, the statistical assessment that our
result arises from chance."

But it's not science, it's public relations. The thesis is assumed to be
correct until incontrovertible evidence proves it false -- the opposite of the
scientific outlook, which assumes an idea to be false until incontrovertible
evidence supports it.

~~~
Houshalter
I gave a citation. The quote is more than enough evidence to disprove your
naive "the researchers didn't bother to test for regression to the mean"
hypothesis. They didn't give a p-value, but the reported effect size is far
too large to plausibly be chance, and the study included thousands of people.
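
For reference, the kind of significance calculation being argued about here
is easy to run; a minimal sketch with hypothetical numbers (not the study's
actual data):

    # Exact binomial p-value: the probability of getting at least k of n
    # binary calls right by pure guessing. Numbers below are hypothetical.
    from math import comb

    def binomial_p_value(k, n, p=0.5):
        """P(at least k successes in n trials) under the chance-only null."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    # A forecaster who got 70 of 100 binary calls right:
    print(binomial_p_value(70, 100))  # about 4e-05: implausible as pure luck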

~~~
lutusp
> I gave a citation.

You clearly don't understand the meaning of "citation" among scientists. You
posted a quote from a press release that didn't even pretend to be a
scientific assessment of the work.

> The quote is more than enough evidence

Press releases aren't evidence. Were this not the case, Bigfoot would come
into existence on the strength of his press coverage.

> They didn't give a p-value

So, which is it? "The quote is more than enough evidence", or "They didn't
give a p-value"? They cannot both be true.

> ... but the reported effect size is far too large to plausibly be chance,
> and the study included thousands of people.

I can see you have no idea how science works.

> ... to disprove your naive "the researchers didn't bother to test for
> regression to the mean"

You are also not above inventing positions for other people.

~~~
Houshalter
I never said anything about "science". Only that your accusations are garbage.

It's one thing to say "let's not take this too seriously because they haven't
published anything yet" (though I don't even know whether that's true).
However, you went above and beyond that:

* claiming their methodology was "retrospective design"

* claiming they picked statistical outliers and didn't bother to test if they continued to make good predictions: "The described program collects forecasts, forecasts that can be expected to be half right and half wrong for binary questions, throws out those that by chance were wrong, and preserves those that by chance were right."

* various insults: they don't understand logic, statistics, it's pseudoscience, etc.

* claiming they twisted language to make failed forecasts look successful.

* claiming they didn't do any statistical analysis

------
conjectures
TL;DR - some things are hard to predict. If you still want to have a go, get
some other people's opinions too. Oh, and stubbornness is a vice unsuited to
the forecaster.

------
dkarapetyan
Being actively open-minded. I wonder why some people are like that and others
not. I've met people like that from different cultures and backgrounds and
I've similarly met their opposites. It doesn't seem like cultural or
educational background is a factor. Even the article contrasts two such
personalities with similar backgrounds, but that's just good storytelling.

~~~
joosters
I wonder if it is possible to train yourself to be open-minded, or even to
spot if you _aren't_ open-minded. After all, while it is easy to spot open or
closed-mindedness in others, I expect that almost everyone thinks that _they_
are open-minded.

~~~
madaxe_again
If you're closed-minded, you're not going to be conscious of it - that's the
nature of the beast.

You can't force someone's mind open, as all you're doing then is teaching them
to be closed-minded with a different set of axioms that you've imposed upon
them.

As to yourself - it's hard to say, as having any objective view of self is
challenging to say the least.

I like to think I'm extremely open-minded, but I might be wrong.

------
lutusp
Quote: "But ultimately one might expect the same basic finding as always: that
forecasting events is basically impossible. Wrong. To connoisseurs of the
frailties of futurology, the results of the Good Judgment Project are quite
astonishing. Forecasting is possible, and some people – call them
“superforecasters” – can predict geopolitical events with an accuracy far
outstripping chance. The superforecasters have been able to sustain and even
improve their performance."

This is such perfect nonsense. The described program collects forecasts,
forecasts that can be expected to be half right and half wrong for binary
questions, throws out those that by chance were wrong, and preserves those
that by chance were right.

In the next round, using only subjects that were right already, the same
procedure is used -- throw out the bad forecasts, preserve the good ones. Each
experimental cycle eliminates 1/2 the participants. After 8 repetitions,
1/256th of the original participants remain, all with perfect records. These
are the "superforecasters" the article describes. A computer could do as
well, with an equal amount of logic -- none at all.

There is a financial scam called "Miracle Man" that uses the same scheme. It
seems very convincing until you think about it:

[http://arachnoid.com/equities_myths/index.html#Miracle_Man](http://arachnoid.com/equities_myths/index.html#Miracle_Man)

If this weren't the most absurd kind of pseudoscience, if real forecasting
ability were being modeled, it could be tested using a prospective design (the
gold standard for real science) instead of the useless retrospective design
presently being used.

The authors of this study need to learn statistics, probability, the
undermining influence of confirmation bias, the importance of the null
hypothesis in result analysis, and a bit of common sense wouldn't hurt.

------
tsunamifury
It has always seemed to me that 'predicting' the future is useless from an
actionable perspective.

Even if you were able to give me a perfect prediction of the future, I don't
need that. I need to model how my actions to take advantage of that future
will change the outcome from what you predicted.

Most smart people can predict future trends; the value is in knowing the right
methodology for interacting with that data to take meaningful action.

I see nothing here that relates to that.

~~~
dredmorbius
Or as the Cowboy Junkies put it: you can always see it comin', but you can
never stop it. (Bea's Song)

[http://www.cowboylyrics.com/tabs/cowboy-junkies/beas-song--river-trilogy-part-2--10703.html](http://www.cowboylyrics.com/tabs/cowboy-junkies/beas-song--river-trilogy-part-2--10703.html)

------
ntaso
1. Find 20,000 people to participate in Coin Predicting.

2. Flip a coin. Sort out all participants that were wrong.

3. Repeat step 2 until there are 10 people left. These 10 people now have a
track record of predicting the future. Some of them will even continue to
predict correctly a few more times. Eventually, all will be wrong.

The thing is, you don't know who will be the expert forecaster in the first
place, and the same is true for the Good Judgment Project.
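
That procedure is easy to simulate; a minimal sketch (the 20,000 and 10 come
from the steps above; exact counts vary from run to run):

    # Simulate the coin-flip tournament: everyone guesses, the wrong half
    # is eliminated, repeat until about 10 "proven" forecasters remain.
    import random

    survivors = 20_000
    rounds = 0
    while survivors > 10:
        flip = random.choice("HT")
        survivors = sum(1 for _ in range(survivors)
                        if random.choice("HT") == flip)
        rounds += 1

    print(rounds, survivors)  # typically ~11 rounds, ~10 left standing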

~~~
danboarder
The Good Judgment Project does encourage forecasters to make informed
predictions. In fact there is an RSS feed of relevant news stories for each
topic up for trade.

Your coin example is a game of pure chance, while trading on current events
has much more context to inform predictions. It's really quite different.

~~~
lutusp
> Your coin example is a game of pure chance, while trading on current events
> has much more context to inform predictions. It's really quite different.

The only difference is that, in a coin toss, the statistics are much clearer.
In real event forecasting, the perversities of language can make a failed
forecast seem successful. Here's an example:

A seer in the Greek city of Troy issues a warning: "Beware of Greeks bearing
gifts." The citizens don't know what to make of this saying, so they proceed
as before.

After the now-famous Trojan Horse incident, the seer repairs her reputation by
revising her earlier warning -- "I actually meant to say, 'Beware of gifts
bearing Greeks'."

The bottom line is that, whether flipping coins or evaluating real-world
predictions, one can separate successes from failures with equal efficiency,
and an equal lack of meaning.

~~~
danboarder
In a prediction marketplace, some predictions are more successful and some
less, resulting in more or less profit for the predictor. While the result may
be binary, the forecast intelligence is useful in that it contains trend data
that yields better predictions than any single forecaster could provide.
Further, the outcomes of many individual prediction questions are multifaceted
rather than binary, with four, five or more possible outcomes.

~~~
lutusp
Yes, and without rigorous scientific controls, predictions can be made to look
much, much better than they really are. The study being discussed didn't even
perform a statistical analysis based on the null hypothesis or produce a
p-value -- an assessment of whether the result could have arisen by chance.
These are basic to science, and in their absence, the result is anecdote, not
science.

[http://en.wikipedia.org/wiki/P-value](http://en.wikipedia.org/wiki/P-value)

Quote: "In statistical significance testing, the p-value is the probability of
obtaining a test statistic result at least as extreme or as close to the one
that was actually observed, assuming that the null hypothesis is true."

Chance occurrences can be made to look very convincing. Consider the "Miracle
Man" scheme used in investing -- on first hearing about it, many people assume
it's not possible:

[http://www.arachnoid.com/equities_myths/index.html#Miracle_Man](http://www.arachnoid.com/equities_myths/index.html#Miracle_Man)

------
logicallee
If I wanted to know how to see into the future, I'd ask a futurologist with a
good track record, not a publication whose MO is writing about things that
have _just_ happened (hours or days ago).

