
A.I. accurately predicted the full baseball post-season back in July - Cortexia
http://www.marketwired.com/press-release/ai-makes-yet-another-remarkable-prediction-2172570.htm
======
sixhobbits
This reminds me of one of the chapters from "How Not to Be Wrong: The Power of
Mathematical Thinking" by Jordan Ellenberg (highly recommended). He describes
how "stock brokers" would send out a "free stock prediction" to thousands of
email addresses. The prediction would be a simple up/down prediction for a
specific stock. The prediction was randomly chosen. But these "brokers" would
send an equal number of up and down predictions, ensuring that they got a
correct prediction for half of their recipients. They would then throw away
half of the emails (the wrong half), and repeat with the remaining half. After
ten predictions, there would still be a small number of people remaining to
whom they'd sent only correct predictions (10 in a row, which seems really
impressive if you can't see the full picture). They would then contact these
few people and offer to keep selling them predictions for a fee.
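The arithmetic of the scam is easy to sketch. Here's a minimal simulation (the starting list size is made up for illustration):

```python
def survivor_scam(recipients=10240, rounds=10):
    """Each round, send 'up' to half the remaining recipients and 'down'
    to the other half. Whatever the stock actually does, exactly half of
    them received a 'correct' prediction; the wrong half gets dropped."""
    remaining = recipients
    for _ in range(rounds):
        remaining //= 2
    return remaining

# 10240 recipients -> 10 people who saw ten correct calls in a row
print(survivor_scam())
```

No prediction skill anywhere, yet ten people end up holding a perfect track record.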

Stories like this (and Paul the Octopus, who I see was mentioned already) are
exactly the same thing. Thousands of people are trying to use deep learning
(i.e. stats), or other crazy methods as in this article, to make predictions.
Of course every now and then one of them is going to work better than
expected. This would be the case even if people were simply using random
numbers. But we ignore all the ones that fail and give heaps of attention to
the Pauls.

~~~
CapacitorSet
If anyone is interested, this is known as p-hacking in statistics
([https://en.wikipedia.org/wiki/Data_dredging](https://en.wikipedia.org/wiki/Data_dredging)),
and works in a similar way.

For instance, you have a statistical population of one hundred men and one
hundred women: you collect as much data as possible about them - as many
features as possible, actually - until you find something which happens to be
statistically significant for your group (e.g. salt consumption). Then, you
publish your results, pretending that the feature you found was the original
hypothesis for the study ("Our study confirms that salt consumption is higher
in males.")
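The effect is easy to reproduce with a toy sketch (the group sizes match the example above, but the threshold is a rough stand-in for a real significance test, not one):

```python
import random

def p_hack(n_features=50, group_size=100, seed=1):
    """Draw two groups from the SAME distribution, measure many unrelated
    'features', and count how many clear a rough p < 0.05 threshold."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_features):
        men = [rng.gauss(0, 1) for _ in range(group_size)]
        women = [rng.gauss(0, 1) for _ in range(group_size)]
        gap = abs(sum(men) / group_size - sum(women) / group_size)
        # The std dev of a difference of two such means is sqrt(2/100) ~ 0.14,
        # so a gap above ~2 std devs (~0.28) mimics "significant at 5%".
        if gap > 0.28:
            hits += 1
    return hits

print(p_hack(), "of 50 null features look 'significant'")
```

Since the groups are drawn from identical distributions, every "finding" here is noise; test enough features and a few clear the bar anyway.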

~~~
verbify
It would be far more specific - you'd collect all their medical details, their
ethnicity, age, etc., and then you end up with:

'Salt consumption can increase the risk of liver consumption for middle-aged
males of African descent'

~~~
rrobukef
... liver consumption ...

~~~
verbify
I meant liver disease. But I'll leave it this way because it's funnier. And
pretty tasty.

~~~
apetresc
"Consumption" is an old-fashioned word for classes of tuberculosis, which can
affect the liver. So you could still be right :)

------
no_protocol
Nothing about this seems to add up.

They claim they made the prediction in early July, but link to a newspaper
article dated 4 August that indicates the predictions were made just one day
earlier.

They picked the team with the best record all season long to win the
championship. They got one of the division winners wrong.

Just publishing the current favorites from MLB.com's probability page [0] as
of 3 August would have also gotten 9 of 10 postseason teams correct, including
going 6/6 on division winners. So the 'knowledge' of fans voting actually did
worse than a Monte Carlo simulation.

I'm not impressed.

There's no way this should be considered predicting the "full baseball
post-season," and I am not seeing any evidence that it happened in July. Wish
they'd have shared it.

[0]
[http://mlb.com/mlb/standings/probability.jsp?ymd=20161002](http://mlb.com/mlb/standings/probability.jsp?ymd=20161002)

~~~
fleitz
There's also the issue of the full suite of predictions: if these were the
only predictions made then it's impressive, but if they made lots of
predictions then some of them coming true may be no better than chance.

~~~
Cortexia
They also predicted which managers would win the MVP awards, and which players
would win the Cy Young awards but those don't get announced for 2 weeks.

~~~
hvs
Managers don't win MVP awards.

------
llamataboot
UNU seems to get their press releases on here a lot. As far as I can see
there's not much "AI" involved, just a UI over the "wisdom of crowds" method
of making predictions. In this case, the Cubs were heavily favored all season
to win the World Series, had arguably one of the best GMs and managers in
baseball, and a raft of all-star players. Goat aside, it was fairly smart
money to lean towards them from mid-season on.

Same thing with their Kentucky Derby prediction this year. The swarm literally
decided the horses in the exact odds they were going off at (which makes sense
since gambling odds by their very nature are "the wisdom of the crowd") and
that's how they finished.

~~~
wrsh07
Agreed - predicting that the Cubs would win the World Series isn't impressive
- the majority of SI writers [4/7] did that at the beginning of the season:
[http://www.si.com/mlb/2016/03/31/playoff-picks-awards-picks-...](http://www.si.com/mlb/2016/03/31/playoff-picks-awards-picks-mvp-cy-young-rookie-of-the-year)

Correctly predicting who would advance in the post season is mostly luck.

------
Tangokat
Not to be overly critical but:

It does not match my definition of A.I.:

"UNU enables groups of online users to think together as a unified emergent
intelligence -- a "brain of brains" that can express itself as a singular
entity. Touted as the world's first "hive mind," the UNU platform has had
over 60,000 human participants in swarming sessions this year, together
answering over 250,000 questions."

Also I would reasonably expect some of those 250,000 questions to beat the
odds and get answered right.

~~~
Cortexia
Except this was a prediction that was done formally for the Boston Globe, at
their request. You can see their article about it here:

[https://www.bostonglobe.com/sports/redsox/2016/10/04/group-g...](https://www.bostonglobe.com/sports/redsox/2016/10/04/group-globe-readers-predicted-nine-mlbs-playoff-teams/Fssfk351Wgy3xhRBXkg2CJ/story.html)

~~~
1024core
Still, this is not "AI" in the traditional sense of the phrase. Asking a bunch
of humans and then deciding an outcome is not AI.

~~~
amperexorange
Well, I think it's pretty clearly an "emergent intelligence" that is distinct
from any of the individuals' unique intelligence.

In other words, whose intelligence is being represented by the swarm?

~~~
psyc
Corporations have been collectively intelligent for centuries, but we don't
call that AI.

------
mehwoot
1) The AI was just synthesizing answers given by human readers. It didn't do
any of its own analysis of the data set.

2) The experiment was published in August, when the regular season was
_already two thirds completed_. The Cubs were well ahead of everybody at that
point and were favourites to win (although in baseball that doesn't
necessarily mean you are going to win in the postseason). Here are the
standings at that date: [http://www.baseball-reference.com/games/standings.cgi?year=2...](http://www.baseball-reference.com/games/standings.cgi?year=2016&month=8&day=4&submit=Submit+Date)

You can see that the 10 playoff teams were ranked 1-5 in each league at that
point. So predicting the playoff teams was just "Which 10 teams are leading
right now", which they asked humans about.

The AI didn't predict the full post-season, just which two teams would be in
the World Series, which happened to be the team everybody thought it would be
from one league and the second placed team from the other.

------
bluetwo
This reminds me very much of Delphi polling, where you survey experts in a
field with a complex and unsolvable question, tally the results, send that
information back to the experts, and then ask them again. After a few rounds
this tends to arrive at what is usually a pretty solid answer.
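A toy model of that convergence (the update rule here, with each expert moving partway toward the group median each round, is my own simplification, not how real Delphi studies weight feedback):

```python
import statistics

def delphi_rounds(estimates, rounds=3, pull=0.5):
    """Toy Delphi process: each round, every expert sees the group median
    and revises their own estimate partway toward it."""
    est = list(estimates)
    for _ in range(rounds):
        med = statistics.median(est)
        est = [e + pull * (med - e) for e in est]
    return est

experts = [10, 30, 35, 40, 90]        # scattered initial answers
final = delphi_rounds(experts)
print([round(e, 1) for e in final])   # the spread shrinks toward the median
```

After a few rounds the outliers have been pulled in and the group answer stabilizes around the consensus value.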

It is used sometimes in scientific and medical research. An automated tool is
pretty neat, but like others said, it doesn't really classify as AI. I'm not
sure how much money I would really put down on the bets the site makes, but it
is similar in some ways to the scandal that rocked DraftKings/FanDuel, where
admins were using high-level data to make bets on opposing systems. They did
in fact make money.

------
macawfish
It irritates me that this is called "A.I".

------
losteverything
Anyone remember Tamara Rand? [0]

Well, one of the greatest Tamara Rand jokes was from CNN Sports Tonight: "The
Cubs are predicted to win the World Series. Only thing is it was predicted by
Tamara Rand."

Quite cool at a time when TV commentary was never lighthearted.

[0]
[http://hoaxes.org/archive/permalink/tamara_rand](http://hoaxes.org/archive/permalink/tamara_rand)

------
zitterbewegung
What UNU does is more like "A live online poll of a group of people picked
the post-season in July".

------
andrewclunn
How many AIs screwed it up? Remember the hits, forget the misses.

------
gnicholas
I'd be curious to know what else they predicted that turned out to be wrong.
This could be an impressive run, or it could be that the company's press
release highlights several victories and omits several (or more) failures.

I have no evidence one way or the other but would be interested to see more
context.

------
Xeroday
Has Unu made any incorrect predictions? Their blog only seems to cover the
big, successful ones.

~~~
Cortexia
In response to such skepticism, reporters come up with their own questions and
ask UNU to make predictions. And the reporters monitor the process. That's
what this set of picks is - it was done for the BOSTON GLOBE, at their
request, with their own participants:

[https://www.bostonglobe.com/sports/redsox/2016/10/04/group-g...](https://www.bostonglobe.com/sports/redsox/2016/10/04/group-globe-readers-predicted-nine-mlbs-playoff-teams/Fssfk351Wgy3xhRBXkg2CJ/story.html)

~~~
macintux
That doesn't actually answer the question.

------
lawnchair_larry
Survivorship bias.

Also why the stated historical performance of your 401k funds is probably
tricking you.

------
orasis
Oh cool. How many AIs did they have doing the predictions? Survivorship bias.

------
pgodzin
The article mentions "swarm intelligence" that essentially forms a hive-mind.
Where is the AI/ML when it seems like it just picks the most popular responses
from its many respondents?

------
Cortexia
Here is the latest UNU election pick: [http://unu.ai/election-fatigue/](http://unu.ai/election-fatigue/)

------
davesque
What are the chances? Probably not that slim considering how many people are
trying to make predictions using methods like this.

------
FonzieBear
What a game. What a series.

~~~
vecter
Hi and welcome to Hacker News! Please only post comments that add something
meaningful to the topic of discussion (a proclaimed artificial intelligence
that claims to have predicted this result much earlier).

------
joshagogo
Found this forward-looking post on which states will pass marijuana
legalization ballot issues.
[http://unu.ai/legalization/](http://unu.ai/legalization/)

------
joshagogo
Who does the A.I. say will win the election next week?

~~~
Someone
[http://unu.ai/election-infographic/](http://unu.ai/election-infographic/)

~~~
duaneb
I'd love to see an update; 538 has cut a third off of Trump's chance.

~~~
Jerry2
[https://i.imgur.com/09Sf7jj.png](https://i.imgur.com/09Sf7jj.png)

