
To Test a Powerful Computer, Play an Ancient Game (1997) - luso_brazilian
http://www.nytimes.com/1997/07/29/science/to-test-a-powerful-computer-play-an-ancient-game.html
======
grimgrin
The sole quote responsible for the reworded title is:

''It may be a hundred years before a computer beats humans at Go -- maybe even
longer,'' said Dr. Piet Hut, an astrophysicist at the Institute for Advanced
Study in Princeton, N.J., and a fan of the game. ''If a reasonably intelligent
person learned to play Go, in a few months he could beat all existing computer
programs. You don't have to be a Kasparov.''

~~~
dang
Right—it's against the HN rules to rewrite a title to single out just one
detail. That's a form (actually the leading form) of editorializing.

Submitters: if you want to point out what you think is important about an
article, the place to do that is in the thread, on a level field with other
participants.

~~~
Dylan16807
When it's not a news article, it seems perfectly fair that someone should be
able to submit a particular paragraph for conversation, rather than being
forced to generically link 'the original source'.

There's no necessity to have a level playing field when there is no playing
field. There's generally no one else submitting an article if it's 20 years
old.

~~~
dang
The level playing field I'm talking about isn't between multiple submitters,
but between the submitter and other users. On HN, submitting a story confers
no special rights to frame the discussion for everyone else. Readers here can
make up their own minds about what's important in a story. For that it's
important (critical, actually) to have accurate, neutral titles that avoid
spin.

~~~
Dylan16807
> The level playing field I'm talking about isn't between multiple submitters,
> but between the submitter and other users. On HN, submitting a story confers
> no special rights to frame the discussion for everyone else.

You can't avoid framing. The act of submitting an article frames a discussion
around its contents. If you came up with the submission idea by yourself, you
are unavoidably privileged in deciding what the discussion is about.

If there is a collection of 200-word pages on a site, and I submit one of
them, nobody will accuse me of editorializing. But if I try to submit a
200-word segment of a large page, it's a problem. Something's off there. The
URL structure of a site should not be the determining factor in whether
editorializing is happening.

> Readers here can make up their own minds about what's important in a story.
> For that it's important (critical, actually) to have accurate, neutral
> titles that avoid spin.

In many cases, including this one, the story _is_ the important facts. The
story here is not "computer played go in 1997". The story is the prediction.
Avoiding spin is great, but a title that doesn't mention the story is a bad
title.

~~~
pvg
There's a really simple cure for this - write your own piece pointing out and
commenting on what you think is the interesting bit and submit that, with the
title of your choosing. Then you absolutely get to frame, comment,
editorialize. You just don't get to do it in the submission titles.

~~~
Dylan16807
Ah but that doesn't work, because it's not the 'original source', and people
will replace your link entirely.

~~~
pvg
No they won't, if you write something that is of interest, as the site's
rules ask you to. This touches on the more fundamental problem: a tiny
snippet from an old article is likely not of great intellectual interest, as
the comments in this thread show.

~~~
Dylan16807
If you're just focusing on a particular fact reported elsewhere, you're not
writing something of interest.

>This touches on the more fundamental problem: a tiny snippet from an old
>article is likely not of great intellectual interest

If there had been slightly more written about the justification it probably
would have been very interesting to dissect in the comments, despite being a
mere snippet of an old article.

------
mchahn
Predictions seem to be consistently wrong, or at least random. We don't have
flying cars. My mother was told in the '30s that there would be movies in the
home by the '80s. She really hoped her eyesight would be good enough to see
the TV at that age.

I think if anyone would actually be good at predictions they could somehow
take advantage of that. Maybe in the stock market?

~~~
jfoutz
Think about weather forecasts. If you want to know if it'll rain on this day
next year, you can look at historical data, and make a guess. If you want to
know if it'll rain tomorrow, the forecast is really good.

Technically, predictions are just made up, while forecasts have some model
behind them. But the notion of near vs. far is helpful. Back then, there
really wasn't any hope of beating a human at Go. We were in a position of not
knowing what we didn't know. As we get better at it, the upper and lower
bounds can be reevaluated.

A nice comparison is self-driving cars. Twenty years ago, the answer would
have been a long time away, barring really complicated road infrastructure,
like magnets embedded in highways. But once the DARPA Grand Challenge fell,
we were able to revise the estimate down dramatically. Alphabet is saying 30
years for perfect operation in all conditions. But as we understand and
revise the problem, we can get pretty good highway drivers right now. They'll
steadily improve, handling more and more adverse situations. So the
predictions can be revised.

Far flung predictions are cultural aspirations. As we understand, we improve
or abandon that prediction.

------
fsiefken
Not sure if Tezhi Luzhanqi or L'attaque (1910) qualify as ancient games, but
let me play a heuristically tuned AlphaStratego mesosphere and I'll beat it
5-0, now and for the next 5 years at least! Jack6/Shark bridge AI doesn't
consistently beat the top human pairs yet. Then there are Connect6 and Emergo.

------
wodenokoto
At the same time there were predictions that computers would have more
processing power than humans within 1 or 2 decades.

Those two things obviously don't go together.

~~~
aab0
Seems consistent to me. Just because you have hardware doesn't mean you know
how to use it well. Consider everyone accidentally writing an O(n^2) algorithm
on their multi-gigahertz/gigabyte/terabyte (CPU/RAM/HDD) machines. At the
time, there was no reasonable algorithm which even on the top supercomputer
could hope to beat the world champion. Then MCTS came along a few years later,
and at that point maybe you could've scraped together enough hardware to beat
him; then another few years, and AlphaGo... We are in the same position now
for AI: the bottleneck is not the hardware but knowing what to run. (Some
technical examples of this: model compression, binary networks, and residual
networks all show that what we can run far outstrips what we're able to
train.)
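The accidental-O(n^2) point can be made concrete with the classic
string-building mistake. A minimal Python sketch (the function names are
mine, purely illustrative):

```python
# Accidentally quadratic: in the general case, each += copies the whole
# string built so far, so assembling n characters costs O(n^2) copies.
def build_quadratic(parts):
    s = ""
    for p in parts:
        s += p  # may copy all of s on every iteration
    return s

# Linear: every character is copied exactly once at the end.
def build_linear(parts):
    return "".join(parts)

parts = ["go"] * 10_000
assert build_quadratic(parts) == build_linear(parts)
```

(CPython happens to optimize some in-place concatenations, but the pattern is
reliably quadratic for immutable strings in many languages, which is the
spirit of the comment: the hardware is there, the algorithm wastes it.)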

~~~
Moshe_Silnorin
It's not clear we have enough hardware. I've been trying to get a handle on
the processing power needed for human level AI. If you take the naive approach
and assume it's just a matter of getting models with the same number of
parameters as the brain has synapses, our biggest models are about a million
times too small. If you assume Moore's law will remain dead, then it may be a
long while. Then again if modern deep learning is amenable to some sort of
ASIC it could happen much quicker than one would expect, depending on how much
investment is put into such things. And who knows how algorithmic advances
will change things. If we ever get quantum computers, they're likely to be
very useful in the design of conventional computers and materials technology
in general, so another age of exponential advancement isn't out of the
question.
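The "million times too small" figure is a back-of-envelope ratio. As a
sketch, using commonly cited rough estimates (both numbers are assumptions,
uncertain by an order of magnitude or more, not measurements):

```python
# Rough, commonly cited estimates; real figures are highly uncertain.
brain_synapses = 1e14       # human brain: ~10^14 synapses (some estimates say 10^15)
large_model_params = 1e8    # a large circa-2017 network: ~10^8 parameters

ratio = brain_synapses / large_model_params
print(f"parameter gap: ~{ratio:.0e}x")  # prints: parameter gap: ~1e+06x
```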

It's very, very hard to be certain about these things.

~~~
aab0
> If you take the naive approach and assume it's just a matter of getting
> models with the same number of parameters as the brain has synapses, our
> biggest models are about a million times too small.

And if you had that hardware, what would you run on it, exactly? There is no
off the shelf algorithm which we could plop on that futuristic GPU and say,
yes, this will definitely yield a general human-level intelligence. That is my
point. We may or may not have the necessary hardware, but we sure don't have
the necessary software right now.

~~~
Moshe_Silnorin
I agree. But I often hear the claim that we have enough hardware and that
it's now just a software problem. I don't think that's at all clear.

------
xiphias
Maybe it would have been more insightful to ask game-engine engineers and AI
researchers rather than an astrophysicist.

~~~
ars
AI researchers are not any better at predictions. There was a time they
thought true AI was just a matter of enough hardware, so they extrapolated
hardware growth speed.

Obviously they were wrong: for true AI we need new software too; all the
hardware in the world would not accomplish it on its own.

