
Technology predictions - dmnd
http://blog.samaltman.com/technology-predictions
======
jeffreyrogers
This post seems to be a response to some criticism of Sam's opinions in other
blog posts. The relevant line is this one

> Superhuman machine intelligence is prima facie ridiculous.

> \- Many otherwise smart people, 2015

I was among those criticizing Sam's point of view, but not because I think the
idea of superhuman machine intelligence is ridiculous. I don't know one way or
the other whether we'll achieve it or not, but I don't consider it impossible.

What I, and I believe many others, was criticizing was Sam's insistence that
current techniques have the potential to cause such great harm that they need
to be regulated, and that extreme measures need to be taken to address the
possible dangers of modern AI and ML techniques.

This type of fear-driven reasoning is wildly out of proportion with the facts,
as anyone who does research in the area, or is familiar with the research in
the area, can attest. Modern machine learning techniques are heuristic methods
for finding optima of functions over high-dimensional spaces. Neural networks
and support vector machines produce impressive results on image classification
and other well-defined problems, but are far from general-purpose techniques
that can be easily applied to any given problem.
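
To make that description concrete, here is a minimal sketch (my own
illustration, not from the parent post) of what "finding optima of functions
over high-dimensional spaces" looks like in practice: plain gradient descent on
a toy 1000-dimensional loss. The loss function, learning rate, and step count
are all made up for illustration.

    import numpy as np

    # Toy loss: a convex bowl in 1000 dimensions. Real models swap in a far
    # messier, non-convex loss, but the optimization loop is the same heuristic.
    dim = 1000
    target = np.random.randn(dim)

    def loss(w):
        return 0.5 * np.sum((w - target) ** 2)

    def grad(w):
        return w - target

    w = np.zeros(dim)      # arbitrary starting point
    lr = 0.1               # step size, chosen by hand
    for _ in range(200):
        w -= lr * grad(w)  # step downhill; no global-optimum guarantee in general

    print(loss(w))         # near 0 for this convex toy problem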

So, the bullet points:

1\. Current ML techniques aren't anywhere near creating general intelligence,
nor are there areas of research that appear likely to yield superhuman levels
of intelligence in the near- to medium-term future.

2\. This doesn't mean superhuman levels of machine intelligence are
impossible, just that we don't have any current methods that are likely to
lead to them.

3\. Taking action against an ill-defined, likely phantasmic threat is neither
cost effective nor helpful for the progress of AI and ML.

4\. When the people who have real, expert knowledge of something all tell you
one thing, and the people who have something to gain from getting your
attention by promoting sensational opinions and cherry-picked facts tell you
something else, you should rationally assess their motives and weigh that when
deciding whose views more closely approximate the truth.

~~~
karmacondon
Did anyone actually say that superhuman machine intelligence is prima facie
ridiculous? I followed the discussion pretty closely, both on Hacker News and
in the quotes and discussion by prominent researchers, and I don't remember
reading that. It's possible that one or two people somewhere in the discussion
said the idea was completely ridiculous, but it didn't seem to be the common
sentiment. The main dissenting ideas I saw said that it was too far away to be
considered a threat, and that most high-profile researchers weren't giving it
serious consideration.

I don't doubt that "otherwise smart people" have said that superhuman machine
intelligence is completely improbable, but I don't know about the "many" part.
There's always someone who will say anything. The closest I've seen to a
consensus is disputing the timeframe for AGI/SMI, not the possibility.

For that matter, I don't remember seeing a lot of people say "Computers will
never play chess" or "Computers could never drive cars" or "Computers could
never compete on Jeopardy". I can't recall one prominent researcher saying any
of those things definitively, much less "many" people saying any of them.

It seems like there might be some inserting of quotations into the mouths of
others going on here.

~~~
andreyf
> I don't know about the "many" part

I'll go there, as has everyone I've talked to about the subject (probably half
a dozen or so people). I have not heard a convincing proposal of a path
towards a rogue autonomous AI, and I've looked reasonably hard. I'm not sure
why otherwise smart people are for some reason just now getting worried about
an idea first popularized by The Terminator in 1984.

> Superhuman machine intelligence is prima facie ridiculous

It's not just prima facie ridiculous, it's ridiculous after thinking about it
for some time. We will have problems with enormously powerful machine-
augmented individuals and machine-augmented totalitarian organizations (be
they corporations or governments) long before we have problems with a
completely autonomous out-of-control AI.

------
NhanH
"I believe that in about fifty years' time it will be possible, to programme
computers, with a storage capacity of about 10^9, to make them play the
imitation game so well that an average interrogator will not have more than 70
per cent chance of making the right identification after five minutes of
questioning." \- You-Know-Who, in 1950.

It's a bit ironic that Sam missed this one quote, which would have been very
relevant to the post, both in terms of the conclusion and the topic at hand
(it's hard to predict stuff) ;). It's interesting to note that the prediction
was wrong on both counts: our machines are vastly more powerful than what the
quote called for, and we're nowhere near the capability it predicted.
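
As a rough back-of-the-envelope check on the first count (my own arithmetic,
assuming the 10^9 figure means binary digits, which is how the prediction is
usually read):

    turing_storage_bits = 1e9                  # ~125 MB if read as binary digits
    modern_ram_bits = 16e9 * 8                 # a commodity 16 GB machine
    modern_disk_bits = 1e12 * 8                # a 1 TB drive

    print(modern_ram_bits / turing_storage_bits)   # ~128x the predicted storage
    print(modern_disk_bits / turing_storage_bits)  # ~8000x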

From the last HN comments on the topic, while it's true that there are certain
people who believe that strong AI is impossible, the stance I got from
AI/robotics researchers was that it's just silly to talk about general AI at
this point: it's so far away. This post from Karpathy would be one example:
[http://karpathy.github.io/2012/10/22/state-of-computer-visio...](http://karpathy.github.io/2012/10/22/state-of-computer-vision/) .

Because of that, any talk and concern on the topic is a bit silly, especially
about regulating it. I mean, can you imagine people in the 1600s trying to
figure out how to regulate the air traffic that we have right now? Or should we
talk about space travel regulation now? Because seriously, chances are that
will happen before we have the singularity/strong AI.

If we're really concerned about the danger of strong AI, then I'm more in
favor of dealing with it the way Eliezer Yudkowsky and the Singularity
Institute do (by making sure that whatever we're researching toward is
"Friendly"), even though I also think they're too optimistic. I'm not against
immortality in my lifetime though.

Now, a talk about more sophisticated automated/autonomous systems that do
_funny_ things (in a good or bad way), or their risks, that's something worth
discussing.

For a fun (philosophical) remark: if the robots are to become our overlords,
it may be a bad idea to try to regulate them! Google "Roko's basilisk" for
more details.

~~~
dougmany
The article you linked to ends with

"only way to build computers that can interpret scenes like we do is to allow
them to get exposed to all the years of (structured, temporally coherent)
experience we have"

This may appear daunting until you realize robots can share memories. Five
robots running around for a year is equivalent to one robot running around for
five years. Does not Google have 25 cars driving around experiencing the world
right now?

I also see skeptics running to computer vision as an example of how far we are
from human level AI. Is that just the hardest problem to solve? Is it the most
useful problem to solve?

~~~
craigching
Besides sharing, who's to say that the machines couldn't do it faster or more
efficiently than humans do, and so gain the experience at a faster rate?

------
Animats
"Space travel is utter bilge." \- Dr. Richard van der Reit Wooley, Astronomer
Royal, British government, 1956

He was right. He did the math - it's not possible to get any significant
payload to Earth orbit with a single-stage vehicle propelled by chemical
fuels. With multi-stage rockets, it's possible to put a little mass in orbit
with a huge booster. That's just low earth orbit. Going further out is even
more expensive. Going to the moon required something the size of a 50-story
building to move a payload the size of an SUV. Nobody has bothered for over 40
years now.
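
For anyone who wants to redo that math, here is a minimal sketch of the
Tsiolkovsky rocket equation (my own figures: roughly 9.4 km/s of delta-v to
reach LEO including gravity and drag losses, and an optimistic chemical Isp of
450 s):

    import math

    delta_v_leo = 9400.0        # m/s to low Earth orbit, losses included (approximate)
    isp = 450.0                 # s, roughly the best chemical (hydrogen/oxygen) vacuum Isp
    v_exhaust = isp * 9.81      # effective exhaust velocity, ~4.4 km/s

    # Tsiolkovsky: delta_v = v_exhaust * ln(m_initial / m_final)
    mass_ratio = math.exp(delta_v_leo / v_exhaust)
    propellant_fraction = 1.0 - 1.0 / mass_ratio

    print(mass_ratio)           # ~8.4: liftoff mass must be ~8.4x the burnout mass
    print(propellant_fraction)  # ~0.88: tanks, engines, structure AND payload share the rest

Staging helps because each discarded stage resets the mass ratio, which is why
multi-stage rockets can put a little mass in orbit at the cost of a huge
booster.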

~~~
baq
OTOH orbit is halfway to everywhere.

~~~
Iftheshoefits
Orbit is precisely halfway, in energy terms, to the boundary of a region of
space that requires double the to-orbit energy to reach. It's barely
"anywhere," much less "everywhere."

~~~
TeMPOraL
It's a saying: "If you get to LEO, you're halfway to anywhere in the Solar
System." It reflects the amount of delta-v you need to spend to reach Earth
orbit, which happens to be around half of what you need to complete the trip
to another body.

(NOTE: you lose a lot of delta-v fighting air resistance)

------
spenczar5
Sure, "That'll never catch on" is funny in retrospect. But don't forget all
the cases of "This is gonna be huge" which flop, too.

Popular Mechanics did a retrospective which is an absolute _circus_ of hits
and misses: [http://www.popularmechanics.com/flight/g462/future-that-
neve...](http://www.popularmechanics.com/flight/g462/future-that-never-was-
next-gen-tech-concepts/)

~~~
joncalhoun
Sam covers this briefly with the prediction near the bottom about bitcoin.

~~~
lojack
Attributed to "Many otherwise smart people."

------
fchollet
Like every other technology, AI has risks and benefits. The main issue here is
that we are letting fear dominate the discussion, and that fear is not based
on any supporting facts or evidence.

Meanwhile, we are completely missing the discussion we need to have about the
_realistic_ , short and medium-term dangers associated with the development of
AI. Intelligent automation is about to disrupt economic production and
existing power balances, much as software did not too long ago, with
significant social consequences. This is what we need to be talking about, and
preparing for. We need to plan for a smooth transition to a post-AI world.

Regulating AI for fear that it may take over makes about as much sense as
outlawing space travel for fear of aliens. Can we start having a sane
discussion about AI now?

~~~
Geee
I'm not sure why it hasn't come up, but my biggest fear is that a powerful
nation will use AI for its own advantage, mostly in the form of propaganda and
mass manipulation through the Internet. That's at least my fear, and I think
that "AI taking over" is complete bullshit. Although, if let loose, AI could
even accidentally manipulate humanity into any non-positive state. We know
that being easily manipulable is probably the biggest vulnerability of humans
(and there's lots of history to prove that).

As for regulations, I think it should be required by law that AIs identify
themselves as AI on Internet discussion boards and social networks. I'm not
sure how it's possible to enforce this requirement (without requiring proof
from everyone that you're human). Captchas are already too difficult (only
bots get in these days). Popular captchas (such as Google's ReCaptcha) are
also centralized, which is dangerous because it gives the captcha owner a
controlling position.

~~~
fchollet
_> my biggest fear is that a powerful nation will use AI for their own
advantage, mostly in the form of propaganda and mass manipulation through
Internet._

Chances are it is already happening. More advanced AI will give governments
and corporations the ability to do the same in a much more effective way, and
on a larger scale. Large-scale intelligent data mining will make it possible
to use people's data to build actionable models of what they think, what they
will do next, and how to affect what they think and do. Better than humans
could.

It doesn't even need to occur through sockpuppets, so the anti-sockpuppet
regulation you propose would be not only highly intrusive but also
ineffective. Here's an example: Facebook can manipulate your emotional state
by selecting what goes into your newsfeed [1].

[1]
[http://www.theguardian.com/technology/2014/jun/29/facebook-u...](http://www.theguardian.com/technology/2014/jun/29/facebook-
users-emotions-news-feeds)

------
vilhelm_s
> X-rays are a hoax. -Lord Kelvin, ca. 1900

Gwern Branwen did a bunch of research to track down this quote, concluding:

> there is no reliable primary source for any Kelvin quotation running
> "X-rays are a hoax"; and there's some reasonable doubt about what he
> actually believed in the short interval between reading newspaper articles
> (I think we've all had the experience of seeing newspaper articles on some
> new scientific proposal which bore scant resemblance to reality!) and
> getting Rontgen's paper & photos.

[https://en.wikiquote.org/wiki/Talk:William_Thomson#.22X-rays...](https://en.wikiquote.org/wiki/Talk:William_Thomson#.22X-rays_will_prove_to_be_a_hoax.22)

------
guelo
This post feels childish and passive aggressive. Just make your argument, if
you have one.

~~~
keithwhor
Exactly my thoughts. A hint of condescension and lack of self-awareness - the
camp Sam Altman seems to be a part of ("regulate AI, it's an existential
threat to humanity!") is just as much a prediction of the future as anything
else. Yet, somehow, he seems to be subtly implying that he's "more" correct.

Marc Andreessen has been relatively level-headed about the topic of AI
recently on Twitter, and it would be nice to see other industry figureheads be
less emotionally involved and more scientifically rigorous in their assessment
of the industry. The debate is devolving into an ego battle (especially with a
post like this!), and it's rather unfortunate.

 _Edit:_ Additionally, Altman appears to be primarily attacking a strawman
with this article. "Superhuman" intelligence already exists. The emergent
intelligence (via technological amplification) of society is, by definition,
super-human. What's less realistic is anticipating a _human-like_ artificial
intelligence that would, in any way, represent an existential threat to the
human race. There are many, many problems with the latter argument. (From a
technological, philosophical, economic, and evolutionary perspective.)

------
powera
"They laughed at Columbus, they laughed at Fulton, they laughed at the Wright
brothers. But they also laughed at Bozo the Clown."

~~~
aaronbrethorst
To be fair, you can't reach Asia by sailing west across the Atlantic.

~~~
sandstrom
I agree with your point. Though some ~420 years after Columbus tried you more
or less can[1].

[1]
[http://en.wikipedia.org/wiki/Panama_Canal](http://en.wikipedia.org/wiki/Panama_Canal)

~~~
jonathansizz
Interestingly, if you look on the map you'll see that you have to sail SE
through the canal to go from Atlantic to Pacific.

------
fixxer
"Super machine intelligence is something that can be controlled with
government regulation."

\- Delusional VC

~~~
leot
No.

A genuinely sympathetic paraphrase might be:

"Machine superintelligence may or may not be controllable. If we do nothing to
regulate it, or to prevent horrible outcomes, we will with X > [too big]
probability find ourselves doomed.

We need to find a way to reduce X. I propose regulation is at least not likely
to be counter-productive, and may be strictly incrementally useful."

------
sitkack
Arthur C. Clarke's "Hazards of Prophecy" is required reading,
[http://www.sfcenter.ku.edu/Sci-Tech-Society/stored/futurists...](http://www.sfcenter.ku.edu/Sci-Tech-Society/stored/futurists_hazards_of_prophecy.pdf)
and frankly should have been referenced in the post and here already. Cite!

~~~
GrantS
Thank you for the link, that was a wonderful essay. And you've just reminded
me that I've had an old used copy of Clarke's "Profiles of the Future" (the
apparent source of this article) sitting on a shelf in the next room unread.
Time to change that.

------
csentropy
How about if we already have created a superhuman AI, that is tricking us
humans into believing that it does not exist? And thus preventing us from
building something else that might be more friendly to the human race and
counter its bad intentions? What if @sama has somehow been co-opted by that
AI's plan to stop further development of AI? :) On a more serious note, no one
can predict human behavior to the degree of accuracy needed to call it
deterministic. So, it is unlikely that we can predict a superhuman AI's
behavior with any degree of accuracy. In other words, we just do not know what
we are talking about. Does risk calculus make any sense in the domain of true
uncertainty?

~~~
leot
Relevant, and very worth reading:

[http://www.simulation-argument.com/](http://www.simulation-argument.com/)

------
zep15
Do many "otherwise smart" people actually believe "superhuman machine
intelligence is prima facie ridiculous"? I'd like to see some citations :-). I
think smart people tend to have much more nuanced views.

~~~
edmccard
>Do many "otherwise smart" people actually believe "superhuman machine
intelligence is prima facie ridiculous"?

I don't know how "otherwise smart" I am, but I wonder how we would be able to
tell that a machine intelligence was "superhuman" as opposed to "buggy".

For example, suppose we build a super-AI and ask it, "Is Shinichi Mochizuki's
proof of the ABC conjecture correct?" [1]. What would we do if it said "yes"?

(Of course, if "superhuman" just means "able to do things humans already know
how to do and verify, but lots faster", then we're already there).

[1]
[http://www.newscientist.com/article/dn26753-mathematicians-a...](http://www.newscientist.com/article/dn26753-mathematicians-
anger-over-his-unread-500page-proof.html)

~~~
coderzach
We'd ask it to produce a simplified version.

~~~
edmccard
>We'd ask it to produce a simplified version.

Yeah, that would work :)

Maybe the question I should have asked is:

What if we ask a super-AI for a proof of the ABC conjecture, and the result is
something too complicated for humans to verify?

My point, if I have one, is that when I read about "superhuman machine
intelligence", sometimes people seem to mean "capable of knowledge that humans
couldn't figure out on their own but that humans can understand once they see
it"; and sometimes they seem to mean "capable of knowledge that is beyond
human capacity to even verify".

I think development of machine intelligence of the first kind is extremely
likely, but I'm more skeptical about the second kind.

------
Animats
Prediction: in 5-15 years, there will be a corporation mostly run by an AI.
Visualize Goldman Sachs run by an AI program.

Corporations don't have "consciousness", nor do they need it. Maximizing
shareholder value is the goal. Machine learning systems are good at optimizing
for numerical goals.

~~~
candeira
Corporations are already AIs, just as accounting departments before 1940 were
already computers, only made of people processing marks on paper instead of
being made of transistors processing electrons.

~~~
TeMPOraL
Indeed. They are an interesting model of a completely alien mind, and
fortunately this mind runs _very, very slow_ for now.

------
eldude
Fun fact: some of those (obviously false) statements regarding nuclear energy
may in fact be true, depending on the interpretation of their original meaning.
We have yet to harness mass-energy equivalence for power. Modern nuclear
energy, both fission and fusion, is more accurately nuclear potential energy
(like a hydroelectric plant and water).

From Wikipedia[1]:

    
    
        E = mc2 has frequently been invoked as an explanation for the origin of energy
        in nuclear processes specifically, but such processes can be understood as
        converting nuclear potential energy in a manner precisely analogous to the
        way that chemical processes convert electrical potential energy.
    

I don't know, but it seems reasonable to me to conclude that when Einstein
stated the following, he may have been referring to mass-energy-equivalence
rather than nuclear potential energy, which in fact continues to hold true for
the foreseeable future:

    
    
        There is not the slightest indication that [nuclear energy] will ever be
        obtainable. It would mean that the atom would have to be shattered at will.
    

[1]
[http://en.wikipedia.org/wiki/Mass–energy_equivalence](http://en.wikipedia.org/wiki/Mass–energy_equivalence)
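
To put rough numbers on that "precisely analogous" framing (my own
back-of-the-envelope figures, not from the article): the fraction of rest mass
released as energy is tiny in both nuclear and chemical reactions, just several
orders of magnitude larger for fission.

    # Approximate energy released as a fraction of the rest mass involved
    fission_energy_mev = 200.0               # ~200 MeV per U-235 fission
    u235_rest_mass_mev = 235 * 931.5         # nucleon count times ~931.5 MeV per amu

    chemical_energy_ev = 5.0                 # a few eV per molecule for a typical reaction
    molecule_rest_mass_ev = 30 * 931.5e6     # a ~30 amu molecule, expressed in eV

    print(fission_energy_mev / u235_rest_mass_mev)      # ~9e-4 of the mass converted
    print(chemical_energy_ev / molecule_rest_mass_ev)   # ~2e-10: same mechanism, far smaller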

------
compbio
_The Navy revealed the embryo of an electronic computer today that it expects
will be able to walk, talk, see, write, reproduce itself and be conscious of
its existence. [...] Later Perceptrons will be able to recognize people and
call out their names and instantly translate speech in one language to speech
and writing in another language._

\- The New York Times in 1958 after a press conference with Rosenblatt. ("New
Navy Device Learns By Doing; Psychologist Shows Embryo of Computer Designed to
Read and Grow Wiser")

We now have walking, talking, object recognizing, writing, self-replicating,
face-detecting, text-to-speech converting, and translating computers. All at a
scale and accuracy surpassing us mere mortals. We do not know enough about
"being conscious of our existence" to measure this in other animals and
digital life forms. Perhaps "humans predicting future predictive capability of
machines" is fundamentally flawed. Perhaps the above article drew an
unnecessary amount of ire and criticism. Probably a fuzzy combination of the
two.

~~~
nostromo
Self-replicating?

> A self-replicating machine is a construct that is capable of reproducing
> itself autonomously using raw materials found in the environment, thus
> exhibiting self-replication in a way analogous to that found in nature.

~~~
compbio
I meant reproducing (like in the quote), but sure:

[http://www.apollon.uio.no/video/a_robot_e.mp4](http://www.apollon.uio.no/video/a_robot_e.mp4)

And the press release:

 _“In the future, robots must be able to solve tasks in deep mines on distant
planets, in radioactive disaster areas, in hazardous landslip areas and on the
sea bed beneath the Antarctic. These environments are so extreme that no human
being can cope. Everything needs to be automatically controlled. Imagine that
the robot is entering the wreckage of a nuclear power plant. It finds a
staircase that no-one has thought of. The robot takes a picture. The picture
is analysed. The arms of one of the robots is fitted with a printer. This
produces a new robot, or a new part for the existing robot, which enables it
to negotiate the stairs.”_

-2014 Kyrre Glette "Using 3D printers to print out self-learning robots"

------
walterbell
Genrich Altshuller worked as a clerk in Russia's patent office, later
developing a theory (TRIZ) of structured innovation, based on 50,000 patents,
[http://en.m.wikipedia.org/wiki/Genrich_Altshuller](http://en.m.wikipedia.org/wiki/Genrich_Altshuller)
& [http://www.mazur.net/triz/](http://www.mazur.net/triz/)

Some patents are classified each year. A clerk who has seen many classified
patents would have a unique opinion on "blue oceans" for investment
opportunities, especially if they knew how to prevent new patents from being
classified, by avoiding certain areas of research,
[http://fas.org/sgp/othergov/invention/](http://fas.org/sgp/othergov/invention/)
. In TRIZ terminology, they would have different psychological inertia.

Is there a yearly list of declassified patents over the last few decades? This
would be similar to lists of expired patents or books which enter the public
domain in some countries.

------
minimaxir
The original source linked at the end is more informative:
[https://www.lhup.edu/~dsimanek/neverwrk.htm](https://www.lhup.edu/~dsimanek/neverwrk.htm)

~~~
joncalhoun
My favorite from the source:

If the world should blow itself up, the last audible voice would be that of an
expert saying it can't be done. \- Peter Ustinov

------
vincetogo
I suspect that the augmentation of human intelligence through tech is
something we're more likely to get to before full-on AI. Assuming it's
unaffordable for most of us, I'm much more concerned about a caste of super-
intelligent, super-rich humans than computers that have no history of violence
or hunger for power.

------
praptak
"Prediction is very difficult, especially about the future." \-- Niels Bohr

------
sz4kerto
Predictions about something not being possible are difficult, because you can
never be right. You are either proven wrong or "we don't know yet". Therefore
these quotes are rather meaningless, I believe. (I see SA's point of
encouraging people to go for the moonshots, but still.)

~~~
Sakes
His point is that many experts have been wrong when assuming the limitations
of technology in the fields that they have mastered. He is simply trying to
mute the argument that predictions from AI experts are sufficient to dismiss
AI concerns.

The Einstein/Wright Brothers quotes really hit this home for me.

Maybe the quotes are meaningless to you because you already agree. But they
might persuade others that would otherwise assume discussing AI impacts on
humanity is a waste of time.

~~~
TheOtherHobbes
I have doubts about the Einstein quote. Atoms had been split at will for
decades by then.

Szilard hadn't yet proposed a theory of nuclear chain reactions, but according
to some cites of the quote Einstein didn't say it until 1934 - which was after
Szilard.

I don't have a problem with the idea that suprahuman intelligence may be
possible. I do have a problem with the fact that currently we have no idea
what the concept may even mean - and right now, more immediate cybersecurity
issues are being neglected.

Computers are already better than humans at many activities. From playing
chess to landing planes to learning how to play a video game - a computer with
the right software is _much_ better at these than an average human, and is
often at least as good as the best humans.

Take that to the black corner, and worms and botnets are already a serious
problem.

We don't need to wait for the Internet to become sentient and start talking to
us in a deep echoey robot voice to worry about cyberthreats.

There's more than enough to deal with already. And if you're going to try to
regulate and contain a future AI, making current systems as secure as possible
seems like a realistic place to start.

~~~
Animats
_" I have doubts about the Einstein quote. Atoms had been split at will for
decades by then."_

Not in a chain reaction. When Szilárd described the concept of a chain
reaction to Einstein, Einstein was shocked. He said "I never thought of that!"

Until then, nuclear physics was purely an academic enterprise. There were few
applications for radioactive materials. Radioactive decay just happened at its
own slow pace, and not much could be done with it. X-rays could be used to
pump the process, but less energy came out than what was put in. Suddenly the
nuclear physicists realized they had a tiger by the tail. This was going to
change the world, not necessarily for the better.

~~~
mturmon
Like @TheOtherHobbes above, I was skeptical of the Einstein quote ("Wasn't
Einstein presciently aware of where nuclear fission technology was going?").

But, poking around a bit, I came to the same understanding you have. Here's
some more of the time line:

The quote in the OP (which I can't find online; the Einstein archives at
Caltech are, alas, not indexed) about Einstein's skepticism about nuclear
energy is dated 1932. The first demonstrations of nuclear fission were years
later, in late 1938 and into 1939. And as you said, Einstein is reported to
have said, "I had not thought of that." \-- regarding the chain reaction.

The fabled Einstein-Szilard letter to Franklin Roosevelt, warning about the
Nazis getting the atomic bomb, was written in August 1939
([http://en.wikipedia.org/wiki/Einstein–Szilárd_letter](http://en.wikipedia.org/wiki/Einstein–Szilárd_letter)),
and then relayed to Roosevelt in October, after the flurry of activity due to
the Nazis invading Poland had died down.

------
baby
I found the bitcoin one intruding on the others; it is pretty impressive how
this "digital money" has grown. I remember a time when a bitcoin was $20, and
no one would have believed me if I had told them: in 6 months a bitcoin will
be worth more than $1,000.

~~~
mcintyre1994
I think it felt really out of place because many "otherwise smart people" have
been shouting it'll reach both zero and the sky soon and its future is still
really unclear.

------
dxbydt
I don't want to go all Clinton here, but, please, let's first define
"prediction". Here are some predictions that have general consensus:

1\. You will die someday ( so will I )

2\. The Bay Area will experience an earthquake in next decade

3\. A few islands will go under due to sea level rise.

I wouldn't like to call the above predictions - they are too sun-will-rise-in-
the-east obvious. It's like looking at the Dumbarton Bridge and predicting:
someday that bridge will fall. That's a biblical prediction - all standing
things must fall, the bridge is a standing thing, ergo, given enough wear and
tear, it too will crumble and fall. The oldest standing bridge in the world is
like 2,800 years old and is in a much more geologically stable place than the
Bay Area, so what chance does the Dumbarton have?

Now here are what I call predictions -

1\. qqq $200 by 2018

2\. esn replaces fizzbuzz in 2019 :)

3\. cnn,rnn,esn become middle school curriculum in 2020

I mean, here you have a reasonable level of confusion. Yet, if you plot the
probability over time for each of those predictions, the slope is definitely
positive. qqq has doubled in the past 3 years, give it another 3 years & it'll
probably double again. Given this pervasive SMI fetish, it's only logical that
startups replace their fizzbuzz with "in the next 20 minutes, code up an echo
state network in haskell". And if sama's actually right, cnns & rnns are going
to get so commonplace society is going to want middle schoolers to ace their
exams with questions on "ten key differences between the recurrent neural net
& the convolution neural net" instead of the pedestrian garbage we teach them
now - "on a z3, if 1+2=3 and 1+1=2, how much is 3+3 ?" So the poor kids
instead of sweating bullets & laboring through convoluted reasoning like
"since 1 + 2 is 3, so 2 +1 is 3 per abelian, and 1 + 1 is 2, that means 3 + 1
per cayley unique column entries must be 1, which implies 3 is the identity,
so 3 + 3 must be 3 as well. Ergo 3+3=3. Voila!" can actually make useful
technological predictions about which esn based startup will cross a trillion
dollar market cap by the time the kid hits puberty.

~~~
Rmilb
What is esn?

~~~
jjoonathan
[http://www.scholarpedia.org/article/Echo_state_network](http://www.scholarpedia.org/article/Echo_state_network)

~~~
Rmilb
Thank you for the link.

------
jsnk
I appreciate bold predictions about the future that may turn out to be wrong
over today's writing, where it's all about super-safe post hoc analysis after
the fact. Things like "Why X succeeded!" or "X failed because of these 5
reasons" don't impress me one bit. I wish more writers made bold predictions
and explained why they predict that way.

------
vezzy-fnord
I like how the second-last one stands out as a _positive_ prediction that was
wrong. Good show on that one.

------
testingonprod
Honestly, I think this whole hoopla about AI is overblown. I think when we
reach the point where this discussion matters, we will know it, and know it
quite obviously.

Right now, we're at a point in AI where Amazon recommends me new types of
deodorant immediately after I just placed an order for some. That tells you
the state of AI, and whether we really need to be worried about some of the
things that these pundits are, imo, dreaming about.

Let progress happen, and we'll deal with it as it comes. No need for premature
fear to halt the speed of progress.

------
Alex3917
Though if instead of saying that X will never be invented, they had said that
_if_ X is invented then its inventors won't make a substantial sum of money
from their invention, then they would have been right almost every time.

------
api
Sure, predictions are hard. We often get them wrong on the upside (over-hype)
and the downside. As a result, "AI isn't going to happen" really is not a good
argument against discussing the potential risks of it. (... and I say that as
someone who _is_ an "AI is around the corner" skeptic.)

Yet that doesn't change the basic risk calculus. In his previous post, Sam
advocated imposing draconian licensing and observation requirements on what
_in practice_ would be the majority of non-trivial CS research. He advocated
this on the basis of the _potential_ risk that as-yet-to-be-developed
hypothetical AIs might pose to human beings.

I did a short post on it here: [http://adamierymenko.com/did-sam-altman-of-y-
combinator-just...](http://adamierymenko.com/did-sam-altman-of-y-combinator-
just-start-a-campaign-to-license-and-regulate-all-cs-research/)

In addition to what I wrote there, I think that the risk of dramatically
slowing progress in CS/AI also has to be taken into account. There is risk of
doing and there is the risk of not doing.

The problem is that we currently face a number of existential risks -- like
catastrophic economic collapse due to fossil fuel depletion -- where the
majority of the risk is in the "risk of not doing" category. We know with
total certainty that if we continue business as usual with no change, our
civilization _will_ collapse. It's simple physics and high school math --
exponential growth in consumption of a finite resource without any
substitution or path to replacement can only end in one way.
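
The high-school math in question, sketched with made-up numbers (mine, not the
commenter's): under a fixed percentage growth rate in consumption, even a
large multiple of today's reserves buys surprisingly little time.

    import math

    reserves_years = 50.0    # hypothetical: 50 years of supply at today's consumption rate
    r = 0.02                 # consumption grows 2% per year

    # Cumulative use after T years of exponential growth equals the reserves when
    # (1 + r)**T - 1 == r * reserves_years, so:
    T = math.log(r * reserves_years + 1) / math.log(1 + r)
    print(T)                 # ~35 years, not 50

    # Even ten times the reserves doesn't change the picture much:
    T10 = math.log(r * 10 * reserves_years + 1) / math.log(1 + r)
    print(T10)               # ~121 years, not 500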

Smart computers might help us crack tough problems like fusion, safe and
scalable fission, better batteries to make renewable energy more practical,
etc. That in turn might help us avoid an absolutely real, tangible, non-
hypothetical, definite existential risk. I see no reason to hamstring that
kind of progress to defend against extremely hypothetical low-probability
risks.

That's why I consider Sam's suggestions to regulate CS research more dangerous
than any risk posed by speculative AI scenarios.

I am not opposed to all regulation, but I am opposed to regulations based on
extremely hypothetical hand-wavey risks. I'm also opposed to regulations that
are virtually impossible to define accurately or enforce fairly. Regulations
should be clear, objective, rationally justified by _tangible_ problems or
risks, and _minimal_. We should have regulations around, say, the use of
nuclear materials, but that's because we know for an absolute fact that it is
dangerous. We should have financial regulations because we know financial
fraud has happened and will continue to happen without them. ... etc. But I
positively cringe at the imposition of ill-defined broad regulations based on
fear-mongering and "precautionary principle" thinking -- a.k.a.
institutionalized paranoia and cowardice. Such regulations can do nothing
other than halt progress in the name of vague paranoia.

Make no mistake: Sam's proposal in his previous post would halt all non-
trivial CS research, or at least would slow it to such a crawl that it would
effectively stop. It would also cause a mass exodus from the field, since
nobody wants to operate under that kind of nonsense. Given that CS is the
primary driver now of progress in other fields, that would also likely halt
major progress in energy, materials, propulsion, transportation, etc.

If you read my blog post above, I take this in almost a conspiracy direction
and speculate that this is some sort of political power play to lock down the
field. The reason for this is that I find it hard to believe that someone of
Sam's intellect and education would _not_ realize the implications of what
he's suggesting.

------
martinesko36
More of this:
[http://zimmer.csufresno.edu/~fringwal/stoopid.lis](http://zimmer.csufresno.edu/~fringwal/stoopid.lis)

------
ryan_j_naughton
[http://imgur.com/gallery/5hiUM1e](http://imgur.com/gallery/5hiUM1e)

More interesting failed technology predictions.

------
Iftheshoefits
"Space travel is utter bilge" is correct, in my opinion, from one viewpoint:
specifically with respect to its cost effectiveness along any axis you care to
measure save one: the emotionally-driven ego/pride/curiosity axis.

I really wish the trillions "invested" in manned space travel over the decades
had instead gone to basic research in biology, chemistry, physics,
mathematics, and some of the applied research disciplines that derive from
these.

~~~
chigiskob
Trillions? The total amount (2014 nominal) given to NASA since 1958 is less
than 1.1 trillion. If you don't think there have been advancements in biology,
chemistry, physics, and mathematics as a result of NASA's research then I'm at
a loss.

~~~
wtbob
> Trillions? The total amount (2014 nominal) given to NASA since 1958 is less
> than 1.1 trillion.

Thus it's likely that the sums involved across all nations in the world are 2
trillion or more.

> If you don't think there have been advancements in biology, chemistry,
> physics, and mathematics as a result of NASA's research then I'm at a loss.

Oh, they certainly were, but those advances were for the most part accidental
and incidental.

------
dustingetz
What about flying cars and jetpacks?

------
kordless
I think he forgot "All this is a dream." by Michael Faraday

------
sidcool
Looks like Sam's on for some moonshots.

------
sudioStudio64
My criticism of Sam's prediction is really centered around the extreme
disparity between the concerns of the digerati wealthy elites and normal
people.

------
thomasmarriott
Survivor bias strikes again.

------
state
A thought on readability:

It would be nice if there were either (a) two newlines between each quote or
(b) only one newline between the quote and attribution.

------
bbcbasic
Just add 'prima facie' to your prediction and you've covered your ass.

I predict Bitcoin to $10k sometime in the next 5 years. Prima facie of course!

------
ocdtrekkie
Elon Musk, Bill Gates, and Stephen Hawking all agree AI research is dangerous.
Google does not seem to feel there is a credible risk and is doing it anyway.

