
This AI Boom Will Also Bust - KKKKkkkk1
http://www.overcomingbias.com/2016/12/this-ai-boom-will-also-bust.html
======
ma2rten
I think this field is suffering from some confusion of terminology. In my mind
there are three subfields crystallizing, each with different goals and thus
different methods.

The first one is Data Science. More and more businesses store their data
electronically. Data Scientists aim to analyze this data to derive insights
from it. Machine Learning is one of the tools in their tool belt; however,
they often prefer models that are understandable rather than black boxes.
Sometimes they prefer statistics because it tells you whether your insights
are significant.

The second one is Machine Learning Engineering. ML Engineers are Software
Engineers that use Machine Learning to build products. They might work on spam
detection, recommendation engines or news feeds. They care about building
products that scale and are reliable. They will run A/B tests to see how
metrics are impacted. They might use Deep Learning, but they will weigh the
pros and cons against other methods.

Then there are AI Researchers. Their goal is to push the boundaries of what
computers can do. They might work on letting computers recognize images,
understand speech and translate languages. Their method of choice is often
Deep Learning because it has unlocked a lot of new applications.

I feel like this post is essentially someone from the first group criticizing
the last group, saying their methods are not applicable to him. That is
expected.

~~~
sidlls
It's also suffering from hype.

And the criticism you note isn't one-directional in the field at large. I'm
finding that ML/AI researchers deriding ML/Data engineers and "scientists" as
not doing "real" ML or AI is becoming a thing, similar to how some computer
scientists deride engineering as not doing real computing.

~~~
logicallee
It is not suffering from hype. There is too little hype. People are vastly
underestimating what is about to happen.

See my comments here:

[https://news.ycombinator.com/item?id=13079598](https://news.ycombinator.com/item?id=13079598)

under the recent article "Artificial Intelligence Generates Christmas Song".

Basically, if there is no pixie dust that makes humans intelligent, and
instead it is a matter of the architecture of the brain and the first few
years of supervised sensory input, then neural net breakthroughs (which use a
similar architecture/topology) have the potential at any moment in time to
break through and match general human intelligence.

What I mean is that if someone sent back source code from 80 years from now,
but we had to run it on a bunch of Amazon / Google servers in a server farm,
we're pretty much _guaranteed_ to have enough computing power to do so!

(This is a combination of the number of neurons, their number of connections,
and their very slow speed.)
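
For concreteness, here is the sort of back-of-envelope estimate this claim
leans on. A rough Python sketch; the neuron, synapse, and firing-rate figures
are commonly cited estimates rather than settled numbers, and treating each
synaptic event as one multiply-accumulate is a large simplification:

    # Rough, assumption-laden estimate of brain-scale compute.
    neurons = 8.6e10           # ~86 billion neurons (common estimate)
    synapses_per_neuron = 1e4  # ~10,000 connections each (common estimate)
    firing_rate_hz = 100       # neurons are slow: ~100 spikes/sec at most

    # Treat each synaptic event as one multiply-accumulate.
    ops_per_second = neurons * synapses_per_neuron * firing_rate_hz
    print(f"~{ops_per_second:.1e} synaptic ops/sec")  # ~8.6e16

    # A single modern GPU is on the order of 1e13 FLOPS, so a farm of a few
    # thousand is in the same ballpark -- the (contested) basis for the
    # "we have the hardware" claim above.
    gpu_flops = 1e13
    print(f"~{ops_per_second / gpu_flops:.0f} such GPUs to match that rate")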

We have the hardware.

Now: we do not actually have the source code from 80 years from now that we
can go ahead and run on those machines.

So, we're at like heavier-than-air travel _right_ before the Wright Brothers
flew at Kitty Hawk. Except we have like jet engines already -- just no way to
design them into something that flies.

I think that AI is vastly underrated and I watch with incredible interest
every single breakthrough.

For the second category above, ML built into products or engineering
solutions, AlphaGo surprised me, because Go has an intractably large
possibility space; it's not in any sense subject to brute-forcing or
exhaustive search.

Dragon NaturallySpeaking surprised me in that, using its model, it's able to
get basically perfect dictation. I've never worked as a transcriptionist, but
a quick search reveals it has basically annihilated the industry of medical
transcription.

These are not the big, general breakthrough.

But the big, general breakthrough is right there, somewhere. The results
researchers are coming up with are astounding, and they're doing it in many
cases with neural nets, quite similar to the wiring of the human mind.

The hype is faaaaaaaaaaaaaaar less than warranted for the stage that we're at.
At any moment someone can put together something that achieves higher-level
intelligence and can be set loose upon the world corpus of culture.

True, there are no clear indications that this is about to happen (for
example, no one is extracting innate language algorithms from the human
genome that encodes them), so we are not exactly taking all the steps that we
could be trying. As far as I know we're not even genetically engineering
people to see what different parts of DNA do -- which is obviously a very,
very good thing; who would allow anyone to carry to term a child created as
an experiment to see what DNA does?

But despite not starting from a human starting point, the results that we are
achieving in many cases match and surpass human ability - while we do know
that in many cases some of their architecture is similar. I feel quite
strongly that we have more than enough hardware for general intelligence -
and I see advances every day that could end up going past the point of no
return on it.

\--

EDIT: got a downvote, but I would prefer a discussion if you think I'm wrong.

~~~
jpatokal
I didn't downvote you, but the TL;DR of the article is that most ML
demonstrations to date have been toys and there are no known real-world
applications that would justify the "40% of jobs lost!" hype.

And you're trying to rebut this by referencing an AI-generated _Christmas
jingle_. I think the author rests their case...

~~~
logicallee
No, I just meant to reference my _discussion_ from there (i.e. for people to
read through my comments there, after clicking.)

IOW I meant to transclude that discussion here. (Perhaps within that comment
thread a good specific summary comment is:
[https://news.ycombinator.com/item?id=13090869](https://news.ycombinator.com/item?id=13090869))

Obviously it is hard to know when that magic moment will happen that some kind
of general AI is created that can learn in some sense similarly to how humans
do. Every indication I have, and my astonishment at the results that are being
produced, strongly suggests "at any moment". The results are absolutely
astonishing every day and we have vastly more than enough firepower.

You might also be interested in this separate thread where I dealt with
questions of consciousness and pain. I again reference it here:

[https://news.ycombinator.com/item?id=13026020](https://news.ycombinator.com/item?id=13026020)

and you can click through at the top to follow my reference.

We are way past the point of no return here, and in my estimation it is a
question of years or at the most decades - not centuries.

~~~
Noseshine

      > Obviously it is hard to know when that magic moment will happen that some kind of general AI
      > is created that can learn in some sense similarly to how humans do. My every indication and 
      > astonishment at the results that are being produced strongly suggests "at any moment".
    

As an IT guy with a basic but solid neuroscience education (which isn't even
needed for what I'm about to say): Yep, you are on the hype train, and very
deeply. I really like _reasonable_ discussions; I have no idea what _this_ is
right here. We will see more amazing results, sure - but the applications will
be in specialized, narrow domains. From creating " _some kind of general AI
that can learn in some sense similarly to how humans do_ " we are still very
far away. Your statements remind me of 1960s "future" hype: a nuclear reactor
in every car by the year 2000, stuff like that.

~~~
logicallee
My hype is different, because in my estimation we already have the hardware.
You write:

>As an IT guy with a basic but solid neuroscience education

\-- could you go ahead and take a few minutes (maybe 5-10) to read through the
links above to my previous discussion and tell me whether, in your estimation,
I'm correct on the bottom-up aspect - i.e. the amount of computation that
human neural nets are likely doing, and how it compares to server farms with
fast interconnects today?

I'm not an expert in neuroscience so your feedback might be helpful there.

~~~
xamuel
If P=NP then we already have the hardware to crack RSA encryption.

The above sentence is true, but it has no bearing on anything.

~~~
logicallee
Don't you think it would have a lot more bearing if you had 7 billion devices
nonchalantly walking around cracking RSA every day using the same or less
hardware (but we couldn't reverse-engineer them, because they were obfuscated
in biology)?

The fact that they weren't reverse-engineered (yet) would still have _huge_
bearing on everything.

By 7 billion samples I mean the humans walking around. Your analogy with an
RSA crack is fundamentally different because biology doesn't already do it in
3 pounds of grey goo in seven billion different bodies.

So you would have to come up with an analogy involving something that exists
but that we can't use or reverse-engineer, to say: okay, fine, it exists, and
fine, we have the hardware to also do it, but the former doesn't have any
bearing on _us_ doing the latter.

~~~
xamuel
Interesting observation. If brains routinely cracked RSA, that could be
evidence that P=NP.

Still it wouldn't help us _find_ the P-time algorithm in question. We could
say "it seems to exist", but that would not imply "we'll discover it any day
now".

~~~
Noseshine

      > If brains routinely cracked RSA
    

For brains numbers have a completely different meaning and internal
representation than for computers. Brains don't "think in numbers". Doing the
kind of math we invented is a major effort for the brain, it's not what it
developed for, and it is very poorly equipped to do _explicit_ numerical
calculations (emphasis on "explicit"). So looking at brains to "crack RSA"
seems like waiting for a hammer to be useful in driving screws.

------
Phait
I understand that most people working with deep learning wouldn't want this
type of thinking to spread amongst the public, and I surely don't want it
either. But you have to be totally unaware of reality to think that DL is
_the_ definitive tool for AI. Most impressive results in DL in the past 2
years happened like this:

>deepmind steals people from the top ML research teams in universities around
the world

>these people are given an incredible amount of money to solve an incredibly
complex task

>a 6000-layer-deep network is run for 6 months on a GPU cluster the size of
Texas

>Google drops in their marketing team

>media says Google solved the AI problem

>repeat every 6 months to keep the company hot and keep the people flow
constant

>get accepted at every conference on earth because you're deepmind (seriously,
have you seen the crap that they get to present at NIPS and ICML? The ddqn
paper is literally a single line modification to another paper's algorithm,
while we plebeians have to struggle like hell to get the originality points)

I'll be impressed when they solve Pacman on a Raspberry Pi, otherwise they are
simply grownups playing with very expensive toys.

Deep learning is cool, I truly believe that, and I love working with neural
networks, but anyone with a base knowledge of ML knows better than to praise
it as the saviour of AI research.

Rant over, I'm gonna go check how my autoencoder is learning now ;)

~~~
dquail
Agree generally. Except for being unimpressed unless performance is achieved
on sub-Google-scale hardware. Today's Google supermachine is tomorrow's
Raspberry Pi. No need to artificially constrain our bounds. There is, after
all, the inevitability of Moore's law.

~~~
dkarapetyan
Moore's law has been dead for a while now. Most of the chip in your phone is
powered off because otherwise it would burn up. Highly recommend watching this
video:
[https://www.youtube.com/watch?v=_9mzmvhwMqw](https://www.youtube.com/watch?v=_9mzmvhwMqw)

~~~
dquail
Interesting. I've yet to hear a "Moore's law is dead" argument, so perhaps I
should watch the video before commenting further. But the fact that most of
the chip is turned off doesn't falsify the fact that most of it still exists.
Cooling it properly is a separate problem independent of computation, no?

~~~
tekni5
The talk mentions that there is a physical limit to how many cores you can add
to a CPU before it becomes useless, even with parallel computing.

------
pesenti
When I was at Watson this is the first thing I told every customer: before you
start with AI, are you already doing the more mundane data science on your
structured data? If not, you shouldn't go right away for the shiny object.

This said, I still believe the article is mistaken in its evaluation of
potential impact (and its fuzzy metaphor of pipes). Unstructured or semi-
structured or dirty data is much more prevalent than cleaned structured data
on which you can do simple regression to get insight.

Ultimately the class of problems solved by more advanced AI will be
incommensurably bigger than the class of problems solved by simple machine
learning. I could make a big laundry list, but just start thinking of anything
that involves images, sound, or text (i.e. most forms of human communication).

~~~
dheera
And before you do mundane data science on your structured data, you should
figure out if there is a better way to get cleaner raw data, more data, as
well as more accurate data.

For example, I predict stereo vision algorithms will die out soon, including
deep-learning-assisted stereo vision. It's useful for now but not something to
build a business around. Better time-of-flight depth cameras will be here soon
enough. It's just basic physics. I worked on one for my PhD research. You can
get pretty clean depth data with some basic statistics and no AI algorithm
wizardry. We're just waiting for someone to take it to a fab, build driver
electronics, and commercialize it.

~~~
dTal
Stereo vision is obviously highly effective in biology as it has independently
evolved a great many times. Time-of-flight may be poised for a renaissance,
but it scales badly and is active, not passive. Stereo vision, and its big
brother light fields, are far more general and are certainly not going to "die
out".

~~~
revelation
Yet humans produce terrible depth data.

~~~
kd0amg
Terrible for what purpose? Humans seem pretty good at throwing things to each
other and catching them. I'm very bad at coming up with a good numeric
estimate of linear size. As a fencer, I could never tell you how many inches
between me and my opponent, how long his or my arms are, how tall he is, etc.
I could definitely tell you which parts of our bodies are within reach of each
other's arm extension, fleche, lunge, etc.

~~~
michaelt
Terrible for the purpose of proving a general-purpose stereo machine vision
system is practical.

The distance at which a stereo vision system can capture precise depths
depends on the distance between eyes, and the eyes' angular resolution. Human
depth perception works well for things within about 10m, but when you get out
to 20-40m humans get a lot less info from stereo vision.

When you get to that distance, humans seem to have a whole load of different
tricks - shadows, rate of size change, recognising things of known size,
perspective and so on. You can see a car and know how far it is even without
stereo vision, because you know how big cars are, and how big lanes and road
markings are. You can even see two red lights in the distance at night and
work out whether they're the two corners of a car, or two motorbikes side-by-
side and closer to you.

On the other hand, your basic general-purpose stereo machine vision system
doesn't try to understand what it's looking at - you just identify 'landmarks'
that can be matched in both images (high contrast features, corners etc) and
measure the difference in angle from the two cameras. This is relatively
simple and easy to understand!
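
A minimal sketch of that triangulation step (the camera numbers below are
made up purely for illustration):

    # Match a landmark in the left and right images, measure how far it
    # shifts (the disparity), and convert that shift to depth.
    focal_length_px = 700.0  # focal length in pixels (made up)
    baseline_m = 0.06        # spacing between the two cameras, roughly eye-like

    def depth_from_disparity(disparity_px):
        # Standard pinhole-stereo relation: Z = f * B / d
        return focal_length_px * baseline_m / disparity_px

    # A landmark at x=320 in the left image and x=313 in the right image:
    print(depth_from_disparity(320 - 313))       # -> 6.0 metres

    # At 40 m the disparity is barely a pixel, which is exactly the
    # precision limit described above.
    print(focal_length_px * baseline_m / 40.0)   # ~1.05 px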

For tasks that humans can do that involve depth perception of things more than
~40m away - flying a plane, for example, where most things are more than 40m
away if you're doing it right! - nice simple stereo vision can't get the job
done, because humans are actually using their other tricks.

Of course, despite this limitation stereo vision comes up a lot in nature -
it's still a beneficial adaption, because most things in nature that will kill
you do so from less than 10m away :)

~~~
matt_kantor
> Of course, despite this limitation stereo vision comes up a lot in nature -
> it's still a beneficial adaption, because most things in nature that will
> kill you do so from less than 10m away :)

It's actually pretty rare for non-predatory animals to have good stereo
vision. Most of them are optimized for a wide field of view instead, evolving
eyes placed on either side of their head. Think rabbits, parrots, bison,
trout, iguanas, etc.

[https://en.wikipedia.org/wiki/Binocular_vision](https://en.wikipedia.org/wiki/Binocular_vision)

[https://www.quora.com/Why-have-most-animals-evolved-to-
see-o...](https://www.quora.com/Why-have-most-animals-evolved-to-see-only-in-
front-of-their-bodies-and-not-all-directions-simultaneously)

------
brudgers
_Most firms that think they want advanced AI /ML really just need linear
regression_

That's how AI always looks in the rearview mirror. Like a trivial part of
today's furniture. Pointing a phone at a random person on the street and
getting their identity is already in the realm of "just machine learning" and
my phone recognizing faces is simply "that's how phones work, duh" ordinary.
When I first started reading Hacker News a handful of years ago, one of the
hot topics was computer vision at the level of industrial applications like
assembly lines. Today, my face unlocks the phone in my pocket...and,
statistically, yours does not. AI is just what we call the cutting edge.

Open the first edition of _Artificial Intelligence: A Modern Approach_ and
there's a fair bit of effort to apply linear regression selectively in order
to be computationally feasible. That _just linear regression_ is "just" linear
regression these days only because my laptop has 1.6 teraflops of GPU, and
that's measly compared to what $20k would buy.

The way in which AI booms go bust is that after a few years everybody accepts
that computers can beat humans at checkers. The next boom ends and everybody
accepts that computers can beat humans at chess. After this one, it will be Go
and when that happens computers will still be better at checkers and chess
too.

~~~
kgwgk
> Open the first edition of Artificial Intelligence: A Modern Approach and
> there's a fair bit of effort to apply linear regression selectively in order
> to be computationally feasible.

Does the book mention linear regression at all? The term doesn't appear in the
index.

~~~
tnecniv
The third edition has it [0]. The second does as well, but the first doesn't
have an online index.

[0] [http://aima.cs.berkeley.edu/aima-
index.html](http://aima.cs.berkeley.edu/aima-index.html)

~~~
kgwgk
Thanks. The section "Regression and Classification with Linear Models" from
the third edition and the chapter "Statistical Learning Methods" from the
second edition do not appear in the first edition.

------
vonnik
[Disclosure: I work for a deep-learning company.]

Robin's post reveals a couple fundamental misunderstandings. While he may be
correct that, for now, many small firms should apply linear regression rather
than deep learning to their limited datasets, he is wrong in his prediction of
an AI bust. If it happens, it will not be for the reasons he cites.

He is skeptical that deep learning and other forms of advanced AI 1) will be
applicable to smaller and smaller datasets, and that 2) they will become
easier to use.

And yet some great research is being done that will prove him wrong on his
first point.

[https://arxiv.org/abs/1605.06065](https://arxiv.org/abs/1605.06065)
[https://arxiv.org/abs/1606.04080](https://arxiv.org/abs/1606.04080)

One-shot learning, or learning from a few examples, is a field where we're
making rapid progress, which means that in the near future, we'll obtain much
higher accuracy on smaller datasets. So the immense performance gains we've
seen by applying deep learning to big data will someday extend to smaller data
as well.

Secondly, Robin is skeptical that deep learning will be a tool most firms can
adopt, given the lack of specialists. For now, that talent is scarce and
salaries are high. But this is a problem that job markets know how to fix. The
data science academies popping up in San Francisco exist for a reason: to
satisfy that demand.

And to go one step further, the history of technology suggests that we find
ways to wrap powerful technology in usable packages for less technical people.
AI is going to be just one component that fits into a larger data stack,
infusing products invisibly until we don't even think about it.

And fwiw, his phrase "deep machine learning" isn't a thing. Nobody says that,
because it's redundant. All deep learning is a subset of machine learning.

~~~
orthoganol
> that will prove him wrong

I'm skeptical of claims about a one-shot learning silver bullet, unless people
are talking about something different from how it has been classically
presented, e.g. Patrick Winston's MIT lectures. Yes, you can learn from a few
examples, but only because you've imparted your expert knowledge, maintain a
large number of heuristics, control the search space effectively, etc. There's
a lot of domain-specific work required for each system, so I consider it more
an approach of classical AI and not something that figures out everything from
the data alone, like deep learning.

But again, maybe people are talking about something different than my above
description when they talk about one-shot learning today. Either way, I don't
think having to rely on a lot of domain specific knowledge is necessarily a
bad thing.

~~~
bayonetz
Succinctly, there is no free lunch...

------
jeyoor
This article matches what I've been seeing anecdotally (especially at smaller
tech firms and universities in the Midwest US).

I've been hearing more folks in research and industry express the importance
of applying simpler techniques (like linear regression and decision trees)
before reaching for the latest state-of-the-art approach.

See also this response to the author's tweet on the subject:
[https://twitter.com/anderssandberg/status/803311515717738496](https://twitter.com/anderssandberg/status/803311515717738496)

~~~
IIIIIII
Saying that linear regression is easier to do properly than more complex
methods like random forests, DL, boosting, etc. is like saying that people
should code in assembly instead of Python.

~~~
kristjankalm
This is a false dichotomy. Both OLS regression and, say, random decision
forest regression have the same objective (predict values) and achieve it with
similar means (build a generative model / function). They solve the same
problem. Contrastingly, assembler and python are broadly aimed at completely
different use cases.

Broadly, whether you should move from OLS to random forest regression comes
down to the SNR increase divided by the increase in man-hours and money spent.

~~~
throw_away_777
It is actually much easier to apply a random forest (or really a gradient
boosted decision tree, which almost strictly dominates random forests) than a
linear regression. Decision tree methods require far less data preprocessing
than linear regression, because the model is able to infer feature
relationships. Obviously, if your features are linearly related to your target
then linear regression is much more viable.
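
To illustrate the trade-off, here is a small scikit-learn sketch on synthetic
data (everything below is invented for illustration; real datasets differ):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.RandomState(0)
    X = rng.uniform(0, 10, size=(2000, 2))
    # Target has a threshold effect and an interaction: awkward for a plain
    # linear model on raw features, easy for a tree ensemble to pick up.
    y = (X[:, 0] > 5) * 3.0 + 0.2 * X[:, 0] * X[:, 1] + rng.normal(0, 0.3, 2000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    lin = LinearRegression().fit(X_tr, y_tr)
    gbt = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

    print("linear R^2:       ", r2_score(y_te, lin.predict(X_te)))
    print("boosted trees R^2:", r2_score(y_te, gbt.predict(X_te)))
    # The linear model only closes the gap after you hand-craft the threshold
    # and interaction features -- that hand-crafting is the preprocessing cost.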

~~~
jbrambleDC
This is absolutely true; the one caveat is that with linear regression you can
explain the significance of features and their relationship to the response
variable in simpler terms.

------
WhitneyLand
This article tries to be right about something big by arguing about things
that are small and that do not necessarily prove the thesis.

Notice how you can cogently disagree with the main idea while agreeing with
most of the sub points (paraphrasing below):

1) Most impactful point: The economic impact that innovations in AI/machine
learning will have over the next ~2 decades is being overestimated.

DISAGREE

2) Subpoint : Overhyped (fashion-induced) tech causes companies to waste time
and money.

AGREE (well, yes, but does anyone not know this?)

3) Subpoint: Most firms that want AI/ML really just need linear regression on
cleaned-up data.

PROBABLY (but this doesn't prove or even support (1))

4) Subpoint: Obstacles limit applications (through incompetence)

AGREE (but it's irrelevant to (1), and also a pretty old conjecture.)

5) Subpoint: It's not true that 47 percent of total US employment is at risk
... to computerisation ... perhaps over the next decade or two.

PROBABLY (That this number/timeframe is optimistic means very little. One
decade after the Internet, many people said it hadn't upended industry as
predicted. Whether it took 10, 20, or 30 years, the important fact is that
the revolution happened.)

It would be interesting to know if those who agree in the comments agree
with the sensational headline or point 1, or the more obvious and less
consequential points 2-5.

~~~
jbrambleDC
Another point is that Linear Regression IS machine/statistical learning. Sure,
it's been around for more than 100 years, since before computers, but
regression algorithms are learning algorithms.

Arguing for more linear regression to solve a firm's problems is equivalent to
arguing for machine learning. Now, if instead he wanted to argue that the vast
majority of a business's prediction problems can be solved by simple
algorithms, that is most likely true. But the economic impact of this is still
part of the economic impact of machine learning.
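
To make that concrete, here is a minimal sketch of ordinary linear regression
trained by the same loss-minimising loop as any other learning algorithm
(synthetic data, purely illustrative):

    import numpy as np

    rng = np.random.RandomState(1)
    X = rng.randn(500, 3)
    y = X @ np.array([2.0, -1.0, 0.5]) + 4.0 + rng.normal(0, 0.1, 500)

    w, b, lr = np.zeros(3), 0.0, 0.1
    for _ in range(500):                    # gradient descent on squared error
        err = X @ w + b - y
        w -= lr * (X.T @ err) / len(y)
        b -= lr * err.mean()

    print(w, b)   # recovers roughly [2, -1, 0.5] and 4.0 -- it "learned" them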

~~~
notahacker
If we're classing linear regression as machine learning and agreeing it's a
representative example of the type of simple algorithm that's most likely to
benefit firms, I think it probably helps his point rather than harming it.
It's a technique that's been around for ages, is far from arcane knowledge and
every business has had the computing capability to run useful linear
regressions on various not-particularly-huge datasets in a user-friendly GUI
app for at least a couple of decades now.

For the most part they haven't run those regressions at all, and where they
have, they haven't been awe-inspiringly successful in their predictions, never
mind so successful the models are supplanting the research of their knowledge-
workers.

~~~
AndrewKemendo
This overshoots the target. It's like saying that these systems use algebra
and are therefore not AI.

LR and general regression schemes are captured in supervised learning methods.
So yes, the systems use linear regression as a fundamental building block, but
build on it significantly.

------
randcraw
After a good look behind the curtain of Deep Learning, I've come to agree with
Robin. No, Deep Learning will not fail. But it _will_ fail to live up to its
promise to revolutionize AI, and it won't replace statistics or GOFAI in many
tasks that require intelligence.

Yes, DL has proven itself to perform (most?) gradient-based tasks better than
any other algorithm. It maximizes the value in large data, minimizing error
brilliantly. But ask it to address a single feature not present in the zillion
images in ImageNet, and it's lost. (E.g. _where_ is the person in the image
looking? To the left? The right? No deep net trained using labels from
ImageNet could say.) This is classic AI brittleness.

With all the hoopla surrounding DL's successes at single-task challenges
(mostly on images), we've failed to notice that nothing has really changed in
AI. The info available from raw data remains as thin as ever. I think soon
we'll all see that even ginormous quantities of thinly labeled supervised data
can take your AI agent only so far -- a truly useful AI agent will need info
that isn't present in all the labeled images on the planet. In the end the
agent still needs a rich internal model of the world that it can further
enrich with curated data (teaching) to master each new task or transfer the
skill to a related domain. And to do that, it needs the ability to infer cause
and effect, and explore possible worlds. Without that, any big-data-trained AI
will always remain a one trick pony.

Alas, Deep Learning (alone) can't fill that void. The relevant information and
inferential capability needed to apply it to solve new problems and variations
on them -- these skills just aren't present in the nets or the big data
available to train them to high levels of broad competence. To create a mind
capable of performing multiple diverse tasks, like the kinds a robot needs in
order to repair a broken toaster, I think we'll all soon realize that DL has
not replaced GOFAI at all. A truly useful intelligent agent still must learn
hierarchies of concepts and use logic, if it's to do more than play board
games.

------
chime
> Good CS expert says: Most firms that think they want advanced AI/ML really
> just need linear regression on cleaned-up data.

Cleaning up data is very expensive. And without that, the analysis is good for
nothing. AI helps provide good analysis without having to clean up data
manually. I don't see how that is going away.

~~~
cardine
> AI helps provide good analysis without having to cleaning up data manually.

My own experience has shown that dirty data impacts advanced AI just as much
as it impacts far more basic ML techniques.

Even for the most advanced AI we work on, we spend just as much time worrying
about clean data as we do anything else.

~~~
throw_away_777
When you say "clean data", what exactly do you mean? I've often seen this
claim that cleaning data takes a lot of time, but it seems like an ill-defined
term.

~~~
bonoboTP
It can mean different things.

In general: duplicate data, missing fields, different formats for different
parts of the data, inconsistent naming schemes

For text: character encodings, special symbols, escape characters,
punctuation, extra or missing spaces and newlines, capitalization

For images: different sizes, rotations, crops, blurry images

For numbers: inconsistent decimal point/comma, outliers with obviously
nonsense values or zeros, values in different units of measurement etc.
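
A small pandas sketch of what a few of those fixes look like in practice (the
column names and values are hypothetical):

    import pandas as pd

    df = pd.DataFrame({
        "customer": ["Acme", "ACME ", "acme", "Beta Co", None],
        "revenue":  ["1,200.50", "1200.5", "999999999", "85.3", ""],
    })

    df["customer"] = df["customer"].str.strip().str.lower()  # naming schemes
    df["revenue"] = pd.to_numeric(                            # thousands commas
        df["revenue"].str.replace(",", "", regex=False), errors="coerce")
    df = df[df["revenue"].between(0, 1e6)]                    # nonsense outliers
    df = df.dropna(subset=["customer", "revenue"])            # missing fields
    df = df.drop_duplicates(subset="customer")                # duplicates
    print(df)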

~~~
nostrademons
For user behavior: bots, clickfraud/clickjacking, bored teenagers, competitors
who are sussing out your product, people who got confused by your user
interface, users who have Javascript disabled and so never trigger your
clicktracking, users who are on really old browsers who don't have Javascript
to begin with.

And then there's bugs in your data pipeline: browser (particularly IE) bugs,
logging bugs, didn't understand your distributed database's conflict
resolution policy bugs, failed attempts at cleaning all the previous
categories, incorrect assumptions about the "shape" of your data, self-DOS
attacks (no joke - Google almost brought down itself by having an img with an
empty src tag, which forces the browser to make a duplicate request on every
page) which result in extra duplicate requests, incorrectly filtering requests
so you count /favicon.ico as a pageview, etc.

------
felippee
There is a never-ending confusion caused by the term "AI" to begin with. The
term, coined by John McCarthy in the 1950s partly to raise money, is really
good at driving imagination, yet at the same time causes hype and
over-expectations.

This field is notorious for its hype-bust cycles and I don't see any reason
why this time would be different. There are obviously applications and
advancements, no doubt about it, but the question is whether those justify the
level of excitement, and the answer is probably "no".

When people hear AI they inevitably think "sentient robots". This will likely
not happen within the next 2-3 hype cycles and certainly not in this one.

Check out this blog for a hype-free, reasonable evaluation of the current AI:

[http://blog.piekniewski.info/2016/11/17/myths-and-facts-
abou...](http://blog.piekniewski.info/2016/11/17/myths-and-facts-about-ai/)

~~~
dmfdmf
Thanks. This is why I love HN: finds like this. After poking around a bit,
this site looks like a really good blog that neither hypes AI nor denies it,
which is congruent with my views.

------
rampage101
The more I get into machine learning and deep learning, the more it seems like
there is an incredible amount of configuration to get some decent results.
Cleaning and storing the data takes a long time. And then you need to figure
out exactly what you want to predict. If you predict some feature with any
sort of error in your process, the entire results will be flawed.

There are a few very nice applications of AI techniques; however, most data
sets don't fit well with machine learning. What you see is that tutorials use
the Iris data set so much because it breaks into categories very easily. In
the real world, most things are in a maybe state rather than yes/no.

~~~
ffwd
> In the real world, most things are in a maybe state rather than yes/no.

Not to get too far afield, but I disagree with this on a certain philosophical
level. All states are yes/no. All states of all things should result in a
yes/no and be differentiable, with enough data. This doesn't speak to the
practicality of that but as far as I can tell the theoretical potential is
huge, almost infinite even.

~~~
JonnieCache
Isn't this just shifting the ambiguity into your choice of state definitions,
rather than the states themselves?

~~~
ffwd
But there should be no ambiguity, with enough data. Maybe that means there
will always be ambiguity, but maybe it doesn't, especially not with man-made
things and complex natural objects; and if you can contextualize the data
over time and 'geographically', there is more 'signal' there to differentiate.

~~~
jonathankoren
> But there should be no ambiguity, with enough data.

What? I'm sorry, but this runs counter to everything in my experience, both
professionally and in casual everyday experience.

More data helps to a point, but then there are diminishing returns, and it
certainly doesn't eliminate the ambiguity. On the contrary, you discover
diversity, and you still have misclassifications and perhaps an even harder
data cleaning problem, because now you're seeing cases that aren't actually
clear cut. Even if you're only talking about adding more features, that again
works up to a point, but then you hit sparsity issues.

~~~
ffwd
Yeah, that's why I called it philosophical, because the idea is a little more
involved, shall we say. I'm not a god of this, so speculation ahead, be warned.
In cases that aren't clear cut, you would also need contextual data, like a
bigger physical area or a time dimension - really any data point that can
help narrow down what the thing is. It wouldn't just be pure deep
learning stuff, it would be some kind of memory and data store of already
classified objects and contexts. In the ultimate end, ALL of it would be
sparse, but classify perfectly just that one thing it is built to classify.
And if that doesn't work, several sparse things combined would result in one
unique thing. On the sparse matrix Wikipedia page there is an example of
balls with a string through them; this would correspond to the data being the
balls, and the systems we build (or alternatively unsupervised learning
methods for finding new strings), whatever they may be, would be the strings
(assuming all the strings are actually informational and correct to the
natural world). But you need the balls to begin with, etc., since all of this
information should be in the natural world on its own, and also accessible to
us.
~~~
jonathankoren
This is literally a philosophical problem. It's called ontology. And no amount
of data solves this problem, because ultimately it's a labeling problem, and
the border between things is ill defined; additional data doesn't help
resolve labeling ambiguity - if anything, it reveals just how ill defined the
world actually is.

Think about it. Let's say you had a problem which was find the black squares.
So you collect some data and you find that you have a whole bunch of squares
that are on the blackness scale of 0.0, and bunch that are 0.1, and then
there's one at 0.5. Is 0.5 black? Maybe not. What about 0.7? Maybe. What about
0.999? Probably, but is it? It's not 1.0. And if we say 0.9 and higher are
black, why not 0.89? Even discounting measurement error, there's nothing that
supports a threshold at 0.9 beyond, "Well, I think it should be this."
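
A quick illustration of that point in code: more samples just fill in the
scale, they don't tell you where the cut-off belongs (made-up data):

    import numpy as np

    rng = np.random.RandomState(0)
    blackness = np.concatenate([
        rng.uniform(0.00, 0.15, 1000),   # clearly not black
        rng.uniform(0.85, 1.00, 1000),   # clearly black
        rng.uniform(0.15, 0.85, 200),    # the awkward middle
    ])

    for threshold in (0.5, 0.7, 0.9):
        print(threshold, int((blackness >= threshold).sum()), "called 'black'")
    # Every threshold gives a different answer, and nothing in the data itself
    # privileges one cut-off over another -- that is the labeling problem.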

~~~
ffwd
> if anything it finds out just how ill defined the world actually is.

Yeah I hear this but it seems only half-true to me. While for most intents and
purposes the world is ill-defined, in another sense the world itself is "100%
signal" and no noise. If we "zoom out" and take a grand view, imagining that
we have a supercomputer and a huge database, and the algorithms are solved, I
think every 'thing' in the universe has some unique features, and if you start
to have them all in a database you may be able to uniquely identify any thing,
at least those important to us. Everything one has excludes something else,
but it also includes that specific thing. Every thing adds context to one
thing and removes context from another. If you can draw a map of it, it seems
to me like deep learning can, hypothetically, automatically differentiate it.
Deep learning isn't just about one vector or one hierarchy of features, it's
about how the world is ALL vectors like this, even if right now, the CS around
it is pretty limited. It seems to me intuitively true, at least. At the bare
minimum, seeing as we humans are absurd about categorizing everything into
objects, it actually works very well functionally (we can manipulate, create,
and predict in the world).

~~~
randcraw
If I understand your point, I'd suggest that it may apply best to the use of
DL for low-level AI -- seeing, hearing/generating speech, and recognizing/
navigating/interpreting complex signals of other kinds. There, classification
is secondary to modeling the many subsymbolic facets endemic to raw analog
signals.

I suspect DL will eventually settle into a less vaunted role in the historical
saga of AI than it portends now. And that role may well be the 'grounding' of
sensory experience -- the modeling of the world into something perceptually
and cognitively manageable, like Plato's shadows on a cave wall.

~~~
jonathankoren
This problem is deeper (HA!) than trying to apply a computer algorithm. It's a
labeling problem. It's an interpretation problem. It's a _human_ problem.

------
shmageggy
Here's why the pipes metaphor is a bad one: we already are doing everything we
can and ever will do with pipes. Pipes have been around for a really long
time, we know what they are capable of, we've explored all of their uses.

OTOH, the current progress in AI has enabled us to do things we couldn't do
before and is pointing towards totally new applications. It's not about making
existing functionality cheaper, or incrementally improving results in existing
areas, it's about doing things that have been heretofore impossible.

I agree that deep nets are overkill for lots of data analysis problems, but
the AI boom is not about existing data analysis problems.

~~~
taeric
If there is a curse of our industry, it is almost willful ignorance of just
how hard the physical engineering fields are.

The simple things with pipes are simple. Yes. However, to think we haven't
made advances, or have no more to make, is borderline insulting to mechanical
engineers and plumbers.

Ironically, deep learning will likely help lead to some of those advances.

~~~
shmageggy
> However, to think we haven't made advances, or have no more to make

Not what I was saying at all. My point was that pipes are used to transport
something from point A to point B, and that regardless of what advances we
make, they are still going to be used for that purpose, and that this is
unlike the situation with AI.

~~~
taeric
My apologies for twisting your point, then. I confess I do not think I see it,
still. :(

Pipes do much more than just transport from a to b, though often it is all a
part of that. Consider how the pipes of your toilet work. Sure, ultimately it
is to get waste out of your house. But it's not as simple as just a pipe from
a to b. You likely have a c, which is a water tank to provide help. And there
are traps to keep air from the sewage getting back in.

Basically, the details add up quickly. And the inner plumbing for such a
simple task is quite complicated and goes beyond simple pipes.

So, bringing it back to this. Linear algorithms are actually quite
complicated. So are concerns with moving all of the related data. And that is
before you get to things that are frankly not interpretable. Like most deep
networks.

------
tim333
It seems a little odd that the author is focusing on machine learning not
being terribly good for prediction from data to counter the "this time is
different" argument. The reason this time is different is that we are in a
period when AI is surpassing human intelligence field by field, and that only
happens once in the history of the planet. AI is better at chess and Go, for
example, is slowly getting there in driving, and will probably surpass general
thinking at some point in the future, though there's a big question mark as
to when.

~~~
coldtea
> _The reason this time is different is we are in a period when AI is
> surpassing human intelligence field by field and that only happens once in
> the history of the planet._

Citation needed.

------
jondubois
Journalists and investors only seem to get excited about buzzwords. Maybe
that's because they don't actually understand technology.

To say that technology is like an iceberg is a major understatement.

The buzzwords which tech journalists, tech investors and even tech recruiters
use to make decisions are shallow and meaningless.

I spoke to a tech recruiter before and he told me that the way recruiters
qualify resumes is just by looking for keywords, buzzwords and company names;
they don't actually understand what most of the terms mean. This approach is
probably good enough for a lot of cases, but it means that you're probably
going to miss out on really awesome candidates (who don't use these buzzwords
to describe themselves).

The same rule applies to investors. By only evaluating things based on
buzzwords, you might miss out on great contenders.

~~~
hindsightbias
Perhaps an AI Recruiter is in order.

------
RushAndAPush
I've read every comment in this thread and it's filled mostly with people's
self-congratulatory intellectual views. Nobody, not even Robin Hanson himself,
has given a good, detailed argument as to why the current progress in machine
learning will stop.

~~~
bunderbunder
I doubt you'll get that, because nobody thinks that progress in machine
learning will stop.

An AI winter doesn't mean that progress stops. It means that businesses and
the general public become disillusioned by AI's or ML's failure to live up to
the popular hype, and stop throwing so much money at it. The hype then dies
down. Research continues, though, until enough progress is made that machine
learning starts to produce results that excite the public again, and the cycle
goes into another hype phase.

~~~
RushAndAPush
> I doubt you'll get that, because nobody thinks that progress in machine
> learning will stop.

Robin Hanson is notoriously skeptical about the possibility that Deep Learning
can make real gains. He for some reason thinks brain emulation is more likely
to make large progress in AI.

>An AI winter doesn't mean that progress stops.

It doesn't completely stop, but progress would be at a snail's pace.

> The hype then dies down. Research continues, though, until enough progress
> is made that machine learning starts to produce results that excite the
> public again, and the cycle goes into another hype phase.

I think we as a community may need to take a good long look at the hype cycle
theory and be skeptical it has any merit.

~~~
dwaltrip
Brain emulation? I didn't realize people seriously thought that could be good
for anything other than research and investigation.

If we can successfully emulate the brain, it seems we would have necessarily
acquired the knowledge needed to build models that are very powerful without
having to exactly mimic the brain.

~~~
Jach
See Hanson's [http://ageofem.com/](http://ageofem.com/)

~~~
dwaltrip
I keep hearing great things about that book. Thanks for the link, maybe it is
time to pick up a copy.

------
AndrewKemendo
I'm sorry but I'm not buying it.

ML companies are already tackling tasks which have major cost implications:

[https://deepmind.com/blog/deepmind-ai-reduces-google-data-
ce...](https://deepmind.com/blog/deepmind-ai-reduces-google-data-centre-
cooling-bill-40/)

[http://med.stanford.edu/news/all-news/2016/08/computers-
trou...](http://med.stanford.edu/news/all-news/2016/08/computers-trounce-
pathologists-in-predicting-lung-cancer-severity.html)

Those are just the two I had off the top of my head. We apply ML to
object/scene classification tasks and they blow away humans. Not only that,
we're already structuring a GAN for "procedural" 3D model generation - in
theory this will decimate the manual 3D reconstruction process.

------
yegle
I started questioning the credibility of the article when the author mentioned
"deep machine learning". I'm not an expert in ML, but it should be "deep
learning", referring to a type of neural-network-based machine learning
technique with many hidden layers.

~~~
deepnotderp
Yeah, that's weird.

------
h43k3r
A little off topic, but I think the VR boom will bust much sooner than AI.

I can't see normal people wearing that heavy gear in their normal lives. There
will be use cases in specialized applications like education, industry, and
games, but I don't think it will get popular like the iPhone.

AR is still OK since it augments real life, but there is a long way to go
before it becomes mainstream.

~~~
thesimpsons1022
Have you tried it? I own an Oculus and every family member I've seen try it
has been shocked and loved it. Obviously the Oculus is prohibitively
expensive, but with the release of PlayStation VR I think the mainstream is
poised to adopt it. I really believe the next game consoles that come out will
simply be VR headsets.
~~~
jroes
I have tried it. I owned a PSVR for a month and tried everything available. It
was a pretty incredible experience, more immersive than I expected for sure.
Most of my family and friends had a great time with it as well.

Ultimately, however, I resold it after a month. There are too few interesting
full games available. Nearly every game is mostly a short trial, and most of
the games are also very experimental and uninteresting to me in general. As an
example, a full 1/3 of games available were musical demos that seemed to be
geared toward folks having fun experiences while presumably smoking weed or
otherwise in an altered state.

There was never a reason for me to come back to the system, but I would like
to see if there are very imaginative useful practical applications that
eventually see light. After surveying the other VR options I'm not convinced
anything exists yet.

~~~
petra
What about nature-based apps, for coming home after work and just relaxing a
bit in nature? Do they give a similar feeling to going to a beach, etc.?

~~~
h43k3r
I love that idea, but I think we should be more social in person rather than
electronically. I would rather go out with someone on a run or a walk than do
this, but then again I would love it from time to time.

~~~
jononor
I think almost everyone would prefer to walk with someone in person in nature.
However, sometimes the more realistic (or at least perceived) alternatives
might be to stay in, or to go on a virtual tour with someone online. That
makes it relatively attractive.

------
euske
I have a hard time understanding why even technical people use the term "AI"
today. Its use should be limited to sensational media and cheesy sci-fi. It's
roughly equivalent to saying "computery thingamabob". I would call a pocket
calculator an AI too. Why not? It carries out certain mental tasks better than
our brains do.

~~~
randcraw
I think "AI" does still have meaning, in that the goal of AGI hasn't gone
away. Yes, any goal that's tangential to AGI probably shouldn't be called AI.
But as long as the constituent tasks needed by AI (vision, learning, speech
in/out, etc) continue to improve rapidly, especially due to advances in a
single technology like DL, it's inevitable and IMO appropriate that the
umbrella moniker used to describe DL and its impacts remains "AI".

------
zamalek
One of two eventualities exists:

* The article is correct and the current singularity (as described by Kurzweil) will hit a plateau. No further progress will be made and we'll have machines that are forever dumber than humans.

* The singularity will continue up until SAI. So help the human race if we shackle it with human ideologies and ignorance.

There is no way to tell. AlphaGo _immensely_ surprised me - from my
perspective the singularity is happening, but there is no telling just how far
it can go. AlphaGo changed my perspective of Kurzweil from a lunatic to
someone who might actually have a point.

Where the line is drawn is "goal-less AI," possibly the most important step
toward SAI. Currently, all AI is governed by a goal (be it an explicit goal or
a fitness function). The recent development regarding Starcraft and ML is ripe
for the picking: either the AI wins or it doesn't - a fantastic fitness
function. The question is how we would apply it to something like Skyrim,
where mere continuation of existence and prosperity are equally viable goals
(as per the human race). "Getting food" may become a local minimum that
obscures any further progress - resulting in monkey agents in the game
(assuming the AI optimizes for the food minimum). In a word, what we are
really questioning is: sapience.

I'm a big critic of Bitcoin, yet so far I am still wrong. The same principle
might apply here. It's simply too early to tell.

~~~
mmkx
It's already happening. The world became hyper efficient. The hackers behind
"Trumpbots" made money on the prediction markets. Understanding that mechanic
makes me almost certain the singularity arrived after his election.

~~~
inimino
...what?

------
lowglow
We're building an applied AI business by creating an experience through both
hardware and software. You don't set out to create something with that big a
breadth of vision by worrying about booms and busts. You continue your journey
unwavering because the potential impact and fruitfulness of the development is
worth it.

This is why you should work on something you're passionate about. Your time on
earth is limited, so strive to leave good work and contribute to the progress
of humanity on a larger scale.

------
kpwagner
AI is overhyped... sure that's probably true.

But data science is here to stay in the same way that computer science is here
to stay.

~~~
tree_of_item
No, I don't think "data science" is nearly as well defined as computer
science.

~~~
kpwagner
I agree; and it's probably not as deep either.

I meant that computer science has staying power, while particular branches (or
JS frameworks) may rise or fall in popularity over time. Likewise, the "trunk"
of data science knowledge is not a mere fad.

I'm not a data science academic or practitioner. My opinion is based on a
small amount of tinkering and what I've read in various online sources.

------
iwritestuff
I plan to enter a PhD program in 1-2 years to specialize in ML/Deep Learning.
Assuming it'll take 5-6 years to complete my degree, how applicable should my
skill set be in industry at that point?

~~~
godmodus
You'll be a programmer - that is what counts. How good a programmer you become
will determine your success. Never put your eggs in one basket (not saying you
shouldn't become an ML expert though, that's pretty damn nice). As a PhD, you
are probably good enough.

As to ML, its adoption is hyped. It is powerful, but not in the way anyone
really talks about.

Support vector machines and Bayesian learning have been around since the
70s/80s _(ninja edit: SVMs since 1963! Markov chains since the 1950s, Bayesian
learning/pattern recognition since the 1950s)_, but adoption has been slow due
to the nature of business, which is now drooling over it since neural networks
beat a few algorithms.

Due to the hype, more businesses will opt for ML now, but the craze will
plateau and ML will become another tool in your arsenal.

So basically, you really have nothing to worry about - use your PhD to do
interesting things, come up with novel new research and/or develop your own
product.

Don't let your job security worries get in the way of enjoying what you want
to do now; you're already good and in STEM (and if you don't feel good enough,
work on yourself until you do).

~~~
cerrelio
> Support vector machines and Bayesian learning have been around since the
> 70s/80s (ninja edit: SVMs since 1963! Markov chains since the 1950s,
> Bayesian learning/pattern recognition since the 1950s), but adoption has
> been slow due to the nature of business, which is now drooling over it
> since neural networks beat a few algorithms.

This is one of the hardest things to convince managers and leads of. They
think things like CRFs and Markov models are "new" methods and too risky. So
they opt for explicit rule-based systems that use old search methods (e.g. A*,
grid search), which hog tons of memory and processor time. Those methods
rarely ever work on interesting problems of the modern day.

They can understand the rule-based methods easily. They have a hard time
leaping to "the problem is just a set of equations mapping inputs to outputs,
and the mapping is found by an optimization method."
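
A minimal sketch of that framing, with a toy mapping recovered by an
off-the-shelf optimiser rather than hand-written rules (toy data, purely
illustrative):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.RandomState(0)
    X = rng.randn(300, 2)
    # A noisy, hidden "rule" the optimisation is supposed to recover.
    y = (X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.5, 300) > 0).astype(float)

    def loss(params):
        w, b = params[:2], params[2]
        p = 1 / (1 + np.exp(-(X @ w + b)))          # logistic mapping
        return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

    result = minimize(loss, np.zeros(3))            # the optimiser finds the map
    print(result.x)   # weights come out roughly proportional to [1, 2]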

~~~
godmodus
I explain it using the infinitesimal method, which, if done right with the
hill-climbing metaphor, often delivers. But it does take away the magic of
"wooo, neural" :p

------
Houshalter
Robotics and automation have been improving for a long time, and especially
recently. Look at the rise in consumer drones, enabled by improvements in
batteries, sensors, and computers.

But the main thing holding them back is a lack of AI. Robots can do a rote
action over and over again, but they have a hard time identifying where
objects are, planning, and reacting to their environment. Just solving machine
vision would be a massive step forward and enable a ton of applications.

And that has sort of already happened. The best nets are already exceeding
humans at vision tasks. They are learning to play video games at expert level,
which is not conceptually distant from robot control. It's taking time to move
this research out of the lab and into real applications, but it is happening.

And so I totally believe that at least 50% of current jobs could be automated
in 10 to 15 years. How many people are employed doing relatively simple,
repetitive tasks, over and over again? Me and most people I know have jobs
like that.

------
skywhopper
I think what we're seeing is an explosion of new approaches to computerized
problem solving made possible by huge amounts of data and enormous computing
resources. A lot of what has become possible in the last couple of decades is
indeed new, but the apparent rapid advance is really just a matter of applying
the brute force of a massively upgraded ability to process huge quantities of
data in parallel, and this has led us to make erroneous assumptions about
future progress in these areas.

Basically, these are new solutions to new problems, and we're rapidly seeing
the easy 80% of this new generation of "AI" happen and it seems magical. But
soon enough we'll hit the wall where further progress becomes harder and
harder and brute force approaches are no longer sufficient to achieve
interesting results.

------
siliconc0w
Pretty much every large firm has multiple problems ML can solve better than
linear/logistic regression. Smaller firms may still have one or two. In some
industries the core competency will be how good your ML model is as everything
else becomes a commodity. There are new advances that make ML better for small
datasets, as well as opportunities for data brokerage to increase access to
data. And these are just current applications; new applications are still
nascent (e.g. self-driving cars). Treating ML as a software problem instead of
a science project - with a pipeline of adding/creating data, cleaning,
modeling, analyzing, learning, and iterating - is also incredibly important,
but it's not like most companies are doing this particularly well either.
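
As a sketch of that "pipeline" framing, scikit-learn's Pipeline keeps the
cleaning, feature prep, and model together as one testable, versionable
object (the data and steps below are illustrative only):

    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.impute import SimpleImputer
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.RandomState(0)
    X = rng.randn(1000, 5)
    X[rng.rand(1000, 5) < 0.05] = np.nan   # some missing values, as in real data
    y = (np.nan_to_num(X[:, 0]) + np.nan_to_num(X[:, 1]) > 0).astype(int)

    pipeline = Pipeline([
        ("impute", SimpleImputer(strategy="median")),   # cleaning step
        ("scale", StandardScaler()),                    # feature preparation
        ("model", LogisticRegression()),                # the actual model
    ])

    print(cross_val_score(pipeline, X, y, cv=5).mean())  # analyse, then iterate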

------
unignorant
Along similar lines, we did some work investigating public perception of AI
over the past thirty years:
[https://arxiv.org/pdf/1609.04904.pdf](https://arxiv.org/pdf/1609.04904.pdf)

From Figure 1, it's clear we are now in a boom.

~~~
otoburb
Thanks for the study - Figure 1 is impressively self-explanatory. It's
interesting to note that the article's author Robin Hanson[1] worked in AI &
ML industry roles starting at the peak of the last AI boom in '84 and
witnessed the AI winter through to '93.

------
januscap
I think the article and the discussions are focusing on the wrong semantic
definition of AI.

Unsupervised learning is where the revolution is. Learning has nothing to do
with boom or bust.

------
rsimons
I recently made an appointment through an AI secretary to set up a meeting
with them; it worked surprisingly well. They are not hiring a secretary any
time soon. Real effect. Also:
[https://www.theguardian.com/commentisfree/2016/dec/01/stephe...](https://www.theguardian.com/commentisfree/2016/dec/01/stephen-
hawking-dangerous-time-planet-inequality)

~~~
Noseshine
"Ai secretary" in what respect? Natural language processing? Because simply
making an entry into a calendar is just a 1980s' level computer programming
problem at best, so I think your example needs some clarification/elaboration.
Whether something is "AI" also depends on the presence of _learning_ (the
stuff you do with Windows voice recognition to set it up is not
"learning/teaching" in the AI sense). Most of the things touted as "I"
(intelligence) have all the intelligence in the programming effort, but none
in the actual program (apart from what was put in).

------
mark_l_watson
I think the boom in AI jobs will go bust when tools for data cleaning and
ingestion, and for automatically building models, get so good that experts are
no longer required to use them. I have so frequently set up customers with
procedures and code for ML that I have seriously thought of writing a system
to replace people like myself.

Until there is real AGI, however, there will be jobs for high-end AI
researchers and developers.

------
mwfunk
I'm very curious to what degree there even is an AI boom right now, vs. AI and
machine learning going through a phase as the buzzwords du jour in corporate
PR. People have been doing all sorts of fascinating things with machine
learning for decades, and (for example) Google has arguably been an AI-focused
company from day one.

In the tech press recently, I keep hearing how every huge tech company needs
to have some sort of AI strategy going forward, so they don't miss out on an
industrywide windfall, or even become irrelevant because they didn't hop on
the AI bandwagon.

I suspect that there are a few more people working in AI nowadays than were
doing so 10 years ago, but that quite a bit of the narrative surrounding AI in
the press is some combination of corporate marketing and journalists eager to
have something to write about.

I'm not saying AI isn't important, rather that it's an important field that's
only a little more important than it already was 10 years ago. The difference
seems to be how often it pops up in PR and tech journalism vs. 10 years ago.
Just a theory of course; I would love to know what the reality is.

~~~
randcraw
I work at a large pharma, and from what I've seen in this business and
anywhere else where data is important (like R&D), deep learning is perceived
to be revolutionary.

I think most of the rise in DL's corporate mindshare comes from its potential
to disrupt the status quo. Business folks hate game-changing tech. It
destabilizes the priorities and practices they understand, and dishevels the
grand hierarchy of corporate command and control they know and love.

If DL really does reinvent any significant fraction of their part of the
business, they know 1) they won't know what to do in response, and 2) their
daily competition is going to get even more challenging until a new status quo
emerges. Both of these may cause them to lose their tenuous position/status in
the company hierarchy, if not their very job.

------
eva1984
>> Good CS expert says: Most firms that think they want advanced AI/ML really
just need linear regression on cleaned-up data

Not nearly true. The simple counter-argument is that prior to DL, we didn't
have a good approach to really 'clean' data like images.

The author states this as if cleaning data were a piece of cake. It is surely
not. In fact, part of DL's magic trick is the ability to automatically learn
to generalize useful features from data. From another perspective, the whole
DL frontend, prior to the very last layer, can be viewed as a data cleaning
pipeline, learnt during the training process and optimized to pick out the
useful signals.
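
One way to picture that claim, as a sketch in PyTorch (the architecture is
arbitrary and only for illustration): everything before the last layer is a
learned feature/cleaning pipeline, and the last layer is essentially a simple
linear model sitting on top of it.

    import torch
    from torch import nn

    # Everything up to the last layer: a learned "data cleaning" pipeline.
    frontend = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),  # raw pixels in, "cleaned" features out
    )

    # The very last layer: a plain linear model on those learned features.
    head = nn.Linear(32, 10)

    x = torch.randn(8, 3, 64, 64)  # a batch of raw images
    features = frontend(x)         # shape (8, 32): the learned representation
    logits = head(features)        # the simple model on top
    print(features.shape, logits.shape)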

The author clearly isn't an expert on the matters he is making claims about,
yet his statements come with great confidence, or ignorance. This shows why
this revolution will be a truly impactful one: even some of the supposed
intellectuals cannot understand its importance and how it diverges from its
predecessors. They will be caught off guard and left behind. It will be very
enjoyable to watch their reaction once it happens.

------
muyuu
The immense availability of financial instruments and VC makes investment in
any mildly promising technology overshoot. There's nothing really mysterious
about this, especially after the dot-com bubble. Is the web useless? No, but
there's a limit to the number of players who can successfully do the same
things the same way, because that's what hype does: it makes people focus not
just on a technology but on one particular way of using it.

3d-printing, mobile apps, tablet devices, VR, cryptocurrency, 3D TV, Neural
Networks (in the 70s, then again in the mid 80s, then in the late 90s, and
now), you name it.

These are perfectly applicable technologies that may or may not warrant the
swings in investment they attract, but they are valid and defensible
nonetheless. They may also be monetisable, though not every useful thing
brings proportional profit in the market. And of course, there's also timing:
the exact same idea may work years later in a different environment. But the
only way to find out is to try, and maybe try _too much_ at some point.

------
willsher
It seems to be the nature of this and VR that they boom for a while and then
bust, having stagnated. They then wait for the next alignment of underpinning
technology, knowledge and culture to emerge again. The last one I'm aware of
was the mid-to-late 1990s, when VRML was gaining traction and new ways of
thinking about AI were emerging.

IBM Watson or similar (if I recall, IBM was still calling their business AI
system Watson back then) seems to be prominent in these two booms, and both
times the results it gives haven't matched its marketing hype.

The technology, having been significantly furthered, fades somewhat into the
day-to-day of computing until the next boom drives another short burst of
innovation and awareness.

Conscious AI and realistic VR are some way off, if we ever see them.
Culturally and ethically we are not ready to answer the questions they pose,
and the cyclical nature gives us more time to digest the latest raft of
questions in light of the progress.

------
ig1
While the article is right that in many situations a linear regression or
tree-based approach will be more effective, it downplays the real value-add of
deep learning, which has been clearly demonstrated on image data and is likely
to have significant impact in other areas (audio, biodata, etc.) where
traditional statistical methods have failed.

------
Entangled
It won't. It will get better.

In a couple of years you will be able to take a picture of a rash on your arm
and get an exact diagnosis with treatment instructions. You will be able to
take a pic of a flower, a leaf or any tree and get accurate info about its
species, its pests and the best techniques for growing it. You will be able to
take a pic of any insect, spider, snake, animal, or anything that moves, any
mineral, element or anything at all and get accurate info about it.

AI is not only about robots thinking, it is about collecting information and
making it available on demand. Agriculture and health will be the first
beneficiaries, with finance a close second.

Data mining is where the first stage of AI is, and Google is moving ahead of
everybody else with their search engine, maps, translation, and all the
information collecting tools. Once you have enough data, knowledge is just a
couple of programs away.

~~~
IshKebab
That is a valid use of deep learning. I think the article is talking about
people thinking that because deep learning is very impressive for some things,
they should use it for everything - even problems where a much simpler
solution works fine, or problems where deep learning doesn't apply.

I work for a consumer product company, and there are often people talking
about using 'big data' and 'machine learning'. They're just following the
hype; they don't really know what machine learning is and I've only heard two
or three potential applications of it mentioned that make any kind of sense.

------
kodisha
So, imagine that in 2011, 5 years ago, you approach some VC and say:

"Hey, we are building this VR hardware and games for it, we would need ~1M to
finish it".

I think there is a high chance that you would get some weird looks, and
possibly a few remarks about how that is a "dead technology, tried once, and
obviously failed".

And then, fast forward a couple of years, and there is a whole industry around
VR: jobs, hardware, software, the whole ecosystem.

You only need one strong player in a field, and suddenly everyone and your
neighbour's kid is doing it.

------
srinikoganti
What is "new this time" is that computers/machines can see and hear and even
speak, whether we call it AI or "Deep Learning" or "Machine learning". So
there is going to be more impact from Vision/Voice based applications rather
than data analytics/predictions. Eg. Self Driving Cars, Medical Diagnosis,
Video Analytics, Autonomous machines/Robotics, Voiced based interactions. All
of these combined could be as disruptive as Internet itself.

------
itissid
Thomson Reuters' ET&O and Risk division recently laid off 2000 people to fund
a new center near the University of Waterloo to provide "answers" (it's their
mission statement) by recruiting people to build deep learning solutions. NY
suffered deep cuts. The sad part is that the leadership seems to want to use
deep learning as a way to justify what they are already doing, which is
"starting from scratch".

------
DelTaco
I would argue that it would often be easier to implement an ML solution on
regular data than to try to clean the data and then use linear regression.

------
spsgtn
This reminds me of the stem cell research boom and bust of the late 90s and
00s. It turns out that the new and shiny toy doesn't work for everything.
However, it's not really a bust, as AI, like stem cell applications, will
continue to do wonders where it is the best tool for the job.

~~~
debaserab2
We're not even close to reaping the benefits of stem cell research yet. It's
way too early to deem its success to be merely mediocre.

------
Animats
Hard to say. I'm seeing too many billboards near SF for "big data" and
"machine learning". AI-type grinding on your stored business data is sometimes
useful, but not always profitable. Everybody big already has good ad targeting
technology, after all.

------
intrasight
First, there is no "AI boom" because there is no AI - there is machine
learning.

Second, booms that produce real, tangible results don't normally go bust.

Finally, we've only just scratched the surface of what machine learning is
capable of delivering, so no bust is to be expected.

~~~
goatlover
There is no AGI, or strong AI. There is weak or narrow AI. AI simply means
that if a human performed the task, it would be considered intelligent. So
Deep Blue was chess AI, but it wasn't remotely AGI. Same with Watson and every
application of AI to date.

In a sense, Artificial Intelligence is the encoding of human intelligence for
some task or problem domain in machines.

------
marcoperaza
Until the taboo on talking about consciousness is broken and we seek to
understand what role this incredible phenomenon plays in human cognition,
there will be no progress towards the holy grail: true general purpose AI.
That is my falsifiable prediction.

~~~
pakl
Just define consciousness, and then maybe we can start to have a scientific
discussion about it. ;)

------
erikj
We already had the AI winter before. The worst thing about it is the death of
Symbolics. The latest AI resurgence, unfortunately, created nothing comparable
to the legendary Lisp machines.

------
pmrd
Also - we must remember that AI means a programmer on the sidelines letting
data do the logic. It requires a fairly different frame of mind than
traditional programming.

~~~
randcraw
So data as employed by DL plays the same role that it does in Brooks'
subsumption architecture -- grounding and shaping the knowledge model -- but
now doing it emergently, albeit requiring a lot of parameter tuning from the
human-in-the-loop.

An interesting prospect for the evolution of software developers.

------
tootie
My company has hitched itself to the AI hype train, but they're talking about
NLP and conversational UIs, not data analysis.

------
antirez
Recent AI progress is a clear technological advancement. Measuring it in terms
of Bay Area biZZness metrics is lame.

------
sjg007
No. Image and audio recognition in many specific cases are now solved
problems. This is substantial progress.

------
blazespin
ML comes in big in cleaning up the data and making recommendations on what to
look at.

------
danielrm26
I think a key differentiation between ML and more common statistics is that ML
is designed to improve itself based on data. Statistical methods don't do
that.

So maybe they're trying to do very similar things in a lot of cases, but self
improvement is a major differentiator.
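
A toy illustration of the "improves itself based on data" point, using an
online learner that nudges its weights with each new batch instead of being
refit from scratch (a sketch; the simulated data stream is made up):

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    clf = SGDClassifier(random_state=0)

    # Each new batch of labeled data updates the existing model in place.
    for step in range(50):
        X = rng.normal(size=(32, 5))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)
        clf.partial_fit(X, y, classes=[0, 1])

    X_test = rng.normal(size=(1000, 5))
    y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
    print(clf.score(X_test, y_test))  # improves as more batches arrive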

------
albertTJames
[http://www.dafont.com/forum/attach/orig/2/6/263344.jpg](http://www.dafont.com/forum/attach/orig/2/6/263344.jpg)

------
RandyRanderson
I'm old enough to have seen a lot of these boom/hype/bust cycles. I'm
convinced that this time is, in fact, different.

To temper this, I believe most decent user-visible changes will take ~5 years
(as most actually useful software does), but the changes will be huge:

* The author cites computer driven cars. I think this will take place mostly on long-haul highway trucking instead of in cities first. Even so, this could mean a massive swath of truckers without work in a short 5yr epoch.

* We've already seen the effects of heavy astro-turfing/disingenuous information/etc. in the last US election. This certainly changed the "national psyche" and may have changed the election outcome. There is heavy ML research going into making the agglomeration of ads and content almost compulsively watchable. Our monkey brains can likely only handle a few simple dimensions and boolean or maybe linear relations, and they certainly get trapped in local maxima/minima. Even trivial ML techniques can bring this compulsion from, say, 50% effectiveness to 95%+ (by some reasonable measure). Imagine a web so completely tailored to the user that search results, ads and content are all built around you, with verbs, adjectives, entire copy, all written to get you to that next click. This is different.

* Bots that seem like real ppl will be rampant. Are those 100 followers/likes/retweets actual ppl? Even years ago reddit (to gain popularity) faked users. Certainly this has only accelerated and will continue to as commercial and state actors see value to moving public opinion with these virtual actors. (ironically maybe only bots will have read this far?)

* Financial Product innovation - Few ppl actually understand this market (even within the banks), but the deals are usually in the 100+ million range. The products take advantage of tax incentives, fx, swaps, interest rates, etc. in ever-increasing complexity. These divisions are still some of the most profitable parts of banks. On deals where profits are measured in tens of millions each (several are made per quarter, per major bank), it's likely that ML algos will be put to use as well, not only optimizing current products but elaborating new ones. I believe these products to be a major source of inflation. Whereas the official numbers are ~2%, I believe the actual inflation (tm) felt by most is more in the 7%+ range.

* State Surveillance and Actions - I hear ppl saying that mass surveillance hasn't been effective in stopping "terrorism", as if it would be ok if it did. Well, it will be effective and it will get very, very good at it. Of course terrorism is not defined anywhere so ...

* Customer Support - this, like transportation, is a major employer of unqualified workers. I believe in 10 years there will be maybe 1% of the current workforce in CSR work. The technology is here; the software just has to be written.

It's not just the number of jobs displaced, it's the velocity. If we look at
predator-prey modeling:

[https://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra_equatio...](https://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra_equations)

We see that generally the solution takes two modes:

* stability - wolf/rabbit populations wax and wane together

* crash - the wolves kill enough rabbits to make the remaining population crash

Now I don't believe there will be a 'crash' but likely there will be a new
normal (equilibrium) and getting there will not be pleasant.
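
For the curious, those two modes can be poked at with a few lines of
Euler-stepped Lotka-Volterra; a sketch with arbitrary coefficients (note the
textbook equations only oscillate, so the "crash" mode needs an extension such
as a minimum viable prey population):

    # dR/dt = a*R - b*R*W (rabbits);  dW/dt = d*R*W - c*W (wolves)
    a, b, c, d = 1.0, 0.1, 1.5, 0.075
    R, W = 10.0, 5.0
    dt = 0.001

    for step in range(200_000):
        dR = (a * R - b * R * W) * dt
        dW = (d * R * W - c * W) * dt
        R, W = R + dR, W + dW
        if step % 20_000 == 0:
            print(f"t={step * dt:6.1f}  rabbits={R:7.2f}  wolves={W:6.2f}")

    # With these numbers the populations wax and wane together (the stable mode).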

Disclaimer: Yes, I do work in ML.

~~~
edblarney
I think this is the best comment.

AI will find its way into specific areas in which the ROI is very significant,
and though those will be hard to predict, I agree with the above.

It will make a huge impact in some areas.

Others, not so much.

It's funny that not a single person mentioned 'big data'.

Big data was all the rage for the last few years, and it would seem like an
intuitively obvious fit, no? I don't think so.

I think it's a stretch to see how AI helps the Gap design new clothes, or even
optimizes its sales approach. I suggest it will be things like customer
service, as Randy indicated.

And yes, truckers will be the first to go.

------
mmkx
The technological singularity arrived November 15th. Plenty of AI/robots to
come.

