
Acquisitions accelerate as tech giants seek to build AI smarts - t23
http://www.reuters.com/article/us-tech-startups-ai-idUSKBN18M2ED
======
senatorobama
Wish I was in this industry instead of my dead-end job.

~~~
tinbad
On the bright side: every job will be a dead-end when AI is very good.

~~~
TeMPOraL
When people revolt and start hanging programmers for destroying their jobs, it
probably won't matter in which field of the industry any of us were.

~~~
resf
Fortunately the AI riot police and soldiers will be far more effective at
containing unrest than their human predecessors.

~~~
dkersten
You are right. We need to start building in "protect our programmers" as a
high-level AI directive.

------
batbomb
Lattice wasn't discussed on HN, but they were acquired two weeks ago by Apple.

~~~
vvanders
I had a heart attack and thought this was Lattice Semi. Whew.

~~~
amelius
It could still happen. Market cap is $855M versus $800B for Apple.

------
miguelrochefort
Yet, no one is tackling the AI UI problem. Why is that?

I can't believe people are still thinking natural language is the way to go...

~~~
ethbro
Natural language is always the endgame. Because natural language either
removes the physical input system or frees up screen real estate.

~~~
ragebol
Yet, I don't like to use e.g. Google Now when I'm not alone.

Dictating a personal text message to my phone in public is not what I want.

~~~
ethbro
Granted, but I'd expect subvocalization with predictive text or something else
to infill where necessary. Stronger AI can substitute for a lot of "magic."

------
graycat
Uh, guys with the big bucks: AI/ML is at most a tiny fraction of the
material, tools, power, and value of the pure/applied math on the shelves of
the research libraries.

If you have a problem you want solved with AI/ML, 99 44/100% of the time you
are better off going for what's on the shelves of the libraries, as taught in
various high-end grad schools, than with anything currently specific to
AI/ML.

So, broadly, go for work in statistics, optimization, stochastic processes,
and optimal control. For a specific application, you may want some work that
stands on that existing material and is also at least somewhat original.

E.g., the crucial technical core of my startup is some original applied math I
derived based on pure/applied math that's long been on the shelves of the
research libraries. For the valuable work of my startup, what's in AI/ML now
is in comparison at best weak, nearly silly early grade school baby talk.

Really, guys, 99 44/100% of the good stuff is still where it's long been -- on
the shelves of the best research libraries. And for the education for that
work, it's definitely NOT in departments of computer science. Instead look at
selected programs in pure/applied math in some of the best research
universities.

~~~
aub3bhat
This is spectacularly bad advice. Which "off-the-shelf" research libraries
do you mean? OpenCV, LibSVM, Weka, or Matlab/Octave? Most
examples/implementations in OpenCV are outpaced by Deep Learning methods.

TensorFlow, Caffe, and PyTorch/Torch, which today implement most of the state-of-the-art
methods, were all written within the last year or two.

>> E.g., the crucial technical core of my startup is some original applied
math I derived based on pure/applied math that's long been on the shelves of
the research libraries. For the valuable work of my startup, what's in AI/ML
now is in comparison at best weak, nearly silly early grade school baby talk.

Rather than vaguely mentioning libraries and startup, I suggest you offer
concrete evidence behind your claims.

>> And for the education for that work, it's definitely NOT in departments of
computer science. Instead look at selected programs in pure/applied math in
some of the best research universities.

This is just truthiness (math feels more "hardcore" than CS, so the math
department must be the better source). Having known several people at
Google/Apple/Facebook-FAIR, I can say CS departments are typically the major
source of AI/ML researchers.

~~~
raverbashing
> Most examples/implementations in OpenCV are outpaced by Deep Learning
> methods.

Right. Except that:

- In some cases there are very good (and fast) classical techniques for
finding what you need quickly (like detecting faces)

- You may not want to train a network for your specific case (which might not
have an existing model already)

- Traditional solutions might be good enough

- OpenCV is much more than just identifying images; it has several APIs for
geometric transformations, color processing, filtering, etc.
[http://docs.opencv.org/2.4/modules/refman.html](http://docs.opencv.org/2.4/modules/refman.html)

~~~
_delirium
Plenty of industrial AI uses other methods as well; it's just not where
most of the current hype is. So there's a kind of bait and switch: companies
lead with the DL, but if you look at the products and APIs they're
actually deploying and selling, the workhorses are often some mixture of very
classical stats methods (like logistic regression) and general-purpose non-NN
ML algorithms (like gradient boosting). One common breakdown is that the
general-purpose APIs use these general methods, and then there are separate
NN-based APIs for a few specific kinds of problems, like object recognition in
images, where NNs give a big performance increase.
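
A minimal sketch of those two "workhorses" using scikit-learn (the dataset choice here is illustrative only, not anything from the thread):

```python
# Logistic regression (classical stats) and gradient boosting
# (general-purpose non-NN ML) on a small tabular dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Classical statistics workhorse: logistic regression.
logreg = LogisticRegression(max_iter=5000)
logreg.fit(X_train, y_train)

# General-purpose non-NN ML workhorse: gradient-boosted trees.
gbt = GradientBoostingClassifier(random_state=0)
gbt.fit(X_train, y_train)

print("logistic regression accuracy:", logreg.score(X_test, y_test))
print("gradient boosting accuracy:", gbt.score(X_test, y_test))
```

Both models train in seconds on commodity hardware, which is part of why they remain the default for tabular data where DL rarely pays off.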

This is especially true for companies in the data-science niche, since DL
rarely gives you much of a win for tasks like data-mining SQL databases, or
when you have only modest-sized data sets, but you nonetheless still need to
offer at least some kind of DL solution to be perceived as a state-of-the-art
AI offering. (As with the "big data" hype wave, many companies that think they
have gigantic data sets don't.) These other methods are better understood and
not in a ton of flux, though, so there's no need for an acquihire frenzy to
get that expertise.

