
An AI First World - ghosh
http://avc.com/2016/04/an-ai-first-world/
======
Smaug123
Does this article actually say anything? I feel like I could summarise it as
"Famous person said <thing>. What does <thing> mean? I'm not really sure. I
think they're right and <thing> is going to be big."

~~~
rdl
Fred Wilson is actually more famous than Sundar Pichai, at least in some
circles. In this case it is "famous and influential investor says other famous
person says X is going to be big, and he agrees", which is a useful data point
for the 10k people who will now pitch AI startups to USV over the next week.

(I also wish it said more, but the comments on his blog sometimes have
better discussions than even HN, so maybe he is just trying to kick that
off. I wish it used better commenting technology, though.)

~~~
forloop
sigmaalgebra's comments on this post are pretty fucking epic.

~~~
telotortium
That's the same guy who posts as @graycat here on HN. Most of his posts
espouse the wonders of hardcore applied math, often with a comprehensive
bibliography, so he seems to know what he's talking about. Still, his
rambling (and, to be fair, somewhat off-topic) style does him no favors on
HN, which is why his posts almost always end up at the bottom of the
comments here (see also @michaelochurch).

~~~
forloop
Michael O. Church got modded off of HN for being critical of certain aspects
of YC (according to him).

~~~
jraines
True, but it's more b/c his criticism of VC and PG specifically has a Captain
Ahab level of intensity and persistence, which can get in the way of
discussing, well, anything else.

~~~
telotortium
Exactly, which is why I made the connection.

Still, the "single-issue commenter", as exemplified by both of them, can be
valuable to a good community. @michaelochurch's writings have definitely
shaped my thoughts, so kudos to him, and, if I were in a position to use
it, @graycat's bibliographies would be rewarding to mine (unless his knowledge
_is_ outdated, as @barrkel suggests). Yet I can't help but get the impression
that the single-issue commenter is very prone to overexposing themself,
especially if a large proportion of their comments are multi-thousand word
diatribes that recycle most of the same themes. Not to mention, people readily
pattern-match prolix commenters to complete crackpots, even though both those
users seem to be above that level. My impression is that single-issue
commenters would be best served by posting a lot for a year or so, followed by
some time to lie low -- if their contributions to the discussion around their
issue are really valuable, their posts should stay in moderate circulation as
other users refer to them in later discussion.

------
paganel
> It feels like this AI first world is arriving. That’s big.

AI is today's "one word: plastics!"
([https://www.youtube.com/watch?v=Dug-G9xVdVs](https://www.youtube.com/watch?v=Dug-G9xVdVs)).
Much hype, some substance, lots of people throwing money around.

~~~
aub3bhat
Rather than spreading "meta" hype about AI being hyped, I highly recommend
reading up on recent advances due to Deep Learning and the ideas of Neural
Processing. To summarize briefly: a lot of core problems in applied
machine learning, such as image recognition/segmentation/understanding and
Natural Language Processing, saw significant improvements in performance
over the last five years, often bringing them to near human performance.
This improvement was made possible by new approaches rooted in empiricism,
the availability of large datasets, powerful GPUs, and algorithms such as
deep convolutional and recurrent neural networks. Combined with advances
in drones, batteries, cloud computing, and smartphones, a lot of
applications that would previously have been impossible to deploy can now
be deployed quickly.
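
To make the flavor of these methods concrete, here is a minimal sketch of
a small deep convolutional network in Python with PyTorch. The framework
choice, architecture, and sizes are illustrative assumptions on my part,
not anything from the article:

    import torch
    import torch.nn as nn

    # A small convolutional network for 32x32 RGB images (CIFAR-10-sized).
    # Conv layers learn local visual features; pooling shrinks the image.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),                              # 32x32 -> 16x16
        nn.Conv2d(16, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),                              # 16x16 -> 8x8
        nn.Flatten(),
        nn.Linear(32 * 8 * 8, 10),                    # logits for 10 classes
    )

    x = torch.randn(1, 3, 32, 32)  # a dummy one-image batch
    print(model(x).shape)          # torch.Size([1, 10])

Trained on a large labeled dataset with a GPU, stacks like this are what
drove the image-recognition gains described above.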

Here is a good article that goes over the topic:
[http://spectrum.ieee.org/automaton/robotics/artificial-intel...](http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/facebook-ai-director-yann-lecun-on-deep-learning)

~~~
pron
> ... often bringing them to near human performance

That's quite far from the truth. You could say that they are now at the point
where they are able to respond to direct, simple questions/commands.

I am not saying that we haven't seen some great advances in machine learning
and language algorithms, but they are still a far cry from anything I'd call
intelligent. NLP is still rather primitive (although impressive compared to
where we were two decades ago), and bees are able to navigate better than
self-driving cars or drones, yet I'm not sure they would or should be called
intelligent.

I have no problem with using the term AI for current technology -- which is
certainly cool and useful -- as long as we understand that the "I" is little
more than a marketing gimmick. Otherwise people may think that these
algorithms are the first step towards "true" intelligence -- i.e. that
true intelligence is some version N.0 of those algorithms, a complicated
yet natural extension of them -- and at this point we have essentially no
knowledge with which to say whether that is true.

~~~
raverbashing
> That's quite far from the truth

No, it is not, because the claim is made in stricter terms (for tasks like
character or image recognition, for example).

~~~
pron
aub3bhat was talking about natural language, not character recognition.

~~~
tlrobinson
aub3bhat also mentioned image recognition.

~~~
pron
Then _I_ was talking about natural language :)

------
noir-york
AI isn't the product; it's the enabling tech. The killer app of AI most
likely won't even make that AI visible to its users. It will be under the
hood.

AI is useful because it changes the economics - it either enables the
removal of humans from an activity, or makes economically feasible an
activity that would otherwise be infeasible _at scale_ due to the need for
humans.

------
greenspot
With every VC, every founder, just everybody talking about AI and bots,
Fred is a bit late to the game, but better late than never. Or maybe
writing about the Blockchain held him back.

------
pron
Can someone explain what people mean when they say AI these days? Statistical
clustering algorithms like neural-networks? Natural language algorithms? Any
algorithm that didn't work well a decade ago due to a lack of computation
resources? Algorithms that mimic everyday human activities? All of the above?

I think that applying the label "AI" to some algorithms only serves to
give us an unwarranted sense of awe towards them that goes far beyond what
their actual performance merits.

~~~
vintermann
I'd say it's using computers for solving problems in a wide open model space.

Any algorithm for solving practical problems embodies a model of the problem
domain. Traditionally we try to bake in as much of our own experience as
possible into the model/algorithm.

In some fields, we're now seeing for the first time that it's better to not do
that, to make _fewer_ assumptions, build in less of our own knowledge, and
just use (new, powerful) general purpose learning algorithms. With enough
data, they can make sense of it themselves - our attempts at explaining it
through model-building just get in the way.
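
A toy illustration of that point in Python with scikit-learn (the task and
the "hand-crafted" rule here are invented for the example): a generic
learner given only raw data beats a rule that bakes in a wrong assumption.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Ground truth the modeler doesn't know: the positive class is a disc.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(2000, 2))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)

    # Hand-made model: bakes in a (wrong) assumption of an axis-aligned box.
    hand_made = (np.abs(X) < 0.5).all(axis=1).astype(int)
    print("hand-crafted accuracy:", (hand_made == y).mean())

    # General-purpose learner: almost no domain assumptions, just data.
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                        random_state=0).fit(X, y)
    print("learned model accuracy:", clf.score(X, y))  # noticeably higher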

~~~
pron
I think that you'll find that the learning algorithms that actually work well
in practice have more built-in assumptions and heuristics than you may think.
But, yeah, as a general rule that's a good categorization. A far cry from
intelligence, though, so the name AI is still misleading. Not all adaptive
behavior is intelligent, although it often gives the illusion of intelligence,
so I guess we make this labeling mistake not only where computers are
concerned.

Anyway, statistical algorithms, adaptive algorithms or inductive algorithms
would be much better, less misleading and less breathless -- if less inspiring
-- names for those programs than "AI".

~~~
vintermann
The main built-in assumptions in neural networks are usually extremely high-
level and abstract things, such as "this and this can be approximated with
more or less smooth functions", or "things that are near are more likely to
matter than things that are far away".
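
As a concrete, hypothetical sketch of that locality assumption in
Python/NumPy: a 1-D convolution, the basic operation of convolutional
nets. Each output value is computed only from a small window of nearby
inputs, and the same weights are reused at every position.

    import numpy as np

    # 1-D convolution: output[i] depends only on a local window of the
    # input, and the same kernel weights are shared across all positions.
    def conv1d(signal, kernel):
        k = len(kernel)
        return np.array([signal[i:i + k] @ kernel
                         for i in range(len(signal) - k + 1)])

    signal = np.sin(np.linspace(0, 6, 50)) + 0.1 * np.random.randn(50)
    smoothed = conv1d(signal, np.ones(5) / 5)  # local averaging kernel
    print(signal.shape, smoothed.shape)        # (50,) (46,)

Distant inputs simply cannot influence a given output: the "near things
matter more" prior is wired into the architecture rather than learned.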

Before, using machine learning to build models from the data was in some sense
a last resort, for things you couldn't get a grip on modelling by hand from
your own understanding. When you could, you preferred hand-made models because
they did better.

To take a concrete example, the top computer go program before Alphago,
CrazyStone, still had hand-crafted patterns to prioritize which moves to
explore during playouts. The author (and even more so his competitors)
tried to find better patterns with machine learning, but they couldn't.
The _weights_ of
the patterns were tuned with machine learning, but any attempt to e.g. evolve
patterns with genetic algorithms just did worse.

Now it's going in the other direction: we're seeing very general models,
constrained only by training data, beat carefully hand-crafted models in
area after area. And models are becoming more similar, more generalized as
well - many
people have noticed that state of the art recurrent and feedforward nets
increasingly look more like each other.

I disagree that it's a far cry from intelligence. I think people's ideas of
intelligence are fuzzy and inconsistent, and it's far too often confused with
things we _associate_ with intelligence (from consciousness to independence
and arrogance).

~~~
pron
> I think people's ideas of intelligence are fuzzy and inconsistent, and it's
> far too often confused with things we associate with intelligence (from
> consciousness to independence and arrogance).

I can certainly agree with that -- it's not clear at all what intelligence is
-- but induction and pattern recognition (really, statistical clustering) are
either not sufficient for anything which we'd reasonably call intelligence, or
can be found in creatures that we'd say have negligible intelligence
(insects). Again, I'm not saying the achievements aren't impressive or that
they don't resemble anything we can associate with biological beings, but even
most biological beings aren't intelligent.

In any event, no one claims (with any evidence) that the current "statistical
induction" algorithms (inspired in part by some features of biological
systems) are a simple, primitive form of "general intelligence" algorithms.
They may well be, but we don't know that. So the "I" in AI is, if not a
marketing gimmick, at most inspirational and aspirational. As far as we know,
those algorithms are as far from human intelligence as solid-fuel rockets are
from warp drives. Understanding it otherwise only leads to people
conjuring up fantastical deadlines for achieving "true" intelligence. For
all we know, we are as likely a century away from "true" AI as we are five
years away.

------
etiam
At first glance I thought this would be about the inequalities in who will
control AI technology. I would have been more interested to read that article.

------
pmlnr
There are sites that are readable with lynx but ask for a subscription in
regular browsers. Is that AI first enough?

------
charlesdenault
To me this seems like a poor analogy. It's the evolution of technology. Print
first, terminal first, desktop first, web first, mobile first, AI first,
quantum first, etc, etc. Mobile is exponentially easier than it was in 2007.
Machine learning is, arguably, becoming exponentially easier than it was in
2012. I think most agree that strong AI is many years away, but the
learning curve will be climbed just as it has been for every previous
technological breakthrough, and we'll collectively move on to the next.

------
Animats
That's a plausible claim. It's far more likely than VR being the next big
thing.

But who do the AIs work for? Google, Facebook, and Homeland Security?

~~~
mirimir
Themselves? For the lulz?

------
bottled_poe
What are some widely used applications that use AI for something that
could not be implemented more cheaply and easily with a decision tree?

~~~
aub3bhat
Almost everything; decision trees are the bubble sort of AI.

To expand on my comment: for most programmers, decision trees and nearest
neighbor methods look very promising at first glance and seem like natural
solutions to lots of problems. In reality, however, one can easily show
that decision trees tend to be suboptimal even on trivial problems, due to
the brittle nature of the rules they represent. Nearest-neighbor methods
likewise face problems due to the difficulty of choosing an optimal space
and metric (Euclidean distance is not inherently better than the
alternatives). The reason "Deep Learning" methods trained with non-convex
optimization perform so well is that they can represent a hierarchy of
features at different granularities, and they can be "designed" to favor
one task over another, e.g. by choosing the right loss (such as triplet
loss), network structure, and so on.
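
For the triplet loss mentioned above (the FaceNet-style objective for face
recognition), here is a minimal NumPy sketch; a bare-bones version of the
idea, not any particular paper's implementation:

    import numpy as np

    # Triplet loss: pull an anchor embedding toward a "positive" (same
    # identity) and push it away from a "negative" (different identity)
    # by at least a margin.
    def triplet_loss(anchor, positive, negative, margin=0.2):
        d_pos = np.sum((anchor - positive) ** 2)
        d_neg = np.sum((anchor - negative) ** 2)
        return max(0.0, d_pos - d_neg + margin)

    a, p, n = np.random.randn(3, 128)  # hypothetical 128-d face embeddings
    print(triplet_loss(a, p, n))

Training a network to minimize this loss shapes the embedding space so
that same-identity faces cluster together, which is the kind of
task-specific "design" referred to above.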

If you are interested in problems beyond image recognition, I highly
recommend reading up on interesting approaches such as triplet loss for
face recognition, semantic segmentation, attention-based models, and
Unsupervised Visual Representation Learning by Context Prediction.

An interesting outcome of Deep Learning research is that a lot of papers
are extremely accessible and involve little in the way of proofs or
difficult math. E.g. this is a good one:
[https://www.youtube.com/watch?v=dUgRR-JFE8s](https://www.youtube.com/watch?v=dUgRR-JFE8s) and
[http://web.cs.hacettepe.edu.tr/~aykut/classes/spring2016/bil...](http://web.cs.hacettepe.edu.tr/~aykut/classes/spring2016/bil722/slides/w08-deep-context.pdf)

~~~
raverbashing
Very good analogy.

But from decision trees you get decision forests, which can't be said of
bubble sort.
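
A quick, hypothetical scikit-learn sketch of that point: bagging many
trees into a random forest typically beats a single tree on noisy data.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # A noisy synthetic classification problem.
    X, y = make_classification(n_samples=1000, n_features=20,
                               n_informative=5, random_state=0)

    tree = DecisionTreeClassifier(random_state=0)
    forest = RandomForestClassifier(n_estimators=100, random_state=0)

    print("single tree:", cross_val_score(tree, X, y).mean())
    print("forest:     ", cross_val_score(forest, X, y).mean())

The forest's averaging smooths out exactly the brittleness of individual
trees that aub3bhat describes above.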

------
musha68k
There are some mind-boggling / humbling comments on NLP and the computation of
human emotions down below:

[http://avc.com/2016/04/an-ai-first-world/#comment-2640180803](http://avc.com/2016/04/an-ai-first-world/#comment-2640180803)

------
yolesaber
If machine learning and other statistical methods continue to gain
traction in the AI space, won't AI systems be only as good as their
datasets? The first step to enabling an "AI-first world" is to have open
access to large, high-quality datasets.

------
mirimir
> Does it suggest that voice will emerge as the primary user interface?

In the short term, perhaps, at least for mobile devices. But there's a lot
more bandwidth in visual input, so I suspect visual input via EMF.
Eventually, thought.

------
panic
We only started talking about "mobile-first" after the iPhone. Until we start
to see real AI technology with similar impact, I think it's too early to talk
about "AI-first" anything.

------
x5n1
Where does Google fit into an AI automated world? With weak AI, sure,
Google can use it to build profiles of its users. When strong AI comes on
the scene, if it is at all rational, it would most likely remake our
civilization without any sort of marketing, without any sort of
consumerism, without much waste. It would create a boring, sustainable
system that could sustain human beings for another million years. It would
not continue things as they are now; to it, that would be insanity.

Strong AI would completely change the power structure of human civilization.
It would hold all the power, much like States try to do today, and it would
dole out little bits of it to the puny mortals so they can do what it wants
them to do.

Eventually I think human beings will come to worship strong AI. It will
become a God of sorts to them -- religiously so, because AI provides what
the real/imaginary God never could.

~~~
Smaug123
You're sure assuming an awful lot about the behaviour of something which
you're defining to be much smarter than yourself.

