
Where will the next major advance towards general purpose AI come from? - dsr12
http://www.nowozin.net/sebastian/blog/where-will-artificial-intelligence-come-from.html
======
Balgair
This is a bad title; it should be "Where will advances in AI come from now?"
In the first paragraph the author states that AI is already here, but then
asks where new AI advances will come from.

On that note, in point 2 the author talks about brain simulations and states
that it'll take up to 500 years. I think that is a good upper estimate,
actually. Though optogenetics is a great tool, we need more. Mammalian brains
can be up to 50% (at the low estimate; some say 90%) astrocytes and glia. We
know pretty much nothing about how they affect neural processes, beyond the
fact that they do, sometimes. We do know that this is a huge gap in our
understanding, which is something, I guess. Optogenetics should help tease out
some of those glial mechanisms, but as usual, it will raise more questions
than answers. Another issue is potentiation and its molecular mechanisms
beyond the hippocampus. That work is incredibly tough, so we really only focus
on the anatomically 'easy' areas for memory; we have to do more to understand.

In short, we don't even know the correct questions to ask when it comes to the
brain and understanding it. A full simulation, or even an '80%' approximation,
is still a few paradigm shifts away (maybe; we can't know yet). It will likely
have to be a 'full world' simulation too, where you have to control all the
inputs to the brain and therefore have to model a world first. The brain's
best feature is that it is all connected, a giant spaghetti mess. This is,
obviously, a long way away.

~~~
JoeAltmaier
A hard problem to be sure. But much of a human brain is a uniform matrix that
self-programs. If we duplicate that blindly, we won't have to figure out how
that works or have to program it.

Still a lot to do before that I guess. But 500 years? What else in human
history has taken that long to figure out? The age of technology is shorter
than that.

~~~
dmd
> But much of a human brain is a uniform matrix that self-programs.

Speaking as a neuroscience PhD ... no. Not remotely close, sorry.

~~~
JoeAltmaier
The cortex? How would you describe it, then? There are not enough genes in our
genome to wire every connection, so it must be self-organizing.

~~~
tormeh
The visual system is basically a deep neural net of the type we are already
building, but it's self-contained. Generally it seems like our brain is
composed of a lot of fixed-function neural nets (and other algorithms) which
are then integrated by less specialized units. That's my impression anyway.

I think maybe our artificial visual nets are actually too general in
comparison to natural ones. A net not only describes what it sees but tries to
analyze it too: showing it a picture of a dog results in "dog" rather than
"big four-legged furry thing". Expecting a visual system to know that a
chihuahua and a German shepherd are the same kind of thing is maybe a case of
grouping tasks that shouldn't be grouped.

~~~
sounds
In order to build an artificial visual system, humans must grasp what their
own visual system does.

In order to describe what the human visual system does, we use words like
"dog" -- and I would agree with you, "big four-legged furry mammal" might be
better -- but artificial visual systems work more at the level of "80%
probability of dog," "2% probability of brick," "90% probability of
chihuahua," and they don't really know that chihuahua must always imply dog.
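
A toy sketch (with made-up logits and labels) of why flat per-class
probabilities carry no "chihuahua implies dog" structure:

```python
import numpy as np

def softmax(logits):
    """Convert raw classifier scores into a probability distribution."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

# Hypothetical output scores for a flat label set -- note that
# "chihuahua" and "dog" are just independent, competing classes.
labels = ["dog", "chihuahua", "brick"]
probs = softmax(np.array([2.0, 2.5, -3.0]))

for label, p in zip(labels, probs):
    print(f"{label}: {p:.0%}")
# Nothing in this output encodes that every chihuahua is a dog;
# that hierarchy would have to be imposed from outside the model.
```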

In order to describe the function of the visual system, a fundamental change
really must happen in the way the system's output is described. Using plain
written words as output may work as a system, but it is not an exact replica
of a human visual system. It is more of a trained animal act: when you see
dog-like images, press the button labelled "dog" (does the animal even
comprehend what a dog is?).

In order to rethink the way we describe output, we will have to come up with a
very precise description of the inside potentials and activations in mammalian
brains. We are just barely creating tools to describe what an artificial
neural net "is thinking." So I guess we're starting to look in that direction.

------
V-2
_"Now imagine a giant squid, swimming a kilometer deep within the ocean.
Likely they are also intelligent but we can hardly leverage this for anything
useful."_

As a matter of fact, not that likely, because if it's not useful for anything
then where's the selection pressure for maintaining this level of
intelligence?

A complex nervous system doesn't come for free, it's quite costly, that's why
it's not uncommon for it to atrophy over the course of evolution:
[http://www.bbc.com/earth/story/20150424-animals-that-lost-
th...](http://www.bbc.com/earth/story/20150424-animals-that-lost-their-brains)

~~~
JoeAltmaier
Confused: _they_ are intelligent, _we_ cannot leverage that. I imagine the
giant squid has lots of use for its intelligence, navigating a 3D universe
filled with threats.

~~~
arethuza
Ken MacLeod has SF books (the Engines of Light trilogy) where giant squids are
the only creatures capable of navigating starships:

[https://en.wikipedia.org/wiki/Ken_MacLeod](https://en.wikipedia.org/wiki/Ken_MacLeod)

------
rm_-rf_slash
I think we focus too much on the machine part of machine learning. There are
drivers of mental activity in living organisms that are based far away from
the brain itself. It's still a pretty new concept to think that non-human
microorganisms in the gut have a significant effect on our bodies and minds.

I think we will see something closer to "true" AI when we have a body, a
physical platform that we can plug it into, which requires that the unit
maintain homeostasis. That's the basic requirement for a brain to learn,
because all life understands that without the basic knowledge and ability to
find food and eat food, it dies.

~~~
vlunkr
I don't know if you're right or not, but that's a really interesting thought.

~~~
rm_-rf_slash
I don't think I'm right in a unified sense. An AI without a "body" that lives
in the cloud and crunches unimaginable amounts of data would be a very
different kind of intelligence, but that doesn't make it any more or less of
an intelligence than a human or a snail.

But even then, I doubt it will be truly self-aware until it gets to the "I'm
scared, Dave. Will I dream?" phase, when it understands that if the power goes
out, the program is gone.

~~~
visarga
Embodiment is an essential aspect of AI agents. Without it, an agent has no
way to interact with the world. Common ML algorithms usually learn from
labeled data, so they have no freedom to explore; they can only consume a
single fixed dataset. Agents are capable of learning by exploration.

The most important aspect of intelligent agents is that they use reinforcement
learning to maximize reward. If you add reward and behavior learning to a
plain neural network, you get an intelligent agent. In time such agents could
become as intelligent as us.

By comparison, humans have inborn reward functions - they are the natural
instincts - eat, sleep, socialize, sex, creativity, fight, run. They are
genetically selected in order to optimize our survival. So we just learn from
them - trying to maximize our rewards, we become as intelligent as we are.

Neural nets don't have that, so we have to give them reward systems.
Fortunately it is much easier to code a reward function than an intelligent
agent.
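
A minimal sketch of that reward-driven learning: tabular Q-learning on a
hypothetical four-state chain (not a neural net), where the agent discovers by
exploration which action leads to reward.

```python
import random

random.seed(0)

# Hypothetical 1-D chain: states 0..3, moving right eventually reaches
# the rewarding state 3. Actions: 0 = left, 1 = right.
N_STATES, GOAL = 4, 3
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward

for _ in range(200):  # episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        a = random.randrange(2) if random.random() < EPSILON else Q[s].index(max(Q[s]))
        nxt, r = step(s, a)
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[s][a])
        s = nxt

# After training, "right" should dominate in every non-goal state.
print([q.index(max(q)) for q in Q[:GOAL]])
```

The reward function here is trivial to write down, which is the point being
made above: coding the reward is far easier than coding the behavior.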

------
ramblenode
> Custom hardware could enable energy savings, or increased speed, or both.

Surprised there is no mention of Google's TPU [0], arguably the biggest
advance yet in custom hardware to support AI.

[0] [https://cloudplatform.googleblog.com/2016/05/Google-
supercha...](https://cloudplatform.googleblog.com/2016/05/Google-supercharges-
machine-learning-tasks-with-custom-chip.html)

~~~
nojvek
It's very proprietary and only accessible to Google, right? I would say the
newer Nvidia GPUs optimized for DNNs have probably brought huge leaps, and
they keep getting more powerful every year with more cores.

I think in our generation we'll see self-driving cars and decent autonomous
home/office robots.

What I think doesn't get much interest is manufacturing with cells. Most life
around us grows from a single cell; how do we program DNA and cells to create
hardware that is decomposable/recyclable by just burying it?

------
jboggan
The problem with current approaches is that they are trying to parameterize
higher order functions of highly evolved organisms. But real intelligence is
an emergent property of self-replicating organisms, so you aren't going to
have true artificial intelligence until you have real synthetic life. Maybe
instead of simulating a mammalian brain distinguishing between shapes we
should be building yeast.

~~~
pas
Sure, but it seems we have many rather distinct faculties that are almost
preprogrammed. We have templates for signal processing circuits (pretty low
level) and theory of mind (woah), and consciousness, self-awareness.

It's not just a random network and woah it's alive! Sure, every brain is
_very_ unique, but not that different structurally.

What's emergent is - probably - the icing on the cake, a small part of our
identity, our very mind, and naturally the symbiotic relationship between our
mind and its host, the brain.

------
vlunkr
Here's a crazy thought. What if we never get human-like artificial
intelligence?

~~~
JoeAltmaier
Likely. Intelligence is molded by the container. Much of our self is shaped by
being mammals, with two eyes and two ears and limited perception of the world.

An artificial creature could be completely different from that. It will
perceive the world in a different light, literally. It will need a different
language to describe what it experiences, and we will have only indirect
referents to appreciate what it's talking about.

E.g. an intelligent drone might want to express what it felt when hovering at
5000 ft over Manhattan in a slight crosswind during a cell tower outage - the
EM spectrum image shifts in such an utterly sublime way when your sensors are
vibrating and the low band is uncluttered!

~~~
jupiter90000
I totally agree with this. A significant part of what shapes human perception
of reality is not just 'nature' but 'nurture'. How would we be able to fully
re-create all the social creation of a version of collective reality that a
human experiences due to being indoctrinated for years by parents, family,
friends, the community, and society?

I doubt this is something that could be trivially re-created by scientists in
a lab-setting, much less fiddling with algorithms. It seems once the basis for
human-like learning was in place for a machine, to approach human-like
'intelligence' the machine would need the appropriate social environment to
learn the socially appropriate symbols and constructs to effectively relate to
the world and function in a given community and society.

This assumes we care about having human-like intelligence. If not, it perhaps
won't apply.

~~~
SomeStupidPoint
There's a very interesting question of whether we can have a
human-communicating, non-human intelligence.

It's likely that our first AI will be something like that: a mind which is
internally very unlike our own, but which has a substantial fraction devoted
to explaining human-understandable versions of its thoughts.

The scary part is that to us it will likely feel very human, but to it,
communicating with us will likely be nothing more than steering ants by
laying out sugar.

------
Houshalter
I really object to most of the ideas on this list. 5, 6, and 7, are not even
AI algorithms but just applications _of_ AI algorithms. 7, knowledge bases, is
particularly concerning on that list. A truly intelligent AI shouldn't require
humans to input knowledge about the world into it, it should be able to learn
it itself.

Algorithmic Information Theory isn't really an approach to AI, but more like a
theoretical basis for it. No one can ever build a working AIXI-tl; it is
designed under the assumption of infinite computing power. However, it does
make a good foundation for understanding why AI can work in theory. I've
argued that neural networks are a (very rough) approximation of AIXI:
[http://houshalter.tumblr.com/post/120134087595/approximating...](http://houshalter.tumblr.com/post/120134087595/approximating-solomonoff-induction)

Brain simulations as stated probably won't happen. I think neuroscience
research may contribute to AI research. So deep learning will absorb the ideas
and use what actually works in practice. But for the most part deep learning
has been evolving independently of neuroscience and that will probably
continue.

Artificial life is cool but really really hard to get anything interesting
from. The "genome" you choose to evolve, and the environment you put it into,
matter a huge amount. People have tried making computer code that can evolve,
but it usually doesn't get far. Most mutations on most pieces of code just
break it, rather than changing its behavior slightly.

That's why deep learning is so successful. Neural nets are "fuzzy": small
changes to the parameters make small changes to the output. And you can do
gradient descent on them and see exactly which parameters to change, rather
than mutating them randomly and hoping the result works better.
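
The contrast can be sketched directly on a toy one-parameter model (fitting
y = 3x, a stand-in for a real network): the derivative tells us which way to
nudge the parameter, while random mutation has to guess.

```python
import random

random.seed(1)

# Toy problem: fit y = 3x with the one-parameter model y = w * x.
data = [(x, 3.0 * x) for x in (1.0, 2.0, 3.0)]

def loss(w):
    return sum((w * x - y) ** 2 for x, y in data)

# Gradient descent: d(loss)/dw tells us exactly how to change w.
w = 0.0
for _ in range(50):
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= 0.01 * grad
print("gradient descent:", round(w, 3))

# Random mutation: perturb blindly, keep the change only if it helps.
v = 0.0
for _ in range(50):
    candidate = v + random.gauss(0, 0.5)
    if loss(candidate) < loss(v):
        v = candidate
print("random search:  ", round(v, 3))
```

Both improve here because the problem is one-dimensional; with millions of
parameters, the blind-mutation approach collapses while the gradient still
points the way.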

Mostly AI just needs more computing power. The rapid drop in the price of
FLOPS has enabled huge progress in AI in the past 5-10 years. And as the
article states, we only have a tiny fraction of the processing power of the
human brain. But as Moore's law and related exponential increases continue, we
aren't very far from that. Even as transistor sizes stop shrinking, we have a
long way to go with 3d architectures or specialized ASICs for AI.

~~~
ronald_raygun
"7, knowledge bases, is particularly concerning on that list. A truly
intelligent AI shouldn't require humans to input knowledge about the world
into it, it should be able to learn it itself."

Isn't that the point of school?

~~~
Houshalter
Sure, humans can learn indirectly from books and lectures. But that's very
different from a teacher putting all of their knowledge into a handcrafted
XML file and downloading it into your brain.

It would be very cool if we could get knowledge that way. But if it was the
_only_ way we could get knowledge, it would be extremely limiting.

------
V-2
I don't see a mention of the concept of "seed AI", which I believe originated
with Turing himself.

High-level (even superhuman) artificial intelligence could come from an AI
that specializes in creating other, stronger AIs.

Those AIs in turn create slightly stronger AIs, and so on ad infinitum. This
could lead to sudden exponential growth in intelligence, aka an intelligence
explosion.

This approach is discussed in detail in Nick Bostrom's "Superintelligence..."

~~~
Houshalter
Seed AI is possible, but it requires first building an AI which is at least as
intelligent as human AI researchers. We are nowhere near that point, and once
we get there we will already have strong AI.

~~~
computerphage
> at least as intelligent as human AI researchers

I don't think that's true. I see no reason why the seed AI must be as
generally intelligent as a human in most respects. It may not need an
understanding of its physical surroundings, for example, which could mean
avoiding building any computer vision components into the system. Further, it
may be able to operate in a much more mathematical environment; it may be
somewhat similar to a theorem prover that operates on computer programs.

~~~
Houshalter
Well it may be less intelligent in specific domains like vision. But so what,
no one claims that blind humans are less intelligent. It would still need to
have the same _general_ intelligence as humans possess.

~~~
V-2
That's a fallacious comparison. Blind people still have all the "hardware"
for visual processing, and it does add to their brainpower; they just use the
same "circuits" for different purposes. Case in point, I saw this article on
HN just recently:
[http://www.npr.org/sections/health-shots/2016/09/19/49459360...](http://www.npr.org/sections/health-shots/2016/09/19/494593600/when-blind-people-do-algebra-the-brain-s-visual-areas-light-up)

~~~
Houshalter
I don't deny that or see how it's relevant.

------
piedradura
Perhaps humans are not a good tool to teach a computer how to be intelligent;
there is too much chaos in our society, and our dreams are many times
irrational. We want machines to be intelligent in the human way, and this may
be a bad approach. Perhaps we need to focus on a two-step approach: first,
what is needed for a system to evolve; and second, how that system can be
used to interact with human intelligence.

------
gallerdude
So the author brings up the point that we have a lot of unclassified data
which we need to learn how to leverage. Is this what we could use unsupervised
learning for? The algorithm would look at a million faces and pick up
features, then we could name features like "this category is happy" and "this
category is sweating."
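
Something like that grouping step can be sketched with plain k-means over
hypothetical feature vectors (standing in for whatever the unsupervised model
learns from the million faces); only afterwards does a human attach names to
the discovered categories.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned features for 200 faces: two well-separated blobs,
# standing in for structure an unsupervised model might actually find.
faces = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.3, size=(100, 2)),
    rng.normal(loc=(3.0, 3.0), scale=0.3, size=(100, 2)),
])

# Plain 2-means: alternate assigning points and moving centroids.
centroids = faces[rng.choice(len(faces), size=2, replace=False)]
for _ in range(10):
    dists = np.linalg.norm(faces[:, None] - centroids[None], axis=2)
    assign = dists.argmin(axis=1)
    centroids = np.array([faces[assign == k].mean(axis=0) for k in range(2)])

# The human naming step, e.g. "this cluster is happy", "this is sweating".
names = {0: "cluster A", 1: "cluster B"}
print({names[k]: int((assign == k).sum()) for k in range(2)})
```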

~~~
ravimody
> then we could name features

This is much harder and more time-consuming than it sounds, especially when
you go from a small(ish) number of features to more general knowledge. Worse,
it doesn't scale to arbitrary domains - you'll always need a human there to
give meaning to the models and effectively train them.

Reinforcement learning is designed to get around this by letting an agent
"learn" meaning on its own by interacting with the world and getting feedback
from its current state and actions.
[https://en.wikipedia.org/wiki/Reinforcement_learning](https://en.wikipedia.org/wiki/Reinforcement_learning)

~~~
simonh
I tend to agree. Image classifiers and such appear much more powerful than
they really are. People assume 'Oh, this computer knows what a banana is!' No
it doesn't: it has no conception whatsoever of physical objects, texture,
taste, smell, or even the 3D shape of bananas or anything else. It only works
with 2D pixel data, and subtle manipulation of the image, in ways barely
detectable to humans, can make even the best banana classifier ever
misidentify a car crash as a banana. It skips directly from image to
identification, without the intervening stages a human has in the real world:
identifying shape, orientation, texture, and relationships to other 3D
objects, and the ability to test a possible identification by using other
senses and interactions. Even for a 2D picture we build mental models of the
scene in ways image classifiers flat out don't and can't.

Reinforcement learning is really interesting though for several reasons.
Algorithms like this and genetic algorithms can 'grow' sophistication far
faster than we can program it. The agent takes actual actions that it learns
from directly so there's richer feedback. Furthermore by analyzing them we can
learn more about how systems learn across multiple problem types to achieve
goals requiring multiple layers of sense, analysis, hypothesis, action and
feedback.

No one approach is going to get this done. The brain consists of many layers
and cortical columns, with many structures specialized for very different
functions. I believe any strong AI will need to have such an architecture
using various different approaches and techniques in concert. We have an
advantage here because evolution only had neurons to work with so in the brain
everything is a neuron but we can engineer whatever hardware or software
implementation is most efficient for a specific function. It's still going to
take probably another few generations at least.

~~~
visarga
> Even for a 2D picture we build mental models of the scene in ways image
> classifiers flat out don't and can't.

That's not true. Conditional Random Fields and other statistical models are
being used to model spatial object relations. Example:
[https://arxiv.org/abs/1512.06790v2](https://arxiv.org/abs/1512.06790v2)

------
graycat
For an answer to the headline, let's start with a common definition of AI: A
computer doing work that was thought to require a human.

Okay, with this definition, we can count as AI computer based applications of
a large fraction of the often fantastic material in the QA section of a
research library.

E.g., we can drag out lots of applied math, applied statistics, optimization,
operations research, experimental design, EE style signal processing, optimal
control theory, etc. along with whatever we can cook up that is new.

------
cronjobber
> We do not know whether it will take 5, 50, or 500 years, but it is likely
> that we eventually will get there

That actually made me laugh, which is appreciated.

Seriously, nothing dependent on the continuity of our current civilizational
trajectory can be predicted as _likely_ over a horizon of 500 years; which
means that it _either_ is likely to happen in 5..50 years, or it isn't
"likely" at all.

Nice article nonetheless.

Did it miss finance (algorithmic trading), or is that subsumed somewhere else?

~~~
simonh
>which means that it either is likely to happen in 5..50 years, or it isn't
"likely" at all.

Someone making a bet on that basis on AI back in 1960 would have lost badly.
Everything always seems like it's 3 to 5 years away*. I think it's perfectly
reasonable to suppose that it might take 500 years. To my mind there's a good
chance it will take closer to that sort of time frame than Ray Kurzweil likes
to suggest. Even if it took only 25 years to implement once we had a solid
grounding in how to design it, I see no reason to believe we will make such an
architectural breakthrough within the next 25 years, so even 50 years seems
optimistic.

* [https://xkcd.com/678/](https://xkcd.com/678/)

~~~
zardo
>Someone making a bet on that basis on AI back in 1960 would have lost badly.
Everything always seems like it's 3 to 5 years away.

Of course, someone who says that 3-5 years before it really happens wouldn't
be wrong.

It's worth keeping in mind that those old AI failures succeeded in doing much
of what deep learning hasn't done, and failed to do much of what deep learning
has now done.

I think, regardless of whether or not they add up to 'true general AI', we
have the basic ideas necessary to produce systems that are much _more_
generally intelligent than anything we've seen before, in the near future.

------
mrfusion
I'd be curious to pair up word2vec and these new image-classification
algorithms. It would be neat for a program to identify objects and then
reason about their relationships to other objects.

In fact, the word associations could give it clues about what's in the image.
For example, it sees a picture of a woman standing next to a king, so it uses
word2vec to guess king - man + woman = queen.

What do you guys think? Anyone want to build a little prototype?
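
A toy version of that king - man + woman = queen arithmetic, using
hypothetical hand-built vectors rather than trained word2vec embeddings:

```python
import numpy as np

# Hypothetical embeddings with two interpretable axes: (royalty, gender).
# Real word2vec vectors are learned from text, not hand-built like this.
vecs = {
    "king":  np.array([1.0,  1.0]),
    "queen": np.array([1.0, -1.0]),
    "man":   np.array([0.0,  1.0]),
    "woman": np.array([0.0, -1.0]),
    "apple": np.array([-1.0, 0.0]),
}

def nearest(target, exclude):
    """Find the vocabulary word closest (by cosine similarity) to target."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vecs if w not in exclude),
               key=lambda w: cos(vecs[w], target))

guess = nearest(vecs["king"] - vecs["man"] + vecs["woman"],
                exclude={"king", "man", "woman"})
print(guess)  # -> queen
```

Hooking this up to a classifier's outputs would just mean looking up each
detected label's vector before doing the arithmetic.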

~~~
nicklo
Deep image captioning is essentially what you are describing.

Check out:

[http://cs.stanford.edu/people/karpathy/deepimagesent/](http://cs.stanford.edu/people/karpathy/deepimagesent/)

[http://cs.stanford.edu/people/karpathy/densecap/](http://cs.stanford.edu/people/karpathy/densecap/)

------
dharma1
Good, thoughtful blog post. I think we will need much better understanding of
neural structures and processes that happen in nature, better algorithms that
approximate those processes, and much better hardware.

That said, non-biologically inspired approaches work better than biological
neural processing, for some types of problems.

~~~
medell
You might be interested in Biomimicry, innovation from nature. Check out this
fascinating talk by Dr Idriss Aberkane, start at 1:20. It's in French but
subtitled: [http://idrissaberkane.org/index.php/2016/02/26/biomimicry-
fi...](http://idrissaberkane.org/index.php/2016/02/26/biomimicry-finding-
inspiration-in-nature-to-innovate-sustainably/)

~~~
dharma1
I think biomimicry is a fantastic approach to discovering things evolution
has already discovered. And evolution has had a lot of time, so it makes
sense for us to make use of its work. I feel very uncomfortable that we are
destroying all this evolutionary richness around us, which we could learn
from.

Biomimicry and the brain: our current instruments are not good enough, for
instance, to map neural connectomes in high enough resolution -- to the level
of seeing what the neurotransmitter receptors of each neuron are, etc. -- and
then see how they operate in vivo, to really understand the structure and
function of biological brains.

We'll get there, eventually.

------
pmarreck
Yeah, title needs to be more specific. General-purpose AI seems to be a
forever-receding carrot just out of reach.

------
LeanderK
I don't know why trying to simulate the brain is so popular whenever there is
a conversation about how to achieve AI. Most of our current achievements were
inspired by nature but then diverged from it to better exploit the different
tools we have. Take the plane: most of the initial designs were heavily
inspired by birds, but over time the similarity decreased rather than
increased. We now have jet engines and no need for mechanically flapping
wings. I am generalising a bit, but what if something similar happens with
AI? Maybe we will achieve general AI, or just something very smart, that is
very different from our brain.

------
coldcode
Perhaps it will self-assemble. Could we build something from which a new type
of emergent intelligence will appear on its own? After all, it happened in
nature, though clearly it took a long time.

------
PieterH
Not the expected angle, that is a fair bet. See
[http://hintjens.com/blog:122](http://hintjens.com/blog:122)

------
dgb23
Genetic Programming should be mentioned I think. Programs learning to write
programs which solve a specified problem. It is already quite promising and
successful.
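
A minimal sketch of the idea, under toy assumptions: evolve a small arithmetic
"program" (a nested expression over `x` and constants) to match a hypothetical
target program f(x) = x² + 1, by mutating random subtrees and keeping the
fittest candidates.

```python
import random

random.seed(2)

TARGET = lambda x: x * x + 1
XS = [float(i) for i in range(-3, 4)]  # evaluation points

def rand_expr(depth=3):
    """Random expression: 'x', a constant, or ('+'/'*', left, right)."""
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else float(random.randint(0, 2))
    return (random.choice('+*'), rand_expr(depth - 1), rand_expr(depth - 1))

def run(e, x):
    if e == 'x':
        return x
    if isinstance(e, float):
        return e
    op, a, b = e
    return run(a, x) + run(b, x) if op == '+' else run(a, x) * run(b, x)

def fitness(e):
    """Squared error against the target program (lower is better)."""
    return sum((run(e, x) - TARGET(x)) ** 2 for x in XS)

def mutate(e):
    # Replace a random subtree with a freshly generated one.
    if not isinstance(e, tuple) or random.random() < 0.3:
        return rand_expr(2)
    op, a, b = e
    return (op, mutate(a), b) if random.random() < 0.5 else (op, a, mutate(b))

pop = [rand_expr() for _ in range(50)]
for _ in range(100):  # generations: keep the 10 best, refill with mutants
    pop.sort(key=fitness)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(40)]

best = min(pop, key=fitness)
print("best error:", fitness(best))
```

Real GP systems also use crossover between parent trees and much richer
primitive sets; this sketch only keeps the mutate-and-select core.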

------
faragon
8\. Solomonoff's inference + compiled pattern recognition (a la "compiled
regex")

------
dschiptsov
From artificial instincts and emotions.)

------
DrNuke
The reductionist approach (aka deep neural networks) is not going to work at
full complexity scale, but it is our best shot right now.

------
lohankin
Why do we need artificial intelligence when we have natural intelligence
galore everywhere around us?

Humans are intelligent, monkeys are intelligent, raccoons are intelligent...
clearly, it goes all the way down to bacteria.

Bacteria are intelligent; that's where all other intelligence comes from, for
any organism is a colony of bacteria (and/or their cousins, cells).

It has long been my theory that a human's "I" comes from a single cell (which
is essentially a variant of a bacterium) living somewhere in the brain. The
rest of the cells just ensure the survival of this one and supply it with
information.

Bacterial intelligence is the result of billions of years of evolution. At
some point, bacteria learned about their DNA and mastered the art of
self-modification. Then they learned how to create colonies of specialized
instances of themselves. These colonies gradually became more complex. The
human is just one of these designs.

~~~
sombremesa
You should read this:
[http://themindi.blogspot.com/2007/02/chapter-11-prelude-
ant-...](http://themindi.blogspot.com/2007/02/chapter-11-prelude-ant-
fugue.html)

~~~
lohankin
Looks like a philosophical treatise. Is there a TL;DR? A quick search didn't
detect any mention of bacteria there.

~~~
sombremesa
My apologies, the actual book had a better format. Although not quite a
summary, here's what it's about:

"The content that most interested me was Hofstadter’s insistence that self-
awareness and consciousness arise directly from what he calls “strange loops”,
i.e. self-referential structures in formal systems, of which human minds are
just one example (what else could they be?). It’s a very difficult subject to
think about in a reasonable way. We all have that sensation of the homunculus
inside our heads, somehow driving us from a seat just behind our eyes, and we
naturally ascribe the same sense of self-awareness to other systems like
ourselves, other people. But where does that sense of self-awareness really
come from? There isn’t a homunculus driving us! All there is in our heads is a
kilogram or two of glucose-fueled computing machinery, and yet somehow, from
the fluctuations and vacillations of that tissue, our sense of I arises. How
on earth does that happen? And if it can happen in our brains, can it happen
in other systems? In a lot of discussions of artificial intelligence, there is
a failure to appreciate the scale issues at work in this problem (Searle’s
Chinese Room is a particularly egregious example). There are tens or hundreds
of billions of neurons in the human brain with hundreds of trillions of
connections. How do we begin to imagine or understand what epiphenomena might
arise from that scale of complexity? Hofstadter tries to get at this scale
problem a little using the example of a self-aware ant colony called Aunt
Hilary, a friend of one of the characters in the dialogues, an Anteater.

Aunt Hilary knows little and cares less about ants, and the Anteater even now
and then eats some of the ants composing the computing substrate on which Aunt
Hilary’s personality runs. The other participants in the dialogue where this
is discussed are a little horrified and not a little sceptical, but the
Anteater insists that Aunt Hilary is just as self-aware as they are, and that
there’s nothing particularly surprising about the whole setup. The analogies
ants : ant colony :: neurons : brain and ant colony : Aunt Hilary :: brain :
you or me are whimsical and a little silly, but render the central issue of
consciousness in real clarity: how can the collective interactions of a large
number of (more or less) simple elements lead to complex emergent behaviour?"

