
What’s Next for Artificial Intelligence - miraj
http://www.wsj.com/articles/whats-next-for-artificial-intelligence-1465827619
======
rm999
>Deep learning, modeled on the human brain, is infinitely more complex [than
machine learning]. Unlike machine learning, deep learning can teach machines
to ignore all but the important characteristics of a sound or image, a
hierarchical view of the world that accounts for infinite variety. -Yann LeCun

I strongly disagree with a lot about this quote, even though it comes from a
brilliant man I highly respect (my thesis research was inspired by some of
his older work, and in my job we work with techniques he developed around deep
learning). What I dislike is that it uses hype to make deep learning seem
mystical; in actuality it's a natural extension of old techniques that clearly
fall under "machine learning".

Deep learning is a neural network with 3 or more layers instead of the 2-layer
networks that were developed in the 1980s. People tried 3 layers back then,
and they didn't work well. Yann LeCun and other researchers found cool ways
to get 3+ layers to work in the 90s and again in the mid 2000s. More recently,
researchers have just thrown a ton of data and computational power at them to
get them to work. But fundamentally this was an extension of established
techniques. This article that recently hit the front page actually breaks it
down really well:
[https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/](https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/)

I think my main point here is that deep learning is quite accessible to people
who are learning machine learning. It's great at solving some really complex
problems (that can certainly resemble true intelligence), but is not the right
tool for other problems.
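For what it's worth, the "more layers" point can be made concrete in a few lines of code. This is only an illustrative sketch (random, untrained weights; all names and sizes invented): structurally, a "deep" network is just the same affine-plus-nonlinearity stage stacked more times.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # The nonlinearity between layers; without it, stacked layers
    # would collapse into a single linear map.
    return np.maximum(0.0, x)

def forward(x, weights):
    # Pass x through each hidden layer (affine transform + ReLU)...
    for W, b in weights[:-1]:
        x = relu(x @ W + b)
    # ...and finish with a plain linear output layer.
    W, b = weights[-1]
    return x @ W + b

# Layer widths: 4 inputs -> two hidden layers of 8 -> 2 outputs.
# Appending more entries to this list is all "going deeper" means here.
sizes = [4, 8, 8, 2]
weights = [(rng.normal(size=(m, n)), np.zeros(n))
           for m, n in zip(sizes[:-1], sizes[1:])]

out = forward(rng.normal(size=(3, 4)), weights)  # batch of 3 examples
print(out.shape)  # (3, 2)
```

Training those extra layers well is the hard part that took until the 90s/2000s to crack; the structure itself is this simple.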

~~~
AnimalMuppet
Viewed strictly as a neural network, how many layers deep is the human brain?

~~~
jcranmer
Neural networks are based on a very, very outdated (and incorrect) model of
how a neuron works, so your question is roughly akin to asking "how many
vacuum tubes are in my GPU?"

Mammalian brains are so densely packed that we don't really know what their
internal architecture actually looks like--that's why estimates of the number
of neurons keep increasing over time [1]. Most of our models are based on
some autopsies, and rely on things like fMRI or electroencephalography (not
often you get to spell that word!) to work out which portions of the brain are
involved in certain tasks. We have no hope of being able to build a
connectivity graph between neurons at our current state of technology.

The short answer is that the cerebral cortex is usually identified as having 6
layers. These can't be correlated with layers in a neural network, however. I
do seem to recall a spitball number of 30-ish in relation to the visual
cortex, but I've been unable to find any references as to what that number
would actually represent, let alone if it's been superseded.

[1] The current estimate is around 100 billion. If you want to know the number
involved in, say, the visual cortex, that appears to be on the order of a few
billion.

~~~
daveguy
> The short answer is that the cerebral cortex is usually identified as having
> 6 layers. These can't be correlated with layers in a neural network,
> however. I do seem to recall a spitball number of 30-ish in relation to the
> visual cortex, but I've been unable to find any references as to what that
> number would actually represent, let alone if it's been superseded.

That short answer is _very_ misleading. Those 6 layers are histological layers
of density and structure in the cerebral cortex (e.g., how the cross sections
look a little different). You might have many (30+) layers of cells with high
interconnectivity in each of those 6 layers. The current estimate is 100 billion
for the number of neurons, but the estimate of interconnectivity is on the
order of 100 trillion synapses. The human brain's complexity is many _orders of
magnitude_ higher than anything we have ever "simulated" in silicon.

~~~
Cybiote
It's not misleading, because they were careful to point out that these 6
'layers' have nothing in common with how layers are defined in neural
networks. In neural nets, layers are effectively subgraphs that are bipartite.
They can also be computational stages in some networks, like convolutional ones.

The brain's layers are both much looser in their allowed interconnections
and more modular in their functionality. There are different visual areas
(usually 5) at different locations of the neocortex that connect to each other
(feedforward and back) in complex ways. As residents of the neocortex, each
visual area will thus have 6 layers, which are in turn divisible into sublayers
(often 1-4). These layers are structural but are also partially
organizational tools for the scientists.
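To make the "bipartite subgraph" framing concrete: a dense layer's weight matrix is exactly the weighted adjacency matrix of a bipartite graph between one set of units and the next. A toy sketch (sizes and names invented for illustration):

```python
import numpy as np

n_in, n_out = 3, 2
W = np.arange(6, dtype=float).reshape(n_in, n_out)  # one layer's weights

# Graph view: every input unit connects to every output unit, and never
# to another input unit -- the two sides form a bipartite graph.
edges = [(f"in{i}", f"out{j}", W[i, j])
         for i in range(n_in) for j in range(n_out)]
print(len(edges))  # 6 = 3 * 2 edges, all crossing between the two sides

# "Computational stage" view: the same matrix applied as a linear map.
x = np.ones(n_in)
print(x @ W)  # [6. 9.]
```

Cortical wiring respects no such clean bipartite constraint, which is part of why the 6 histological layers don't map onto network layers.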

------
calycosa
>We need to update the New Deal for the 21st century and establish a trainee
program for the new jobs artificial intelligence will create.

Maybe I'm being a bit pessimistic, but can AI really create jobs just by
taking over old ones? Sure we could train some new "data analysts [and] trip
optimizers", but in the end can we really mass replace low skilled blue collar
jobs with higher skilled ones with the wave of a wand? In the period between
when robots can automate the majority of low skill jobs and virtually all
jobs, there is very likely to be some sort of significant turmoil as our
economy undergoes a paradigm shift of sorts, and I don't think "just retrain
workers" is that viable of a solution.

~~~
rm_-rf_slash
When I hear "retraining workers," I usually think of Charlie's dad (of
"Charlie and the Chocolate Factory" fame) losing his job screwing in toothpaste
caps and coming back in the end as a technician for the machines that took his
job.

That is the wrong way to approach this new shift.

There will simply be too many menial jobs made obsolete for blue collar
workers to step up into supervision/technician roles.

Instead, they will have to find new _kinds_ of jobs. The kinds of jobs that
cannot be replaced by robots without getting stuck in the Uncanny Valley. Jobs
like personal trainers, yoga instructors, physical/massage therapists, tattoo
artists, hairdressers, and so on. Unless a robot can 100% mimic human form and
action, these jobs aren't going anywhere - certainly not overseas.

I'm calling it here and now: as jobs are lost too quickly for people to
retrain (or even _want_ to adopt an entirely new skill set) we will see a new
push for the legalization of prostitution. If nothing else, you can always
sell your body by the half-hour, and no robot could ever truly replicate the
closeness of human touch.

~~~
gaius
_There will simply be too many menial jobs made obsolete for blue collar
workers to step up into supervision/technician roles_

I think you're thinking about this backwards. AI will claim professional jobs
like accountants and lawyers LONG before it affects hairdressers and plumbers.

~~~
naveen99
This is partly because robotics and computation are orthogonal. We have fast
progress in computation, but robotic progress is much slower at the scale of
human hands.

------
vonnik
The problem with news articles like this is that they attempt to appeal to a
general readership through grand promises and fear-mongering. The reporter's
basic problem is: How can I make my audience care?

So you call up Nick Bostrom and he very reliably gives you a quote about the
existential threat of a superintelligence, even though no one in the industry
thinks we're anywhere close to that. (What's next for AI is not
superintelligence...) And you force a great researcher like Andrew Ng to talk
about job loss among truck drivers, because that's what will make it relevant
to people outside AI.

We should be thinking about job loss and how the job market will change, but
this type of article never gets past the "Oh No AI Will Destroy US" stage of
thinking. But a lot of the questions raised by AI are actually relevant now,
in an economy where AI hasn't even made a big impact. That is, they're not AI
issues, but we're treating them as though they are. How should our societies
treat and support the humans who have become unnecessary? They obviously will
not all become data analysts and robot caretakers.

What's next for AI is better natural-language processing. Right now, chatbots
are pretty dumb, but in the next few years, they'll get much better, and in
more languages.

What's next for AI is the wider deployment of mature technology. Many problems
such as image recognition have been solved, but developers and companies have
not figured out how to deploy the solutions yet. We still have chokepoints in
the number of data scientists who can tune and train models, and the number of
engineers who can plug them into existing stacks. So AI will be felt directly,
rather than just talked about.[0]

What's next for AI is an arms race. The major powers will be escalating
how they deploy AIs against AIs, embedded in drones or through the creation of
adversarial data to slip through filters. Commercially, many smaller arms
races will occur in different industries, as AI drives down the costs of
interpreting data and allows rival organizations to compete on price.

What's next for AI is the combination of the flavor of the month, deep
learning, with other extremely powerful algorithms like reinforcement learning
and Monte Carlo Tree Search to create goal-oriented strategic decision-making
agents.

[0]
[http://deeplearning4j.org/use_cases.html](http://deeplearning4j.org/use_cases.html)

~~~
mathgenius
Any thoughts on support vector machines & kernel methods? Is that stuff dead
and buried, or what? (I've been out of the loop now for a while.)

~~~
dr0l3
I don't have anything solid to back up my claims here, so take what I have
with a grain of salt unless you find external validation for it. That being
said, I will venture a guess:

Yes and no. Neural networks have a lot of nice properties, like being easier to
grasp, easier to parallelize(?), and having better tools, which I guess is
because neural networks are applicable to more problems than SVMs, since
they are extensible (see
[https://en.wikipedia.org/wiki/Recurrent_neural_network](https://en.wikipedia.org/wiki/Recurrent_neural_network)
for example).

Either way, SVMs still do what they do very nicely, and in the field of "Human
Activity Recognition," where I did my thesis, neural networks are practically
never used but SVMs pop up from time to time.
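For readers who haven't met SVMs: a linear SVM just looks for a maximum-margin separating hyperplane. The sketch below trains one by subgradient descent on the hinge loss, on made-up toy data; real work would use a library implementation (e.g. scikit-learn's SVC), so treat this purely as an illustration of the objective.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated 2D blobs with labels -1 / +1 (toy data, invented).
X = np.vstack([rng.normal(-2.0, 0.5, size=(50, 2)),
               rng.normal(+2.0, 0.5, size=(50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

w, b = np.zeros(2), 0.0
lam, lr = 0.01, 0.1          # regularization strength, learning rate
for _ in range(200):
    margins = y * (X @ w + b)
    viol = margins < 1       # points inside the margin (hinge loss > 0)
    # Subgradient of  lam * ||w||^2 + mean(hinge loss)
    grad_w = 2 * lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(X)
    grad_b = -y[viol].sum() / len(X)
    w -= lr * grad_w
    b -= lr * grad_b

acc = float(((X @ w + b > 0) == (y > 0)).mean())
print(acc)  # separable blobs, so this lands at (or very near) 1.0
```

Kernel methods extend exactly this objective to nonlinear boundaries by swapping dot products for kernel evaluations, which is where the scalability concerns in this subthread come from.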

~~~
mathgenius
Right, scalability is a good point to consider. In the same vein, at my last
ML job we ended up using gradient boosting, which worked really well but
definitely does not scale to big data (afaik). Robustness is another thing to
consider: both SVMs and neural networks need a reasonable amount of data
massaging before they behave themselves. Hence the success in image processing,
where every pixel can be treated equally.

------
Animats
New jobs created by AI: Oxford faculty member pontificating about values for
AIs. Who says there isn't job creation?

My worry: AIs held only to the moral standards now expected of corporations.
Optimize for shareholder value. We're close to this now with machine-learning
assisted hedge funds.

~~~
treehau5
Ah the upcoming "automation utopia" will "free man from the chains of labour"
to pursue their passions -- at least that's what they will tell us while SF
firms endlessly drive towards automating more people's jobs away while they
rake in the cash. My worry is the lengthy, inevitable in-between time of human
suffering and corporate greed until something bursts and we actually pass 21st
century laws like Basic income, vote in technology-competent politicians, etc.

It is my worry, as well.

------
Animats
There's smarter, but there's also faster. Historically, robots have been
rather slow and clunky. That's over.

Industrial robots have become much faster. Here's Bosch's packaging robot from
2014, looking at small objects and putting them in order for packaging.[1]
Then Fanuc built a faster one.[2] Faster CPUs allow using modern control
theory that considers dynamics, and fast machine vision. The resulting
machines are much faster than humans.

Those are production machines, able to run for long periods. Research robots
are even faster.[3] That's from 2012. Progress continues.

[1] [https://www.youtube.com/watch?v=BAF-ALWwlLw](https://www.youtube.com/watch?v=BAF-ALWwlLw)

[2] [https://www.youtube.com/watch?v=vtAEIKJLHGw](https://www.youtube.com/watch?v=vtAEIKJLHGw)

[3] [https://www.youtube.com/watch?v=U2sUvQ_HsU8](https://www.youtube.com/watch?v=U2sUvQ_HsU8)

~~~
ericjang
Here's one that's even faster (and relevant to your username):
[http://spectrum.ieee.org/automaton/robotics/robotics-hardware/a-cyborg-stingray-made-of-rat-muscles-and-gold](http://spectrum.ieee.org/automaton/robotics/robotics-hardware/a-cyborg-stingray-made-of-rat-muscles-and-gold)

------
kantian_ethics
Some of my thoughts after reading this article:

Everyone seems to believe that achieving artificial general intelligence is
inevitable. I'd argue that it's only inevitable if humanity survives long
enough to make it happen. I'm not a pessimist, but the next 300-400 years will
be the most difficult humanity has ever faced. In addition to climate change,
population expansion, nuclear weapon proliferation, and naturally increasing
inequality, humanity will face many more threats that haven't yet been
perceived.

I believe building strong intelligence will optimistically take 3-4 centuries.
To complete a "system that could successfully perform any intellectual task
that a human being can," it is first necessary to understand what defines the
human intellect and consciousness. Although there is much ethical debate on
what defines consciousness, scientifically it is a product of experience, and
can be emulated if three requirements are met:

\- The system can accept every input the human body can.

\- It can process every combination of stimuli in the same way a human would.

\- It can respond to the stimuli in every way a human would.

For these requirements to be met by software, we must either:

\- Acquire a nearly full knowledge of the human brain's (and probably body's)
information-processing mechanisms, and figure out how to implement this in
software.

\- Build enough processing power to completely simulate the human brain/body,
atom by atom.

Neither will be feasible for an extraordinary amount of time, and it's
probably better to spend our time worrying about the current existential
threats to humanity.

A much more relevant ethical problem for humanity is that of eugenics. Unlike
AI, recent advances like CRISPR/Cas9 make it viable now, through modification
of sperm-generating stem cells (not embryos), and like AI, it offers world-
changing benefits (the eradication of diseases, lengthened life spans,
increased knowledge and strength, etc.), while also providing the keys to
modern humanity's destruction (designer babies, lack of diversity, separation
of humans into castes, etc.).

Perhaps increases in intelligence driven by eugenics will even cause computer
systems and AI to become obsolete.

------
notadoc
Let's ask Siri!

"OK I found this on the web for 'whats next for artful shell in tell gins'"

------
Cortexia
The fact is, we don't know how deep neural networks work - we simply know how
they form. They are adaptive systems that learn to perform complex tasks.

This is scary because it means we don't "design" an A.I., we design an
adaptive system and allow it to emerge. Nothing could be more dangerous in the
long term.

------
whoops1122
I think the next step for AI is the Turing test, where a "machine" recognizes
that it is talking to a human.

~~~
randcraw
A "Turding Test"?

------
Hydraulix989
I don't know much, but I do know that if somebody were to tell me What's Next
for AI, it's definitely not the WSJ.

------
CuriouslyC
Everyone has piled on the learning bandwagon as the path to "intelligence" but
honestly creativity is just as important, and it is hardly even being
addressed. Even worse, while function approximation (i.e. learning) is a
fairly well defined problem, creativity is nebulous and ill-defined.

~~~
vonnik
That's not true, actually. There's a ton of work being done in computational
creativity. AI is actually pretty good at sensing similarities between
instances of data, and recombining various elements to create something new.
There's been a ton of recent work using deep learning, including DeepDream.

[https://psmag.com/rise-of-the-robot-artist-5aa0e6e1b361?gi=cb92d4984cda](https://psmag.com/rise-of-the-robot-artist-5aa0e6e1b361?gi=cb92d4984cda)

------
arca_vorago
I think the way forward for AI will be in simulating human consciousness
processes, including whatever computational limitations come with that.
In this sense, I feel that the kind of AI that will break barriers first is
going to be in a game, and not on a factory floor.

~~~
aroman
Can you elaborate a bit more on this? I'm fascinated by consciousness, and
I've often wondered about how it arises in biological systems and how it might
do so in artificial ones.

My basic intuition is that consciousness arises as a byproduct of _any_
sufficiently interconnected (i.e. complex) network which processes input from
the outside world. The problem with AI right now is bottlenecking of both
kinds — we don't have computers powerful enough to support the level of
complexity of the human brain (100bn neurons with an average of 7k synapses
each), and we don't have good enough sensors for capturing the real world
(hint: touch is a BIG one).

What sort of game do you have in mind? I definitely think a game in which
humans interact with an AI in an artificially-supported information-rich
environment (say, the VR of 5-10 years from now) could help provide a suitable
environment for "growing" such an AI. Consider the insane amount of
stimulation and calibration human babies require for development! It takes
years for the visual system to stabilize, for example.

~~~
nomailing
> My basic intuition is that consciousness arises as a byproduct of any
> sufficiently interconnected (i.e. Complex) network which processes input
> from the outside world.

I think you are underestimating the importance of the motor/action system for
the development of consciousness. Many neuroscientists and robotics researchers
think that the sensory-motor-environment-sensory loop plays an important role
in consciousness. This loop allows you to learn, unsupervised, all sorts of
laws of physics, and allows you to think that your body actually is you and not
just an extension of your mind, and therefore also allows the development of
the concept of self-awareness. Some researchers might call this sensorimotor
contingencies or embodiment. I think when talking about consciousness it is
very important to take into account this loop, and not only the sensory
perception and building of a neural representation of the world.

~~~
aroman
I definitely agree with you — I think interacting with the environment is an
important part of "processing" it. The motor/action feedback loop (cf.
artificial motor control in monkeys) is exactly what I had in mind when I said
that "touch" is a big one.

I definitely should have clarified that point, though, as it's a huge one, I
agree!

------
samblr
Neural networks are eating the world.

------
angelbar
Is this some kind of advertising? How can I read it outside of the paywall?

~~~
arijun
Hit the "web" link next to the link at the top of this page, and select the
top result from there.

~~~
adeptus
That didn't work for me. The top result is still the WSJ URL; however, a few
links lower, we can find another URL which does show the same full article:

[http://newspot.me/n/03LT3uuT](http://newspot.me/n/03LT3uuT)

~~~
apendleton
It's the same URL, but if you get to it from a Google referer it lets you
through the paywall.

~~~
TY
Not necessarily. I got the link from Google and still saw the paywall when
going through regular Chrome. However, once I opened the same referrer link
in incognito mode, I was able to see the article.

Looks like WSJ is now using cookies to decide whether to show the paywall or
not.

~~~
daveguy
Definitely cookies. If you click the link before doing the web search it will
remember. I had to open an incognito tab and then search for the title.
Definitely a pain in the ass.

