
Andrew Ng: Why 'Deep Learning' Is a Mandate for Humans, Not Just Machines - prostoalex
http://www.wired.com/2015/05/andrew-ng-deep-learning-mandate-humans-not-just-machines/
======
nightski
Article was pretty disappointing. It's not that he didn't make valid
(though general and vague) points, but the question "why deep learning is a
mandate for humans" was not answered in a satisfying way at all.

~~~
trendroid
Before reading, I did ctrl+f for the keywords in the title and straight away
realized it would be misleading. I wasn't proven wrong, and hence wasn't
disappointed. For some reason, I was hoping to find some hack for learning
better.

~~~
0xdeadbeefbabe
The hack, though not stated, seems to be: visit Coursera and start learning,
or start Machine Learning :), which is a current course by Ng.

------
jjaredsimpson
My views align strongly with Andrew's stated views.

>The reason I say that I don’t worry about AI turning evil is the same reason
I don’t worry about overpopulation on Mars.

I think a technological singularity is a rational proposition and even
consider it possible within this century. But I have no fear of a
Terminator/Matrix/etc outcome. He seems to think the possibility is just too
far away to even consider it. But I agree for a different reason. I don't
worry because I don't think I (or any human) can accurately assess what a
superintelligence would do. And to assume absolute destruction of all humans
seems ridiculous.

> In universe two you have one organization, maybe Carnegie Mellon or Google,
> that invents a self-driving car and bam! You have self-driving cars. It
> wasn’t available Tuesday but it’s on sale on Wednesday.

>I’m in universe one.

I am strongly in universe one. I have no faith in the basement dwelling loner
making significant contributions to AI.

~~~
Jach
If you have no idea what the superintelligence will do, then you should set
your expectation to the outcome of a random one of the possible
superintelligent mind designs. Not all superintelligent mind designs are
totally unpredictable -- if I know it will prefer winning in a game of chess
to losing, I predict it will win every match against a human, even if I don't
know exactly what moves it will make. If I know it will prefer existing to not
existing, I predict it will seek the resources to continue existing. If the
superintelligence has the hardwired goal of producing paperclips, one of the
possible outcomes is tiling the solar system with paperclips. Another is that
humans succeed in shutting it down but not until after it's killed people.

Once you're done enumerating as many possibilities as you can for what a
superintelligence might have as its goals, simple or complicated or even
nonexistent, and what the various outcomes of a superintelligent agent with
those goals broadly look like, count how many of them are positive,
negative, or neutral for the future of humanity, especially when the things
the superintelligence does require resources humans also use. I did this
once; it convinced me that the likely case is something negative for
humanity, even if it's unlikely to be a Terminator/Matrix/Hollywood scenario
or really any specific enumeration. What further convinced me that the
likely outcome is still negative for humanity, even when people make an
honest but uninformed attempt, after having created an AGI, at making sure
it won't be net-negative, is understanding the complexity of human value and
how _almost right_ is still very wrong.
([http://wiki.lesswrong.com/wiki/Complexity_of_value](http://wiki.lesswrong.com/wiki/Complexity_of_value))

~~~
sampo
Kind of like a gorilla trying to predict what humans will do?

~~~
rndn
Exactly. However, a superintelligence might be vastly more superior to us than
we are to chimps: [http://intelligenceexplosion.com/wp-
content/uploads/2011/12/...](http://intelligenceexplosion.com/wp-
content/uploads/2011/12/scale_of_intelligence.png)

I actually don't quite agree with the spacing on this chart, because the
scientific method should set us so far apart from mice and chimps that they
would almost merge into a single point.

Anyway, there are a handful of reasons to assume that an artificial general
intelligence would likely turn out to be far superior to us: (1) hardware is a
much less noisy computation environment than the badly insulated neurons in
our wetware (though this might be a feature rather than a bug), (2) our
neurons operate at about 200 Hz, microchips at roughly 10 million times that,
(3) they would have extremely fast access to physical simulations and other
computational resources, knowledge bases, and other AIs, (4) they would never
get tired, with no body to maintain and no social obligations, and (5) they
could replicate within seconds and easily modify their own source code.
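As a quick sanity check on point (2): the "10 million times" figure follows from comparing neurons' ~200 Hz firing rate against an assumed ~2 GHz processor clock (the 2 GHz is an illustrative figure, not from the article):

```python
# Back-of-envelope ratio behind point (2): chip clock rate vs. neuron firing rate.
neuron_rate_hz = 200            # approximate peak firing rate of a neuron
chip_rate_hz = 2_000_000_000    # an assumed typical ~2 GHz processor clock

speedup = chip_rate_hz / neuron_rate_hz
print(speedup)  # 10000000.0, i.e. 10 million
```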

~~~
superdude264
These machines also use vastly more energy than a human brain.

~~~
rndn
Oh right, forgot about that one.

------
astrocyte
I took Andrew Ng's course and dropped it halfway through upon discovering that
it was nothing mind-blowing beyond the 'constraint optimization' courses I
took in grad school when I attended Carnegie Mellon. I remember enjoying
the introduction lectures, but realized it wasn't what I was looking for
once a 'cost function' was mentioned. 'Constraint optimization' was the term
before people slapped 'machine learning', 'deep learning', NN, CNN, and A.I.
on every gradient descent algorithm.
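For readers who haven't taken such a course, the pattern being described (define a cost function, follow its gradient downhill) looks roughly like this minimal, hypothetical sketch of fitting a line by gradient descent on a squared-error cost; it is an illustration of the general technique, not material from Ng's course:

```python
def gradient_descent(xs, ys, lr=0.05, steps=2000):
    """Fit y = w*x + b by gradient descent on the mean-squared-error cost
    J(w, b) = (1/n) * sum((w*x + b - y)^2)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of J with respect to w and b
        dw = (2.0 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        db = (2.0 / n) * sum(w * x + b - y for x, y in zip(xs, ys))
        # Step downhill, scaled by the learning rate
        w -= lr * dw
        b -= lr * db
    return w, b
```

On data generated by y = 2x + 1, the routine recovers w close to 2 and b close to 1; swapping in a different cost function and model is what distinguishes one "machine learning" method from another under this framing.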

I have no faith that Strong A.I./AGI will come from the above efforts, as they
are anything but general. The whole is greater than the sum of the parts, and
the current crop of weak A.I. algorithms are a small part of a bigger whole.

As for fear of AGI/Strong A.I.: listen, you can't stop something whose time
has come. When it comes, and it will, no million-dollar consortium of
businesses who have leveraged weak A.I. to line their pockets will be able to
stop it. We can postulate to the ends of the earth about what it will be and
how we must prepare for it, or we can focus on creating it.

Those who are most likely to create it aren't worrying to the ends of the
earth about its dangers. They aren't funding million-dollar initiatives to
make 'tests' for something that isn't even understood yet. They are out there
thinking deep and far, working through what it takes to create it, and likely
thinking different from the current crop of people centered on capitalizing
(increasing profit margins) on a very productive discovery: weak A.I. The
fear that arises from this group comes from knowing that AGI will trump weak
A.I. (their cash cow). Thus, the fear propaganda.

So, where do I feel AGI will likely crop up from? The basement dwelling loner
asking the deep and general questions about life, matter, energy, information
without a particular profit maximizing goal in mind. Their goal being to more
deeply understand the very nature/fabric of life, intelligence, and this
universe...

Otherwise, hey.. I could sit back and believe that wisdom just emerges when
you've dumped enough money in the laps of A.I experts.

Think different. You're not going to achieve anything new and groundbreaking
by thinking about and looking at the problem the same way everyone else does.
Sure, there will be many failures, but therein lies the risk/reward of going
beyond the herd.

~~~
1971genocide
Thanks for that well written insight.

I had the same feeling while studying computational neuroscience.

There is no way in hell the current approaches are going to lead to AGI
anytime soon! The models we have for even the simplest operations of the
brain are not well thought out. Even using the term "Artificial Neural Nets"
is an insult to the real machinery that is a neuron.

However, I do not think someone in some basement is going to figure out the
solution. I think the people who are working to solve the problem are the army
of nameless minimum-wage graduate and PhD students across the industrialized
world who are studying all the various topics involving hardware, software,
biology, etc., all of which are required to create a true AGI system.

We are nowhere near creating Strong A.I., but it's going to come incrementally.

~~~
astrocyte
Agreed. However, there is most probably going to be a group of well-educated
generalists who tie it all together. The problem with the new paradigms that
lie ahead is that they are begging us to tie a considerable number of things
together. Yet there are a limited number of institutions or companies that
invest in, or even see the value in, a generalist. How are you to create
something generalized when your core foundation is specialization? Given that
the whole is greater than the sum of the parts, you had better believe you
are going to have a hard time incrementing your way to it. I use the term
'basement dweller' loosely to describe someone thinking outside of the box.
The original commenter used the term loosely to demarcate those in the
spotlight from those who aren't, so I decided to play off of it. You have
PhD holders outside of the box. The key difference is their mindset. They
think different and beyond the box.

There are many contributors to science who go unmentioned. There are many
back-stories which, due to not conforming with dominant ideologies, go untold.

[http://nautil.us/issue/21/information/the-man-who-tried-
to-r...](http://nautil.us/issue/21/information/the-man-who-tried-to-redeem-
the-world-with-logic)

Best of luck to everyone in pursuit of AGI. The core of creation is elusive
for a reason. Many will miss it not due to their education but their mindset.

------
mangeletti
IMHO, Andrew's point about "hundreds of years from now...", pertaining to
destructive artificial intelligence, is flawed, because:

1. unless we run into a hidden block somewhere, we're definitely on our way
to creating machines in the next 30 years that can out-think humans in every
way

2. given point 1, we tend to place an anthropomorphic role on the
"destructive" nature of future "robots", forgetting that:

A) machine sentience is not required for machine-induced destruction

B) a few humans with very powerful machines can be quite destructive

------
danso
The OP doesn't really address the headline, but I'll take a shot at it:
learning about "deep learning", or any variation of machine/statistical
learning, is a mandate for humans because it is necessary for understanding
the fundamentals and limitations of the machines and systems that
increasingly govern our lives. Understanding is the minimum; the ideal is
that with this knowledge, we not only use machines appropriately in our
lives, but also vastly improve our ability to consume and disseminate
knowledge.

It's been a long time since humans posed a challenge to computers in chess,
and yet freestyle chess competitions aren't dominated by supercomputers, but
by (relatively novice) chess players assisted by computers. Why should we
settle for a false dichotomy of human versus machine when we can do so much
better with cooperation and augmentation?

------
ACTHEO128
I wish he had gone into more detail about his research and how he is pushing
AI development. He just said it's akin to building a rocket ship, but gave
no real indication of what he is really working on.

Is he trying to push the boundaries of neural networks, or trying to create a
new pattern recognition or machine learning algorithm like a Naive Bayes
classifier?

I just wish he had gone into more technical detail about his work.

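For context on the Naive Bayes classifier mentioned above, here is a toy sketch of the technique (a hypothetical illustration on made-up spam/ham data, not anything from Ng's work): it combines per-feature likelihoods under the "naive" assumption that features are independent given the class.

```python
from collections import Counter, defaultdict
import math

def train(samples):
    """samples: list of (features, label) pairs; features is a list of tokens."""
    class_counts = Counter()
    feature_counts = defaultdict(Counter)
    vocab = set()
    for features, label in samples:
        class_counts[label] += 1
        for f in features:
            feature_counts[label][f] += 1
            vocab.add(f)
    return class_counts, feature_counts, vocab

def predict(model, features):
    """Return the label maximizing log P(label) + sum of log P(feature | label)."""
    class_counts, feature_counts, vocab = model
    total = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label, count in class_counts.items():
        score = math.log(count / total)  # log prior
        # Laplace (add-one) smoothing so unseen features don't zero out the score
        denom = sum(feature_counts[label].values()) + len(vocab)
        for f in features:
            score += math.log((feature_counts[label][f] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train([
    (["buy", "now"], "spam"),
    (["cheap", "buy"], "spam"),
    (["meeting", "today"], "ham"),
    (["lunch", "today"], "ham"),
])
print(predict(model, ["buy", "cheap"]))  # spam
```

Despite its simplicity, this kind of classifier illustrates the gap the thread keeps circling: useful, narrow pattern recognition on one hand, and general intelligence on the other.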
