
Deep Learning Research and the Future of AI [video] - stablemap
https://www.microsoft.com/en-us/research/video/research-focus-deep-learning-research-future-ai/
======
melling
Here’s the book that’s mentioned:

[http://www.deeplearningbook.org/](http://www.deeplearningbook.org/)

Seems to have good reviews on Amazon:

[https://www.amazon.com/Deep-Learning-Adaptive-Computation-Ma...](https://www.amazon.com/Deep-Learning-Adaptive-Computation-Machine/dp/0262035618/)

~~~
colmvp
It's a pretty great general book that I go back to every now and then when I'm
reviewing something I'm learning (e.g. GANs), but it isn't something I would
recommend newbies just pick up and try to understand without at least spending
some time on math/stats first, since it's more technical than non-technical
readers might be prepared for.

There's better introductory material out there for Deep Learning, e.g. fast.ai,
Ng's Coursera course, Thrun's introductory Udacity course, and tutorials on
Medium. They explain the math/stats behind things, but you can get away with
learning the process first and playing around with code.

Furthermore, for some newbies I think it's a little easier to understand the
material when they can play with it in notebooks (as with books like Hands-On
Machine Learning with Scikit-Learn and TensorFlow) than by trying to memorize
statements on a page.

But certainly for those here who are more technically oriented, it's an
excellent book to pore over. I just like to caution friends interested in Deep
Learning that it's not the be-all and end-all way of getting into the field,
and that there are more gradual learning curves elsewhere for getting your
feet wet before committing to going deeper.

------
otakucode
He says general AI is 'certainly an end goal', but I certainly hope the 'end'
part of that is not true. We should expect that the first general AI will
share the tremendously significant shortcomings and flaws that human brains
give rise to: conflating correlation with causation, rejecting truths on
pragmatic grounds without determining their actual truth or falsehood, etc.
Every entry on a list of 'common logical fallacies' is a flaw that the
associative nature of our brains makes feel true despite being (at best)
non-determinative. If we intend to stop there, we really might as well not
work toward it. I'd expect the end goal to be producing something better.

------
saycheese
Same video on YouTube:
[https://www.youtube.com/watch?v=5BrNt38OraE](https://www.youtube.com/watch?v=5BrNt38OraE)

(YouTube seems to stream the video better)

------
WhitneyLand
Can anyone bottom-line it for us: is this any good? Is there another
state-of-the-union overview at this moment of late 2017 that you would prefer?

~~~
transverse
The future, IMO, is recursive meta-learning. Thus far the max recursion depth
has, AFAIK, been 1, and that's just a first step.

Also, SELUs were supposed to be game-changing. Are they, and if not, why not?
Why haven't they been used more? What's still missing?

~~~
SpinyNorman
Scaled ELU (SELU) is meant to create self-normalizing nets, but that's nothing
you can't achieve a bit less efficiently with explicit normalization (batch
norm, etc.), so it's hardly a game changer.

