
Deep Learning State of the Art (2019) [video] - ArtWomb
https://www.youtube.com/watch?v=53YvP6gdD7U
======
cs702
Nice work. I can think of only two things that are missing:

* Normalizing flows - e.g., [https://arxiv.org/abs/1605.08803](https://arxiv.org/abs/1605.08803), [https://arxiv.org/abs/1807.03039](https://arxiv.org/abs/1807.03039), among many others

* ODEnets and continuous normalizing flows - [https://arxiv.org/abs/1806.07366](https://arxiv.org/abs/1806.07366)
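For anyone unfamiliar with the first item: the core of a RealNVP-style normalizing flow (the first paper above) is an affine coupling layer that is exactly invertible and has a cheap log-determinant. A minimal sketch, using toy fixed linear maps in place of the learned scale/translation networks:

```python
import numpy as np

# Minimal sketch of a RealNVP-style affine coupling layer (arXiv:1605.08803).
# W_s and W_t stand in for the learned scale/translation networks.
rng = np.random.default_rng(0)
D = 4  # data dimensionality; the first half passes through unchanged

W_s = rng.normal(scale=0.1, size=(D // 2, D // 2))  # toy "scale" net
W_t = rng.normal(scale=0.1, size=(D // 2, D // 2))  # toy "translation" net

def forward(x):
    """y1 = x1; y2 = x2 * exp(s(x1)) + t(x1). Returns y and log|det J|."""
    x1, x2 = x[: D // 2], x[D // 2 :]
    s, t = W_s @ x1, W_t @ x1
    y2 = x2 * np.exp(s) + t
    return np.concatenate([x1, y2]), s.sum()  # triangular Jacobian: log-det = sum(s)

def inverse(y):
    """Exact inverse: x2 = (y2 - t(y1)) * exp(-s(y1))."""
    y1, y2 = y[: D // 2], y[D // 2 :]
    s, t = W_s @ y1, W_t @ y1
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

x = rng.normal(size=D)
y, logdet = forward(x)
print(np.allclose(inverse(y), x))  # the coupling layer inverts exactly
```

Stacking such layers (with the split alternating between halves) gives a flexible density model whose exact likelihood is tractable, which is what makes flows attractive versus GANs/VAEs.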

~~~
grej
Yeah, strange that ODEnets were left off; I'm glad you mentioned them. They
have the potential to be a transformative approach, enabling more efficient
training and much better performance on time-series problems.
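The core idea from the paper (arXiv:1806.07366) is to replace a stack of discrete layers with continuous dynamics dh/dt = f(h, t) and integrate them. A toy sketch with a fixed linear map for f and plain Euler steps (the paper uses learned dynamics, an adaptive solver, and the adjoint method for gradients):

```python
import numpy as np

# Sketch of the ODEnet idea: the hidden state evolves by dh/dt = f(h, t).
# A is a toy fixed matrix standing in for a small learned network.
rng = np.random.default_rng(1)
A = rng.normal(scale=0.1, size=(3, 3))

def f(h, t):
    return np.tanh(A @ h)  # stand-in for learned dynamics

def odenet_forward(h0, t0=0.0, t1=1.0, steps=100):
    """Integrate h from t0 to t1 with fixed-step Euler."""
    h, dt = h0, (t1 - t0) / steps
    for i in range(steps):
        h = h + dt * f(h, t0 + i * dt)  # each step looks like a ResNet block
    return h

h1 = odenet_forward(np.ones(3))
```

For time series, the appeal is that you can evaluate the state at arbitrary (irregular) timestamps just by changing the integration endpoints, rather than being tied to fixed layer depths.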

------
tomahunt
The main sections of the talk not mentioned so far are:

* BERT and natural language processing
* Tesla Autopilot Hardware v2+: NN at scale
* AdaNet: AutoML with ensembles
* AutoAugment
* Training deep networks with synthetic data
* Segmentation annotation with Polygon-RNN++
* DAWNBench: training fast and cheap
* BigGAN: state of the art in image synthesis
* Video-to-video synthesis
* Semantic segmentation
* AlphaZero and OpenAI Five
* Deep learning frameworks

------
sounds
It was good to hear Lex Fridman's take on where we're at.

Honest question: how have people's experiences with OpenAI Five been so far? I
haven't had time to check it out in detail, so I'm paying close attention to
what others are saying.

------
visarga
TL;DW: BERT and BigGAN

