Hacker News
Deep Learning State of the Art (2019) [video] (youtube.com)
50 points by ArtWomb on Jan 18, 2019 | hide | past | favorite | 5 comments



Nice work. I can think of only two things that are missing:

* Normalizing flows - e.g., https://arxiv.org/abs/1605.08803 , https://arxiv.org/abs/1807.03039 , among many others

* ODEnets and continuous normalizing flows - https://arxiv.org/abs/1806.07366
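For anyone unfamiliar with the first bullet: the core building block in RealNVP (the first paper linked above) is an affine coupling layer, which is invertible by construction and has a cheap log-determinant. Here's a minimal sketch in plain Python; the scale function `s` and shift function `t` are hand-picked stand-ins for what would be learned neural networks, so treat the specific functions as assumptions, not the paper's architecture.

```python
import math

# Hypothetical toy scale/shift functions. In RealNVP these are learned
# neural networks conditioned on x1; fixed functions are used here just
# to show the coupling mechanics.
def s(x1):
    return [0.5 * v for v in x1]   # per-dimension log-scale

def t(x1):
    return [v + 1.0 for v in x1]   # per-dimension shift

def coupling_forward(x):
    # Split input in half: first half passes through unchanged,
    # second half gets an affine transform conditioned on the first.
    d = len(x) // 2
    x1, x2 = x[:d], x[d:]
    y2 = [b * math.exp(si) + ti for b, si, ti in zip(x2, s(x1), t(x1))]
    # Jacobian is triangular, so log|det| is just the sum of log-scales.
    log_det = sum(s(x1))
    return x1 + y2, log_det

def coupling_inverse(y):
    # Exact inverse: x1 is available unchanged, so s(x1) and t(x1)
    # can be recomputed and undone.
    d = len(y) // 2
    y1, y2 = y[:d], y[d:]
    x2 = [(b - ti) * math.exp(-si) for b, si, ti in zip(y2, s(y1), t(y1))]
    return y1 + x2
```

Stacking many of these (alternating which half is transformed) gives an expressive, exactly invertible map with a tractable likelihood, which is the whole point of the flow framing.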


Yeah, strange that ODEnets were left off, and I'm glad you mentioned them. They have the potential to be a transformative approach to more efficient training and much better performance on time-series problems.
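To make the idea concrete: an ODE-net treats the hidden state as evolving continuously, dh/dt = f(h, t), where f is a learned network, and the "forward pass" is a call to an ODE solver. A minimal sketch with fixed-step Euler integration, where the hand-picked linear-decay `f` is an assumption standing in for the learned dynamics (the paper, arXiv:1806.07366, uses adaptive solvers plus the adjoint method for memory-efficient backprop):

```python
import math

def f(h, t):
    # Stand-in for a learned dynamics network: simple linear decay.
    return [-hi for hi in h]

def odenet_forward(h0, t0=0.0, t1=1.0, steps=100):
    # Integrate dh/dt = f(h, t) from t0 to t1 with fixed-step Euler.
    h, t = list(h0), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = [hi + dt * dhi for hi, dhi in zip(h, f(h, t))]
        t += dt
    return h

# With f(h) = -h and h(0) = 1, the exact solution is h(1) = e^{-1},
# so the Euler result should land close to math.exp(-1).
```

The time-series appeal is exactly this: irregularly sampled observations just become different integration endpoints, rather than something you have to shoehorn into a fixed-step RNN.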


The other main sections of the talk are:

- BERT and natural language processing
- Tesla Autopilot Hardware v2+: NN at scale
- AdaNet: AutoML with ensembles
- AutoAugmentation
- Training deep networks with synthetic data
- Segmentation annotation with Polygon-RNN++
- DAWNBench: training fast and cheap
- BigGAN: state of the art in image synthesis
- Video-to-video synthesis
- Semantic segmentation
- AlphaZero and OpenAI Five
- Deep learning frameworks


Was good to hear Lex Fridman's take on where we're at.

Honest question: how has people's experience with OpenAI Five been so far? I haven't had time to check it out in detail, so I'm paying close attention to what others are saying.


TL;DW: BERT and BigGAN



