This week, Sayak Paul, Yannic Kilcher, and I speak with Simon Kornblith from Google Brain (Ph.D. from MIT). Simon is trying to understand how neural nets do what they do. He was the second author on the seminal Google AI SimCLR paper. We also cover "Do Wide and Deep Networks Learn the Same Things?", "What's in a Loss Function for Image Classification?", and "Big Self-Supervised Models Are Strong Semi-Supervised Learners". Simon used to be a neuroscientist, and he also gives us the story of his unique journey into ML.
We cover a lot of ground on contrastive self-supervised representation learning, with a particular focus on augmentations. Simon has also done fascinating work analysing how representations evolve in NNs.
https://www.youtube.com/watch?v=1EqJyMy0LnE