
Visual Introduction to Self Supervised Learning - amitness
https://amitness.com/2020/02/illustrated-self-supervised-learning/
======
alleycat5000
Some really interesting work lately on "contrastive" learning, where accuracy
is getting close to parity with supervised learning, e.g.
[https://arxiv.org/abs/2002.05709](https://arxiv.org/abs/2002.05709)
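For a sense of the mechanics, here is a minimal PyTorch sketch of the NT-Xent
loss that the SimCLR paper uses; the temperature, shapes, and names are
illustrative, not the paper's reference code:

    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, temperature=0.5):
        # z1[i] and z2[i] are embeddings of two augmentations of image i;
        # matching rows are positives, every other row in the batch is a negative.
        n = z1.size(0)
        z = torch.cat([F.normalize(z1, dim=1), F.normalize(z2, dim=1)], dim=0)
        sim = z @ z.t() / temperature                    # (2N, 2N) cosine similarities
        # An example must never count as its own positive.
        sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
        # Row i's positive sits at row i + n (and vice versa).
        targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
        return F.cross_entropy(sim, targets)

    # Toy usage: a batch of 8 images embedded under two augmentations.
    loss = nt_xent_loss(torch.randn(8, 128), torch.randn(8, 128))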

~~~
jjoonathan
For those of us out of the loop, could you summarize the idea of contrastive
learning as a whole?

~~~
yazr
A fully illustrated article [1]

And Lilian Weng's blog post on self-supervision [2]

In short, CPC translates a generative modeling problem into a classification
problem: it uses a cross-entropy loss to measure how well the model can
classify the “future” representation amongst a set of unrelated “negative”
samples [2] (sketched below).

[1] https://ankeshanand.com/blog/2020/01/26/contrative-self-supervised-learning.html

[2] https://lilianweng.github.io/lil-log/2019/11/10/self-supervised-learning.html#contrastive-predictive-coding

~~~
jjoonathan
Thanks!

------
amitness
Followup post on SimCLR: https://amitness.com/2020/03/illustrated-simclr/

------
trash3
So instead of relying on image annotations, self-supervised learning
manipulates the images themselves to create a training task. Then what? Is
this network then reused for the original task, which would have required
human annotations, or is it only useful for these made-up tasks?

~~~
Voloskaya
You then add a few additional layers on top and train those new layers in the
classic supervised way. But because a lot has already been learned, you need
far fewer labels.
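
Concretely, something like the following; the encoder architecture and sizes
are placeholders for whatever network was pretrained:

    import torch
    import torch.nn as nn

    encoder = nn.Sequential(              # stand-in for the pretrained network
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )
    for p in encoder.parameters():        # freeze the pretrained weights
        p.requires_grad = False

    head = nn.Linear(32, 10)              # the "few additional layers"
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

    # One supervised step on a (small) labeled batch.
    images, labels = torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))
    loss = nn.functional.cross_entropy(head(encoder(images)), labels)
    loss.backward()
    optimizer.step()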

