
Learning to Classify Images Without Labels - seesawtron
https://arxiv.org/abs/2005.12320
======
1\. Representation learning: Pick a pre-training/pretext task for learning
inherent representations of images that go beyond low-level features. Here
they apply augmentations to each image and minimize the distance between the
embeddings of an image and its augmented version. This forces the network to
learn high-level features that are invariant to such transformations.
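A minimal sketch of this consistency objective (assuming a cosine-similarity loss between the two views' embeddings; the function names are my own, not from the paper):

```python
import numpy as np

def normalize(x):
    # L2-normalize embeddings along the feature axis.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def consistency_loss(z_img, z_aug):
    # Negative mean cosine similarity between an image's embedding and
    # the embedding of its augmented view; minimizing this pulls the
    # two views together in representation space.
    z1, z2 = normalize(z_img), normalize(z_aug)
    return -float(np.mean(np.sum(z1 * z2, axis=-1)))
```

Identical embeddings give the minimum value of -1; unrelated embeddings score near 0, so gradient descent drives the two views toward agreement.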

2\. Self-clustering: For each sample, its nearest neighbors in the learned
embedding space are mined, and the network is pushed to assign a sample and
its neighbors to the same cluster via the SCAN loss (which also contains an
entropy term that spreads samples across clusters to avoid collapse).
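The SCAN objective above can be sketched as a consistency term over the softmaxed cluster probabilities of a sample and its mined neighbor, minus a weighted entropy term over the mean assignment (the `entropy_weight` value and function names here are illustrative assumptions, not the paper's exact hyperparameters):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def scan_loss(logits, neighbor_logits, entropy_weight=2.0):
    # Consistency term: log dot product between the cluster
    # probabilities of each sample and those of its mined neighbor;
    # maximal when both are confidently assigned to the same cluster.
    p = softmax(logits)
    q = softmax(neighbor_logits)
    consistency = -np.mean(np.log(np.sum(p * q, axis=-1) + 1e-12))
    # Entropy over the mean assignment: subtracting it (scaled)
    # rewards spreading samples across clusters, preventing the
    # degenerate solution where everything lands in one cluster.
    mean_p = p.mean(axis=0)
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12))
    return float(consistency - entropy_weight * entropy)
```

Neighbors that agree on a cluster give a lower loss than neighbors that disagree, which is exactly the pressure that turns mined neighborhoods into clusters.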

3\. Fine-tuning: The network is fine-tuned through self-labeling, treating its
most confident predictions as pseudo-labels; finally, the resulting clusters
are assigned labels specific to your data.
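A hedged sketch of the confident-sample selection used in this self-labeling step (the threshold value and names are assumptions for illustration, not the paper's exact settings):

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.99):
    # Keep only samples whose top cluster probability exceeds the
    # threshold; their argmax cluster then serves as a pseudo-label
    # for fine-tuning the network with a standard cross-entropy loss.
    confidence = probs.max(axis=-1)
    keep = np.flatnonzero(confidence > threshold)
    return keep, probs.argmax(axis=-1)[keep]
```

Only the confidently clustered samples feed the fine-tuning loss, so early mistakes on ambiguous images do not get reinforced.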

