
Playing Around with Noise as Targets - CaHoop
https://www.samcoope.com/posts/playing_around_with_noise_as_targets
======
avaku
Thanks for posting this. This looks like something genuinely new. Going to
look into it.

~~~
DoctorOetker
I haven't read the referenced 2017 paper yet, but mapping the training data to
noise (Gaussian and/or other) is exactly what the RevNet paper does, with the
added advantage of deterministic reversibility: the trained RevNet is also
generative, without having to do gradient descent for each generated image.
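For anyone wanting the gist of the "noise as targets" idea itself: fix a set of random unit-vector targets up front, then periodically re-match samples to targets one-to-one so each network output chases the noise vector it is currently closest to. A minimal sketch of that matching step (using plain NumPy features standing in for network outputs, and SciPy's Hungarian solver for the one-to-one assignment; the array names and sizes here are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, d = 8, 4

# Fixed random noise targets, normalized onto the unit sphere.
targets = rng.standard_normal((n, d))
targets /= np.linalg.norm(targets, axis=1, keepdims=True)

# Stand-in for the network's current (unit-normalized) feature outputs.
features = rng.standard_normal((n, d))
features /= np.linalg.norm(features, axis=1, keepdims=True)

# One-to-one reassignment: match each sample to the target it best aligns
# with, maximizing total dot product (equivalently, minimizing total
# squared distance, since all vectors are unit-norm).
cost = -features @ targets.T
rows, cols = linear_sum_assignment(cost)
assigned = targets[cols]

# The loss the network would then be trained to minimize for this batch.
loss = np.mean(np.sum((features - assigned) ** 2, axis=1))
```

The network weights are then updated against `assigned` as fixed regression targets, and the matching is redone every so often; only the pairing adapts, never the noise itself.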

~~~
dtjohnnyb
The intro to the paper has a nice comparison to other similar methods
(generative and non-generative), and the blog post linked in this article by
inFERENCe [https://www.inference.vc/unsupervised-learning-by-
predicting...](https://www.inference.vc/unsupervised-learning-by-predicting-
noise-an-information-maximization-view-2) has a nice comparison at the end of
different unsupervised methods and where this method adds novelty (or
doesn't!)

~~~
DoctorOetker
>has a nice comparison at the end to different unsupervised methods

I don't see the comparisons at the end of the inFERENCe link?

------
a008t
Can someone please ELI5 what this does and why/where it is/can be useful?

~~~
tempodox
Don't trust any machine learning algorithm that you haven't faked yourself.
You can make random noise mean anything you want.

