To my understanding, the idea is to stop training as soon as the discriminator has been 'fooled', i.e. its accuracy at telling fake and real images apart is no better than random guessing. So, in a sense, you keep making better fake images, but not necessarily better discriminators (unless you botch the training or the losses, obviously).
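
Roughly, the stopping criterion looks like this in code (a minimal PyTorch sketch on a 1-D toy distribution, not anyone's actual training setup; the architectures, noise dimension, and the 0.02 tolerance are just illustrative):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # "Real" data: samples from N(4, 1.25), a common 1-D toy target.
    def real_batch(n):
        return 4.0 + 1.25 * torch.randn(n, 1)

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    def d_accuracy(n=512):
        # Fraction of real and fake samples the discriminator labels correctly.
        with torch.no_grad():
            real_ok = D(real_batch(n)) > 0
            fake_ok = D(G(torch.randn(n, 8))) <= 0
            return (real_ok.float().mean() + fake_ok.float().mean()).item() / 2

    for step in range(20000):
        # Discriminator step: push D(real) toward 1, D(fake) toward 0.
        opt_d.zero_grad()
        real = real_batch(64)
        fake = G(torch.randn(64, 8)).detach()
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
        loss_d.backward()
        opt_d.step()

        # Generator step: the generator wants D to call its samples "real".
        opt_g.zero_grad()
        fake = G(torch.randn(64, 8))
        loss_g = bce(D(fake), torch.ones(64, 1))
        loss_g.backward()
        opt_g.step()

        # The criterion from the comment: the discriminator counts as
        # "fooled" once its accuracy is indistinguishable from coin-flipping.
        if step % 200 == 0:
            acc = d_accuracy()
            print(f"step {step}: D accuracy = {acc:.2f}")
            if abs(acc - 0.5) < 0.02:
                print("Discriminator at chance level; stopping.")
                break

Note that accuracy is measured on fresh batches the discriminator hasn't trained on in that step, so "at chance" means it genuinely can't separate the two distributions rather than that it merely memorized one batch.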

