But they have uses, like lossy compression of images or texture generation: tasks focused on the graphical side of things, where GANs outperform other machine learning methods with their crisper samples.
For an overview of this argument: https://arxiv.org/abs/1511.01844
The idea of adversarial training is important and relevant in ML, though! It makes it possible to set up losses that are hard to formulate otherwise.
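To make that concrete, here's a minimal sketch of what the standard GAN losses look like, using made-up discriminator outputs (the numbers and the non-saturating generator loss are just illustrative assumptions, not from any particular paper's setup):

```python
import numpy as np

def bce(p, target):
    # binary cross-entropy of probabilities p against a constant target label
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

# hypothetical discriminator outputs: probability of "real" on each batch
d_real = np.array([0.9, 0.8, 0.95])   # D(x) on real samples
d_fake = np.array([0.1, 0.2, 0.05])   # D(G(z)) on generated samples

# discriminator loss: push D(x) toward 1 and D(G(z)) toward 0
d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)

# generator loss (non-saturating form): push D(G(z)) toward 1
g_loss = bce(d_fake, 1.0)
```

The point is that "make samples the discriminator can't tell from real data" becomes an ordinary differentiable loss, even when you'd struggle to hand-write a pixel-level loss that captures "looks realistic".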
Second of all, GANs are most definitely not just for graphics. They've been applied to text generation, to generating adversarial examples, to data preprocessing, etc.
Third, I have no idea what you even mean by "test set" in the context of GANs. It is true that their performance is hard to measure, but that's irrespective of whatever you're talking about. It's hard to evaluate performance because we're usually judging the quality of the generated images, and we don't have any good ways of evaluating "perceptual loss", i.e. how real an image looks.
As for the OP, GANs have been a very hot topic. Not as hot as this blog post makes them look, perhaps (with nearly every paper being about them...), but I wouldn't really disagree with any of the papers posted. The only one I'm not familiar with is the "most useful" one, but the rest were all pretty great papers imo. As for Ian Goodfellow, he's a very smart guy who seems to do a pretty good job explaining things. I saw a couple of YouTube videos of him at a meetup covering his DL book, and he did a great job teaching.
Your third point is actually the point of those who disagree. It's the same reason why we have the principle of falsifiability in science.
Machine learning is typically split into supervised learning, unsupervised learning, and reinforcement learning, and GANs are usually considered part of unsupervised learning. I guess the part I don't understand is what you mean by "their concerns are valid"? What are their concerns about? Whether GANs are a promising path of research? And if GANs aren't part of machine learning what are they?
When you are trying to generate "realistic" samples of human concepts, the ultimate measure of evaluation is whether humans think that the output is realistic. So you have no choice but to ask humans to judge the quality of your results. That's a standard thing to do e.g. in text-to-speech generation, whether GANs are used or not.
Also, WaveNet2 makes no improvements in the actual quality of the model, only in run-time performance.
> It is worth noting the parallels to Generative Adversarial Networks (GANs), with the student
> playing the role of generator, and the teacher playing the role of discriminator. As opposed to GANs,
> however, the student is not attempting to fool the teacher in an adversarial manner; rather it cooperates
> by attempting to match the teacher's probabilities. Furthermore the teacher is held constant,
> rather than being trained in tandem with the student, and both models yield tractable normalised
> probability distributions.
WaveNet2's main resemblance to a GAN is that it uses another neural network for the loss function.
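Roughly, the distillation setup boils down to minimizing a KL divergence between the student's output distribution and a frozen teacher's. Here's a toy sketch of that loss over a discrete output (the 8-way categorical, the random logits, and the variable names are all illustrative assumptions, not the paper's actual parallel-sampling machinery):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    # numerically stable softmax over the last axis
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

# hypothetical output distributions over 8 discrete audio levels
teacher_logits = rng.normal(size=8)   # fixed, pretrained teacher ("loss network")
student_logits = rng.normal(size=8)   # trainable student

p_student = softmax(student_logits)
p_teacher = softmax(teacher_logits)

# KL(student || teacher): the student tries to MATCH the fixed teacher,
# rather than fool a jointly trained discriminator as in a GAN
kl = np.sum(p_student * (np.log(p_student) - np.log(p_teacher)))
```

So the teacher plays the role a discriminator plays in a GAN (a learned network defining the loss), but it's held constant and the objective is cooperative, not adversarial.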