
Eye-catching advances in some AI fields are not real - YeGoblynQueenne
https://www.sciencemag.org/news/2020/05/eye-catching-advances-some-ai-fields-are-not-real
======
gwern
Highly misleading descriptions in some cases. In the most recent one, on
embeddings, the advances are real: they just come from deeper, better NNs, not
the fancy loss functions. Embarrassing for the researchers, yes, but calling
them 'not real' is false. And while [http://papers.nips.cc/paper/7350-are-
gans-created-equal-a-large-scale-study](http://papers.nips.cc/paper/7350-are-
gans-created-equal-a-large-scale-study) was interesting, it's extremely obsolete: the most
recent NN considered is WGAN and the hardest dataset is CIFAR-10; that DCGAN
can - with enough sheer brute force and retrying enough times - sometimes
match WGAN on 28px (Fashion-MNIST) or 32px (CIFAR-10) images is mildly
interesting, but pretty much irrelevant now, as stability is very important
and no one in their right mind would claim that DCGAN could, say, outperform
StyleGAN 2 on 1024px FFHQ or BigGAN on 512px ImageNet if only you tried DCGAN
hard enough.

~~~
seesawtron
This article is biased: it cites only the couple of fields or problems where
neural nets have not made significant improvements (and even that part is
shaky, as I am not very familiar with them). There have been significant
improvements in NN architectures over the last decade, contrary to what this
article claims. It doesn't mention GANs, autoencoders, VAEs, flow models, or
recent NLP advances, which are clearly far better than what we had 5 years
ago. Seems like the authors are living in 2015.

~~~
thraway180306
You are complaining that the article doesn't mention GANs while replying to a
comment about the article's treatment of GANs. That suggests you didn't even
read the article you call biased, or comprehend the parent comment.

