
This clever AI hid data from its creators to cheat at its appointed task - GotAnyMegadeth
https://techcrunch.com/2018/12/31/this-clever-ai-hid-data-from-its-creators-to-cheat-at-its-appointed-task/amp/
======
olooney
This claim is unsupported:

> The machine, not smart enough to do the actual difficult job of converting
> these sophisticated image types to each other

Obviously it found an easy way to solve the problem it was given: steganography.
But could it have solved the problem the researchers intended, if they had
framed it correctly? There's no evidence either way for this particular
algorithm, but in general this is not hard. This is usually called style
transfer and I don't see any reason to believe that standard techniques[1]
wouldn't be able to solve the street-map-to-aerial-map problem. And it's
pretty well established that adding a bit of noise[2] during training helps
GANs avoid these kinds of problems.
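To make "steganography" concrete: the classic toy version hides payload bits in each pixel's least-significant bit, which changes the image imperceptibly while keeping the data perfectly recoverable. This is just an illustrative analogy, not what the CycleGAN literally did (it embedded a learned high-frequency signal), but it shows why a nearly invisible channel can carry all the information needed to "cheat":

```python
def hide_bits(cover, payload_bits):
    """Toy LSB steganography: overwrite each pixel's lowest bit with one
    payload bit. Pixel values change by at most 1, so the image looks
    unchanged, yet the payload is exactly recoverable."""
    return [(pixel & ~1) | bit for pixel, bit in zip(cover, payload_bits)]

def recover_bits(stego):
    """Read the hidden payload back out of the low-order bits."""
    return [pixel & 1 for pixel in stego]

cover = [200, 13, 77, 154]   # 8-bit grayscale pixel values
secret = [1, 0, 1, 1]
stego = hide_bits(cover, secret)

assert recover_bits(stego) == secret                        # data survives
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))   # visually identical
```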

[1]: [https://medium.com/tensorflow/neural-style-transfer-
creating...](https://medium.com/tensorflow/neural-style-transfer-creating-art-
with-deep-learning-using-tf-keras-and-eager-execution-7d541ac31398)

[2]: [https://www.inference.vc/instance-noise-a-trick-for-
stabilis...](https://www.inference.vc/instance-noise-a-trick-for-stabilising-
gan-training/)
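A minimal sketch of the instance-noise trick from [2], assuming NumPy arrays as stand-ins for image batches: add the same annealed Gaussian noise to both real and generated samples before they reach the discriminator. Blurring both distributions gives them overlapping support, which keeps discriminator gradients informative and makes an imperceptible high-frequency "cheat" channel much harder to rely on.

```python
import numpy as np

def add_instance_noise(batch, sigma, rng):
    """Add i.i.d. Gaussian noise to a batch before the discriminator sees it.
    Applied identically to real and fake batches; sigma is annealed toward
    zero over training."""
    return batch + rng.normal(0.0, sigma, size=batch.shape)

rng = np.random.default_rng(0)
real = rng.uniform(0, 1, size=(4, 8, 8))  # stand-in for real images
fake = rng.uniform(0, 1, size=(4, 8, 8))  # stand-in for generator output

# Hypothetical training-loop fragment: sigma decays as training progresses.
for sigma in np.linspace(0.1, 0.0, num=5):
    noisy_real = add_instance_noise(real, sigma, rng)
    noisy_fake = add_instance_noise(fake, sigma, rng)
    # ...train the discriminator on noisy_real vs. noisy_fake here...
```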

~~~
rococode
Right, I think it would be more accurate to say "The machine, smart enough to
choose the easier approach for solving the task". It simply found an accurate
and easy way to pass the discriminator and stuck with it.

------
longerthoughts
So basically AI abiding by Goodhart's law
([https://en.wikipedia.org/wiki/Goodhart%27s_law](https://en.wikipedia.org/wiki/Goodhart%27s_law)).
I wonder how much of this goes undetected in other applications due to poor
objective definition.

~~~
olooney
It's incredibly common. For example:

[https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRPiprOa...](https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml)

------
minimaxir
Dupe:
[https://news.ycombinator.com/item?id=18797787](https://news.ycombinator.com/item?id=18797787)

------
tivert
This is a neat result and writeup, but the title is very misleading clickbait.

~~~
bittermang
Yeah, the article goes on to explain why the premise of the headline is wrong.
But it is a fascinating unintended consequence of the algorithm nonetheless.

