
Your comment seems pretty accurate because, from my perspective, I've never seen comments of type #1, and yet, despite me explicitly saying otherwise, people like the GP commenter may be reading my comments as #1.


Even within this thread, https://news.ycombinator.com/item?id=41005386, https://news.ycombinator.com/item?id=41005633, https://news.ycombinator.com/item?id=41010124, and to a lesser extent https://news.ycombinator.com/item?id=41005240 seem like #1 to my eyes, with the sentiment of "It is detectable, therefore it will be easily corrected by near-future AI." Do you read these differently?


Of these four:

The first ('''So this "one weird trick" will disappear without any special measures''' etc.) does not seem so to me; I do not read it as a claim of perfection, merely as a projection of the trends already seen.

The second ('''If the computer can see it we have a discriminator than we can use in a GAN-like fashion to train the network not to make that mistake again.'''): here I agree with you; that overstates what GANs can do (a rough sketch of the idea, and its limits, is below the fourth item). They're good, but they're not that good.

The third ('''Once you highlight any inconsistency in AI-generated content, IMHO, it will take a nothingth of a second to "fix" that.''') I'd lean towards agreeing with you on; that seems to understate the challenges involved.

The fourth ('''Well, nice find, but now all the fakes have to do is add a new layer of AI that knows how to fix the eyes.''') is technically correct, but contrary to the meme this is not the best kind of correct, and it downplays the challenge in the same way as the previous one (though it is unclear to me whether that is because nuance is hard to write and to read, or whether it is the genuine position). Also, once you're primed to look for people who underestimate the difficulties, I can easily see why you would read it as such an example, since it's close enough to be ambiguous.
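
For what it's worth, here is roughly what the "discriminator in a GAN-like fashion" idea from the second comment amounts to: take an existing flaw detector, freeze it, and fine-tune the generator against its score. This is only a minimal sketch assuming PyTorch; Generator and FlawDetector are hypothetical stand-ins, not anyone's actual models.

    import torch
    import torch.nn as nn

    class Generator(nn.Module):           # stand-in for an image generator
        def __init__(self, z_dim=64, out_dim=784):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, out_dim), nn.Tanh())
        def forward(self, z):
            return self.net(z)

    class FlawDetector(nn.Module):        # stand-in for "the computer can see it"
        def __init__(self, in_dim=784):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 1))  # logit: flaw present?
        def forward(self, x):
            return self.net(x)

    gen = Generator()
    det = FlawDetector()
    det.eval()                            # frozen: we only fool it, we never retrain it
    for p in det.parameters():
        p.requires_grad_(False)

    opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    for step in range(1000):
        z = torch.randn(32, 64)
        fake = gen(z)
        # push the detector's "flawed" logit towards 0 for generated samples
        loss = bce(det(fake), torch.zeros(32, 1))
        opt.zero_grad()
        loss.backward()
        opt.step()

The catch, and why I'd still call the quoted claim overstated, is that this only suppresses whatever the frozen detector can actually score; it says nothing about flaws nobody has written a detector for, nor about the stability of a full adversarial setup where the discriminator is trained too.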



