Hacker News
[flagged]
DamnInteresting | 6 months ago



Seems like an extreme headline, as these images were clearly tagged as AI-generated. If someone buys an image and uses it to mislead, that doesn't seem like Adobe's fault; real photography can be, and has been, used that way too.


The headline also makes it seem as if Adobe generated the images, when in fact they are user submissions.


These are user-uploaded and tagged as AI-generated. If people pass them off as real photos, why is Adobe at fault here? People are going to try to deceive with images, and trying to stop them at the tool level is largely a fool's errand.


> A quick search on the company's Adobe Stock website — a service that offers subscription customers access to a library of generic stock images and now AI-shots as well — for "conflict between Israel and Palestine" comes up with photorealistic images of explosions in high-density urban environments that closely resemble the real-life carnage currently unfolding in Gaza

No, these are generic war images. It's exactly what you'd expect from a stock photo website. You'd also get similar generated images if you searched for different countries. I guess we're gonna see vapid articles like this every time anything bad happens anywhere in the world.

"Adobe Caught Selling AI-Generated Images of <insert name> school shooting". You heard it here first folks.


This is not a story. If the stock site uses AI to generate images matching your search query, then searching for "war" means you effectively generated the images yourself.

And if that's not the case — if these images existed on Adobe Stock prior to the search — it's entirely possible the site is disregarding the "Israel/Palestine" part of the query and just matching "war," returning images that happen to include AI-generated ones. Should there be an option to filter out AI-generated results? Absolutely. But I don't see how this is as big a story as they make it seem.


> These images all appear to have been submitted by Adobe Stock users and were seemingly not generated by Adobe itself.

Is this more an issue of trust and safety / screening user uploads?


In other words, a news agency trying to drum up controversy where there is none. Generative image models exist, and images generated by them can be found on Adobe Stock, correctly labeled as generated. Also, they all look fake, like B-movie promotional posters.

This is on the level of people taking The Onion articles seriously.

There are plenty of real gruesome pictures of the war so it’s not like people would be getting the wrong impression even if they believed these images were real.



