I see a trend with 404media of writing sensationalized pieces. The last one I remember was also focused on nonconsensual AI images, for a startup that had just gotten funding. Not a fan so far.
I think this misses the mark on what should be the discussion.
First, this focuses too heavily on Twitter. We understand a segment of the population loves to rail on Twitter, but this is a problem for the whole online ad industry.
Second, there is a broader discussion of the legalities and morality of these technologies. These types of image gen systems do not fit my moral compass but I am not sure if they should be illegal or not. It’s a discussion worth having.
Third, the whole ad industry has this problem of moral alignment. Google still shows the get-rich-quick scammers. Facebook is similar. It is up to companies to pick their morality on what types of ads they want. So far everyone is still trying to capture the whole market.
Maybe it's sensationalized but it seems fair to call out a specific example of profiting off of gross advertising without having to call out ALL examples of gross advertising. They shouldn't get a pass because other advertisers are also bad.
I am not entirely sure. Not about the AI images, but about the way the piece was sensationalized. Reading the article, you'd think that was all the startup did. Going to the website, I could not even find the models, though I am sure they existed.
Like I said before, I am not defending the morality, but the way in which 404 writes articles. It's like reading a BuzzFeed/celebrity-outrage piece.
Edit: I think it’s because after reading the original article I mentioned, I have a bias on how 404 writes pieces. That original article made me scared to even click the link to the company. I read a TechCrunch piece on the company and it painted a much better picture of the company and the challenges with being a marketplace/source for image models.
Definitely, I just don't remember Vice being so terrible. I saw 404's intro post here on HN, and every subsequent article of theirs that I saw on HN read like a BuzzFeed top-10 list.
I rarely ever go on Facebook but found myself scrolling a friend's timeline a few days ago. One of the ads I saw, mixed among her posts as I scrolled, was for cotton-candy-flavored nitrous oxide delivered to your door. This isn't just an X/Twitter problem.
yes, a family member sent me a screenshot of a Facebook ad a few days ago for "magic mushroom gummies" that contained delta-8 THC and amanita muscaria lol O.o
I'm not sure amanita muscaria is actually psychoactive when taken orally (although delta-8 THC certainly is) but it was still wild to see that advertised so.. brazenly
Back in the day, I ordered some A. muscaria mushrooms online and they weren't magical at all. I drooled a lot, took a strange, mostly conscious nap, and my vision was a little blurry.
Just eat cubensis or another psilocin-containing strain, as muscaria contains none of it.
Facebook isn't a flattering comparison for any platform. It's basically a zombie platform at this point, with people only using it, minimally, out of inertia.
It’s not surprising Facebook also has bottom of the barrel ads. Any decent advertiser (unless they’re advertising to old people) would probably choose Instagram for their Meta ad buys instead.
Given the person-millennia that Meta has presumably put into their ad targeting, maybe you should give the nitrous a try; maybe they know something you don't. Or maybe they're just whores for anyone who would give them a buck, same as they ever were.
I’ve had ads for drugs, cloned credit cards and devices for stealing cars with keyless entry all in the last month on a major social network.
Reported, but they don’t seem to care.
I don’t know if your experience was on a Meta property, but it is interesting to think about the amount of blowback Twitter/X has received from advertisers who supposedly don’t want their ads shown next to unpalatable user generated content. But unpalatable ads next to a users posts? Sorry, no discussion about it and no recourse. Just another effect of being the product when using a free service I guess.
Can you please stop posting flamewar comments? You've been doing it a lot, unfortunately, plus it looks like you've been using HN primarily for ideological battle, which is a line at which we ban accounts (https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...). We already asked you about this twice.
Edit: I took a closer look and your account has so clearly been using HN primarily for ideological battle that I've banned it. Please don't create accounts to break HN's rules with.
I really wish we could be more honest with ourselves about the tradeoffs inherent in the level of technological sophistication we achieved and continue to push the boundaries of.
There just isn't a way to have privacy and human dignity in a world where everyone and their brother is walking around with an internet-connected recording device in their pocket or on their wrist.
I know, I know, I can hear it now: "We just need to regulate it properly; it's not tech that's good or bad, but how we use it; tech is just a tool!"
While that's a nice intellectual platitude, it's just not workable. The level of regulation that will ultimately be required to manage things like recording people in public, AI, mass data collection, etc. is going to require a level of government intrusion at a scale never before seen. And then there's still the matter of the government actually being able to regulate the tech properly.
I just don't think our current level of tech, particularly with the emergence of AI, allows for having our cake and eating it too. We can have our fancy toys, but the social cost is going to be very, very high.
This is a natural and expected outcome of a free speech zone. When certain types of speech are disallowed elsewhere, the places that speech is allowed will be dominated by it. "PPC" is an acronym most commonly used for "pay per click", but also has a tongue-in-cheek meaning of "porn, pills, casinos".
I regularly see ads on Instagram for home delivery of steroids, LSD, MDMA etc. All of these companies have cut back on any human review of ads or content posted to their platforms. They have decided that there is no risk of government handing down any meaningful punishment for this.
I ad block the web in general but can't really do that on Instagram.
I get adverts encouraging me to start my own OnlyFans equivalent, the tagline being that it'll pay better than a healthy relationship with a real person you match on a dating app.
I'm curious, not to give a personal belief one way or another, but in this situation are you equating someone generating an image and looking at them privately - keeping them private - as a form of harassment?
That to me sounds in line with the "words are violence" ideology, an attempt to rewrite the meaning of words?
So there's no room for accountability and responsibility then as part of your take?
Does the issue then start with people posting photos of themselves online in the first place? Because if we follow your logic, anything posted will be seen, shared, and used however human nature determines.
And what if two or three different people's photos are used to generate images, where it's hard to tell which photos are generated and which are the real samples? Is that acceptable?
Shouldn't we be looking at the consequences, though? Isn't the issue how people start treating the person if they think or believe the person really took nude photos, like an employer firing them for it, even if that's not the listed reason? In situations like high school, or younger, where generated nudes go "viral" among young, immature boys, peer and social pressure may be most obvious, but isn't that an education problem, an opportunity to properly moderate and parent the situation, with learning and growing opportunities for everyone?
It is time for us to mature past primitive ideas like “anything you post on the internet can be used by anyone for anything so be careful what you post!”
People should be free to post knowing that anyone who abuses their content will be suitably punished.
So long as your solution is neither an undue, out-of-proportion punishment nor a policy that hands authoritarian powers to tyrants, then cool, I'm all for society evolving and maturing.
I agree accountability and justice are lacking today, which seems to be because industrial complexes and bad actors have been winning and directing society for the past several decades.
“Those who love peace must learn to organize as effectively as those who love war.”
- Martin Luther King, Jr.
The military-industrial complex (et al.) is far more profitable than the cost of upholding integrity and justice; the profit that comes from justice is externalized as peace, abundance, and thriving through technological advancement that is as efficient and rapid as possible.
If someone pulls a photo of me in a bikini off my social media, uses an AI undress app to turn it into a nude, then passes it around, you don't see how that is some kind of violation?
I've flagged the multiple top-level posts in this thread where you attempt to desperately deflect from the issue here. You may not consent to my flagging your posts, but I'm sure you're fine with that. Using HN is consensual and HN allows this; buyer beware.