Hacker News new | past | comments | ask | show | jobs | submit login

Not sure about ads, but things are definitely getting worse in terms of health, first at FB and now at Instagram. It seems sex workers (OnlyFans models) have figured out how to skirt the FB community guidelines. There's no outright porn, but when cooking videos and reels about cute animals suddenly morph into cooking videos with someone wearing suggestive clothes, and animal videos centered around an attractive person doing suggestive things, things go downhill real fast.

Recommendations take precedence over your network, and then nefarious actors take over the recommendations. That's how these products die. The recommendation engine is unfortunately unable to distinguish regular content from suggestive content, and no amount of reporting/blocking seems to change that. Worse, almost all the reports come back reviewed with a determination that no community rules were broken.



No, there is outright porn. They do this "trick" where they flash a nude photo for like 100ms on a 10s video and caption it with something like "pause it for the good part".

The sad thing is that I've reported these posts and they always say it doesn't violate their terms.


>The sad thing is that I've reported these posts and they always say it doesn't violate their terms.

I'll just quote an experiment conducted by a colleague, who tracked the outright malicious (porn, malware, or fraud) ads they reported from a regular user account: "from January to November 2024, we tested (...) 122 such malicious ads (...) in 106 cases (86.8%), the reports were closed with the status “We did not remove the ad”, in 10 cases the ad was removed, and in 6 cases, we did not receive any response". That's not very encouraging.


> The sad thing is that I've reported these posts and they always say it doesn't violate their terms.

Nothing I reported on FB was ever removed, even obvious spam (e.g. comments in a different language than the rest of the thread, posted as a reply to every top-level comment). I think this message is most likely generated automatically. Maybe, if a hundred people report the same thing, someone will review it, or it will be deleted automatically.


> The recommendation engine is unfortunately not able to distinguish between regular content and suggestive content, and no amount of reporting/blocking seems to change that.

Maybe currently... but you could definitely crowdsource a thumbs-up/thumbs-down mechanism that lets users label whether something is "suggestive" content.
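A minimal sketch of what that could look like (all names and thresholds here are hypothetical, not anything Meta actually runs): posts accumulate user votes, and a post only gets flagged once a minimum quorum of voters agrees, which also blunts the obvious gaming attack of one or two hostile reports.

```python
from dataclasses import dataclass


@dataclass
class CrowdLabel:
    """Aggregates thumbs-up ("suggestive") / thumbs-down ("fine") votes on a post."""
    suggestive_votes: int = 0
    fine_votes: int = 0

    def vote(self, suggestive: bool) -> None:
        """Record one user's vote."""
        if suggestive:
            self.suggestive_votes += 1
        else:
            self.fine_votes += 1

    def flagged(self, min_votes: int = 20, threshold: float = 0.7) -> bool:
        """Flag only once enough users have voted AND a clear majority agrees.

        The quorum (min_votes) prevents a handful of hostile votes from
        hiding a post; the threshold demands real consensus.
        """
        total = self.suggestive_votes + self.fine_votes
        if total < min_votes:
            return False
        return self.suggestive_votes / total >= threshold
```

With the defaults above, 15 "suggestive" votes out of 20 (75%) would flag a post, while 5 votes total would not, no matter how one-sided.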


Bad for Parents: It won't catch people failing to be sensitive enough or favoring their own network with less stringent oversight.

Bad for Content Creators: It won't be consistent, so largely it will be harsh to established players in predictable and game-able ways, while new market entrants will experience either statistical randomness (influenced by time and geographic demographics) or a bias against them in favor of established content.

Still, SOME of that would help a little, if only as a second sort of data channel to compare to other effects.

AI moderation will continue to be game-able garbage until true AI, at which point, please just plug me into the matrix and give me the pill.


