Meta is the 'single largest market for paedophiles', says New Mexico AG (theguardian.com)
81 points by sandebert 3 months ago | 145 comments



Not to be insensitive to a sensitive topic, but isn't Meta the 'single largest market for <almost anything>'?


One has to assume all of this is a point of negotiation to get Meta to disable E2E encryption on FB Messenger and WhatsApp, on top of the public allegations made for years prior that providing encrypted channels directly supports the cause of pedophiles.

(It is also interesting to me that in ~2010, such discussions used to use terrorism, and now it’s CSAM/pedophilia.)


> (It is also interesting to me that in ~2010, such discussions used to use terrorism, and now it’s CSAM/pedophilia.)

Nah, the CSAM/pedophilia argument goes way back. It is just that the terrorism argument has largely disappeared, because the emotional impact of 9/11 in particular has faded (terrorism was much more rampant in parts of the previous century; if I had to guess the decade, I'd say the 70s or 80s).

The thing is, tools can be used for Good and Bad. A knife, a car, the Internet, social media.

The problem is that if we live in a society where your online identity is pseudonymous at best (remember the recent Rotterdam hospital shooting; the shooter was found on 4chan), and there is post-moderation instead of pre-moderation, you are going to have garbage content.

A lawsuit like this is used to prove negligence in proportion to the number of users vs. content. I.e., it will result in more active moderation by Meta. In that regard, it was hosting providers in the Netherlands who were terrible at removing CSAM, and almost all of them have improved.

Does that mean the content is removed from the Internet? No, impossible. But it sure as hell got more difficult for the perpetrators to distribute the content. Which is a decent compromise.


In 2010 threatening people with terrorism was already almost a decade out of date.

Eventually, if you keep threatening people with an outcome that they can easily detect, they'll notice that it's not happening. It's better to threaten them with something invisible.


The implication is that these are false allegations but that's not even necessary.

All this can be true and it is still a bad idea to unilaterally break encryption, just as it was a bad idea to allow unilateral use of wiretap.

If we stick the debate around the truthiness of the allegations, it only takes a few bad examples to lose that argument.


The implication isn't that the allegations are false. The implication is that the problem is imaginary. Facebook can easily be the largest site of an imaginary problem.

> If we stick the debate around the truthiness of the allegations, it only takes a few bad examples to lose that argument.

That is true, but the argument here is over why people stopped appealing to terrorism to support their policy preferences.


I'm sorry. I don't see the difference between a problem being imaginary and allegations of a problem being false. If you allege a problem exists, and it's imaginary, then your allegation is false.

So, I'm just saying we can oppose the policy preference regardless of why they stopped appealing to a problem, and regardless of that problem's "realness".


> I'm sorry. I don't see the difference between a problem being imaginary and allegations of a problem being false. If you allege a problem exists, and it's imaginary, then your allegation is false.

The allegation here, as seen in the headline, isn't that there's enough pedophile activity on Facebook to rise to the level of being a problem somewhere.

It's just that there's more on Facebook than there is in most other places.


And regardless of that being true or false, we should preserve E2E encryption.

That's all I'm saying. Was that somehow lost? I think we may be having what a former mentor called a "conflicting agreement" or "violent agreement" :D


Well, sure, I agree with your position.

But I think that the point I made is important, different from your point, and topical in the place where I posted it.


Good old "handshake after 10 comments". Classic HN. Cheers!


[flagged]


I stand by my statement that nine years constitutes "almost a decade". What did you think was dishonest?


The anti-terrorism rhetoric started almost a decade earlier; it wasn't out of date by that time.

It was still frequently pushed even into the 2010s.


Isn't this similar to the argument about controlling cryptography from the "Crypto Wars" of the 90s? [1]

[1] https://en.wikipedia.org/wiki/Crypto_Wars


If we generalize further, the Internet is the biggest distributor of CSAM.


> isn't Meta the 'single largest market for <almost anything>'

No, it isn't. Far from it.

Edit: Unless those angled brackets have a meaning that I am not aware of.


I suspect the GP was being facetious, but the point that Meta is massive is a good one imo. With billions of users your platform can become 'the largest market for x' for many niche use cases without that really saying much about the platform itself.


I try not to be facetious these days, especially on HN. What you said is basically what I was attempting to point out.

To jansan: The angle brackets are what I've used for a long time to represent a variable or as-yet-unknown value, i.e. <Fill in the Blank>.


Am I missing something, or are you? They should still answer and pay for being the biggest market for criminal activities.


Why Meta, and not the ISPs trafficking the child-exploitation bits, or Xiaomi and Apple for selling the silicon that enables it? Or the GNU project for providing the disk-encryption software that hides it?

The end goal is to morally and legally blackmail every company to act as an active spy for the panopticon.


Because Meta makes more money than all the ISPs together, and the ISPs offer much more useful value to their users. And no, the end goal is to not allow companies to profit from human misery, which it seems you aren't against.


And there's the moral blackmail. Tell me, how does Meta "profit from human misery"? Do they take a cut from sex-abuse sales, deliberately turn a blind eye to it, and it represents a significant fraction of their profits? Or do they just imperfectly enforce bans on it, and to them it's nothing but a liability for their ad sales?

> And no, the end goal is to not allow companies to profit from human misery

So says you. But the end result will be a panopticon. Good thing government and corporate oppression are solved issues, so there's no need for impractical things like privacy or freedom anymore.


Countries have laws. At least in my country, if a company allows criminals to advertise and scam using its resources, it answers for the crimes.

> Do they take a cut from sex-abuse sales, deliberately turn a blind eye to it

that's exactly it.


I have my doubts about that. I think the world has moved on from Facebook, leaving only Instagram as influential under the Meta umbrella.

Amazon is the biggest market for buying stuff

Tiktok is the biggest market for pedophiles/disinformation

Instagram is the biggest market for self absorbed people

Twitter is the second biggest market for disinformation and terrorism

etc


> I think the world has moved on from facebook

The HN bubble has moved on for sure, but going by DAU/MAU (which isn't perfect, but is better than "I think"), Facebook is still the biggest and is still growing.


I just don't see how. I have several nieces and nephews. I use Facebook to share photos of family and such, and I rarely see any activity at all from the younger relatives I'm following. I just don't think it happens; they should be going after all social media rather than just Facebook. I don't support any of these "think of the children" laws though, because the government basically wants to use them as a backdoor to track everyone and their cat on social media.


I'm pretty sure Silk Road had them beat on largest single market for outlawed drugs.

I have no idea which darknet market holds that crown today, but I'm willing to wager it isn't Zuckerland.


I think you're underestimating the amount of drugs sold via WhatsApp and Facebook

The largest single market is probably Telegram. I bought some cannabis live resin on Telegram just this week. Next day delivery!

IMO no dark net market will ever be bigger than an app you can download on iOS and Android


> IMO no dark net market will ever be bigger than an app you can download on iOS and Android

I heavily doubt that, and think you heavily underestimate DN markets by service and volume. We live in crazy times, that's for sure.


Well, other services like CP and fraud don't matter because we're only talking about drugs anyway. I don't think I do underestimate the markets, but we never defined biggest in the first place and neither of us have any data so it seems moot.

The number of times I've tried to get friends to make orders on DNMs over the years with no success. If I'd just passed them a Telegram username and told them to download the app they'd have done it there and then.


How do you order on "DN markets"? Can you give me a three-paragraph howto, starting from the point where I have cash, a card or the kind of paypal/bank/… most people have, but not any particular software installed.

I ask because… if this is something a lot of people do, then it must either be really simple or there are howtos on youtube/quora/… and I can't find any howto.


You can just open it up and take a look yourself; it's no hidden secret, it's just well encrypted.

It's not complicated at all.


I feel that there's a lot hidden in that word "just". The shopping sites you have in mind, are they searchable using google? Do they accept visa cards or paypal?

You wrote that the GP "heavily underestimate[s] DN markets by service and volume". If those markets can't be found using mass-market search engines and don't accept payment via mass-market payment services, then I think they're either niche or you need to explain how they reached large volume.


I am not really sure what you are getting at. You can very well Google all of this (at least in Switzerland you find clear results). The time when onion.to was actually properly indexed in Google is over, tho that was a thing not too long ago as well.

A crypto wallet that connects to one of those multi-billion-dollar networks could very well be viewed as a 'mass-market payment service'. I at least know more people using crypto than PayPal, most of them no 'hackers'.


Even if that's true, it's still a bad thing if it's used for the sale and/or distribution of child porn.


Yes but it does not invalidate the fact!


It invalidates it by twisting.

Rhetorically speaking it's simple to make the biggest in some class seem to be worst in some related class. If I'm the biggest taxi operator in town, I'm likely to have the highest CO₂ emissions of all the taxi companies, see? You can omit or deemphasise the connection between the two and make me seem bad.

It's just a form of lying by omission.


>says he believes the social media company is the “largest marketplace for predators and paedophiles globally”.

Doesn't sound like a fact.

On FB it's easier to find them compared to Telegram, for instance.


100% of pedos drink water! 100% of pedos breathe the same air as you and me!

Wow, what a fact.


Meta profits from pedos drinking their water.


Buses profit from pedos spying on kids in them. Google and Apple profit from pedos using their phones to spy on kids. Public infrastructure provides convenience for pedos to do their dirty work.

I can continue like that for a long time, you know.


Kind of does.

Facts don't mean anything without perspective.


Meta is a holding company, I've been told.


1. No

2. Even if it were true, it's irrelevant.

Because Meta is supposed to have automated systems that try to prevent this.

Which means they're not working, or Meta doesn't care, or both.


At the scale of Meta, even if they're 99% effective, the remaining 1% is still enormous in absolute terms. Facebook currently has 3 billion active users per month.
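A back-of-envelope check of that scale argument (the 99% figure is the commenter's hypothetical; the 3 billion MAU figure comes from the comment itself):

```python
# Back-of-envelope: what does a 1% miss rate mean at Meta's scale?
monthly_users = 3_000_000_000   # ~3 billion MAU, as stated above
effectiveness = 0.99            # hypothetical 99% effective moderation
missed = monthly_users * (1 - effectiveness)
print(f"{missed:,.0f}")         # even a 1% miss rate covers tens of millions of accounts
```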


Social media is not that dissimilar to sex and drugs.

At some point we just have to acknowledge that young teenagers are going to use it no matter what. So maybe we should focus more on education and harm prevention as opposed to trying to ban it.

Unless of course we ask adults to provide their credit card or driver's license to sign up. Which I suspect they won't.


> Unless of course we ask adults to provide their credit card or driver's license to sign up. Which I suspect they won't.

Some states are already moving in this direction. However, it is a firmly unconstitutional approach. Our forefathers are rolling in their graves.


I don't see much of an internet in 20-ish years without it. The internet is filling up with trash, and this will grow exponentially. Data quality on the internet will rapidly decline, and only platforms that can provide authenticity will be able to provide value imo.

If in 10 years I can run multiple persona bots on my CPU, governments around the world will abuse that capability to 11, forcing all actors to such measures.


Twitter practically requires logins and yet it's full of crap. Usenet can be anonymous, and once you set up kill files to filter all the spam, you can get really high-quality technical discussions in lots of groups.
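For anyone unfamiliar with kill files: they're just per-header match rules applied client-side. A minimal sketch in Python (the headers and patterns here are made up for illustration; real newsreaders use their own scorefile syntax):

```python
import re

# A minimal Usenet-style kill file: drop articles whose headers match
# any rule. Patterns and sample articles are illustrative only.
KILL_PATTERNS = [
    ("From", re.compile(r"spammer@example\.com")),
    ("Subject", re.compile(r"(?i)make money fast")),
]

def keep(article: dict) -> bool:
    """Return False if any kill-file rule matches the article's headers."""
    return not any(
        pat.search(article.get(header, ""))
        for header, pat in KILL_PATTERNS
    )

articles = [
    {"From": "alice@uni.edu", "Subject": "Re: NNTP threading question"},
    {"From": "spammer@example.com", "Subject": "MAKE MONEY FAST"},
]
print([a["Subject"] for a in articles if keep(a)])  # ['Re: NNTP threading question']
```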


Those forefathers had no idea we'd be using a global digital medium.

In the 18th century your word was everything, and your identity was tied to your word. When you came into town, or a tavern, or wherever, you had to identify yourself. And if you lied about who you were your word was tarnished forever.

Now we're doing the same thing in a digital world. It's time to adapt to that.


But you could write letters (and later telegrams) anonymously around the world back then.

Online and offline is not the same.


And it would be fraud. And it was a huge problem when people did that.

If you got caught you were done.

In a digital world we can combat such fraud in vastly different ways.


It was fraud to write a letter without signing it?


Now you're splitting hairs.

Obviously I mean any letter of substance where identification is a crucial factor. Not your anonymous love letters to your childhood sweetheart.


They are not splitting hairs.

The GP comment was about the ability to anonymously send telegrams and letters; the response was about fraud, which makes no sense given the context of anonymity. You can't be fraudulently anonymous.


Dear Penis,

I must write you this letter to show you my burning eternal love, since 4th grade, for legitimate, anonymous and encrypted communication.

In shivers.

Your uncle.


Signed,

Publius


Sorry, liberty and privacy will never go out of fashion; whatever old towns used to do, I don't care. I will always fight, and send my $$ to companies that continue that fight, against the government tracking every little thing I do in my day-to-day life. The people who need to know who I am (the bank, my tax prep service, the state comptroller) already do, and everyone else can go pound sand lol. These "think of the children" issues are easily solved by involved parents imposing boundaries.


Good for you, but we're talking about identifying users of large social media platforms to ensure children don't use them.


The whole "think of the children" excuse is so overused that it's useless at this point. Let parents think of their children and keep them out of trouble; the whole world doesn't have to give up its anonymity because someone chose to procreate. I keep my kids off social media via nanny software, and I lock down their devices. It's not hard. They know the consequences if they go around the social media block. If you let your kid get on social media before 15 or so, then it's on you to take responsibility for that.


> the whole “think of the children” excuse is so overused that it’s useless at this point

Folks seeking to profit from your loss of freedom find it extremely useful.


Why is it unconstitutional?


> Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.


Nothing in there prohibits a national eID.


It absolutely does, as a national eID creates a chilling effect that can negatively impact my right to free speech and to peacefully assemble. Digital anonymity is not a joke, it is a fundamental human right whether or not you or any state agency recognizes it. It is crucial to a healthy democracy.


You mean if the issuer of the eID violates people's privacy, in violation of the first amendment. That's what you have courts for.


Yeah, that's worked out fantastically against the hundreds of companies that leak user data every year.

The same thing will happen: a leak will occur, the authorities will be thrilled to offer me a 45-cent class action settlement, and meanwhile bad actors will have the data and do who-knows-what with it.

If bad internet data is really the problem here, then I would suggest more rigorous filtering of results rather than involuntarily signing the world up, by way of an eID, for a club that every individual may or may not want to be a part of.


> It is crucial to a healthy democracy.

And yet democracies predate the existence of digital anonymity by quite a few centuries (or at the very least decades, depending on how many slaves you accept in your definition of a "healthy democracy").


Digital anonymity is the higher-level concept being discussed, but the underlying issue which it relates to is that of a chilling effect, which is a timeless phenomenon.

> The "chilling effect" refers to a phenomenon where individuals or groups refrain from engaging in expression for fear of running afoul of a law or regulation.

> Laws that chill free expression do not provide the appropriate level of breathing space for First Amendment freedoms.

https://www.thefire.org/research-learn/chilling-effect-overv...


I guess the argument would have to be the admittedly stretchy one that, by requiring an eID, you are requiring people to declare who they are when communicating, which would then be compelled speech.

since this is HN I better clarify - I'm not sure I agree that should pass or would pass, it seems a stretch to me. But if I wanted to make the argument that is where I would go.


Not necessarily.

An eID system can also just make a call to the eID issuer, which identifies the person and then calls back to the service indicating that the identified person meets certain requirements, like age.

Think of it as a bouncer at the door that does not divulge any details to the club owner. ;)
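A minimal sketch of that bouncer pattern (all names hypothetical; a real system would use an asymmetric signature so the service can verify but not forge attestations, whereas a shared HMAC key stands in for it here):

```python
import hmac, hashlib, json, secrets

# Hypothetical setup: the issuer signs attestations. In a real deployment
# this would be an asymmetric key pair, not a shared secret.
ISSUER_KEY = secrets.token_bytes(32)

def issuer_attest(person_id: str, birth_year: int, claim: str) -> dict:
    """The 'bouncer': looks up the person's record, checks the claim,
    and signs only the yes/no answer, never the identity itself."""
    ok = claim == "over_18" and (2024 - birth_year) >= 18
    payload = json.dumps({"claim": claim, "ok": ok}).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "sig": sig}

def service_verify(att: dict) -> bool:
    """The 'club owner': learns whether the claim holds, and nothing else."""
    expected = hmac.new(ISSUER_KEY, att["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, att["sig"]):
        return False
    return json.loads(att["payload"])["ok"]

att = issuer_attest("alice", 1990, "over_18")
print(service_verify(att))        # True: age requirement met
print("alice" in att["payload"])  # False: identity never reaches the service
```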


The requirement only applies to certain platforms, though, you're still free to communicate what you wish.


How do we decide which platforms it applies to? And once the infrastructure is in place, how do we ensure the list of platforms does not grow at the whim of the administration of the month?


Exactly. Nothing in there prohibits electronic anything. It's talking about people. If you so choose, you can still do everything in person. The fact that computers currently make it easier and it's about to get harder doesn't change the fact that computers are entirely irrelevant.


The key insight is that the government should never be allowed to mediate the interactions I have with other humans on platforms which allow these interactions, and should never be allowed to mandate a platform to adopt policies which prohibit such activity.

If we share information which is a true risk to national security or the general welfare of the People, that's something the Constitution allows for. But wholesale enforcement of eID is reprehensible and must be met with rigid disobedience.


It wouldn't be restricting your freedoms, just your ability to exercise them.


> At some point we just have to acknowledge that young teenagers are going to use it no matter what.

We haven't tried banning it for kids. Social media is a network effect: currently kids have to be on it if they don't want to be outcasts. But if you eliminate the network, there isn't the same pull factor.


No thank you. It was hard enough growing up in a Catholic household with a firewall. My career in remote tech is despite the obstacles placed in my way, not because of them.

I would never support similarly gimping children in an increasingly online world. Banning kids from congregating on the internet is a reprehensible idea.


I think the debate is a little more nuanced than this. Any action at scale in this area will deliver both benefits and downsides. What we need to do is look at what action will deliver the greatest net benefit to the most young people. As I’ve said elsewhere, I’m not a fan of broad brush action in general, but if it delivers that net benefit I’d be supportive.

This is not to invalidate your own experience - it sounds awful - but we can’t make policy decisions at this scale based only on individual experiences.

Personally I dread the impact that social media might have on our daughters as they move on to secondary school.


To reverse that, we can't make policy decisions which negatively impact the basic human rights of individuals.


Except that the reality is that we can and do, and we do it all the time. Here’s an example of an institution that as a matter of both habit and statute does so consistently: https://en.m.wikipedia.org/wiki/National_Institute_for_Healt.... Note that I’m not suggesting NICE is always right, but they do at least try to make decisions that bring optimal benefits to the largest possible group within the overall population to which they have a duty of care.

Making decisions at scale is always hard, and sometimes outright terrible, because some people always lose out. Nobody likes this or thinks it’s anything other than horrible but, going back to social media, if I can save 5 kids from serious harm by banning social media for them versus 1 or 2 if I don’t, that’s a grim decision but also one I’d make.

Of course, it’s not that simple, and I don’t really know where the net benefit lies, but that’s what we have to figure out.


When I say "we can't", I mean "we can't ethically" or "I do not support a government that".

> if I can save 5 kids from serious harm by banning social media for them versus 1 or 2 if I don’t, that’s a grim decision but also one I’d make.

Not me, and I'd appreciate it if you don't attempt to parent my children for me.


I'm not trying to parent your children for you. Why would you think that? I'm looking out for the welfare of my own.

Are you looking for a perfect solution?


Then firewall your network and lock down their phones. Don't bother me and my own household with overreaching governmental oversight which is ripe for abuse by authoritarian administrations.


Do you understand that governments already abuse the data held by social media companies and that, by removing, limiting, or regulating young peoples' access to social media one of the outcomes would be that social media companies would hold less data that governments could abuse?

Do you also understand that governments and other hostile threat actors already use social media as a propaganda tool to spread misinformation? Do you think it's a good thing that children and teenagers are exposed to this? Do you like that your household is exposed to it?

Do you further realise that social media companies profiting off the back of social and societal problems they cause is also an abuse of power? Is that corporate abuse of power really better than government abuse of power?

I happen to think we can come to a solution that reduces all of these abuses. In fact I think we need to. Whether or not that involves a ban on social media use by under-18s or under-16s or whatever, I've already made clear I'm not sure of, though I can see that it might be a helpful option.

What you're proposing on the other hand is no solution at all: many parents aren't capable of correctly configuring a firewall or securely locking down a phone, even though products exist that make both easier. Some would be able to learn, but some wouldn't. You're simply burying your head in the sand.


Thank you for sharing your arguments, I'll share my take on them one by one.

> Do you understand that governments already abuse the data held by social media companies and that, by removing, limiting, or regulating young peoples' access to social media one of the outcomes would be that social media companies would hold less data that governments could abuse?

Fixing one problem with another problem. Instead, let's actually take user privacy seriously: rather than following the abstinence method (which works so well), we create a world where this kind of data is not collected, saved, or abused in the first place.

> Do you also understand that governments and other hostile threat actors already use social media as a propaganda tool to spread misinformation? Do you think it's a good thing that children and teenagers are exposed to this? Do you like that your household is exposed to it?

Believe me, I do. I also believe that kids have a right to gather online. Our children have always faced danger, and have always needed guidance on how to safely navigate this increasingly complex world. By limiting the collected data as mentioned above, we mitigate this problem as well. Instead of attacking the rights of children, we limit the operational rights of corporations so that individual data privacy and security are better respected.

> Is that corporate abuse of power really better than government abuse of power?

No. See above. Your suggestion of instead limiting the rights of minors to congregate online does not in any way solve or mitigate corporate abuse of power, however it does enable governmental abuse of power, so I'm not sure what point you're making here.

> Whether or not that involves a ban on social media use by under-18s or under-16s or whatever, I've already made clear I'm not sure of, though I can see that it might be a helpful option.

I am very sure it's not the right move.

> What you're proposing on the other hand is no solution at all: many parents aren't capable of correctly configuring a firewall or securely locking down a phone, even though products exist that make both easier. Some would be able to learn, but some wouldn't. You're simply burying your head in the sand.

This is the strongest argument you've made yet, and you're right; Teaching digital privacy and security will be a multi-generational effort, but ultimately society will be better for it. The alternative is still sticking our heads in the sand, and allowing corporations to continue to, as you have agreed, abuse their power and take advantage of both adults and children who get past the filters.

Furthermore, I have no interest in making criminals out of children who circumvent any such laws, and they will circumvent them, en masse. The law is simply unenforceable at scale, and the effect would be to condition an entire generation to normalize criminal activity. Is that what we want?


I’m not usually a fan of broad brush solutions, and I can imagine there are plenty of special interest arguments against, but I sense the net benefit of blanket banning under-18s from social media may be greater than any harm caused.


I don't see how such a "ban" could be effective. If you boot all minors from current social media platforms, they'll more than likely just migrate to an alternative solution - an alternative which will be less accountable and more opaque than the current ecosystem.


Possibly, but that alternative would have to fall outside the bounds of the relevant legislation for it to be something they could migrate to.

Young people have always congregated and that, in itself, isn't a bad thing (quite the opposite - they absolutely need to socialise). The issue we have is that current social media is a far too effective vector for bullying, exacerbating mental health issues and, of course, discovery and sharing of abusive material. Those are some of the problems we need to solve for.

As I say, I sense there may be a net benefit to a ban for younger people, but I could be mistaken - for example, there could simply be greater restrictions or regulation of social media use by young people that would make it safer/better - and would welcome evidence that proved me wrong.


I wouldn't say under-18. Start with under-14 or under-15.


Back in my own childhood I would be strictly against this idea. But the internet I grew up with is far different than the internet of today. It really is nearly impossible nowadays for clueless children to avoid getting sucked into some endless loop of being manipulated by people, social networks, or even the websites themselves.


> Unless of course we ask adults to provide their credit card or driver's license to sign up. Which I suspect they won't.

To your point; we’ve been dealing with a variation of this for decades (maybe even millennia?).

The core issue is that kids have more time and energy than any parent, and they have inquisitive problem-solving minds motivated by an intense-to-them need for social connection.

Parental control software: Bypassed

Firewalls and DNS blocks: Bypassed

Parental locks on modern phones: Bypassed

“Are you over 18?” prompts: irrelevant

Asking for driver’s license? Good luck. Are lawmakers going to commit to the inevitable cat and mouse game that starts with kids doing AI forgeries of licenses?


Meta is the single largest communication platform in the world.


It's actually more like 4 interconnected platforms (Facebook+Messenger, Instagram, Threads, WhatsApp).

Though I would argue that for most regulatory purposes it makes sense to count them as one platform.


Is that true or is it just easier to find the posts compared to other tools like Telegram & Co.?


Skimming the story...it sounds like the AG's "single largest market" claim is pure fluff. Sure, he's got plenty of evidence that Meta's crawling with pedophiles (and doesn't much care). But there's no mention of data on (pedophile activity on) other platforms.

(The NCMEC is mentioned, and they probably have a decent feel for whether the headline's claim is true...but they might be reluctant to talk on this subject.)


Mentioning how bad it is in other places shouldn't be needed to try to improve it in one place.

The question here is not how bad it is in other places, but how much Meta does against it, and whether this is another lobbying approach that abuses the suffering of children as an excuse to push things through.

Though the "single largest market" claim might be an attempt to garner public outrage beyond what fully honest reporting would have produced. Though it also depends a lot on how much Meta actually does against it: if the answer is that Meta doesn't care at all, then making that claim would be a very reasonable thing to do. As far as I can tell, Meta does care, at least to the degree they legally have to. But it's hard to say, because with a company of that size it's always easy to pick examples where it looks like they don't care, while the cases where they did care you will likely never hear about. So if this court case goes ahead, we might gain some interesting insights.


Yes...but wouldn't it be lovely if an elected public official actually told the truth to the public? Or if The Guardian, which at least seems to present itself as a truth-favoring publication for passably-educated people, gave some hint that they care about old-fashioned fact checking - vs. paraphrasing press releases and "he says/she says"?

And if we assume that the Good Guys (hopefully the NM AG is one) have limited resources - then "where can we do the most good?" becomes an issue. It'd be nice to have some hint that they are grown-ups who care about that. Vs. talking smack about going after Meta right now because that's currently trending online.


> But there's no mention of data on (pedophile activity on) other platforms.

Elon Musk has been very clear that this kind of activity will get you permanently banned.

They are also actively following known pedo hashtags across X/Twitter to ensure this kind of content gets wiped, without waiting for individual users to report it first.

The difference between Facebook and X seems pretty stark.


I think you underestimate the problem.

You can scan for hashtags all day long but bad actors will come up with hashtags that are not on your list rather quickly.

The same goes for Facebook groups: people can set up, say, a train-enthusiasts group that looks like it's just posting photos of trains, and then hide bad content in between.

You don’t even have the computing power to scan everything.


How are these bad actors coordinating hashtags in a way that can't be detected?

If they really have some sort of secret channel that no one else can infiltrate, why not just use that to distribute stuff directly instead of running an evil cabal of hashtags?

Surely, even given the evil cabal of hashtags, you could find a lot with simple heuristics? When you ban a tag, track which new hashtags suddenly explode in popularity. Pull a list of hashtags with an unusually high number of links/images to sites that don't normally get linked. Track the people who follow these hashtags, and see what hashtags they go to next.
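The "track which new hashtags explode after a ban" heuristic could be sketched in a few lines — to be clear, this is a toy illustration, not anyone's actual moderation pipeline, and the function name, tags, and thresholds are all invented:

```python
from collections import Counter

def spike_candidates(counts_before, counts_after, min_after=50, ratio=5.0):
    """Flag hashtags whose usage exploded right after a ban wave.

    counts_before / counts_after: hashtag -> post count over equal-length
    windows before and after the ban. A tag is flagged when it is
    reasonably active now (>= min_after posts) and grew by `ratio` or
    more; previously-unseen tags are smoothed to a prior count of 1.
    """
    flagged = []
    for tag, after in counts_after.items():
        before = counts_before.get(tag, 1)  # unseen tags count as 1
        if after >= min_after and after / before >= ratio:
            flagged.append(tag)
    return sorted(flagged)

# Toy data: #trains holds steady, #tr41ns appears out of nowhere.
before = Counter({"trains": 900, "railfan": 300})
after = Counter({"trains": 950, "railfan": 310, "tr41ns": 120})
print(spike_candidates(before, after))  # ['tr41ns']
```

Anything this flags would of course still need human review, which is the expensive part.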

If you actually want to fight the problem, it doesn't require a huge amount of resources - the problem is entirely that Meta has no actual desire to fight the problem (after all, it costs money and doesn't produce any benefit to them)


It's very naive to think there must be a secret channel or cabal.

Think more about how memes spread on the internet - there is no “meme administration” that picks what is funny and what is not.

You still have to distinguish between trends in normal hashtags and evil hashtags - how do you do that? Also, why do you think there would be an explosion when the bad actors know they have to hide?

The main point is that simple heuristics won’t work, and you don’t have to be an evil mastermind to circumvent them.


Okay, but why can't the moderators monitor this spread the same way the users do?

> how do you do that?

I already listed a few examples. If it was unclear, the other part of the process is "have a human check the hashtag", sorry

> Also how do you even think there would be explosion when bad actors are aware they have to hide.

You'd be really amazed how much low-hanging fruit there is here. If someone is posting publicly on social media AND using hashtags, they're not exactly in the top half of OpSec


To be fair, I was arguing against the assumptions that "all of it can be fixed automatically" and that "all of it can be fixed, all the bad guys can be caught".

I do agree that there is a lot that can be caught with a bit more effort and that Meta does not really want to spend resources because it doesn't earn them money.

But I also argue that there will still be a lot of bad stuff, and there isn't enough money or resources to stop it all.


I think we pretty much agree there - we're never going to wipe it all out, but I do think our current spending level is missing out on a lot of low-hanging fruit that could easily be eliminated if management cared to


I think it’s a bit harder than you realize.

> How are these bad actors coordinating hashtags in a way that can't be detected?

They manage to do it in China despite the massive censorship there. https://sg.news.yahoo.com/chinese-vpn-providers-hide-literat...

> it doesn't require a huge amount of resources

Are you suggesting that the Chinese government just isn’t trying hard enough, and if they really wanted to they could stop all dissidents in China for forever?


I never said it was perfect. I'm saying that if you invest resources, you can accomplish a lot - even if it isn't perfect.

China would be a perfect example of my point: they've succeeded in vastly reducing the amount of censored content available, and the average person has no real desire to work around that.

I really don't get how you think your example disagrees with me - do you think China hasn't accomplished anything there?


Well, they set out to eliminate VPN usage and prevent their citizens from criticizing the government. There’s still VPNs and people still criticize the government. I don’t think the “average person” is using a VPN but I would be shocked if subtle ways to dodge censorship weren’t widely known. It’s also a treadmill - the problem isn’t fundamentally solved, every time a new hashtag is squashed another one pops up. They will just keep manually identifying and banning them for forever.

Flipping the question around, I’m not really sure how Facebook’s current systems don’t fit your definition of “it’s not perfect but it works”?


> I’m not really sure how Facebook’s current systems don’t fit your definition of “it’s not perfect but it works”?

I think there's a lot of low-hanging fruit we could easily address with a small increase in spending, and I think it's fair to shame companies for not doing that.

I also think there's a lot of mid-height fruit that could plausibly be addressed with a lot of spending. I think it's fair to praise companies that ARE doing that, but it's hardly a source of shame.

And of course, there's plenty of fruit we're never going to deal with, no matter how much we spend.

I think Facebook is under-spending and missing out on low-hanging fruit.


And TikTok/Facebook/Instagram haven't? There is nothing unique about Twitter's approach.


And so did Meta. It's not a question of whether you say you will stop it but whether you actually do.

Hugely downsizing the content moderation team definitely isn't helping prevent such content on X/Twitter.

And one thing companies that take this topic seriously do is avoid disclosing too many details about their actions, so the targets don't learn what they need to evade. So the public pretty much _doesn't know about the actual difference_ in the actions taken. We can only observe the outcome, where a platform doing more might still end up looking worse, simply because its audience contains different groups of people and because the number of cherry-picked bad examples - which is highly correlated with platform size - is nearly always reported in absolute terms.

But Elon Musk has also frequently been deceptive in many ways when it was beneficial for his ventures (some would say that is required of a CEO), so I really wouldn't take _anything_ he says as truth. Instead look at his actions, which include diluting the seriousness and meaning of calling someone a pedo (whether he intended that or not). That trend is pretty widespread on Twitter, as far as I know - a trend which helps people who harm children by giving them the excuse that anyone calling them out is just another toxic Twitter hate campaign you shouldn't take seriously.


> Elon Musk has been very clear on that this kind of activity will get you permanently banned.

https://www.theguardian.com/technology/2023/aug/10/twitter-x...

> The company formerly known as Twitter has faced heat from politicians in a combative hearing over X’s tackling of child abuse material, after the company restored an account that shared such material last month, despite claiming to have a “zero tolerance approach”.

https://www.business-humanrights.org/en/latest-news/twitter-...

> But a review by The New York Times found that the imagery, commonly known as child pornography, persisted on the platform, including widely circulated material that the authorities consider the easiest to detect and eliminate.

please stop spreading bullshit


I don't expect logic to follow "think of the children", but what's the argument for a marketplace? Sure, criminals can send pornography to each other b/c it's a communication platform. But how do they use Facebook to exchange money for pornography? As far as I know you can't buy digital media on Facebook?

If people are just using it as a medium to communicate and then doing deals offline or through other channels then it's about as absurd as holding the phone companies accountable for all deals arranged by phone


Sorry, that would be Discord.


What are you basing that on? Or is that gut feeling / meme?


If you’ve spent any time on discord gaming communities you’d know. Roblox, Minecraft etc are absolutely rife.

As an example, the /r/pokemon discord had an ugly situation where predators were exchanging rare pokemon trades with kids for nudes.


The Attorney General hasn't caught up to what the kids are really using these days.


I'm a bit surprised by this, given the mighty suspicious Telegram bot spam I've come across and the recent trend of Telegram-based revenge porn that often involves underage subjects. I haven't dug deep, but Telegram looks like the worst cesspool I know of that operates "in the open" (rather than on the darknet).

But maybe he is talking about contacts and communication that might lead to distribution through other channels? The initial contacts happen on the popular (e.g. Meta) platforms via DMs, and then it's distributed in these stashes on Telegram and whatnot for cash as they groom people?

Might be two different sides of this coin so to speak


What is Telegram bot spam? Bots on other platforms that try to steer users to a bad actor on Telegram? Telegram itself is great and feels like the only "platform" that doesn't jerk ya around.


I suspect most of this Telegram spam is just that: spam. They have public channels where they post 'normal' legal stuff and then offer 'premium content' for a one-time payment, which you will never receive.

The same scam has been run forever on the darknet. I guess many people think stealing from pedos is somewhat more moral, or something.


I had to shorten the title slightly. Original:

Meta is the world’s ‘single largest marketplace for paedophiles’, says New Mexico attorney general


I think they mean 'single largest tool for'.

One doesn't buy and sell paedophiles on Facebook.


The polymorphic “for” means “in service of” rather than “that trades” here.


Are you sure?

I can't imagine why someone would want to, but I guess if you really want to buy a paedophile, Facebook would probably be the place to do it!

"Watching paedophiles get raped" doesn't strike me as a very common kink, but I know a woman who's turned on by the Berlin wall, so there's bound to be someone..


Seeing "AG", I assumed New Mexico was some company - around here AG is similar to Inc., so it read like "says New Mexico Inc".


The attorney general is an elected position in the US.

"Think of the children" is a common campaign theme, so I suppose he's going for reelection soon.


Does this come in the context of this Senate hearing [0] ?

According to Guardian (UK)

  """ tech executives faced four hours of intensive questioning from
   Congress members """
Four hours doesn't seem like much. My students can sleep through a 4 hour lecture!

  """ Parents of children who died by suicide after experiencing online
  harms packed the Senate for the hearing on Wednesday. Senator
  Lindsey Graham said the event drew "the largest [audience] I've seen
  in this room". """
If so, this headline is a distraction from the issues at hand. Paedophilia is not what is primarily being discussed here. It's more generally that children are harmed by social media. And as far as I am concerned that is a solid fact.

  """ In his first remarks, Meta CEO Mark Zuckerberg cast doubt on the
  relationship between social media use and a decline in mental
  health. "The existing body of scientific work has not shown a causal
  link between using social media and young people having worse mental
  health," he said. """
Zuck is an outright liar. There are now mountains of exceptionally good research showing a solid causal link between social media use and negative mental health impacts on young people (no, I will not google that for you, please do your own homework).

   """ Twitter, became the first tech firm to publicly endorse the Stop
   CSAM Act """
Of course, because CSAM, while abhorrent, is the perfect distraction from the problem that your entire platform is socially corrosive.

[0] https://www.theguardian.com/technology/live/2024/jan/31/cong...


> There are now mountains of exceptionally good research showing a solid causal link between social media use and negative mental health impacts

Given that essentially every young person uses social media, I don't see how there can be research showing a causal link - how can you establish causality when there's no control group?


> every young person uses social media

If you'd read this [0], which was the first result for "young people social media use" then you'd know that in Germany, for example only 52% use social media, Poland (58%), Czech Republic (56%), France (70%), Netherlands (65%), India (31%).

At least one third of the global population under 16 yo do not use social media. There's your control group.

[0] https://www.pewresearch.org/short-reads/2020/04/02/8-charts-...


> Zuck is an outright liar. There are now mountains of exceptionally good research showing a solid causal link between social media use and negative mental health impacts on young people (no, I will not google that for you, please do your own homework).

What is the point of the statement if you call someone a liar, claim there is material to prove the lie, and then say you will not link to that material?


Intellectual curiosity (as per our guidelines)

I'd like you to show some too.

Obviously you won't be able to refute what I said, because there are no papers out there that show there isn't a causal link between social media use and harms to mental health. That would be asking the impossible. But there is a massive and no longer ignorable body of work out there saying indeed there are direct causal links between social media use and mental harms. And Zuck just pretended that doesn't exist.

You know, I was actually hoping someone smarter and with more time than me would come back with something I'd missed.

Mark Zuckerberg stood up in front of a room full of grieving mothers and gave a limp, fake apology that made them burst into tears. His sheer lack of humanity and emotional disconnect with other humans could hardly be clearer.

  """
  "The existing body of scientific work has not shown a causal link
   between using social media and young people having worse mental
   health," Zuckerberg told the crowded room.

   Lori Schott, a bereaved mother from Colorado, said she cried when
   she heard Zuckerberg’s comments.
   """
Those mums showed up to the hearing with photos of their dead kids who had killed themselves as a direct result of using social media.

My point is less emotional. Given the preponderance of evidence against Zuckerberg's comments, he lied.

So, above I said that Mark Zuckerberg just lied to Congress, and that makes him a criminal in the US, right? Whether under oath or not.

That's why I was downvoted. Nothing to do with encouraging you to do a simple search, which you could have done in the time it took you to write a snarky reply.

[0] https://www.theguardian.com/media/2024/feb/01/parents-tech-c...


I agree 100% with your points. Zuck and FB are profiting from damaging the mental health of people. At least with McDonald's, nobody is saying a daily diet of McDonalds is healthy.

Whenever I see people saying "no, I will not google that for you, please do your own homework", they basically take a grenade to whatever argument is being made, agreeable or not.

No ... YOU ... cite your sources if you wish to communicate a fact-backed point.


Fair comment. It doesn't sound good, does it. I've grown weary, lazy... no, exhausted, from continually pointing people to things like [0], and from the inevitable sealions who will pretend they've been living under a rock for 20 years.

Also the new research is coming in so fast it's hard to keep track of. I'm really hoping some people will google that for me too, and I might learn some new ones.

[0] https://ledger.humanetech.com/


No doubt Meta has issues here, like all social platforms.

But Meta at least has some level of verification and active moderation. You can't generally get away with posting explicit porn for any amount of time.

Try signing up to Meta with a disposable email address or without a phone number. Sure these things can be side stepped, but for the average person it's a hurdle, and ultimately many petty criminals will be identifying themselves if it came to that.

On the other hand Xitter is so DESPERATE for you to sign up they don't care about dodgy emails or even insist on phone numbers any more.

And, let me tell you, Xitter makes ZERO effort to even LOOK at porn, so hard core porn is THRIVING on there, including stuff that leans heavily on youth taboos. I don't click or search for illegal stuff, but I've certainly scrolled past some dodgy posts.

It's ridiculous how Xitter is getting away with this while Musk is fully and openly demonising Zuck. Sure Zuck has a history of being an arse, but this new wave of tech bro reality bubble is just... a whole new level of insane.


They don't care what steps are being taken or that they've taken down a trillion pedos, they care about results. The fact is that despite their efforts pedos still run hogwild on Facebook. They will not change unless the incentives change.


That might have been true before the rise of tiktok...


Putting aside the obvious point that anyone hurting children in a sexual way, whether directly or indirectly by financially supporting it, should be appropriately punished.

What has me wondering in the article is that they speak about the platform Meta.

But Meta is, like Alphabet, a parent company that owns multiple platforms, something I think most people are aware of to some degree (Facebook+Messenger are one platform, Instagram is another, and Threads is probably its own platform too).

Now for some contexts it's reasonable to treat them as one (most legal regulatory ones), but in others (e.g. blame framing) it can be misleading.

Meta owns one of the biggest social networks (Facebook is still the biggest in certain categories, like older people, connecting family, and self-help and hobby groups outside the tech sector), one of the biggest "picture" platforms, and two(!) of the largest messengers (which sit on the borderline between social platforms and "just" modern SMS).

So absolute numbers can be very misleading: they could THEORETICALLY run the safest platforms for children, doing the most against pedophiles, and still have the largest absolute numbers simply through sheer user count. Similarly, not clarifying what "market for pedophiles" means is a problem. Does it mean Meta seriously messed up and someone should go to prison for it? Or does it mean people providing pedophilic services elsewhere use it so customers can find them, by posting fully legal content while sneaking in "subtle but telling hints" about what kind of people they are (something like large black leather boots with red laces hinting that you are a neo-Nazi in Germany - though sometimes it's just an unlucky coincidence and the person is not)?


The main question is: is this case being brought to actually help protect children, or is it another iteration of abusing the suffering of children to push for more legal surveillance or other questionable goals, with no intent at all to do anything that would effectively help children?

It pains me that we even have to question that.

But too many lobbyists have abused the suffering of children to try to push through bad things, sometimes things that would even have made it worse for the same children they claim to want to protect.

Anyway, while there are many gaps in the article, it seems the case might be well intended, which would be nice. But then it's brought by an AG, an office that's meant to act in a profit-oriented way, which is a huge "looks sus" flag.

Let's just hope that whatever happens the situation does actually get better for children.

But then there are so, so many things that could be done in the US to reduce child abuse which are systematically ignored by many of the people claiming to work to protect children - it's really sad. And in parts of the US, grooming and child marriage are still a thing even though they're basically legalized pedophilia; it's just absurd.


If I were responsible for that company in anyway, I would be working proactively to avoid a headline like that. Do better Meta.


But they are. Thousands of people are employed for content moderation on Meta and cooperation with law enforcement (https://www.businessinsider.com/facebook-content-moderator-q...), and that number will likely go up to comply with the new European DSA.


The largest communication platform will inadvertently be the largest for nefarious activities.

Parents should work on keeping their kids away from social media, what’s the point of a kid having a Facebook or Instagram anyway?


> Parents should work on keeping their kids away from social media, what’s the point of a kid having a Facebook or Instagram anyway?

Parents are users too, and in some cases they're addicted to social media as well. So it may be quite a hard task to make sure kids have a curated presence on the Internet. It may even be impossible, because there's also social pressure to be among peers and participate in whatever activities they do - nowadays mostly online, obviously. Nobody wants to be an outsider, and no young person wants to be policed by an adult - pretty sure that's a prevalent theme across generations.

If anything, a social change should come from schools I believe but it may be still a hard task to accomplish. A second or third generation of digital natives is growing up and that's also important thing to consider.

No idea which are the most popular networks or activities among the youngest generation but I'd put my bet on tiktok, roblox and discord.


Well, all the bad things happen in the real world; we can’t keep kids away from it. We need to give them tools to cope with it.


Yes, we can protect kids from the real world in many ways, but I believe it should be parents and families handling that, not the state.

Social media is a net negative to adults in my opinion, not to talk of children.


[flagged]


IG has the same toxic content teen/women magazines had in the 90's across the Northern hemisphere, but with real life content.


It's also free, inexhaustible, and always in your pocket wanting to notify you.

