“If I have a party on my property, and I invite everybody on there, I’m responsible for what happens on my property,” Guffey told CNN. “And for some reason, we treat these digital companies different. These companies hide behind section 230 and say they’re not responsible.”
There is a middle ground. If you are just a platform or software, then I don't think you should be liable. I don't think Discourse should be liable for every forum they host. But when you are in the position of someone like Meta, which exists to generate "engagement" to push ads onto people, then I think that should change a bit.
> “If I have a party on my property, and I invite everybody on there, I’m responsible for what happens on my property,”
I have no specific comment on the actual situation here, but I feel this expectation is also ridiculous, so the analogy doesn't work for me. People shouldn't be responsible for anything that happens on their plot of land if they have no actual hand in it.
Discord is subletting rooms out to moderators and doesn't really have any hand in the content on any particular server. Meta, however, pushes its algorithm on everyone on the platform, and its employees, in one way or another, decide what content is seen and what content is policed and taken down.
You misinterpreted their comment. They are talking about not agreeing with the actual laws around responsibility if something happens on the physical land you own.
> The family began piecing together Gavin’s last moments and discovered he had encountered a scammer on Instagram and unwittingly became a victim of sexual extortion, a crime the FBI warns is increasingly targeting underage boys and leading to an alarming increase in suicides nationwide.
Meta shouldn't be liable for this. The sexual exploitation scammers should be.
Meta and other platforms above a certain MAU threshold should implement standardized protections and work with law enforcement to stop this type of activity.
This is the correct response:
> Since then, Guffey has been outspoken about his son’s death and sponsored legislation last year that makes sexual extortion an aggravated felony punishable by years in prison if the victim is a minor or vulnerable adult or if the victim suffers bodily injury or death directly related to the crime.
While we need to have strong protections for free speech, anyone who knowingly coerces or bullies someone in a sexually exploitative way or in a way that they know could lead to suicide should be held accountable to an extent. If there is money or blackmail involved, then the penalties should increase dramatically.
I recall the many cases of the Kiwi Farms website bullying folks into committing suicide. They know what they're doing is wrong. While it might not be blackmail, this is a form of free speech that can kill people, and we need to have a frank discussion about it. Even if we don't legislate it, we need to create social awareness and a safety net for folks. Suicide resulting from this is a wanton and needless tragedy.
> Between October 2021 and March 2023, federal officials received more than 13,000 reports of online financial sextortion of minors, which led to at least 20 suicides, the FBI said last week.
Parents also need to educate their children about this and let them know they're safe, loved, and that there is no shame in living life.
Your federal felonies don't affect residents of Russia, India, China, Nigeria, or less populous countries running cyberattack operations against the USA.
Extortion is already a felony.
This is a national security threat, not a mere crime.
A lot of this is about parent/child relationships and making sure our kids won’t feel such a degree of shame about something like this happening that they’d resort to taking their own life. But unless these companies have some skin in the game, they have no motivation to help keep these things from happening.
And what exactly are they supposed to do? Monitor all private communications?
At that scale you would need an AI. Once the AI sees something possibly bad, either you have a real person check it (people would complain about privacy, and about the mental health of the employees having to see awful things) or you have the AI ban automatically (people would complain about lost accounts).
No matter what Instagram does, they lose. So I think doing nothing is the right thing. Maybe the parents should have done some parenting.
Well since I don’t have visibility into the inner workings of these companies, I can’t say what exactly they can/should do, but I also don’t accept the premise that the only thing they could do is monitor all private communications. I don’t think it’s impossible that there might be some objective heuristic that could help flag potentially abusive accounts and then put some roadblocks in place to protect minors from those accounts.
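For illustration, here is a minimal sketch of what such a heuristic might look like. Every signal name and threshold below is invented for the sake of the example; this is not anything Meta actually does:

```python
# Hypothetical sketch of heuristic account flagging. The signal names and
# thresholds are invented for illustration, not any platform's real system.

from dataclasses import dataclass


@dataclass
class AccountSignals:
    account_age_days: int             # brand-new accounts are higher risk
    dms_to_minors_per_day: float      # volume of unsolicited outreach to minors
    report_count: int                 # user reports filed against this account
    follower_following_ratio: float   # mass-follow behavior scores low here


def risk_score(s: AccountSignals) -> float:
    """Combine a few cheap, content-blind signals into a 0..1 risk score."""
    score = 0.0
    if s.account_age_days < 7:
        score += 0.3
    if s.dms_to_minors_per_day > 5:
        score += 0.4
    score += min(s.report_count * 0.1, 0.3)
    if s.follower_following_ratio < 0.01:
        score += 0.2
    return min(score, 1.0)


def roadblock(s: AccountSignals) -> str:
    """Map the score to a graduated response rather than an outright ban."""
    r = risk_score(s)
    if r >= 0.8:
        return "block DMs to minors pending human review"
    if r >= 0.5:
        return "rate-limit new DMs and warn recipients"
    return "no action"
```

The specific numbers don't matter; the point is that graduated roadblocks (rate limits, warnings, review queues) are possible without reading anyone's message contents.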
Isn't that basically what Apple did by using algorithms to scan for CSAM? Even with the level of trust that Apple has, people weren't OK with this.
Likely internal opinion based on observed external actions:
> Of course not. They're on the platform using it, viewing ads, aren't they?
> I mean, it's a shame that some people die, because they won't be able to view ads, and the negative press might impact other people's opinion of the platform, leading to potentially reduced ad views, but, heeeeeyyyyyy, it's easy enough to prevent certain platform-hostile opinions from being surfaced too much on the platform, you know? "Vagaries of the Algorithm" and all!
(insert a large rolleyes here)
I'm sure we'll get the Standard Tech Appopollylolly: "We are so, so sorry that you caught us enabling this bad behavior, and we will try to do the work to ensure you don't catch us doing it next time."
> The lawsuit, filed in South Carolina state court last week, accuses Meta’s social media platforms of causing a range of problems in children, including depression, low self-esteem, anxiety and eating disorders. It alleges that Meta uses algorithms to aggressively target adolescents and does not do enough to keep them safe from harm.
Of course it doesn't. Depressed, low self-esteem, and anxious? That's their dream user: likely to be online a lot, and easily influenced by Meta's actual customers, those paying to influence the thinking of others. The more people like that Meta can produce out of formerly well-adjusted individuals, the more money they make. Their incentives are entirely opposite those of larger society. Healthy, well-adjusted, offline individuals are mostly worthless to Meta, because they're likely to be on the platform far less often. You just need enough of them that the platform looks like it's full of well-adjusted people.
I hope the lawsuit succeeds. It's long past time for the social media companies to face the consequences of what they've created for their own short-sighted profit numbers.
It's so strange to me that HN has this view of social media, but not of other things, like fast food. Fast food uses deceptive advertising and purposefully tries to get kids addicted to a product that slowly kills them over time. But I remember HN being against sugar taxes and other laws meant to help prevent that.
But for some reason social media is different? I don't see why it would be. Are the people on HN pushing for social media bans also pushing for unionization, banning pharmaceutical commercials, and keeping fast food out of high schools? Because if they aren't, it seems really hypocritical.
For one, HN isn't a monolith. There are plenty of people on here who agree and disagree on each of the topics you listed.
On the topic of social media, I think that there is a widespread belief that we could have tools that give us the social benefits of social media without including all of the toxic manipulative tracking and algorithmic feeds. Unfortunately the market has become dominated by a few maximally greedy companies, which sucks but doesn't have a clear solution.
Correct! They are completely different things. Cheap unhealthy burgers != social media platforms. Just because you can draw an analogy between two things does not make them the same, and so having different opinions about them does not make one a hypocrite.
But the better point to make, as the sibling comments have, is that saying "HN thinks X" is entirely meaningless because HN is not one person.
That's why I said "Are the people on HN," I was recognizing that HN isn't a single monolith. Glad to see you read far enough to see that.
As to how they're different, I don't see it. This was a chance for proponents to describe how it's different, but nobody has; they've just said it is. All I see is a group of people begging the government to take away a freedom, instead of allowing parents a choice in how they raise their children.
I don’t really have a dog in this fight, but I think you have presented a false dichotomy, re: meta <> fast food ads <> sugar tax opposition. HN also isn’t a monolith…
I'm not a great fan of fast food, though it didn't seem relevant to bring up in this topic...
However, I actually do draw a distinction between "things you have to go seek out," and "things that come to you in your home/pocket/etc."
I don't particularly care for gambling, and I'm the sort that goes to Vegas for conferences and doesn't even gamble a penny. But I honestly don't have a huge problem with Vegas existing. I recognize some people enjoy gambling, and... fine, whatever.
I do have a very serious problem with "casino apps" on phones/tablets/etc, and the "loot box" mechanics of games that bring that sort of thing into your pocket - without any of the regulation of Vegas. There are some quite strict limits on "how much you can screw over your customers," and casinos have to abide by them. There are no such limits on slot machine apps, and while you can go to Vegas and come away richer (it wouldn't work if nobody did), there's literally no way to achieve that with app-based gambling machines. You 100% lose all your money.
But to assert that nobody cares about fast food in a thread about social media is still... weird.
I reckon the son was gay, hence the sextortion, and his dad's political leanings are probably to the right, which regards being gay as worse than being on social welfare.
From his website - yep, called it.
"Brandon Guffey is an unapologetic conservative who shares our passion for faith, family, freedom and life."
> After his death, the scammers sent Guffey a laughing face emoji and a message using a pseudo account on Instagram after the original one was shut down. “It said, ‘did I tell you your son begged for his life?’” Guffey says. They also demanded money in exchange for the photos.
I wish there was a special prison for these types of people. Jesus Christ.
This points to one of the major issues around this. Why are these scams even possible? Instagram is a social network for real people. Implement a strong form of ID verification. Charge people a dollar for a sign-up. Sure, there'll still be edge cases of people using stolen creds, but it cuts down what, 99.9% of scams? The kid in the article was extorted for $25. Drive the cost of this crap up.
I had a "son" WhatsApp scammer I took for a ChatGPT ride. The "father" wouldn't send money because the son was an addict. The sleazy scumbag happily pressed along, promising rehab, the sun, and the moon.
S3 is truly the chef's kiss vintage Black Mirror season for me. I mean, San Junipero comes directly after it. Even in the quaint year of 2016, though, we unfortunately already had some close real-world examples:
I think the extortionist should be held accountable, but if Instagram cannot identify them, then Instagram should be held accountable. These platforms need more traceability.
If Instagram were liable for anonymous bad actors' illegal behavior, then Instagram would clean up their act with a quickness. There is no incentive to fix anything on these platforms with the current rules.
I tend to agree. If you're going to allow people to anonymize their identities with your tools, and further use your tools to facilitate mass crime that results in damages or loss of life, you need to be accountable for that!
I think the only way out of this is a full ID verification system online, which worries me a lot for a bunch of reasons. Not sure how you de-anonymize for safety without invading privacy (the eternal tradeoff).
You could set up a system that supports both. If you want to be verified, then you opt in and get a special green check by your name. The platform also provides a bunch of tools that let you filter based on the green check, e.g., "don't allow DMs from non-green checks" or "don't show me any content in my feed that isn't from a green-check account." So you have the option to decide how much you want to be in the anonymous versus verified world.
I'd go even further and make DM'ing a privilege for only verified accounts.
You could anonymously read and broadcast, but anything private should require personal responsibility. I don't see how this would affect ad revenue or anything either. I imagine it would only bother bullies, trolls, and scammers.
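As a rough sketch of the opt-in model described above (all names and fields here are hypothetical):

```python
# Hypothetical sketch of the opt-in verification model described above.
# Field names and defaults are invented for illustration.

from dataclasses import dataclass


@dataclass
class Account:
    handle: str
    verified: bool  # opted in to ID verification (the "green check")


@dataclass
class Preferences:
    dms_from_verified_only: bool = False   # per-user, opt-in
    feed_from_verified_only: bool = False


def can_dm(sender: Account, recipient_prefs: Preferences) -> bool:
    """Allow a DM only if the sender meets the recipient's chosen bar."""
    if recipient_prefs.dms_from_verified_only and not sender.verified:
        return False
    return True


# Example: an anonymous account can't DM someone who opted for verified-only.
anon = Account("random_throwaway", verified=False)
prefs = Preferences(dms_from_verified_only=True)
assert can_dm(anon, prefs) is False
```

The design choice is that anonymity stays available for reading and broadcasting, while private channels carry whatever verification bar the recipient chooses.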
I see where this is coming from but making platforms fully liable for user content will make it impossible to start a business that includes (semi-) public user interaction at all. Only FAANG-size companies will be able to afford the legal fees attached to that idea.
Step 1: Make platforms liable—in civil and criminal senses—for content they elect to promote (not just host). "Elect" includes "an algorithm did it". This also includes things like "an algorithm said we should put this ad, which no human at our company has even looked at, in front of this particular person".
There is no step 2. The worst parts of social media and ad clearinghouses (Jesus, all the scams Google promotes and makes money off of) are destroyed, anything remotely defensible survives. Personalized "algo" feeds and "algo" suggestions? Untenable. Customized ads served from a system no human monitors or checks to make sure they're not serving blatant scams at a massive scale? Gone. Both the incentives and ability to try to connect strangers with one another in ways they didn't directly ask for? Greatly diminished.
Ad-supported media survives (for better or worse). People can still communicate just fine. People can still publish without trouble.
> I see where this is coming from but making platforms fully liable for user content will make it impossible to start a business that includes (semi-) public user interaction at all
So? What value have these platforms provided that justifies their existence?
I don't think this is true at all. I think that in an alternate reality where Section 230 never happened then standards would have been created by the industry to tackle the problem.
I'm not sure they should be fully liable but being fully immune is ridiculous. I don't know what the answer is but the current incentives are not working.
There wouldn't have been an industry to begin with. Nothing would have been able to scale, legally, and hobbyist communities would have been annihilated due to bad actors abusing holes in moderation. As will happen if 230 is whittled down, and as is happening in countries where equivalents have been removed.
>But Guffey says he believes that Meta and other social media companies are not being held accountable
100% true.
Very sad to hear, and I hope he wins. But these companies are very powerful, so I doubt things will change. A blatant example is what Apple did in the EU after they lost their court case.
This story is absolutely horrible. I feel so bad for that kid's parents. I had no idea this kinda thing was happening to dudes; I thought it was all teen girls being tricked into sending nudes, and then being extorted.
I don't know what the solution is here. However, every time I think on it, I keep coming back to parental and personal responsibility, which sounds like victim blaming. I kinda hate myself for that, but for every technical solution I can think of, I instantly think of a simple way to circumvent it. So then I wonder how to make the message stick: DON'T SEND NUDES TO ANYONE ON THE INTERNET. EVER!
There are organized "boiler room" operations doing this to kids at scale. They'll trick a kid into sending nudes or even lie and claim to have them or use AI to generate them, then will try to get the kid to steal their parents' credit cards and send money.
Today's dark forest hellscape Internet is not a safe place for young people, and it's getting worse with AI automation of scams and fraud. I wouldn't even post pics of kids online anywhere public since these can be mined to create fake AI blackmail material.
Edit: it's fraud all the way down, too. A lot of these boiler rooms use borderline or actual slave labor. The person doing the scam may not be getting anything for it at all, or might even be captive. So it's fraud perpetuated by fraud victims. At the top is usually organized crime.
I think Robin Williams said it best with "God gave men a brain and a penis, but only enough blood to operate one at a time". I don't think that "never send nudes to anyone" is going to work.
The core problem as I see it is that platforms like Instagram connect every person on the internet to every other person on the internet. Turns out that some of these people are a tad unscrupulous. Limiting this, especially for children and teens, seems like a good first step. Last week Instagram announced you can only send DMs to under-19s if they're following you, which is an okay-ish first step (and the absolute minimum they could do).
But it's also not only about these types of extortions, for example from [1]:
> “Two weeks ago my daughter, 16, and an experimenting creator on Instagram, made a post about cars, and someone commented ‘Get back to the kitchen.’ It was deeply upsetting to her,” he wrote. “At the same time the comment is far from being policy violating, and our tools of blocking or deleting mean that this person will go to other profiles and continue to spread misogyny. I don’t think policy/reporting or having more content review are the solutions.”
I recently watched a documentary[0] about the social media dangers for children. It mostly centers around narratives & interviews from a variety of people young and old, and in my opinion decently avoids trite pearl-clutching over vague assumptions about 'technology' or any particular company. Among other things, I am a bit floored by how much child suicide has increased over the past decades.
Maybe if we weren't so puritanical, this would be less of an issue. So you have a picture of my junk? Ok... nobody wants it and it isn't worth anything. Should I be embarrassed that I have something 50% of other humans have?
It'd be interesting to see the differences in this sort of crime in more open cultures.
There have been some high-profile cases of people standing up to similar scams, including Bezos. It (standing up to bullies - edit: extortionists is a better characterization than bullies) is commendable behavior and needs to be better publicized as such; hopefully it will make some people feel more comfortable doing the same.
Bezos said he married his first wife because he thought she could bust him out of a Venezuelan prison if needed. Then he divorced her and married a newsreader. So I wouldn't bet on him.
Standing up to bullies is the kind of thing that gets young people expelled from school. Something about those zero tolerance policies. We're teaching our kids to be helpless. The only option we give them is to appeal to authority and it absolutely fucks them over when they're involved in a situation because of their own choices.
I think it's mostly that they don't want to get sued for making a bad judgement call, so they default to ham-fisted policies where no judgment is applied. But of course the schools get sued all the time anyways, usually due to stupid policies.
It does not necessarily need to be a picture of anyone's junk. It does not even need to be a picture. So let's focus on the real issue here. Puritanism or whatever, extortion needs to be impossible.
A war on crime can always be used as a scapegoat for other things, but promoting a culture of resilience through acceptance is a weird thing to hijack IMO.
So kind of like getting rid of puritanical moral standards across society? There are some things people deserve to feel shame for (basically anything you intentionally do that hurts other people). Having junk between your legs is not one of those things. Ditto for 99% of the other things people get blackmailed for.
That's a start, but I think it may take more than that.
There are many discussions here about freedom of speech and the value of anonymity to it: the freedom to express ideas without worrying they may explode in our faces later.
I think right-leaning people use "cancel culture" to describe the same category of concerns. At least, I hope that's what they mean by the term.
Businesses and governments say similar things: if every meeting is recorded, people will be less willing to speak up.
A society where we don't care about people having had dumb opinions in the past would help more… but that creates a game-theoretic risk: we need some kind of reputation system to avoid the iterated prisoners' dilemma degenerating into the ordinary prisoners' dilemma.
I don't have any complete answers. But at the same time, don't let the perfect be the enemy of the good, we can certainly make things better.
I mean, one good thing about Trump is that he's reduced the likelihood that future politicians will get "cancelled" for extramarital affairs, or probably even high crimes...
Also, Taylor Swift will probably save us all if she takes on AI nudes; she can help teens by leading by example. That will help change the dialog around shame and stigma.
We also need to help kids with self-acceptance and reduce shame. But yes, Instagram and social media need to do more; that has been clear for a while.
It really does come down to our society having a sick relationship with sex, our bodies and interpersonal relationships in general. I think we are really going to go through some growing pains in the near future when it comes to our 18th century morality in this 21st century world.
Hopefully we can move past restrictive religious ways of looking at the world and actually prevent things like what happened in the article.
The foundation for healthy interpersonal relationships seems deeply tied into the core values of (some) "restrictive religious ways". "Love is patient, love is kind. It does not envy, it does not boast, it is not proud. It does not dishonor others, it is not self-seeking, it is not easily angered, it keeps no record of wrongs. Love does not delight in evil but rejoices with the truth. It always protects, always trusts, always hopes, always perseveres." I would posit that the closer people walk to those ideals, the healthier their society will be.
That said, Christians, in general, would agree with you that law by its very nature has no power to bring about good in people. That's pretty much one of the core tenets. In the end, laws serve only like the double yellow line in the middle of American highways. It's a guide that helps us share the road without hurting others.
The guard rails of commitment, love, and genuine care for others bring about an environment where relationships can flourish. They provide the stability and care that children need to thrive emotionally. They keep us from swerving into relational "oncoming traffic" when we get frustrated or angry, instead encouraging us to stay in our lane and figure out how to work together.
I have wondered if generative AI will make this less of an issue. Victims won't be as afraid of images leaking out if they can plausibly claim that they're fakes.
These scams may also be effective in situations where the victim has a significant other, where the release of private transcripts is not nearly "a picture of their junk".
I think if you want to reduce the shame of people in situations, the absolute best thing to do is to make extortion a crime that law enforcement has the ability to prosecute forcefully. If that's the case, people who are in this situation know that the law and the resources of the law are on their side.
I think this law -> culture change has already been shown to work in similar situations. For instance, when women know that law enforcement will help them investigate sexual harassment/assault, it means they have less shame about dressing/socializing how they want.
I don't want to make light of the situation, a teenager committing suicide is a horrible, avoidable tragedy.
Your comment reminds me of the first time I received the phishing scam where the email claims that your computer was hacked and that the hacker has a video of you pleasuring yourself that they will allegedly send to all of your contacts if you don't pay them money.
After immediately recognizing that it was a phish, my second thought was "I value that money way more than I value my dignity ... so even if this were legit ... sorry grandma!"
It isn't blaming the victim to have a candid conversation around how society and individuals can or could be more resilient. This is how progress happens.
For example, we often educate children to set boundaries and tell adults about unwanted sexual advances. That doesn't mean that children who failed to set boundaries and tell adults deserve to be molested.
I'm not blaming the victim, rather the societal views that enabled this (in part). I too had some body insecurity at that age, but more societal openness would have likely reduced that.
I hope this will encourage the lawmaker to ask more of law enforcement. It's their job to track this down. I'm sure Instagram would comply with whatever warrants were issued to them, even ones I'd personally prefer they not.
It's unfortunate that it took this happening to a congresscritter (privileged with preferential law enforcement, and otherwise blind to how LEOs aren't doing their job) for someone to do something. Unfortunately, they're not asking LEOs to do their job, but rather suing the communication medium, even though the same scam could be run through a different one.
So they track this down to some guy in Russia. Then what? Tracking down these types of criminals is probably one of the least effective things law enforcement could be doing, because it's quite time-consuming and a lot of the time it gives you no concrete result.
I am not confident that 100% of these instances are "some guy in Russia". Is it true that these scammers in question were? Or that LEO tracked them down at all? Or did anything meaningful?
I agree that we ask LEOs to enforce the law, and doing so is a lot of work. There is, though, quite a large discrepancy between what we ask them to do and what they do. So it remains to be seen whether they are doing a lot of work, and whether the work they do do is the work we want them to do, versus the work they want to do (for example, what sort of speed-limit enforcement operations do they run? How much do they prosecute simple drug possession?)
I believe opening themselves up freely and willingly to non-LEO oversight and control would do a lot to answer those questions.
It's not 100%, no, that's why I used "a lot" and not "all".
And even if they are in the US, it's not that hard to be reasonably untraceable with minimal skills. People make mistakes, and maybe every now and then you might catch one person. Whoop-die-doo. It's just not effective or reasonable to track down every single case in depth.
I agree that the job isn't easy, but that doesn't mean it should be ignored. You speak of tracking down "every single case" as if they get close to that metric, but do they? How many do they actually track down? Is it greater than even 1%?
I just don't think they're doing the best they could be doing, is all, but that goes back to my suggestion about them being receptive to civilian oversight and control: we can run the numbers and determine objectively whether they're doing as well as we'd like them to be doing.
Even if you can catch half of the 1% that you track down (hugely optimistic!), what's the point? The impact is barely perceptible; such tiny chances of getting caught do nothing to deter anything. And now criminals will be more careful, so it will be even harder to track them down (and/or everything will shift to areas outside the US). Any wins will be temporary at best. All of this would be a colossal waste of time and effort.
The best thing that can be done is changing the system so these types of scams/extortions are much harder to pull off in the first place.
Any such system changes can be gamed; it's inevitable, so what's the point? Rather than adding more and more layers of technology that will be bypassed anyway, we should step back and ask: what exactly is the problem here?
The problem is that we've already made it illegal, but I don't see law enforcement doing as much enforcing as I'd like them to be doing. I didn't give them permission to throw their hands in the air and cross their arms in frustration and give up yet, and I don't think that any marginal effort would be wasted.
I don't think they're at their capacity either, so from my perspective, the difference is between 0 benefit and >0 benefit for the same annual expenditure on salary. If they needed more capacity after actually exhausting their existing capacity, they could reduce work in things we don't want them to work on.
They could convince folks otherwise, but it would require what I mentioned before: being receptive to non-LEO oversight and control. Then, we would have more accurate information for our judgement of whether they're doing as much as we'd like them to be doing, and what we'd like them to be doing.
When things like this happen I always think that the more important problem is the fact that he chose suicide.
This sort of thing cannot be gotten rid of entirely. It's impossible. They can always be lured into another messaging system that doesn't track or analyze data at all.
I know this would be extremely embarrassing, but it is far from life-destroying (or at least it should be). Within a year, life would've moved on and no one would even care. We all need to better come to terms with this reality and neutralize its impact and the shame of it.
It seems that Instagram in particular has a major problem with scamming. It would be safer to have teens consume all of these services as read-only, using an alternative front-end.
I've been saying this for a long time. Kids under a certain age should have read-only access to the internet in general. They don't need the ability to make posts, YouTube comments, or Instagram content. I really think this is the solution.
If they want to socialize, go outside. The internet is not a toy; it's a different dimension of life. The problem here is that it's still a pretty wild west, and we don't have enough guards to prevent crime. You wouldn't let your kid walk around Philly's Kensington Ave without you, and you sure as hell wouldn't let them walk around talking to everyone there... so why do we let them on platforms that are filled with the same or worse? Sure, IG can be harmless, but it's also a quick search away from whatever content they can (or can't) imagine.
If Instagram can be held liable for not having the magic powers to preemptively police several hundred million people, would it not make more sense to put the man in jail for severe neglect for not giving his son the "don't send pictures of yourself to strangers" talk?
> Sextortion predators typically trick young victims, usually teen boys
That is an interesting piece of data. Up until now, based on other media reporting, I was under the impression that things like this were disproportionately affecting girls.
I've always found this reasoning insane. People are in a position where they want to kill themselves, or feel it's their only option. And someone suggests the problem is access to the means of doing so, rather than what led them to feel that way?
Guns, drugs, knives, rope, exhaust, whatever. These always become the focus when a kid kills themselves. How about the circumstances that led to it? Suicide is still a problem in low-gun-ownership countries.
It is certainly part of the problem, but it's not the root cause.
Yes. Access to and knowledge of firearms dramatically increases the lethality of suicide attempts.
But in this case, the kid was scammed for nudes and then blackmailed with them. Without easy access to guns, he could have survived the suicide attempt, but there's no guarantee.
The root cause has more to do with childhood, internet access, and maybe even parent-child relationships.
To add to the sibling comments, Japan has very restrictive gun laws [1], but a suicide rate (12.2) similar to the US (14.5) [2]. Let me add that I find the instinct to restrict personal freedom as the very first measure very disturbing. As if the loss of freedom is not a harm in itself. More pragmatically, how many people will admit to being suicidal, if the immediate consequence is being locked in a padded room without so much as a shoelace?
To this American's eyes, not mentioning that at all is a glaring omission. I recognize they might not want to in the story of the representative's son (as it might come across as blaming the parents in a recent tragedy), but it's for sure relevant in the aggregate.
This is a yes-and situation (that is, the issues are orthogonal). There are several similar stories where firearms access is not a factor (Amanda Todd is one of the most well-known, https://en.wikipedia.org/wiki/Suicide_of_Amanda_Todd). Various social media networks are implicated as factors.
I'm curious what efforts have gone into finding these scammers. Any? I might be wrong, but I can't imagine they have great OPSEC, especially if they are accepting payment through Venmo... probably a hacked account, but they would try to get the cash out of the hacked account in some manner, which would help target them.
It would be trivially easy to scan these streams of messages through some kind of ML classifier and identify cases of extortion before they result in suicide. Once the platforms identify the perps, they can block them from the platform. I hope this lawsuit makes the cost of platform inaction so high that they build out detection systems.
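"Trivially easy" is doing a lot of work in practice, but the basic shape of such a classifier is simple enough. A minimal sketch, assuming labeled training data exists; the example messages and the threshold below are invented, and a real system would need vastly more data, human review queues, and appeal paths:

```python
# Minimal sketch of the proposed message classifier. The labeled examples
# and the review threshold are hypothetical placeholders for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = extortion-like, 0 = benign.
messages = [
    "send $500 or I share your photos with everyone you know",
    "pay now or these pictures go to your family",
    "hey, are we still on for practice tomorrow?",
    "happy birthday! hope you have a great day",
]
labels = [1, 1, 0, 0]

# TF-IDF features over unigrams/bigrams feeding a logistic regression.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(messages, labels)


def flag_for_review(message: str, threshold: float = 0.8) -> bool:
    """Route high-probability extortion messages to a human review queue."""
    prob = clf.predict_proba([message])[0][1]
    return prob >= threshold
```

The model is the easy part; the hard parts are false positives at billion-message scale, adversarial rewording, and the privacy objections raised in the replies.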
I can't believe you are actively suggesting this. Even in this horrific case.
Okay, so you train your classifier on extortion. Then you train it to detect sexual content. Then you train it to detect potential piracy. Then you train it to detect wrongthink. Then you don't have a free society anymore.
Don't we already do this with email spam? And movie/TV ratings? You could slippery slope any classifier for any purpose into this -- this is why hopefully we have brains to determine good and less good uses of technology.
Or: you stop after you train your classifier on extortion, because society is capable of having and maintaining standards and because not every slope has to be slippery?
There are many other cases in which we allow limited intrusions into individual freedom in order to protect society. For instance, here in the UK, a police officer can enter my house without a warrant if they believe that it is necessary to facilitate an arrest for an indictable offence -- and that has been the case since at least 1984. If your way of thinking was accurate, police should now be routinely entering my house and beating confessions out of me.
We don't have a free society when predators can easily prey on victims like this with no repercussions. That's not freedom for the victims, that's just another form of prison. I personally don't know what the answer is, but it isn't "do nothing to protect freedom".
Social media is not required to have a free society. I don't think the progression you predict is how things would play out. IMO, At some point, people will simply talk in person (well, off-network) to be sociable.
Casual piracy, e.g. link sharing in DMs, would be a tipping point if I had to guess.
Why stop with messages? We have so much more information on people these days than chat history. Why not feed all of it into some kind of opaque classifier that can give us "grades" of human being. Then we could shadowban people below a certain grade from social media commons to keep scum like this out. Doesn't that sound like a fantastic society???