Nearly 40 states back surgeon general's social media warning labels (theverge.com)
57 points by rntn on Sept 10, 2024 | hide | past | favorite | 62 comments


Soon we won't need ad-blockers, because we'll have so many legal nag pop-ups that you won't be able to see the ads!

Seriously, can we come up with a better solution to these things? Like can browsers notify you about cookies instead in a non-intrusive way? Same with social media nagware.

Just thinking aloud, because websites will become increasingly annoying over time.


> because websites will become increasingly annoying over time.

Websites could emit response headers carrying the appropriate regulatory compliance warnings, which the user agent could surface in a non-intrusive way, such as an address bar icon or a dedicated button, and allow the user to ignore them entirely, upon request.

Although not directly related to this particular article, this would also be a way in which porn bans could be easily enforced (e.g. Virginia) without requiring intrusive user verification - the website software might emit an "X-Regulatory-Info: porn" header, which parental control software could then look for; any website not emitting that header would be considered illegal.
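The commenter's header idea can be sketched as a small filtering function. To be clear, `X-Regulatory-Info` is the commenter's hypothetical proposal, not a real standard, and the category names here are made up for illustration:

```python
# Sketch of the hypothetical compliance-header idea from the comment above.
# "X-Regulatory-Info" is not a real standard; it is the commenter's proposal.

BLOCKED_CATEGORIES = {"porn"}

def should_block(headers: dict) -> bool:
    """Parental-control check: block if the site declares a blocked
    category, or declares nothing at all (treated as non-compliant,
    matching the comment's 'no header means illegal' rule)."""
    info = headers.get("X-Regulatory-Info")
    if info is None:
        return True  # no declaration -> assume non-compliant
    categories = {c.strip().lower() for c in info.split(",")}
    return bool(categories & BLOCKED_CATEGORIES)
```

The point of the design is that enforcement moves to the client side: the filter never needs to identify the user, only to read a declaration the site makes about itself.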


This is kind of what I'm thinking, or even an API for the cases where you need to store preferences like "I only want X cookies," just in some uniform way.


P3P tried this and failed: https://en.m.wikipedia.org/wiki/P3P


Was that one mandated by policy, though? I don't think it was; it was something browsers wished websites would adopt, and since they didn't, now we've got stupid web elements that are unnecessarily shoved in our faces. Meanwhile, tracking people has probably gotten worse and worse over the years (for the consumer).

What I'm proposing is that if we're going to legislate the web, we make it smarter and more intuitive. Unfortunately, the people writing these laws rarely get feedback from people who actually build websites or care about end-users.


I'm a lawyer in this space. Couldn't agree with you more.

Edit to add some more detail. There's a lot of scholarship arguing that more information or transparency can't be bad - and I don't disagree too much - but the cluttering and intrusiveness is too much. Every legal regime (privacy, safety, consumer protection, IP, etc.) clamors to be the most important and worthy of being front-page news. But this results in a brutal web experience. I miss the old, simple websites we had as kids. Can't we all agree that a privacy policy or terms link is enough? That's what links are for.


> Can't we all agree that a privacy policy or terms link is enough? That's what links are for.

Ultimately those are written by lawyers for lawyers. They're extremely long, verbose, and hard for a layman to parse. And that's on purpose. It's a dark pattern to get people to agree because they lack the time to read and to understand [0][1][2].

Reading isn't the same as understanding, and that's a separate can of worms which comes up all the time in legal affairs. Take the recent Disney fiasco. I don't think the average person would read the ToS and come to the conclusion that Disney can more or less kill your loved ones with impunity in their parks if you watch Disney Plus. Not that other lawyers agree either, but apparently that's the understanding of Disney's lawyers.

Not that pop ups, modals, and warnings are much better. But I think a clear warning/opt out pop up is far better than a link buried in account settings or policy/ToS.

[0] https://www.theatlantic.com/technology/archive/2012/03/readi...

[1] https://www.pcmag.com/news/it-would-take-17-hours-to-read-th...

[2] case in point, I'm sure better links exist but I'm short on time and just plugging what I reviewed from the top of Google results.


Yes -- that is the downside of the current privacy policy model.

There's a continuum, though: from a dense, meaningless legal document, to a hyper-loud, intrusive but meaningless pop-up. I'd argue that lawmakers should try to understand the entire set of market factors at work here, and propose laws that incentivize companies to produce useful, meaningful documents and policies.


IANAL, but my understanding of the Disney agreement is that it states that completely clearly. Just like every other terms of use I've read.

What people disagree on is whether it's actually enforceable.


It's not so cut and dried, or maybe it is to lawyers; IANAL. I don't think the average person would believe that forced arbitration would be applied to a separate Disney service, and for all time.

That using a Disney Plus trial promo several years ago would force arbitration in a wrongful death suit stemming from a visit to a physical establishment is, IMO, well beyond a reasonable expectation.


Personally, I would not believe that forced arbitration would be applied to the context of physical harm, nor would I believe that Disney would dare to try it.

Evidently, I was wrong... But the arbitration clause clearly says it applies to the case. You are expecting things that aren't written there (nor in any other of those "contracts" you sign every day). IMO, you should be correct on that, but the world seems to disagree.


We're getting into the weeds a bit, but in regards to Disney Plus, I expect the terms to apply to Disney Plus, for the duration I use Disney Plus. And I expect there to be legal boilerplate covering their bases.

E.g. For physical harm, perhaps I was upright/walking when something scary was on, and in my fright I stumbled/fell/flailed etc and experienced some bodily harm/death. I could totally understand and accept forced arbitration in this instance.

However, I don't expect that to continue for eternity. If I watched Disney plus for a day, canceled my subscription, and went to see Frozen on Ice years later, I wouldn't expect the Disney Plus ToS to have any bearing.

But anyways, back on point: I don't think the average person is legally competent enough to enter into these agreements. Laymen can read and argue what we expect or understand something to mean, but that's very different from what it legally means. This is why I think ToS/policy pages are not good standalone solutions.


I agree with you about what the Disney arbitration agreement should (fairly) apply to.

I'd argue that the ToS/policy page solution can still work, though. We just need higher standards and greater clarity about what those policies are and are not allowed to do, and about where the bright lines are for companies -- like, there should just be a rule about how contracts work in this Disney case, not a rule about where you need to tell people about the specific thing.


My understanding as a layperson is that most of those banners are not mandated by law; it's companies choosing this approach rather than picking other options, such as not storing PII unless strictly necessary for the interaction with the user. So why fault the laws here?


You're right, I guess -- but if you want to build a competitive business in today's world, you simply need to collect a lot of personal information and do analytics on your website. If it were outlawed to place cookies on people's browsers, then it would be easy - Amazon.com would have only strictly necessary cookies, Facebook.com same, etc.

But given it's possible to comply with the law by throwing up obnoxious banners, that's what these companies do, and you're basically forced to do, if you compete with them.

Whether you blame 'capitalism' or 'the marketplace' for those things, or you believe 'the government' has the job of understanding this and putting in place laws that make it easier across the board, is kind of up to your political affiliation.


>Like can browsers notify you about cookies instead in a non-intrusive way

They absolutely can, and they absolutely can act as our user agents, maintaining cookie whitelists and blacklists with a consistent UI for being notified that a site uses cookies and deciding whether to allow them.

But they don't do that.

It would have been 1000x more user-friendly and convenient for our user agents to behave like actual agents, filtering cookies for us, instead of 1000 different flavors of invasive cookie banners and "necessary cookie" notices.

But the EU lawmakers didn't want that, or perhaps they just didn't understand the problem enough.


Have the user agent default to no for everything unless manually allowed by the user via a discreet, intentional toggle, similar to the tracking toggle Apple implemented.
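The default-deny policy described above can be sketched in a few lines. The class name and API here are hypothetical, purely to illustrate the idea of an in-browser allowlist that starts empty:

```python
# Sketch of a default-deny cookie policy in the user agent, per the
# suggestion above. The class and its API are hypothetical.

class CookieJarPolicy:
    def __init__(self):
        # No site may set cookies until the user opts in explicitly.
        self.allowed_sites = set()

    def allow(self, site):
        """User flips the per-site toggle on."""
        self.allowed_sites.add(site)

    def accept_cookie(self, site):
        """Default answer is 'no' for every site not on the allowlist."""
        return site in self.allowed_sites
```

The appeal is that the decision lives in one place with one consistent UI, instead of being re-asked by every site in its own banner.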


> Seriously, can we come up with a better solution to these things? Like can browsers notify you about cookies instead in a non-intrusive way? Same with social media nagware.

I mean, if it's notifying you in a non-intrusive way then it's not really notifying you. That said, I absolutely agree with the spirit of your comment. All the EU regulations accomplished was adding an infuriating dialog about cookies that the vast majority of users just click-through anyway, which I suspect will also happen to anyone subjected to surgeon general's warnings about how bad social media is for you. They already click through the T's and C's without reading; I doubt this will be more interesting.

This, IMO, is just another symptom of our individualist-obsessed culture. We'll tell people how terrible a thing is for them to consume, but ultimately, we also permit them to consume it. I don't think that's a wrong idea in and of itself; I'm generally pro-autonomy. But products that explicitly create addiction in their consumers are, IMO, an ethical gray area, and they undermine the notion that they are being engaged with in a healthy way, which is core to the pro-autonomy argument. If you choose to consume alcohol, knowing the health issues it can cause, then I'm absolutely fine with you doing so, and I extend that to any other harmful substance or product. If you are addicted to it, on the other hand... well, it's complicated.


A check engine light is not a real notification? If a notification is standardized then it does not need to be intrusive.


Depends on the driver I guess... Some people will just ignore it till their car stops driving / turning on... The light also used to double as a maintenance light.


It's still a maintenance light for many things. If you check over the list of diagnostic codes for any given model, they vary widely from a "should check this out next time it's convenient, something looks off" all the way to "PULL OVER RIGHT NOW."

I'm old enough to remember when, to figure out what on earth your car was even mad about, you had to take it to a mechanic and pay for the privilege. That's one aspect I will absolutely grant has improved, thanks both to new cars' infotainment systems that can give much more detailed information about what's wrong, and to the widespread adoption of DTC decoding devices/apps.


> All the EU regulations accomplished was adding an infuriating dialog about cookies that the vast majority of users just click-through anyway, which I suspect will also happen to anyone subjected to surgeon general's warnings about how bad social media is for you.

I tend to agree regarding the current state of things, but most nagging pop-ups are not compliant with regulations, and are bound to get slammed sooner or later. I think they will last a couple of years before being seriously challenged and going away (possibly replaced by something worse, though).


The issue is that you have similar regulation from California, which applies to sites that are not necessarily doing business in the EU, and vice versa. So while in some cases a dialog might not be EU compliant, it might be CA compliant.

This has spawned some services you pay for which handle the dialogs for you.


To be fair, this is probably a good thing. I would not expect a small website managed by a handful of people to know all the regulations inside and out. It's similar to how they need to contract a proper accountant for taxes and balance sheets if they don't have the skills to do it themselves.

I've noticed some of these providers: they have a consistent layout and are localised, even on small sites purely designed for American audiences. If these services are good, then a lot of smaller websites will be good as well, without needing to invest too much time or money in this.


The solution would be honoring Do-Not-Track: https://en.m.wikipedia.org/wiki/Do_Not_Track
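Unlike the hypothetical headers discussed upthread, `DNT` is a real (though now deprecated and widely ignored) request header: `DNT: 1` signals an opt-out of tracking. Honoring it server-side would be trivial; here is a minimal sketch, where the handler function itself is made up for illustration:

```python
# Minimal sketch of a server honoring the real (but widely ignored)
# DNT request header. The function name is hypothetical.

def tracking_permitted(headers):
    """Return True only if the request does not carry a DNT opt-out.

    "DNT: 1" means the user opts out of tracking; "DNT: 0" means the
    user has explicitly consented; absence means no preference.
    """
    return headers.get("DNT") != "1"
```

The irony the commenter is pointing at: the plumbing for non-intrusive, user-agent-level consent already existed; sites simply had no incentive to check it.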


> “A surgeon general’s warning on social media platforms, though not sufficient to address the full scope of the problem, would be one consequential step toward mitigating the risk of harm to youth,” the attorneys general said.

I agree that it's not sufficient; I disagree that it's consequential. Consequential means it has consequences, and I don't see what those are.

Situations like this remind me of that old sketch by The State, where there's a penitentiary with a big, wide-open gate, and all the prisoners are politely asked to please not escape, as a favor to the warden. Our Surgeon General is asking people to voluntarily not abuse social media, when that's the exact thing people are doing already, want to keep doing, and in fact in many cases define their lives around doing. Also, there's absolutely no proposed way to stop them from doing it, or even incentivize them not to do it. Not sufficient indeed.


Should HN have addiction warning labels?


HN isn’t social media. No endless and ever-changing personalized algo-feed showing me stuff I didn’t ask for to try to keep me “engaged”. No alerts. No social graph at the center of it. No “friends” or “follows”.

It’s missing enough of the things that characterize social media that it’s unambiguously not in that category.

[edit] however, separately: yes, it should :-)


HN is absolutely social media, it's just attempting to blend aggregator with online forum.

you can network, dm, and users who post frequently are often recognized here.

you can doom-scroll as much as anywhere else whether it be in the feed or in the comments (so long as you don't have a bias towards different mediums like video).

and also while it might not meet your criteria for social media algorithmically or otherwise, an algorithm is not necessary for someone to engage with the internet in unhealthy ways. the surface area for emotional damage is just different. and that doesn't imply HN is some special place where the only users engaging here are "emotionally well" in both their engagement/usage and in their personal lives.

and speaking as someone intimately familiar with relevant behaviors and presentations, the algorithm exerts influence and exploits, but it is ultimately not the root cause -- it's just like a drug dependency for someone with maladaptive coping mechanisms -- the drug isn't "the problem", it's a symptom, and the "drug dealer" is taking advantage of it.

side-thought:

it always makes me wonder if people manufacturing and distributing what i see as a "need" for social media, while employing HOOKED marketing dopamine strategies to establish said need in its users, ever ask themselves: is this a net-good? should i be doing more with my life? is this harmful?

imagine being a demonstrably great mind and seemingly great person, but working somewhere like FB or IG or TikTok and applying every ounce of your intelligence to maximize user engagement.


I think a lot of your side-thought basically comes down to the fact that there's an overabundance of qualified people who have to choose between a career that's personally fulfilling but pays like shit, and never having to worry about money again, and who see something like Facebook as the way to ensure they don't have to.

Personally, IMHO, if we want more people to have fulfilling careers that are a net-gain for society, we need to foster an economy that finds that valuable and will pay for it as such, instead of seemingly, almost intentionally, placing personal fulfillment/net-good for society on the polar opposite end of what pays well.


That's rubbish. HN has an algo-feed, it has public profiles (and if you stick around here, you start to recognize people), and there are plenty of ways of following other people (hell, you can see everyone's comment and post history). Hell, HN even has a "social credit score" - which is published on your profile.

HN is unambiguously social and unambiguously media.


You have just defined social media to include a multiuser UNIX shell server. Among other things.

I get how people arrive at that kind of place—“social” and “media” cover so very much, singly and together, so, what’s not social media?—but if we define it that way, it’s uselessly broad because it doesn’t exclude enough. Also, it includes tons of sites that pre-date (or are similar to sites that pre-date) the term “social media”, and aren’t the kind of thing the term was coined to refer to, like HN. Why even have the term?


Okay so... what is social media, then?


Strong focus on a social graph (this is the part that brought "social" into the phrase), follows, re-sharing, personalized feeds that make selections largely to increase engagement. Some of these I mentioned in my first post. They're major elements that characterize social media, and distinguish it from other things that are social, are media, or are both (but aren't "social media" - the thing the term refers to, which is different and distinctive enough that it's nice to have a label for it).


This sounds a lot like trying to define assault weapons. There are a few features you don't like (which many people do) so any website with too many of those features is Evil. Though I suspect it's more emotional. Is Whatsapp social media? What about email lists? I'm also curious to know what you think of HN's "social credit score".


I imagine their thought process is "Social media is what the stupid people use, and I'm not a stupid person. I use HN, so HN isn't social media," and then they try to back-solve a definition from there. When in reality, we're all human.


It's a warning about associations with anxiety, suicidal ideation, and other mental health issues, not addiction. But news in general, not just Hacker News, has seemingly been shown to have negative mental health effects, and no legislature I'm aware of has ever tried to get news providers to add a warning label that reading the news might make you sad, distressed, afraid, anxious, and whatever else it does.


This website is known by the state of California to cause flamewars.


This will most likely end up like Prop 65, where the warning is completely ignored and, if anything, makes people care less about the important warnings, like those on wine bottles. A surgeon general's warning makes sense on a bottle of wine, to prevent pregnant mothers from causing their children to have FAS or worse.

I'm generally a proponent of government forcing companies to improve consumer knowledge, but only for matters of simple fact, such as nutrition labels or the recent ISP speed labels.[0] I could see such labels also making sense for e.g. laptops or mobile phones, but there is no evidence that those manufacturers are hiding their specs, so it seems unnecessary.

Similar arguments could also be made for movies and video games. In those cases, companies voluntarily provide age restrictions, such as the MPA and ESRB rating systems. This is perhaps infeasible with mass user-generated content like social media. However, I'm not sure the issue people have with social media is the "content rating" per se, but other issues such as bullying or addiction (taking these claims at face value). Lastly, I have read that the evidence for a teen mental health crisis caused by social media is not as substantial as claimed in news media, but I haven't looked into it deeply.

[0] https://arstechnica.com/tech-policy/2024/04/starting-today-i...


I'm sure it'll work as well as it does for Tobacco and Alcohol.


Compelled speech. Strike it down, SCOTUS.


The Ninth Circuit struck down something adjacent to this last week (California's AB 587, with a social media compelled-speech provision), if you're looking for news to cheer you up!

You'll rarely read uplifting stuff about 1A on HN; this crowd does not like to upvote those stories. The AB 587 law had 400-comment threads excitedly cheering it on, and total silence when the courts overturned it. You'd get the wrong impression if you got all of your tech news from HN.


Pack-a-day smokers do not give a damn about the black box (death label) warning on cigarettes.

Minors by definition do not have any responsibility, nor do they have any perspective on consequences; they are immortal in their own minds.

The only answer is to make parents and guardians responsible: start with fines, and end with prison if what you are trying to stop is so dire that it is destroying lives.


> Pack-a-day smokers do not give a damn about the black box (death label) warning on cigarettes.

Probably not, but I've seen not-a-pack-a-day smokers stop smoking partly because of the warning labels. It's a constant nudge, and it makes some people who are not that deep into the addiction wonder whether it's worth keeping the habit after seeing the throat cancer image for the 500th time.

It's a nudge for the ones that might pick up on it. It's never going to be perfect (and it isn't even designed to be), but at a societal level, if that label made some thousands of people consider quitting smoking, it's already done its job.

Combine it with other nudges, such as increasing taxes on cigarettes to absurd levels, education campaigns in schools, and forbidding advertisement, and changes happen.

The label is just one part of a whole strategy; judging it in a vacuum is the wrong approach.


We should all be wary when non-medical politicians start campaigns involving medical issues. There is no scientific evidence or consensus for the position these politicians are pushing. The Verge even mentions this offhand in their write-up: "with the exception of state-level rules demanding adult sites add unproven health notices about pornography." This is exactly the same type of situation in terms of being unsupported non-science.

But at least it's just a label, and not the use of violent force, as with states and their political campaigns using pornography as justification/incitement of fears.


Social media is of low value. Best case, political action beats something toxic into submission. Worst case, it does to social media what Musk did to Twitter (hollowing it out). Low risk, low/high reward. If social media disappeared tomorrow, the world would not be worse off (although those working for it and investing in it would be).

Edit: @VyseofArcadia Thank you! Words are hard.


> hallowing it out

FYI you probably mean "hollowing it out". "Hallowing" is to sanctify or make holy, which is pretty much the opposite of what Musk did to Twitter.


Social media is of immense value to humanity because it allows rapid, decentralised transmission of information. It's why, for instance, young Americans have been making much more of a fuss about the current genocide in Gaza than any previous one: the flow of information is no longer filtered through a small number of self-interested media corporations. The fact it pushes some psychologically unstable individuals over the edge is no justification for trying to destroy the whole system; the priority should be on getting those people the treatment they need.


> because the flow of information is no longer filtered through a small number of self-interested media corporations

Isn't it, though? It seems to me, particularly due to the fact that people now overwhelmingly get their news from social media, that we've merely shifted which self-interested media corporations do the filtering.


> Social media is of low value

...to you. Everyone is a Libertarian except when it comes to other people's preferences.


Indeed, it depends on what type of value he meant. A lot of people would lose their incomes if there were no social media. It is not really about being social or communicating. Social media is the subset of corporations that have established legal ways to enable the transfer of money between people on the internet. That is why non-profit social networks never rise beyond a fractional percent of usage (e.g. Mastodon's 1 million): they don't ever try to solve (and cannot solve, due to KYC) the primary use case of social media being a job.


That’s why I vote for folks who want to strongly regulate and potentially dismantle these orgs. That’s democracy (assuming enough others seek the same harm reduction).

I feel the same about tobacco, alcohol, and other industries targeting vulnerable humans. Have to defend the human from systems built to extract while harming in the process.


> who want to strongly regulate and potentially dismantle these orgs. That’s democracy

Voting for someone who intends to exceed the bounds of the constitution is not democracy. That's just a hired bully.

> others seek the same harm reduction

You're free to reduce your own harm. You are not free to prescriptively dictate an acceptable level of harm to others.

> targeting vulnerable humans.

All those other things you mentioned need well marketed and broadly advertised lies to function. Why do you suppose social media does not need the same?

Aside from that, wouldn't your goal be to make humans less vulnerable? Or are we perennially going to bully everyone else over the imagined needs of this class while constantly ignoring the rest of their plight?

"We don't care that you're vulnerable. We care that we can use you as an excuse to get what we want."


We already regulate many things for harm, by age (alcohol, tobacco, firearms, recreational drugs), and broadly without regard to age (meth, fentanyl, etc). I think you are confused with regards to current state vs the way you wish the world was. The discussion is adding additional items to that regulatory list.

> If something changes the pathways in our brains and damages our health — and if it does so to Americans on a vast scale — it should be regulated as a threat to public health.

https://blog.petrieflom.law.harvard.edu/2021/12/03/regulatin...

https://www.mcleanhospital.org/essential/it-or-not-social-me...

https://health.clevelandclinic.org/is-it-possible-to-become-...

It’s still democracy even when you aren't happy with the outcome. You got the vote. That’s the democracy part, not the winning.


> We already regulate many things for harm

And are there any politicians who are making a career out of that advocacy?

> I think you are confused with regards to current state

I think you are confused about the context of the conversation vs. what you wish the conversation was.

> The discussion is adding additional items to that regulatory list.

And the items on that list and the proposed additions are _significantly_ different. In particular, most of those do not enjoy constitutional protections, and the proposed does.

> It’s still democracy even when you aren't happy with the outcome

The constitution would disagree with you.

> You got the vote

I'd rather have the rights that are guaranteed to me, and I would like to know that neither you nor your strongman politicians can take any of those away from me, regardless of how many slips of paper you manage to collect.


> Voting for someone who intends to exceed the bounds of the constitution is not democracy. That's just a hired bully.

As the constitution does not grant freedom from regulation to corporations, regulating corporations is well within the powers granted to states by the 10th amendment of the constitution.

> You're free to reduce your own harm. You are not free to prescriptively dictate an acceptable level of harm to others.

Again, if a society (city, county, state, etc) votes to do so, they are in fact free to do so. If a society agrees that a person is causing harm to others, then that person is fined, locked up, or (in some states) executed in an attempt to both cease the harm and punish them for causing the harm.

If a person such as yourself doesn't like the above, there isn't anything keeping you in that society except the benefits you gain from being a part of the society.


> Voting for someone who intends to exceed the bounds of the constitution is not democracy.

Elected officials proposing new legislation is pretty firmly within the bounds of the constitution. Is there something specific you object to?


>Voting for someone who intends to exceed the bounds of the constitution is not democracy

It is democracy; democracy is mob rule. That's why the US is a republic, not a pure democracy.


True, there's a lot of misinformation about porn. I do some amateur/OnlyFans stuff, and I have several friends who work for a big studio. I've never heard of any abuse going on in the industry itself.

It is true that people with prior abuse are more likely to be in this industry; that does seem pretty common. But I think it's a coping mechanism too. Everyone I know is happy to do the work anyway. The environment also seems really chill; they shoot at these luxury villas and afterwards hang out in the pool, etc.

Of course, N=1, but the whole stigma of it being a cesspool of abuse seems more politically/religiously motivated.


There is some abuse, but I don't have reliable statistics to say whether it's worse than, say, the music industry.

What is true is that very often the people who complain loudly about porn don’t actually give a fuck about the women and would want them in the kitchen anyway. It’s like nationalists pretending to care about the welfare of migrants when arguing that they should be kept somewhere else. It’s pure hypocrisy.


> say the music industry.

Yeah, or the mainstream film industry, with characters like Weinstein and the others.


I had this in mind as well. Or academia, which I know better and still has a lot of issues, even if it rarely goes all the way to rape.



