Hacker News
Senate to Vote on Web Censorship Bill Disguised as Kids Safety (reason.com)
68 points by verdverm 47 days ago | 50 comments



Why are kids using social media to begin with?

I am on like 2 "social media" platforms at most. I use hacker news to find cool new stuff in tech and to look at what other people think on a topic, and find that the site is a great tool for finding cool niche topic discussions. And I use Snapchat because my friend moved across the country and his texting is spotty due to his provider. I check Snapchat less than once a week, and while I do frequent hacker news more often....I don't spend a lot of time on here.

YouTube I get. It's a great place to learn math or watch makers build things and experiment. But are people really so into mindless consumption and digital interaction?


Yes, people, and especially kids, are very much into mindless consumption. Whenever I travel (I travel a lot) I see almost every child on the train with a smartphone, mindlessly scrolling through dumb reels, with their parents too lazy to engage them in anything that would actually help them develop.


You're going around peeking at what children are doing on their personal devices enough to have a representative sample? I'm not sure it's the internet they need protecting from...

edit: removed the part that wasn't really the point at all and seemed to draw a lot of noise from people who love to snoop and aren't used to consequences.


> If you did this to me I would smack you.

Violence in response to a legal, non-violent activity? In most free areas, one could use as much force as necessary to stop the threat and apprehend you for this activity.

Nice ninja:

> edit: removed the part that wasn't really the point at all and seemed to draw a lot of noise from people who love to snoop and aren't used to consequences.

You seem to be the one not used to consequences: hit people in public and you may wind up getting shot in most free areas that care about rule of law. End of story.


>> "You seem to be the one not used to consequences: hit people in public and you may wind up getting shot in most free areas that care about rule of law. End of story."

Vigilantism is the opposite of rule of law. Shooting someone because you got hit for acting like you want to steal their property is also not rule of law.

This is all very much beside the point, but if we can't even agree that murder is an excessive escalation then I don't see how we could agree on the boundaries of censorship. This entire conversation is a prime example of why none of us can be trusted to decide what each other is allowed to see online.


> you got hit for acting like you want to steal their property

Good luck arguing this one.

> we can't even agree that murder is an excessive escalation

We're not going to agree: you are not entitled to strike people in public for looking at your phone.

In my area, this makes you fair game to be apprehended for assault, using as much force as necessary to subdue you. Someone who is so unhinged as to hit people in public is not likely to submit to arrest peacefully - nobody said "murder."


This is why I took that part out. You're going off on this wild tangent based on a ridiculous misreading of something I took out because people were misreading it. I could have rewritten it better, but I took it out instead because it was off-topic and would still be off-topic if I made it more clear.

HN has a great guideline here:

>> "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

So you didn't mean you would murder someone for defending their property, because you didn't understand that's what I meant, and didn't know about the guideline. Okay. I accept that. I should have followed it too.

The thing about the guideline is there are two ways to go: your interpretation is correct and going off on a tangent is just going to be fruitless as we've seen here (because clearly I'm an unhinged freak who goes around hitting people over a glance), or you're wrong and making a mess of things over nothing (because I'm a rational human being who only uses violence as last resort).

It's about assuming the person you're talking to is the latter because there's no point talking to them if you're right and it's the former.

Let's move back to the actual topic.


You don't need to be aggressive. One can notice without being invasive, looking is still allowed as far as I know.


I'm not sure how someone can see enough of what kids do on their phones to have such confidence in their view of it without being invasive. I have no idea what most people do on their phones while they're in view, and I have no way to know if it's representative of their overall usage.


You don’t need to see their screens, there are plenty of other signs such as the craned neck, periodic scrolling motion, or more obviously the cascade of grating music and voiceovers as many kids have lost the idea of decency in public. Source: am in the demographic of interest, guilty of constantly scrolling YouTube shorts, and have observed countless others in my demo captured by TikTok and Reels.


I would hope a law with such wide-reaching and destructive effect would have more rigor backing it than vibes and anecdotes like this, but I know that's wishful thinking.


I agree, it doesn’t. And the law should not be passed.


They were not aggressive.


One doesn't need to peek unless one is really dumb, especially when the kids are flipping through reels with the volume on full.


You're willing to support a broad censorship program that will affect everyone and harm marginalized people because you snooped on some kids on a train in a narrow, non-representative slice of their lives. You know nothing about them, but you act like you do.


I never snoop, like I said. And yes, I know about kids because I was a kid too and have suffered the same. Only people living under a rock would play dumb about predators harming kids online.


I was a kid, too. I could spot and avoid the predators because my parents made sure I understood it was a risk, knew how to spot them, and knew I could trust them with any questions I had.

In reality, I encountered more predators offline, but had the skills to deal with it.


This is painfully out of touch. Forbidding a kid from social media is a bulletproof strategy for them to be ostracized, bullied, and isolated.


Peer pressure isn't new, though. It's both something that happened before social media and a problem exacerbated by social media, like being picked on for not being on $athletic team. Resigning yourself to accommodate your oppressors isn't reasonable or healthy, unless you're doing it to build a strategy for dismantling or usurping them.

I do think an outright ban ignores that there are constructive platforms with a semblance of rules and culture that can provide value, and platforms rife with scammers whose job it is to give vulnerable people mental disorders so they remain addicted to their approval. It's pretty easy to tell the best from the worst, and it's generally teachable.

Finding out others' opinions can be important, but they shouldn't be the first or only lens a child views the world through. The wrong subreddits or influencers or streamers know more about their audiences than their audiences know about them, and use it to trap and indoctrinate them under the guise of being a "community". They even make it fun! But not participating also doesn't hurt anyone.

Maybe hearing two year olds all over Target perfectly reciting "hey y'all and welcome back to my youtube channel, remember to like and subscribe for more content" isn't cause for alarm, but I'm more inclined to worry about kids not being able to connect and engage normally to the world around them AFK because of social media than the opposite.


You could say the same about smoking half a century ago.


The actual bill: https://www.congress.gov/bill/118th-congress/senate-bill/140...

> The term “covered platform” means an online platform, online video game, messaging application, or video streaming service that connects to the internet and that is used, or is reasonably likely to be used, by a minor.

This covers quite a bit more than just social media. Sites like Wikipedia and health-information resources seemingly fall under this, though there is an exception for "news" outlets.


I remember back in the early 2000s, here on Airstrip One, when our government of the time used the child-protection strategy to introduce the old Criminal Records Bureau checks on all those people working with children and vulnerable adults.

Being a psychotherapist I had to have those checks done.

There was the "basic check", which gave the employer access to your criminal convictions, and an "enhanced check", which also included under-18 minor offences, police cautions from which no charge was brought, and any information the chief of police deemed important, like whether you had spent time in psychiatric care.

I assumed correctly at the time that the CRB check would infiltrate all areas of our working life.

Even if you are a road sweeper for the local council, you now have to have a CRB check, just in case you come into contact with a child while collecting rubbish. Or take a company that has a contract with the local medical centre to sweep the car park and mow the grass: you guessed it, you need a CRB check to get the job.

The question is: has this protected children?

No!

Can you imagine what would happen if any politician refused to vote for a policy that protected children?


> This means they're legally required to protect minors from exposure to anything that could contribute to a host of "harms," including anxiety, depression, eating disorders, suicidal behaviors ...

Looks like "harms" includes everything associated with living. Is the article even correct?

To me this looks like it will eliminate all advertising and everything else people could buy.

Who decides "harm"? Just about all food in the US causes "harm" in some form or another.


I think the fundamental issue is identification. The government cannot effectively regulate your interaction with the internet if you cannot be identified.

There seem to be different viewpoints: one is that the government should not try to regulate your interaction with the internet at all. Another is that it should regulate you, but only in certain special circumstances, though people differ on what those circumstances are.

For example, many more people want the government to regulate your interaction with the internet when it comes to CSAM.

Some people want you to always be identified so that you can't enter sites that might be deemed too adult for you.

Then the other aspect of this is, for people who want you to be identified on the internet: how exactly does that work? The worst part seems to me to be that this is kind of an afterthought, or poorly designed. Leaking Social Security numbers is not acceptable.

I feel like generally people want the government to stay out of it. Except for some people, there are specific things they DO want regulated. As more and more of our lives go online, we may come to the conclusion that we do want robust identification online.

We have state and federal identification systems based on physical cards and numbers. I wonder if there has been any attempt to modernize these systems to make them cryptographically secure, biometrically verifiable, or anything like that.

I guess practically speaking, due to the extreme abuse and incompetence we have seen from government so far, I am currently in the "please just stay out of it" camp.

But as virtual and augmented reality picks up over the coming years, we may get to a point where so much happens online that we decide we need a robust "online government". Or maybe the online world will prove that something more like anarchy or self regulation works?


> I wonder if there has been any attempt to modernize these systems to make them cryptographically secure, biometrically verifiable, or anything like that.

There has been[0]! ISO 18013-5. My understanding is that individual facts like "isOver18" or "isOver21" can be signed by the issuer (the state) so that you can present proof of age without giving any other information (including precise age), though it's up to the state to actually do this and not just sign e.g. "birthdate".

[0] https://www.mdlconnection.com/implementation-tracker-map/
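The signed-attribute idea can be sketched in a few lines. This is only a toy model: real ISO 18013-5 credentials use asymmetric signatures over structured credential data, while this sketch uses HMAC with a shared key purely to stay stdlib-only, and the issuer key and claim names are hypothetical.

```python
import hmac
import hashlib
import json

# Toy sketch: an issuer (e.g. a state DMV) signs only a minimal claim like
# {"isOver18": true}, so a website can verify age without ever seeing a
# birthdate. HMAC stands in for the issuer's real asymmetric signature.

ISSUER_KEY = b"demo-issuer-secret"  # hypothetical stand-in for the issuer's private key

def issue_claim(claims: dict) -> dict:
    """Issuer signs exactly the requested attributes, nothing else."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": tag}

def verify_claim(token: dict) -> bool:
    """Verifier checks the signature; it never learns the birthdate."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

token = issue_claim({"isOver18": True})  # the issuer never includes "birthdate"
print(verify_claim(token))               # True
print("birthdate" in token["claims"])    # False
```

The key property is that the signature covers only the attributes the issuer chose to release, so a relying party can check "isOver18" without any other personal data ever being in the token.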


> I wonder if there has been any attempt to modernize these systems to make them cryptographically secure, biometrically verifiable, or anything like that.

Not sure about the U.S., but I think the DoD has something like an e-ID for access to classified sites/material?

The Estonian digital ID was an attempt - the original version got broken because they messed up with the RNG, but it was an attempt and I think it's fixed now.

The good parts of EIDAS across the EU are also an attempt.


I don't know how well or badly this bill will work, but it's a good idea to try to stop the mental-health assault being inflicted on users of social media. Humans haven't evolved to be able to handle the targeted messaging that is common on Facebook.


People say that, but I ask: why does the government get to decide that and not parents? I have a kid, and parental controls are a thing. You know what I don't see, even at my kid's young age? Parents using parental controls. I have locked preteens out of these kinds of things. You know what my siblings did? Granted them access, because it was annoying to listen to them whine about not having it. I have told grown adults what a video game entailed. They go, "Oh no, not for my child!" And then they buy the game, marked as mature and stating why it's mature, present their license at the store, and later call me shocked that the game was not at all appropriate for children.

Giving parents the tools isn't working. They already have the tools and aren't using them. This is letting the government decide what's right for your child and even adults across the board because people hate this one part of digital life and are too lazy to teach their kids to interact with it responsibly with the guardrails and restrictions they can totally apply.

Every time this idea of "Oh, kids shouldn't have access to social media" comes up, I get upset because kids only get access through their parents. There are precious few children who have absolute control over their digital life. Their parents have just declined to see what's possible because of the modern version of "I'm not a computer person." Or they can't take their kid's displeasure.

Apparently, people think that at 18 you'll magically have the skills to be responsible on the internet, instead of what will really happen: we send a bunch of adults into the deep end to make mistakes without guidance or the leniency of youth. Something like this law takes away responsible parents' ability to slowly ramp their kid up into a competent digital citizen, and cripples the internet's utility as a whole, especially for those who hold minority opinions like, "man, I really dislike my country because of reasons the larger community is ignoring. Wish the government would treat me like a person." See the LGBT community, but there are more.


It was a conversation with my parents. They told me what they didn't want me looking at and why, and wanted me to talk to them if I ran into something. We didn't respect each other on much, but they at least trusted me to be responsible online and keep an open dialogue, and it worked out.


This answers the question asked in your first sentence:

> You know what I don't see even at my kid's young age?

Sometimes parents suck, or just lack the knowledge or means to do the right thing, and so society, in the form of government, steps in.


I wonder how a law seemingly so broadly written would fare with Chevron deference gone. Theoretically it might be totally unenforceable. In practice, though, I imagine the courts would get the final say on most enforcement actions.


Here we go again. Another "protect the children!" which only curbs the rights and freedoms of everyone - while leaving children with no greater outcome.

This needs to stop. Who keeps pushing these anti-democracy bills to try and control speech and thought online?

And for sure, this will hurt small businesses much more than corporations.


>Who keeps pushing these

The Powers That Be(tm).

There is a reason Congress enjoys next to no approval and the government-at-large are given derogatory nicknames like The Swamp.

Of course, ultimately we are also the ones who vote the swamp lizards in. So it is in fact us the people pushing for these.


> Blumenthal and Blackburn first introduced the Kids Online Safety Act in February 2022

Looks like these are the two with this agenda.

https://www.blumenthal.senate.gov/newsroom/press/release/blu...

The one-pager shows how myopic and ill-informed this truly is, and how little due diligence was done outside of their very narrow viewpoint.

https://www.blumenthal.senate.gov/imo/media/doc/kids_online_...


Amazing!

This article is quick to draw its own conclusion and make sure the reader is swayed toward a negative opinion. But hopefully not everyone will fall for that.

Kids should be kept far from any kind of sexual or predatory content, which otherwise can lead to lifelong trauma. What I see today is small kids heavily addicted to social media, reels, TikToks, and whatnot. Many (90%, from what I see) become so restless if the smartphone is snatched away for even a moment. At least this will (or rather can) protect them from the worst, though of course more can be done.


Was it the content that put the viewing device in the child's hand?

I think more of the blame lies not with the platforms, but with the parents who allow their children so much screen time they...

> become so restless if smartphone is snatched away for a moment


Yeah, the blame falls equally on parents who can't give kids productive engagement to help them grow; instead they let their iPads babysit their kids for them.


My generation had the TV as babysitter, though it was addiction-inducing on a different order of magnitude.


These movements never stop there. "Think of the children!" is always a cover to eliminate freedom of expression entirely.


Would you allow a terrorist organisation or extremists (left, right, or whatever) to fully 'express themselves' in front of five-year-olds? No! Kids always come first. Simple labeling won't address what's important.


Someone has to make that decision if we're going to start censoring. If I don't trust them not to label all queer content as porn (as they always do), why would I trust them here?

Please, if you're going to march us into a new authoritarian regime, at least try out some new rhetoric. This is boring.


> a measure certain to seriously restrict free speech and privacy online for _everyone_.

I'm not saying I like KOSA, but my reading is that it explicitly does not apply to adult-only services (that take reasonable steps to enforce this). If, for example, WhatsApp split into an "open network" and a "verified 18+ network", then the latter would be fine under KOSA even if it is used for general chat and not just porn and sexting. It would even be a start if they enforced the mostly theoretical lower age limit of 13 for Meta products.

(Lest anyone think this is unrealistic, in Israel there are already separate phones and services only for the ultra-orthodox, so they don't encounter anything "unholy" there.)

(And fetlife will probably carry on as usual.)

Personally, I wish more of the reaction to unreasonable "think of the children" plans would be to reply with reasonable plans. If the choice is between doing too much and doing nothing at all, we'll end up with too much somewhere down the line.


To build a verified 18+ network inherently requires undermining some privacy to determine, as you are now collecting ID from all of the users of your (e.g.) fetlife platform. The responsibility and liability should be with parents to prevent their kids from using platforms for which they are not legally old enough, not with the service provider.

(Note that I'm not saying this bill requires age verification, nor that it doesn't: I am merely responding to the suggestion to build "verified 18+ networks" as a way of easily being compliant.)


You will always give up some privacy this way, but there are better and worse ways of doing this with some combination of cryptography and trusted third parties.

There are already online ID verification services for all kinds of apps.

Most of the EU's EIDAS proposal - apart from the "we control your TLS certificates" paragraph - is about building a reliable online identification service that preserves as much privacy as possible. I think one of the ideas is to be able to prove you are (or at least hold the ID card and PIN of) an 18+ EU citizen, without divulging your identity. Part of the bill is then to make it mandatory for businesses that want to do citizenship/age verification to accept this method unless they have a really good reason not to.

Once you've verified someone's ID, if the law doesn't require it you don't have to save it in any form, so all a hacker would get by breaking into the database later is that the boolean flag is_verified is set on a particular account (along with the usual things such as the password hash and the email address).

For comparison, I'm always a bit uneasy when a site needs my mobile number "to verify you're a human and prevent spam" but they definitely won't save it or use it for marketing purposes - I don't trust everyone. But I would trust a properly designed and audited privacy-preserving ID protocol, just like I trust using the same yubikey across different sites.
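The "verify, then discard" pattern can be sketched as follows. This is a toy model under stated assumptions: the account store, field names, and fixed reference date are all hypothetical, and a real system would verify the document against the issuer rather than trusting it.

```python
import hashlib
from datetime import date

# Toy sketch of "verify, then discard": the raw ID document is inspected
# once, and only a boolean flag is persisted. Nothing from the document
# itself is ever written to storage.

accounts: dict[str, dict] = {}  # stands in for the account database

def verify_and_discard(username: str, id_document: dict) -> None:
    """Check the ID, set a flag, and let the document go out of scope."""
    birth = date.fromisoformat(id_document["birthdate"])
    today = date(2024, 7, 1)  # fixed date so the example is deterministic
    age = today.year - birth.year - (
        (today.month, today.day) < (birth.month, birth.day)
    )
    accounts[username] = {
        "password_hash": hashlib.sha256(b"...").hexdigest(),  # placeholder
        "is_verified": age >= 18,
    }

verify_and_discard("alice", {"name": "Alice", "birthdate": "1990-01-15"})
print(accounts["alice"]["is_verified"])  # True
print(any("birthdate" in acct for acct in accounts.values()))  # False
```

The design choice being illustrated is that a later database breach exposes only the is_verified flag, never the birthdate or name that justified it.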


It does not inherently require the network to know anything more than "over 18". It does not require an age verification oracle to know anything other than "requested verification", and even that can be done only once so the oracle doesn't know how many times you provided that verification to someone. See e.g. digital IDs/driver's licenses.


Sure: if we believe that having a giant third-party in the sky that does verification of users itself a giant privacy violation that the web currently doesn't require us to subject ourselves to--and if we also believe that the information they collect won't be invasive, particularly when dealing with an international context--I absolutely agree that we can then use various cryptographic schemes to protect our information from the provider.

However, I certainly don't believe that: I believe that having a giant centralized verification service is inherently dangerous in a world where we are apparently reliant on inanity like CrowdStrike to feel a sense of security; and, in fact, there was even just recently a big hack of an identity provider company.

https://www.eff.org/deeplinks/2024/06/hack-age-verification-...


You don't need a centralized service. In the US, individual states are adopting[0] the ISO 18013-5 mDL standard, for example, which allows each state to be authoritative on identity information for its residents (or, if I understand it correctly, something like "isOver18" signed attributes).

[0] https://www.mdlconnection.com/implementation-tracker-map/


The "giant centralized verification service" can be some entity that already has your information such as your government.


My reading is they don't even need to verify age. The bill refers to users the platform "knows or reasonably should know is a minor" all over the place, and has a bit about a feasibility study for age verification (which can, of course, be done in a privacy preserving way). Unless I missed a particular sentence that didn't include that language, it seems that it's more that if the user tells you their age, or if you are a giant advertising company that stalks everyone in the world to build profiles on them and you determine through your nefarious activities that someone is a child, then you need to take that into account.

If you don't collect information on your users, then you don't know they are minors. The main sticking point is probably (some) games and child focused spaces where perhaps you should reasonably assume they are.

Basically, they need rules that even the likes of 4chan have enforced for decades: mention being underage, or being in high school, etc.? Banned (or in these cases, set to child mode).


> There's a lot more to KOSA than just the "duty of care" requirement. But basically, all of it comes down to the same fundamental flaw. At the heart of KOSA—and so many online child "protection" bills—is the pretense that it's possible to eradicate problems associated with young people if only big tech companies would do better.
>
> Online bullying is bad, of course. But bullying in general is bad, and bullying existed long before the internet. So did teens with eating disorders, problematic media habits, mental health issues, and risky or dumb decisions regarding sex and intoxicating substances.

Ok, I call "logical fallacy" on this argument.

However bad KOSA is, the quoted text feels a bit like saying "we'll never prevent all crime, so we will abolish the police and stop prosecuting _any_ crime". If there's a political amount-of-crime lever, you might never be able to pull it all the way over, but you very much can nudge it a little bit in either direction.

A properly written and thought out bill would not eradicate, but reduce, the cited problems, and it would make big tech, social media and advertising companies do a lot better than the worst excesses we see nowadays. There's a good argument that KOSA is not that bill, but to the extent that the quoted argument would also apply to a better thought out bill, I find the argument overly generalizing and invalid.

Or, for another example, gun crime exists all over the world (organised crime will always find a way - the Yakuza manage it even in Japan). But it's certainly the case that some countries have a lot more gun crimes than others. This argument is essentially saying that even if we had Japan's gun laws, there'd still be some gun crime, which is true but misses the point that there wouldn't be the same amount of gun crime.

Bullying existed before the internet, but the internet has certainly enabled new kinds of bullying, and it's possible the internet has drastically increased the amount of bullying. Eating disorders existed before the internet, forums where you upload a photo and others tell you you're definitely fat and should kill yourself, not so much.

It annoys me when someone tries to shoot down a low-quality bill with an equally low-quality argument. We can do better than this.



