Or, govern this stuff under international treaty. Internet traffic _is not_ like truck traffic. It's more like radio (in some places it is radio). It's the Ross Sea, not I-95. What gives any one nation (or subdivision, like a certain US state known for its leading position at the Darwin Awards) the right to meddle with a global resource like that? Well, as a practical matter, they rarely do. Instead, they usually let the oligarchs who own the ruling elites (as well as the infrastructure) in each country do it: except when it's more convenient to use government as a hammer.
The radio spectrum is a finite resource that can only be (reasonably) consumed locally. The internet is for all intents and purposes infinite.
One of the purposes of government is to oversee the distribution of scarce resources in a way that avoids the tragedy of the commons.
This use case clearly applies to EM spectrum allocation and clearly doesn't apply to the internet, so I think comparing the internet and radio is a flawed analogy.
This. With all of this AI/image-recognition stuff, it'd be relatively easy to let people maintain a list of filters, and when someone posts nudity, profanity, etc., just obscure it in their feed with a "this is something you asked us to filter out (<keyword>), proceed with caution" notice, or hide it entirely. Put the individual in control: prompt them to create a list (if they wish) on next app open, and record a timestamp of if/when they declined to set one.
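A minimal sketch of what that per-user layer might look like, assuming some upstream classifier has already tagged each post with labels like "nudity" or "profanity" (`Post`, `UserPrefs`, and `render` are all made up for illustration; this only decides how to present an already-labeled post to a given user):

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    body: str
    labels: set[str]  # e.g. {"nudity"}, assumed to come from an upstream classifier

@dataclass
class UserPrefs:
    hidden: set[str] = field(default_factory=set)    # categories to drop entirely
    obscured: set[str] = field(default_factory=set)  # categories to warn about

def render(post: Post, prefs: UserPrefs) -> str | None:
    """Return the body, a warning placeholder, or None (hidden) for this user."""
    if post.labels & prefs.hidden:
        return None  # the user chose to never see this category at all
    hits = post.labels & prefs.obscured
    if hits:
        keyword = ", ".join(sorted(hits))
        return f"[this is something you asked us to filter out ({keyword}); proceed with caution]"
    return post.body

# e.g. render(Post("...", {"profanity"}), UserPrefs(obscured={"profanity"}))
```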
As for the concern about disinformation, ignore the content and watch the action. If some guy in the hills of Arkansas wants to think "them dems are lizard people" but doesn't show any signs of taking action, who cares (rhetorical)? If they show signs of action, follow the usual path of legal recourse/escalate to the proper authority.
> This. With all of this AI/image-recognition stuff, it'd be relatively easy to let people maintain a list of filters, and when someone posts nudity, profanity, etc., just obscure it in their feed with a "this is something you asked us to filter out (<keyword>), proceed with caution" notice, or hide it entirely. Put the individual in control: prompt them to create a list (if they wish) on next app open, and record a timestamp of if/when they declined to set one.
Or even without AI tools. All this stuff is a complete non-issue on my extended-friend-group WhatsApps and my family text message threads. It was never a problem when I ran a private forum for people I knew, years and years ago. Mixing private communication with people you know with a global broadcast system seems to be what causes the problem to exist in the first place. Ordinary actual-humans-you-know communication channels don't need some 3rd-party censor, and for the most part don't even call for end-user-tunable censorship tools, beyond being able to leave & form groups or maybe set images to click/tap-to-view or something. Nothing sophisticated, certainly.
But why not? They weren't threatened with legal action (and if they were, the correct solution would be to undo the law they were threatened with). They were attacked by the same moral busybodies who have been clutching their pearls over violent video games since at least when I was a kid. The correct solution here is to completely ignore those people and keep ignoring them.
That's already available. Turns out, admin moderation is still necessary to make the website tolerable.
Besides, boards generally have just 150 threads running at once. If the majority of the live threads are just spammed crap, that makes everything worse. If there are 100 topics with lively discussion and 10 or so spam threads that haven't been culled yet, no problem. If there are 80 spam threads, there's going to be a ton of churn as everyone competes for the other 70 slots: each new thread bumps another off the site. Discussions get started anew, with the same crap filling them up because people think they have to say it every time. This is what happens on fast-moving boards (the toy sketch below models the churn).
And one could ask "Why not change the number of live threads? Or never retire threads?"
To which I'd respond, "If you have to keep recommending changes to fix what your other changes cause, maybe the original change wasn't good."
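To make the churn concrete, here's a toy model of a fixed-size, recency-ordered catalog (the 150-slot figure is from the comment above; real boards add wrinkles like sages and stickies that this sketch ignores):

```python
from collections import OrderedDict

CATALOG_SIZE = 150  # live threads the board keeps at once

catalog = OrderedDict()  # thread_id -> title, most recently bumped first

def post_thread(thread_id: str, title: str) -> None:
    catalog[thread_id] = title
    catalog.move_to_end(thread_id, last=False)  # new threads start at the top
    if len(catalog) > CATALOG_SIZE:
        catalog.popitem()  # the least recently bumped thread falls off the board

def bump(thread_id: str) -> None:
    if thread_id in catalog:
        catalog.move_to_end(thread_id, last=False)

# Fill the board with real discussion, then flood it with spam:
for i in range(150):
    post_thread(f"legit-{i}", "real discussion")
for i in range(80):
    post_thread(f"spam-{i}", "garbage")

# 80 legitimate threads got bumped off; the rest now compete for 70 slots.
assert sum(1 for t in catalog if t.startswith("legit")) == 70
```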
1. Users themselves would have to see the stuff to filter it. This kind of work has burned out scores of human classifiers. There's some really, really horrible stuff out there.
2. Or the job of making content filters would be left to other people. And those people would need to look at the stuff to filter it. And face legal consequences for doing so. And probably not get much compensation.
Who or what group would fill #2? It would either be a group that sought to profit from it, or a group funded by governments, or a group that wanted to be at the node point for the spread of such material, or a group that wanted to influence society (e.g. for spam, or a foreign government seeking to mess with a society, etc.).
It is hard to fathom the sheer volume of horrible content our filters deal with unless you've done some sort of moderation yourself. And usually if you're doing moderation, you're already well downstream of the worst bits.
It takes one mildly determined person and 30 dollars to rent a server farm to post 150 threads of garbage, and there goes everything else on the catalog.
edit: actually, in a world with _zero_ moderation you probably wouldn't even need the server farm, since there'd be no risk of being IP-flagged.
Filtering is a game of constant whack-a-mole. That's why mods and janitors do it: it would be overwhelming if it fell on each and every user. You'd spend the bulk of your time online seeing egregious content and updating your filters, on top of the stress of actually seeing that graphic stuff.
Images of child pornography are not protected under the First Amendment and are illegal contraband under federal law. Section 2256 of Title 18, United States Code, defines child pornography as any visual depiction of sexually explicit conduct involving a minor (someone under 18 years of age). Visual depictions include photographs, videos, digital or computer-generated images indistinguishable from an actual minor, and images created, adapted, or modified that appear to depict an identifiable, actual minor.
> The correct solution here is to completely ignore those people and keep ignoring them.
Until they attack you with the law. You'll quickly find out that ignoring judges leads to bad outcomes.
Now, whatever they say could be total bullshit, and you could defend yourself and even win damages, but not defending yourself at all is how you end up deep in debt or in prison.