Don't confuse people "who dared ask, 'why are we doing this?'" with people who want reparations, affirmative action, etc. And that's just a second definition I've thrown out there, alongside yours, when there are many more.
You reference those things as if they are a "bad thing" ... they are not universally "bad things". They are merely individual actions/tools/incentives to right historical wrongs. It's not wrong for people to want to right those wrongs, even if someone else feels that other actions would be more efficient at righting them ... that's a valid discussion to have, not a reason to summarily dismiss the people proposing solutions.
>You reference those things as if they are a "bad thing"
More accurately, as if they are not obviously noble, since you conflated everyone who has ever been called an SJW with someone brave and insightful. But yeah, I do think those things are intrinsically wrong.
Well, online recreation is as real as online discussion. There are people on the other end, etc. On the other hand, if a kid spoils a game in a real-world playground, we don't give them a criminal record for that. On the other other hand, a game company is being harmed when you spoil their games.
I like it. Maybe a system similar to the certificates used in HTTPS. So trusted partners signing photos and videos, maybe news agencies. The partners would be careful, because their existence depends on this trust system.
We will still have untrusted photos and videos, but they can be recognized as such. Maybe browsers could add a small default 'logo' to an image or video, like the green lock now shown next to URLs.
For news we can assume it's signed by the journalists and news agencies who created the content, or who researched its origin and are willing to put their credibility on the line.
Edit: To clarify, this doesn't prove whether something is real or fake. It just makes attribution of the origin easier and creates a traceable chain of trust.
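To make the attribution idea concrete, here is a minimal sketch of the signing step, assuming Ed25519 keys and the Python `cryptography` library; the function names are illustrative, not any real standard. Note it only tells you who signed the bytes, not whether the picture is "real".

```python
# Minimal sketch: a publisher signs a photo's hash with its private key,
# and a client verifies the signature with the publisher's public key.
# This establishes attribution (who signed it), nothing more.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def sign_photo(photo_bytes: bytes, private_key: ed25519.Ed25519PrivateKey) -> bytes:
    digest = hashlib.sha256(photo_bytes).digest()
    return private_key.sign(digest)

def verify_photo(photo_bytes: bytes, signature: bytes,
                 public_key: ed25519.Ed25519PublicKey) -> bool:
    digest = hashlib.sha256(photo_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True          # signed by the holder of the private key
    except InvalidSignature:
        return False         # tampered with, or signed by someone else

# Usage: the news agency keeps the private key and publishes the public key.
agency_key = ed25519.Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
sig = sign_photo(photo, agency_key)
print(verify_photo(photo, sig, agency_key.public_key()))              # True
print(verify_photo(photo + b"edit", sig, agency_key.public_key()))    # False
```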
Maybe with a chain like the one SSL certificates and authorities use? Or maybe there could be a reference database with all the versions signed by a given authority, so that you could verify it's an approved version. Why not go even further and show the same kind of alerts, or block content, if the source/signature can't be trusted?
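A rough sketch of what that two-link chain could look like, again with made-up names and raw Ed25519 keys rather than real X.509 certificates: an authority endorses the publisher's key, the publisher signs the content, and the client only needs to trust the authority's root key.

```python
# Rough sketch of a two-link trust chain, analogous to CA -> site certificate:
# the authority signs the publisher's public key, the publisher signs content.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives import serialization
from cryptography.exceptions import InvalidSignature

authority_key = ed25519.Ed25519PrivateKey.generate()   # the "CA"
publisher_key = ed25519.Ed25519PrivateKey.generate()   # e.g. a news agency

# Authority endorses the publisher by signing its raw public key bytes.
publisher_pub_bytes = publisher_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
endorsement = authority_key.sign(publisher_pub_bytes)

# Publisher signs the actual content.
content = b"video or photo bytes"
content_sig = publisher_key.sign(content)

def chain_is_valid(authority_pub, publisher_pub_bytes, endorsement,
                   content, content_sig) -> bool:
    try:
        # 1. Is the publisher key endorsed by an authority we trust?
        authority_pub.verify(endorsement, publisher_pub_bytes)
        # 2. Did that publisher actually sign this content?
        publisher_pub = ed25519.Ed25519PublicKey.from_public_bytes(publisher_pub_bytes)
        publisher_pub.verify(content_sig, content)
        return True
    except InvalidSignature:
        return False   # a browser could warn or block at this point

print(chain_is_valid(authority_key.public_key(), publisher_pub_bytes,
                     endorsement, content, content_sig))   # True
```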
Especially given that in the modern media landscape, standards for what's considered trustworthy have changed significantly. Think about it: before, the bar was usually 'did this journalism thing professionally, without a conflict of interest', and that's roughly how the likes of Wikipedia define reliable sources.
But now that's not the case, and in many cases it's amateurs who are more trustworthy than the so-called professionals. That's true in science reporting (where it's often academics writing blogs and such outside of their university employment), it's true in technology (where many reliable sources are run by hobbyists), and it's true of the gaming and entertainment media worlds, where fan sites, blogs and YouTube channels are often far more reliable than large media organisations.
A good solution here would basically need to be able to figure out that Science Blogs is more reliable than, say, the Daily Telegraph when it comes to science reporting, or that Serebii.net is more likely to be right about Pokemon than, say, the Guardian or the BBC.
The idea is that you could at least trace news back to the source. Whether you “trust” Fox News or ABC News or whoever is up to you. So this would be a client that shows you news items, each one signed, so you can verify where it came from.
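A toy sketch of what such a client could do, assuming key distribution is handled as in the sketches above (the sources and data here are made up): for each item in a feed, check the attached signature against the claimed source's public key and label the item accordingly. Whether you then trust that source is still your call.

```python
# Toy client-side sketch: label each feed item as verified (with its source)
# or unverified. Keys and feed items are illustrative placeholders.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

known_sources = {}                      # source name -> public key

def register(name):
    key = ed25519.Ed25519PrivateKey.generate()
    known_sources[name] = key.public_key()
    return key

fox_key = register("Fox News")
abc_key = register("ABC News")

feed = [
    {"source": "Fox News", "body": b"story one", "sig": fox_key.sign(b"story one")},
    # Signature doesn't match the body, so this one should show as unverified.
    {"source": "ABC News", "body": b"story two", "sig": abc_key.sign(b"tampered")},
]

for item in feed:
    pub = known_sources.get(item["source"])
    try:
        pub.verify(item["sig"], item["body"])
        label = f"verified: {item['source']}"
    except (InvalidSignature, AttributeError):
        label = "unverified"            # client flags it; trusting the source is up to you
    print(label, item["body"])
```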