Honest question in good faith - Why should we hold true to some past ideals of technologists 20 or 30 years ago?
I think it’s pretty clear at this point that the notion of the web as a self protecting organism that naturally rejects misinformation and stops bad actors is completely wrong.
I don’t know that I’m 100% in favor of what is being proposed here (partly because I expect there are likely nefarious reasons why governments are pushing it so hard.)
But I also think you can’t ignore that misinformation from anonymous actors has pushed democracy to the brink of collapse, and “that wasn’t the original idea of the web” isn’t the best rebuttal.
I think that this is a situation where "when guns are outlawed only the outlaws will have guns."
Requiring real-world identities to post online has had a number of chilling effects on online speech.
- LGBTQ people have been outed after having their online personas linked to their real-world identities.
- People who need to maintain strong personal/professional life separations have been outed (ex: Slate Star Codex)
- People have had their lives upended by being part of an angry community (ex: Blizzard's RealID revealed people's identities to the gamer community. Love my games, but wow, does that community love doxxing and sometimes SWATting)
- It has not prevented spam / trolls / people being hateful online (ex: Nextdoor)
This has been a long-standing conflict (https://en.wikipedia.org/wiki/Nymwars). The UK's position has always been to provide as much information as possible to the government, for questionable purposes. This is the same government that constantly wants to backdoor encryption so that it can spy on citizens at any time.
> I think it’s pretty clear at this point that the notion of the web as a self protecting organism that naturally rejects misinformation and stops bad actors is completely wrong.
It's an interesting question, but the point isn't as clear to me. First of all, I believe most people are able to see through misinformation. The biggest blame should be laid on algorithmic feeds that optimise for engagement, and are therefore designed to lead people into self-reinforcing loops where their ideas never get challenged. That's the main bad "innovation" that social media platforms brought; the arguments about dangerous ideas, censorship, and bad actors echo all the way back to the dawn of the printing press.
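To make that loop concrete, here is a minimal sketch (a toy engagement model with hypothetical names, not any real platform's code) of why ranking purely by predicted engagement keeps surfacing content a user already agrees with:

```python
# Toy model (assumption, not a real ranker): engagement is highest for
# posts that overlap with what the user already believes, so sorting by
# predicted engagement never surfaces anything that challenges them.

def predicted_engagement(user_views: set[str], post_tags: set[str]) -> float:
    # Overlap between the user's existing views and the post's tags.
    return len(user_views & post_tags) / max(len(post_tags), 1)

def rank_feed(user_views: set[str], posts: list[dict]) -> list[dict]:
    # Rank purely by predicted engagement -- the self-reinforcing loop.
    return sorted(posts,
                  key=lambda p: predicted_engagement(user_views, p["tags"]),
                  reverse=True)

posts = [
    {"title": "Post A", "tags": {"topic1", "view_x"}},
    {"title": "Post B", "tags": {"topic1", "view_y"}},
    {"title": "Post C", "tags": {"view_x", "view_x_extreme"}},
]
user = {"view_x", "topic1"}
for p in rank_feed(user, posts):
    print(p["title"])  # posts matching existing views float to the top
```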
> But I also think you can’t ignore that misinformation from anonymous actors has pushed democracy to the brink of collapse, and “that wasn’t the original idea of the web” isn’t the best rebuttal.
I think this is overstating the role of "anonymous actors" - plenty of misinformation comes from well-known politicians and simply normal people. Are you referring to FB's concept of "coordinated inauthentic behaviour" and troll-farms in authoritarian countries? This is definitely a problem which platforms have to tackle, but again I don't think it's as huge a factor as people make it out to be and certainly has not pushed democracy to collapse on its own.
Interesting that democracy has been pushed "to the brink of collapse" by individual people expressing their thoughts on technology platforms. Either this is an exaggeration, or there are other reasons for the "collapse", like bad actors in government (not regular citizens) providing false information and trying to manipulate citizens.
> Why should we hold true to some past ideals of technologists 20 or 30 years ago?
Those aren't (just) their ideals any longer. I adopted them as my own, because I'd like the world to look a little like that.
Besides, what's 20 or 30 years? Since when do we think that the ideals of ancient philosophers, thinkers, religious figures, the FSF[0], heck even the "UNIX philosophy" have nothing to offer just because they're "old"?
Ideas aren't like milk, they don't spoil if you leave them out of the fridge for a few days.
[0] Yes, I know they are controversial, but their ideas were certainly influential.
> misinformation from anonymous actors has pushed democracy to the brink of collapse
If voters in a democracy are basing their votes on random stuff they read on Facebook or Twitter, the problem isn't misinformation. There is no way to have any platform for distributing information that is guaranteed to only distribute the truth. Any responsible adult should be aware of that and should apply critical thinking to whatever information they see, no matter where it comes from.
I don't know; to me it always seems to start with an account with a long name and a man or woman in a suit, carried on by people in similar online attire with extreme rage and confusion, until it hits someone with an ultra-short name and an anime icon who speaks the shibboleth, at which point either the original misinformation is explained, or the platform aristocracy kicks in.
As far as my experience goes, those with full long names and professionally taken portraits as icons are the easiest to trigger and corner, because they only ever believe, refuse, or double down. They never back down in embarrassment, verify claims, or refute them with the requisite care, and they are extremely angry by default as well. Completely anonymous posters (like on a certain grey website) are very problematic in whole different ways, but even they are not as easy to manipulate.
Facebook, TikTok, and Instagram are a literal cesspool of misinformation that often gets its starting energy from being shared, upvoted, and commented on by fake accounts. I don't think "in any way" is fair.
Verified, authentic, checkmarked, and vetted people still post false, misleading, or outright deliberate propaganda all the time. Truth claims from popular users fuel the fire of public opinion, whichever way the wind may blow.
Dare I say I don't think insistence on authentic identity will improve things.
> I think it’s pretty clear at this point that the notion of the web as a self protecting organism that naturally rejects misinformation and stops bad actors is completely wrong
Well, HN does this thing called shadowbanning, where the user sees their post when logged in, but in a brand-new isolated session (when not logged in) their posts are not visible. The catch is that the user spent a considerable amount of energy writing a comment only to have it ghosted and invisible to other HN'ers, meaning their time was wasted and their ideas effectively censored. Shadowbanning works, and it stops HN getting flooded with spam, but it has the sneaky side effect of quietly censoring some content.
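As a rough illustration of the visibility rule described above, here is a minimal sketch (hypothetical, not HN's actual implementation) in which a shadowbanned author still sees their own posts while everyone else does not:

```python
# Assumed, simplified visibility rule: a comment is shown unless its
# author is shadowbanned, except to the author themselves, so the ban
# stays invisible to the banned user.

from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str

shadowbanned = {"spammer42"}  # hypothetical moderation list

def visible_comments(comments: list[Comment], viewer: str | None) -> list[Comment]:
    # Keep a comment if its author isn't shadowbanned, or if the viewer
    # is that author (they still see their own "ghosted" posts).
    return [c for c in comments
            if c.author not in shadowbanned or c.author == viewer]

thread = [Comment("alice", "Interesting point."),
          Comment("spammer42", "Buy my stuff!")]
print([c.text for c in visible_comments(thread, viewer="spammer42")])  # sees own post
print([c.text for c in visible_comments(thread, viewer=None)])         # logged out: hidden
```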