
Did the early internet activists blow it? - IAmEveryone
https://slate.com/technology/2020/02/three-decades-internet-freedom-activism.html
======
v7x
I agree with most of the article, but the author faltered in the area I
predicted he would before I began to read. He expresses fear that, without
some restraint, sites might become "like 8chan". To which I have to ask: why
is that a problem? Because it's a hotbed for hate speech? What is hate speech?
I'm sure I don't need to elaborate further on what I see wrong with that. What
people seem to forget/ignore is that hate speech and other forms of taboo
content a) existed before 8chan, 4chan, and other sites with similar cultures,
b) would continue to exist even if these sites were removed, and c) currently
exists on every other digital platform where people may create communicative
content. Taboo content will always exist. You may not like it. I may not like
it. But even if it were made illegal to say anything stronger than "I do not
accept <x>.", people would still go on the same way as before, albeit likely
with more subtlety.

With that in mind, I'd also argue that there should be spaces for "anything
goes" content. If people are going to do it anyway, why not give them their
own space? Let it be an outlet for those who choose to partake in such
content. If we aren't willing to accept certain kinds of content everywhere,
let's at least allow it somewhere. It will still exist regardless of what we
do, but at least if it has its own space, we can separate ourselves if we so
desire.

Lastly, to make sites like 8chan out to have no value other than content some
might find deplorable is, frankly, as close-minded as those whom the author
apparently fears. If you hadn't guessed already, I use 4chan regularly, and
it's brought much more positive value to my life than negative. The technology
board is what got me into programming in the first place. It's through them
that I learned about Linux, the benefits of FOSS, the history of computing,
and skills that have led me to be successful in my life. They are even the
ones who led me to Hacker News. Maybe the messages they gave me were worded
in a way that some deem undesirable. But when looking at the message itself
and not the envelope it's wrapped in, there is value to be found in these
spaces. If the culture is not for you, that's fine! I understand. But please,
don't see that as a reason to condemn those of us who do use these sites. We
have a right to exist too.

~~~
eropple
There certainly are anodyne parts of 4chan--but they're obviously, to the
point where your post comes off as disingenuous, not the problem. Hell, they
could migrate back to Something Awful, like a lot of us who grew up ended up
doing. (Which, just recently, torpedoed FYAD, the last vestige of the edgelord
era. The FYAD types seem to have fucked off to 4chan and elsewhere. Nobody was
surprised.)

Hate speech is abhorrent and its purveyors should be ostracized from decent
society. But you can deal with the crazy screamer on the street corner. The
organizational facet is not merely distasteful but _threatening_. These are
places to air doxxed information. They're places to gin up raids. Remember
"4chan is not your personal army"? It isn't real anymore, if it ever was. For
the last five years, 8chan in particular (largely because 4chan has an
ownership group that maybe-just-maybe is scared of real consequences, which
tells me that _real consequences work_) _has_ been the personal army of a lot
of sadsacks who, just to name one facet, really really _really_ want to make
the lives of women who have the temerity to make or talk about video games
_really_ horrible.

These places are literally-not-figuratively hives for fascists to radicalize
the clueless, and that is a threat at a societal level.

~~~
heartbeats
But they already are. The crazy screamer on the street corner causes actual
harm to you. To avoid 8chan is trivial; just don't go there.

That people get radicalized going there seems like a poor argument. If I
understand you correctly, the principle is the following:

1. Someone holds acceptable views.

2. They visit 8chan.

3. They now hold abhorrent views.

Why should 8chan be blamed for this, or rather, what did they do wrong here?
Don't you agree that people have the right to hold opinions without
interference, and to seek, receive and impart information and ideas of all
kinds, regardless of frontiers?

~~~
eropple
Radicalization is not about _opinions_; it is about _convincing the
radicalized to undertake acts of violence and threats thereof for a political
purpose_. You get, yeah, that these are _also_ places where these little
shitheads gather and decide who to brigade (the aforementioned "oh, hmm, that
lady has opinions about video games, let's try real hard to destroy her life"
dreck that the various chans indulge in when they're bored), and geek each
other up to the point where somebody thinks sending a SWAT team at somebody is
a great idea? And that the management is shockingly hands-off about letting
them do it and have been for years now?

~~~
heartbeats
> Radicalization is [...] about convincing the radicalized to undertake acts
> of violence and threats thereof for a political purpose

I haven't heard it used that way. Wikipedia, abbreviated:

> Radicalization is a process by which an individual comes to adopt
> increasingly radical political, social, or religious ideals and aspirations
> that reject or undermine the status quo or contemporary ideas and
> expressions of the nation.

As in, adopting extreme views. And that raises the question: if you have
100,000 people and tell them all to go read Mein Kampf, then some of them will
undeniably go, "yeah that Hitler guy seems like a pretty smart fellow". What
is the solution? It can't be to say that extremist speech should be banned. If
so, we have to burn most of Western philosophy.

And it seems like "it is morally justifiable to use violence to achieve
political ends" is just another political belief, correlating strongly with
the strength of your political views. So all you are saying is that some
political views should be banned. I don't like this.

> And that the management is shockingly hands-off about letting them do it and
> have been for years now?

The management did nothing wrong. Each site has a different vibe. In the case
of imageboards, it's usually 'anything which isn't explicitly prohibited by
law'. It's worth noting that 4chan bans the behavior which you describe.

Even if they didn't, it raises an important question: where should the line be
drawn?

It seems easy to think of 4chan or 8chan as a discrete entity, which should be
shut down. But say that we had some physical entity, like a pub, which for
some reason attracted an extremely racist clientele who liked to go have a few
drinks and then beat up the immigrants.

Should the pub be held liable for this? What about serverless mailing lists,
where I manually put everyone on the list in the "reply to" field, or Usenet?
Should that be 'shut down' too?

The thing I don't like about this is that both counterparties are willing. If
you're spamming someone, you have person A who sends to person B, but person B
doesn't want to hear from person A.

With an imageboard, you have person A who sends to B, C, and D, but B, C, and
D all _want_ to hear from A. Should they be stopped in this endeavor?

~~~
anigbrowl
Let's take a concrete example: how about people swapping recipes for chemical
weapons and discussing their efficient deployment against unknowing civilians?
Or people discussing and developing (largely anonymously) the logistics of
mass shootings, with a view to maximizing the body count?

~~~
heartbeats
They're just having a chat. It's unethical to censor them. Perhaps they should
be brought in on some other charges, but there's nothing illegal about
talking. They're not threatening to kill anyone, are they?

In practice, you see this as well. There's a lot of people _posting_ on the
Internet about how they're going to do this and that, but in practice most of
them are just blowing off steam.

I get why a site like Facebook, which wants to have community standards, would
like to enforce them, but it should be up to the proprietor.

~~~
anigbrowl
_They're not threatening to kill anyone, are they?_

Yes, that's kind of the point.

------
CapitalistCartr
Freedom is the right to be wrong. If I am free as long as I only do it "The
Right Way", I'm not free. People _need_ to try different things, make
mistakes, be foolish. Without that freedom, we become a(n at least metaphoric)
theocracy.

Society's disapproval cannot be incorporated into law in a free society.

~~~
Barrin92
> If I am free as long as I only do it "The Right Way", I'm not free

This is a shallow definition of freedom: the sort of idea of freedom that
children instinctively have when they're told to go to bed and start throwing
a tantrum. It is largely what the internet was built on, maybe not
accidentally, by people who in many ways were juvenile adults.

There's a deeper version of freedom, a more adult one, that recognises not
only one's capacity to do what one wants, but also responsibility for the
social good at large. Freedom doesn't just consist of running against the
current; it also consists of maintaining enough order that authentic
expression of freedom is possible. A society where everyone is at risk of
being robbed while crossing the street is not free; it's just in chaos.

Likewise, an internet without hierarchies, without controls, and without
enforcement of social taboos, be that by law, by gatekeepers, or through
exclusion, incentivizes the very worst actors to run amok. Social approval
should be incorporated into law (either literally or culturally), because
places that don't do it become uninhabitable (as the current public internet
increasingly is).

~~~
iamnothere
> Likewise, an internet without hierarchies, without controls, and without
> enforcement of social taboos, be that by law, by gatekeepers, or through
> exclusion, incentivizes the very worst actors to run amok. Social approval
> should be incorporated into law (either literally or culturally), because
> places that don't do it become uninhabitable (as the current public
> internet increasingly is).

"Uninhabitable?" To me it seems like the web is plenty full of people,
especially in the gated communities of the social networks. Small community
forums are still running as they always have. I would even say that things
have improved noticeably since 2016, which was a low point.

When you say "uninhabitable", I suspect that is actually code for
"uninhabitable _to me_." I also find certain parts of the web distasteful, and
I avoid them. There are other places where I spend quite a bit of time, and I
don't find them uninhabitable at all, nor do I find them becoming generally
worse off.

The problem with your statement about taboo enforcement is the question of
_which taboos_ do you select? There is no such thing as an impartial viewpoint
here. Creating a moral arbiter with actual power is just creating another
cultural battlefield that will oscillate between left-wing and right-wing
control. In that, it will further entrench the current two-party duopoly and
prevent meaningful change. It will also force businesses to be dragged further
into the conflict, and may eventually lead to explicitly partisan companies--
something I hope to never see.

Frankly I find the mores of contemporary society to be awful. I wouldn't want
their taboos enforced on me. In fact, I would actively resist it using
whatever measures I could muster. But since I have the option of moving to
spaces where dissent and nonconformism is allowed, I choose to do so.

~~~
Barrin92
>especially in the _gated communities_ of the social networks. _Small_
community forums are still running as they always have

Exactly my point: focus on "gated" and "small". What do you think it is that
distinguishes gated communities from the generic Facebook newsfeed? The right
to censor, to judge who gets in or not, to exclude bad actors, to have stake
and identity when commenting, and to set common rules for conduct. The
internet is transforming from a public space into an internet of private
gated communities.

In this, the internet perfectly mirrors declining cities that do not maintain
their commons. A lot of fractured, tribal communities emerge that are not in
contact with each other, the marketplaces and houses degrade, and a huge
amount of resources is spent on simply insulating oneself from everyone else.
It is not a pretty picture.

As for who gets to decide what taboos exist: there is no need for a moral
arbiter. In communities that function well, taboos and rules reflect the
consensus of their constituent members. The fact that Mark Zuckerberg gets to
make institutional decisions for 2 billion people is another sign of
dysfunction.

~~~
iamnothere
This sounds like more of an argument for smaller, niche communities. I agree
with this. How can you contain all of humanity's diversity of opinion and
experience into a single "platform" with coherent norms? You can't. It's a
fool's errand. Small communities do manage their own norms in a way that is
consistent with the values of their members, and they develop their own
methods for doing so; there is no need to create a legal enforcement
mechanism.

------
RcouF1uZ4gsC
One thing that I find amusing is that the same people worried about the
government becoming too authoritarian, want to give the government the power
to stifle “hate” speech, which can easily be defined to mean what the
government wants it to mean.

~~~
makomk
It's almost like, on some level, they don't actually believe that they're the
downtrodden ones sticking it to those with power. Like this is just a talking
point that's used to justify the actions they're trying to take with the power
they claim not to have, and to dismiss any concerns about the morality and
consequences of those actions as concern trolling - after all, if you really
cared you'd be worrying about what the people _with_ power were doing, not
trying to police the powerless - and then thrown aside when it's not useful.

What makes this particular approach really dangerous, of course, is that the
easiest - and perhaps only - way to enact this censorship is if the people
demanding it actually do have power within the existing system and aren't
particularly oppressed. There's some truth to the oft-repeated argument that
this can't be oppressive because of power structures - if, say, a bunch of
women of colour who're prominent for their activism within their community
think this way, that affects no-one. If a chunk of the existing white male
political establishment latches onto this for their own goals (like, say,
defeating the Republicans), on the other hand...

------
cwyers
I don't really like the premise of the article. It seems to revolve around
"how Wikipedia remains so information-rich and useful when the rest of the
internet (in his view) is filled with divisive, corrosive misinformation." I
think Wikipedia has a lot of problems, and coming at it from a perspective of
"how can we repeat Wikipedia's success" cuts us off from talking about a lot
of topics that are important.

The first examples I could remember off the top of my head:

[https://www.gwern.net/In-Defense-Of-Inclusionism](https://www.gwern.net/In-Defense-Of-Inclusionism)
[https://twitter.com/GretchenAMcC/status/1227060591350566912](https://twitter.com/GretchenAMcC/status/1227060591350566912)

------
roenxi
The internet does need regulation; what I'd like to see (but don't expect to)
is a regime where platforms have to be transparent about their moderating
systems.

I have no objection to Facebook, Twitter, Google/Youtube, Hacker News or
WeChat aggressively moderating their platform, or even running social
engineering programs. Their platform, their problem, humans have lived with
that sort of thing for centuries. My issue is that content recommendation
systems are not transparent about why things don't appear.

Zerohedge got booted from Twitter recently. I don't mind that sort of thing
because it is obvious what happened; go to their twitter page and it says the
account is suspended. If Google tried something similar and removed them from
the index there isn't a tool (that I know of) saying "Zerohedge has been
removed from Google's index" and that is upsetting. I'd support legislation
where people had to provide that sort of information in a public and
accessible manner.

I suppose my big complaint is that if the big companies were organised against
a group (maybe Google has an irrational dislike of Russia-based IT companies;
it is possible) then that group has no way of identifying that they are being
treated differently. This will always be a concern for political minorities.

~~~
ivanhoe
I agree with you, but I don't see that as "regulating the Internet" (which is
the idea I don't like at all), because:

1) you're really just aiming to regulate the selective censorship (which
relates directly to existing monopoly laws)

2) why limit it to the Internet? TV and newspapers played exactly the same
game for decades, and they still do on some subjects. When no major newspaper
mentions a certain story, it has pretty much the same effect as if Google
removed it from the index. You need to first hear about something to know
what to google for, and if it's not on a few mainstream media sites, for most
people it just didn't happen.

~~~
roenxi
It isn't that (2) is a bad idea on the face of it, just that a further
legislative push seems unnecessary as the tools are already in place. I assume
the libraries have an index of newspaper articles dating back a few decades
that can be used to chart out what stories get mentioned. A similar index of
TV programming is probably already available. If it isn't it should be.

------
smsm42
> I hear this too-much-free-speech argument a lot these days, but I can’t get
> used to it.

That has been my feeling for the last decade. When I grew up, there was all
this talk about how the Internet would be a truly free medium of information
exchange, and what immense benefits this would bring to humanity. Usually,
there are generations between new things appearing and their effects being
seen, and one can only read in books about what your ancestors thought about
this new thing and marvel at how wrong they were. Now, we live in the happy
time where it takes mere decades.

And so, 30 years after, the Internet did change the world. And it didn't. We
got the awesome freedom of information - and we've got the usual suspects
gnawing at it from all directions, and unlike what early enthusiasts
predicted, they have largely succeeded. Some expected that. What they - and I
- didn't expect at all is that the formerly "progressive" movement, academia,
and large segments of highly educated and cultured people would join the most
oppressive governments in calling to rein in free speech, restrict the
freedom of information flow, and deny people with non-mainstream opinions
access to free speech, threatening with vicious harassment and prosecution
anybody who dares (or ever dared) to voice a controversial opinion.

I remember when freedom of speech debate was about "should we allow the
(actual) Nazis to express themselves publicly" and the answer was "we don't
like them, but yes". Now the debate is "should we allow people who support a
mainstream political candidate to listen to what she has to say within 10
miles of where we are" and the answer seems to be again and again "we don't
like them, so no". And it's even worse on the Internet, because it's not 10
miles, it's everywhere. What happened to all those free speech supporters? Did
they get old and die? Did they only support free speech because they didn't
think anybody would say anything they didn't like? I can't believe they were
so stupid as to think that. So what happened?

------
carapace
> But the reality is that no matter how much money and manpower (plus less-
> than-perfect “artificial intelligence”) Facebook throws at curating hateful
> or illegal content on its services, and no matter how well-meaning
> Facebook’s intentions are, a user base edging toward 3 billion people is
> always going to generate hundreds of thousands, and perhaps millions, of
> false positives every year.

In re: FB, there was an FB engineer saying the other day that it was
impractical to have humans review every post.

I _almost_ responded (but refrained out of a sense of not being snarky) that
the word "impractical" here actually means "uneconomical".

No one's _forcing_ Facebook to be what they are.

> no matter how well-meaning Facebook’s intentions are

They want _money_ at the end of the day.

I know I'm beating a dead horse here, y'all know this already, but the service
is _free_ to entice the "users" who are then _sold_ to the ad/marketing folks,
the REAL users of FB.

It's possible to set up a non-exploitative social network (and indeed, many
people are doing just that) but FB/Twitter/etc. have sucked the air out of the
room and _normal people do not care_ that they are digital cattle. They just
don't, not in any effective operational way.

If somehow FB shut down tomorrow _everything_ their servers do could be done
by other servers (the data would be a problem, archive it, yeah, but then who
gets to access what? That's just more data though, eh?) FB doesn't
fundamentally _add_ anything to the underlying capabilities of the Internet,
eh? There are no "magic angel babies" working there.

I think if you somehow went back in time to, say, the 1950's and tried to tell
people about FB they just wouldn't believe it. They wouldn't believe that so
many hundreds of millions of people would _voluntarily_ give a private company
every detail of their private lives (more than the _Stasi_ worked hard to
collect) for no more than the Internet dressed up in a crappy UI.

But people do crazy things _as long as everyone else is doing it too_ and
there we are.

I don't really have anything constructive to say. Sorry.

~~~
passwordreset
TLDR: $6.25M is all it would cost to implement Orwellian censorship across
Facebook.

TFA says there are "hundreds of thousands, and perhaps millions, of false
positives every year." Let us assume there are a million false positives that
require human intervention every year. There are 50 work weeks in the year, so
to resolve 1,000,000 false positives over 50 weeks requires humans to resolve
an average of 20,000 per week, or 4,000 per day, or 500 per hour, or 125 every
15 minutes.

Therefore, a team of 125 people could theoretically solve this problem by
each resolving 1 false positive every 15 minutes. At a $50,000 salary,
Facebook would need to pay out $6.25 million to solve this.

Therefore, given "a user base edging toward 3 billion people is always going
to generate hundreds of thousands, and perhaps millions, of false positives
every year," it would cost FB ~$6.25M to hire a team of 125 people to resolve
the issue, per million false positives.

That seems economical for a multi-billion dollar company.
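The arithmetic above can be sketched in a few lines of Python. Every input is
the comment's own assumption (a million false positives, a 50-week/5-day/8-hour
schedule, one case per 15 minutes, $50,000 per reviewer), not real Facebook
data:

```python
# Back-of-envelope estimate of the human-review cost, using only the
# assumptions stated in the comment above (not real Facebook figures).

FALSE_POSITIVES_PER_YEAR = 1_000_000
WORK_WEEKS_PER_YEAR = 50
DAYS_PER_WEEK = 5
HOURS_PER_DAY = 8
MINUTES_PER_REVIEW = 15   # assumed: one reviewer resolves 1 case per 15 min
SALARY_USD = 50_000       # assumed annual pay per reviewer

per_week = FALSE_POSITIVES_PER_YEAR / WORK_WEEKS_PER_YEAR  # 20,000 per week
per_day = per_week / DAYS_PER_WEEK                         # 4,000 per day
per_hour = per_day / HOURS_PER_DAY                         # 500 per hour

cases_per_reviewer_per_hour = 60 / MINUTES_PER_REVIEW      # 4 cases/hour
reviewers_needed = per_hour / cases_per_reviewer_per_hour  # 125 people
total_cost = reviewers_needed * SALARY_USD                 # $6,250,000/year

print(f"{reviewers_needed:.0f} reviewers, ${total_cost:,.0f} per year")
```

Running it reproduces the comment's figures: 125 reviewers at roughly $6.25M
per year, per million false positives.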

~~~
jonas21
Facebook already employs 15,000 content moderators, both directly and via
contractors. By many accounts, they're overworked and don't get enough time to
deal emotionally with what they have to see [1] -- and they only review a
fraction of the content on Facebook. 125 people would clearly be many orders
of magnitude too small to review everything.

The "hundreds of thousands, and perhaps millions, of false positives" quote is
referring to the rate _after_ human moderation (and is also just speculation
by the author).

I think people tend to underestimate the scale that would be required to solve
this problem because they tend to underestimate just how terrible people are.

[1] [https://www.theverge.com/2019/6/19/18681845/facebook-moderator-interviews-video-trauma-ptsd-cognizant-tampa](https://www.theverge.com/2019/6/19/18681845/facebook-moderator-interviews-video-trauma-ptsd-cognizant-tampa)

------
justin66
Ought to correct the title; "Godwin" is a name people might be familiar with.

~~~
dang
The submitted title was "Goodwin: Did the early internet activists blow it?",
but we've switched to the article's own title, as the HN guidelines request.

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

------
klyrs
*Godwin

------
hitekker
It sounds like Godwin is trying to distract public attention away from his
selling-out of .org to a private equity corporation.

[https://domainnamewire.com/2020/01/15/vint-cerf-and-mike-godwin-follow-bad-talking-points-for-org-deal/](https://domainnamewire.com/2020/01/15/vint-cerf-and-mike-godwin-follow-bad-talking-points-for-org-deal/)

[https://www.techdirt.com/articles/20200115/11301343739/techdirt-podcast-episode-234-mike-godwin-defends-selling-org.shtml](https://www.techdirt.com/articles/20200115/11301343739/techdirt-podcast-episode-234-mike-godwin-defends-selling-org.shtml)

[https://prospect.org/power/private-equity-corporate-takeover-org-domain-name/](https://prospect.org/power/private-equity-corporate-takeover-org-domain-name/)

~~~
ggm
He can discuss multiple things at once. You think the world is
single-threaded? He has a life outside of his board role. (I also think the
board stuffed up, btw.)

~~~
hitekker
A man who styles himself an internet visionary while abdicating his
responsibility to the future of the internet is likely to have an ulterior
motive when talking about the internet.

However, I also recognize that our community, like other communities, tends to
forgive high-status community members, as long as their lies aren't too
obvious.

~~~
ggm
Well done for simultaneous criticism of him and me. This feels like _"if you
are not against him, you must be for him"_ reductionism. I merely note that he
is capable of being seen, like all of us, as multithreaded.

~~~
hitekker
That criticism wasn’t against you; I was actually thinking about how the
community handled Hans Reiser 10 years ago. If you feel personally attacked,
though, then that might warrant some introspection.

~~~
Dylan16807
If he was being unreasonably well-treated, why did ReiserFS for the most part
get dropped like a hot coal just for being associated with him?

~~~
ggm
I think you inverted the logic. His case is: "Hans Reiser was an outsider, but
when he turned out to be a killer (which does not strictly relate to his FS
work), we dropped him like a hot potato. Contrast that with Mike Godwin, who
did a thing (the respondent thinks is) reprehensible, but because he is a big
name, we give him leeway."

He's making a compare-and-contrast argument, as I understood it.

The difference is, Hans Reiser was a murderer. Mike Godwin is just torn
between his belief in a greater-good outcome and his conflicted role at
PIR/ISOC. I think it's a bit apples-and-fishcakes as comparisons go, but I
sort of get it.

~~~
Dylan16807
"our community[...]tend to forgive"

"I was actually thinking about how the community handled Hans Reiser"

That sure looks like an example to me, not a contrast.

But maybe you're right.

