
Attacked from Within (2009) - antidesitter
http://atdt.freeshell.org/k5/story_2009_3_12_33338_3000.html
======
3pt14159
It's a well-written article, but it's wrong.

> The bottom line is that no active moderation system, no matter how many
> users are empowered to rate each other and each other's comments, can
> preserve community in the face of the multiple identity syndrome inherent to
> online forums.

This isn't true. Depending on what you want to design against, you can use both social signals (downvotes) and a web-of-trust _for singular identity_, plus another one for dealing with spam / trolling, and still keep anonymity and good content. You might get some bubbling at the extremes, but you know what? I don't want to listen to Alex Jones idiots anyway, and with the proper graph analysis techniques you can filter out the Alex Joneses while keeping in the Bill Kristols.
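The web-of-trust idea above can be sketched as trust propagation over an endorsement graph: a few seed accounts are fully trusted, trust decays along each vouching edge, and anyone below a threshold is filtered out. This is a minimal illustrative sketch; the seed set, decay factor, and threshold are all made-up parameters, not anything from a real deployment.

```python
# Hypothetical sketch of trust propagation over an endorsement graph.
# The decay factor and threshold are illustrative assumptions.
from collections import deque

def trust_scores(edges, seeds, decay=0.5, threshold=0.1):
    """edges: dict mapping each user to the users they vouch for.
    Returns the users whose propagated trust clears the threshold."""
    scores = {s: 1.0 for s in seeds}
    queue = deque(seeds)
    while queue:
        u = queue.popleft()
        for v in edges.get(u, []):
            candidate = scores[u] * decay
            # Keep the best trust path found so far for each user.
            if candidate > scores.get(v, 0.0):
                scores[v] = candidate
                queue.append(v)
    return {u: s for u, s in scores.items() if s >= threshold}
```

With a threshold of 0.2 and decay of 0.5, anyone more than two hops from a seed drops out of the trusted set, which is the "filter out the Alex Joneses" effect in miniature.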

Also, a poor man's web-of-trust is the invite-only, ban-both model: if you invite me in and I get banned, you get banned too, and anyone else you invited has their remaining invites revoked.
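The ban-both rule described above is simple enough to sketch directly. This is a hypothetical toy model, not code from any real service; the class and field names are invented for illustration.

```python
# Toy sketch of the invite-only, ban-both model (names are hypothetical).
class InviteGraph:
    def __init__(self):
        self.inviter = {}        # user -> who invited them (None for founders)
        self.invites_left = {}   # user -> remaining invite count
        self.banned = set()

    def join(self, user, inviter=None, invites=3):
        self.inviter[user] = inviter
        self.invites_left[user] = invites
        if inviter is not None:
            self.invites_left[inviter] -= 1

    def ban(self, user):
        # Ban the user AND their inviter, and revoke remaining invites
        # from everyone else that inviter brought in.
        self.banned.add(user)
        parent = self.inviter.get(user)
        if parent is None:
            return
        self.banned.add(parent)
        for sibling, inv in self.inviter.items():
            if inv == parent and sibling != user:
                self.invites_left[sibling] = 0
```

The design point is that the cost of a troll account lands on the person who vouched for it, so invites become a scarce signal of trust rather than free registration.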

We have the science to tackle shitty online behaviour. The problem is that it
destroys metrics for social networks. There are not 335 million people using
Twitter a month, even if there are 335 million "active accounts".

~~~
jancsika
> We have the science to tackle shitty online behaviour.

Can you point me to some extant services that leverage this science?

~~~
3pt14159
I'll do it for you for a reasonable bill rate. I've done it elsewhere, including at an Andreessen-funded social network where I was a CDS.

~~~
jancsika
"point me to" means "please give me a public access point for the service or
code repository for such a service so I can inspect it and see how it works."

~~~
3pt14159
It's hard to service that request because social networks are closed source and most research into webs of trust is academic. Take email; there's an ad-hoc web-of-trust out there, but it's federated amongst a number of actors. I agree we need something open, but the incentives to make one don't exist.

------
jcriddle4
What computer algorithm can be used to code up "...front page position should
be determined by quality of conversation not voting"? Are we talking about
sentiment analysis?

~~~
dragonwriter
Arguably, the HN flamewar detector is a crude and limited algorithmic approach
to exactly that problem.

It's obviously not a _complete_ solution, as it seeks neither to address all
quality dimensions nor to replace voting, instead being a factor alongside
voting, but still...

~~~
jcriddle4
Do you have a link to how the HN flamewar detector works?

~~~
dragonwriter
Not guaranteed to be current, but the “Controversy” section addresses it:

[http://www.righto.com/2013/11/how-hacker-news-ranking-really...](http://www.righto.com/2013/11/how-hacker-news-ranking-really-works.html?m=1)

~~~
jcriddle4
Thanks for the excellent link. Quite a good read. From the article: [In order to prevent flamewars on Hacker News, articles with "too many" comments will get heavily penalized as "controversial"....] Ok, that isn't even a slightly sophisticated method, and it couldn't be extended to decide that some comments are "high quality". In theory, if you wanted to implement article ranking this way, you would need a method that ranks articles based on comment quality. Without an automated method you are really just using humans to do the ranking.
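For context, the kind of penalty being discussed can be sketched as a crude heuristic layered on a gravity-style ranking formula. This is an illustrative sketch, not HN's actual code; the comment threshold and penalty exponent here are assumptions, though the `(points - 1) / (age + 2)^1.8` base is the commonly cited published approximation.

```python
# Illustrative sketch (not HN's actual formula) of a "controversy"
# penalty: stories whose comment count far outstrips their score
# get their rank dampened. Threshold and exponent are assumptions.
def rank_score(points, comments, age_hours, gravity=1.8):
    base = (points - 1) / (age_hours + 2) ** gravity
    if comments > 40 and comments > points:
        # More comments than points suggests a flamewar, not quality.
        base *= (points / comments) ** 2
    return base
```

Note how blunt the signal is: it only looks at comment *volume* relative to votes, saying nothing about whether any individual comment is high quality, which is exactly the limitation raised above.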

