

How to make the Internet a nicer place to live - koblenski
http://ericsink.com/entries/feelings.html

======
michaelkeenan
I like this a lot.

I'm even more interested in system-level approaches to the problem. The voting
systems of reddit and Hacker News and Slashdot are an improvement over the
forums that came before, but more is possible. Are there any technologies that
don't exist but could?

For example: an ebay-like system for rating someone's civility after you argue
with them. (This might work better if ratings from well-rated people were
weighted more.)

Or maybe an internet-wide troll-reporting system. There could exist a browser
extension (maybe a part of the Reddit Enhancement Suite) that tags or hides
comments from people who are frequently downvoted, or frequently rated poorly
in the civility system.

Or a narrow AI that predicts your karma based on the words you use. If there
are words that are often in downvoted or rated-uncivil comments (perhaps
"lol", or "bullshit" especially if it's the first word in the comment, or "+1"
especially if it's the entirety of the comment), it could warn you and check
that you want to continue.
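A toy sketch of that warning idea, assuming invented word lists and rules (the real system would learn these from actual karma data):

```python
# Hypothetical civility-warning checker, mirroring the examples above.
# The flagged words and special cases are illustrative assumptions only.

DOWNVOTE_PRONE = {"lol", "bullshit"}

def civility_warnings(comment: str) -> list:
    """Return warnings for patterns often seen in downvoted comments."""
    words = comment.lower().split()
    warnings = []
    if not words:
        return warnings
    if words == ["+1"]:
        warnings.append("'+1' as the entire comment is often downvoted")
    if words[0].strip(".,!?") == "bullshit":
        warnings.append("opening with 'bullshit' is often downvoted")
    for w in words:
        if w.strip(".,!?") in DOWNVOTE_PRONE:
            warnings.append("'%s' appears frequently in downvoted comments" % w)
    return warnings
```

A browser extension could run this before submission and pop a "are you sure?" dialog whenever the list is non-empty.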

Or if you're arguing in a domain-specific forum, like r/politics or
r/economics, you're tagged with your score in a domain-relevant quiz (e.g.
this for politically-relevant news: [http://www.pewresearch.org/quiz/the-news-iq-quiz/](http://www.pewresearch.org/quiz/the-news-iq-quiz/)).

Or a multi-variable voting system that asks for upvotes and downvotes for
civility, apparent truth-seeking, agreement, and more.

Or much more broadly, what society-level interventions are possible? Lead
exposure lowered IQ, and iodine-supplemented salt raised IQ, and IQ seems to
cause (or at least be strongly correlated with) non-violence. Have we already
made use of all the easy biological interventions like that?

Any other ideas? What systems will the most polite, truth-seeking internet
forums of the 2020s use?

~~~
groby_b
Couple of thoughts, unsorted brain dump:

* Ranking/Karma systems can and will be abused. See the digg bury brigade, etc. It's really difficult to separate honest feedback from feedback deliberately tailored to vanish unpopular opinion, or hurt people. (It's not like eBay ratings aren't gamed, either. Or Amazon ratings/reviews)

* Related to that: One person's troll is another person's idol. Post on any political topic if you don't believe that :)

* Related: I believe comment-based rating works slightly better than user-based rating, if only because it takes a lot of effort to silence a user by downvoting every single comment. It still allows burying unpopular opinions, though.

* I believe rating systems are gamed because rating is "free". If you only had limited rating capital to expend, I'd assume most people would not want to waste it on burying things.

* On the flip side, reputation doesn't matter to many people because it is assigned to online IDs, not actual people. I.e. reputation is free, as well.

* Instead of a narrow AI, I'm wondering if a simple Bayesian filter might do the trick for the vast majority of Internet trollery. Yes, it's gameable, but as troll filters become more sophisticated over time, gaming becomes OK. [3]

* An immutable ledger of past reputation transactions might be meaningful. (Yes, I'm on the Bitcoin bandwagon with that, sorry). It does not solve the issue of multiple accounts with different reputations, but it's in each participant's interest to have each individual entity represent _consistent_ reputation.

* I believe any reputation system will have a social component. What your friends think about somebody you never met matters more than what random strangers think, because chances are your friends and you have at least similar value systems.

* Society-level interventions might start with educating people in "direct communication" [1] and "nonviolent communication"[2]. And yes, education in general helps too. I'm not sure "biological interventions" are without ethical issues.

* You _do_ increase the risk of an information bubble - people might confuse civility with agreement. But I do think that over time, that might sort itself out. In my experience, people I consider of high integrity tend to rank other people's integrity based on behavior, not opinion or political leaning. I might be an optimist.

* It seems I use politeness and trust interchangeably. I'll have to ponder that some more, but it seems fundamentally correct to me. I care about respectful politeness, not fake politeness with ensuing backstabbing.

[1] [https://csde.washington.edu/~mbw/direct-and-indirect-communication.pdf](https://csde.washington.edu/~mbw/direct-and-indirect-communication.pdf)

[2]
[http://en.wikipedia.org/wiki/Nonviolent_Communication](http://en.wikipedia.org/wiki/Nonviolent_Communication)

[3] [http://xkcd.com/810/](http://xkcd.com/810/)
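The Bayesian-filter bullet above could be sketched as a tiny naive-Bayes classifier over comment words. This is a toy illustration under invented labels and training data, not a claim about what would work in practice:

```python
# Minimal naive-Bayes "troll filter" sketch with add-one smoothing.
# Labels ("troll"/"ok") and training examples are made up for illustration.
import math
from collections import Counter

class BayesTrollFilter:
    def __init__(self):
        self.word_counts = {"troll": Counter(), "ok": Counter()}
        self.doc_counts = {"troll": 0, "ok": 0}

    def train(self, comment, label):
        """Record one labeled comment."""
        self.doc_counts[label] += 1
        self.word_counts[label].update(comment.lower().split())

    def p_troll(self, comment):
        """Probability the comment is trollish, per naive Bayes."""
        scores = {}
        total_docs = sum(self.doc_counts.values())
        for label in ("troll", "ok"):
            vocab = len(self.word_counts[label]) + 1
            n = sum(self.word_counts[label].values())
            # Work in log space to avoid underflow.
            score = math.log(self.doc_counts[label] / total_docs)
            for w in comment.lower().split():
                score += math.log((self.word_counts[label][w] + 1) / (n + vocab))
            scores[label] = score
        m = max(scores.values())
        exp = {k: math.exp(v - m) for k, v in scores.items()}
        return exp["troll"] / sum(exp.values())

# Example (invented data):
filt = BayesTrollFilter()
filt.train("eat shit and die", "troll")
filt.train("thanks, interesting point", "ok")
# filt.p_troll("eat shit") comes out well above 0.5 on this toy data
```

As the bullet notes (and xkcd 810 jokes), such a filter is gameable; the point is that even a simple word-frequency model might catch the bulk of low-effort trollery.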

~~~
A_COMPUTER
> I'm wondering if a simple Bayesian filter might do the trick for the vast majority of Internet trollery.

Not actually possible, because "trolling" so often isn't in the actual words
so much as in the context. For example, presenting an intentionally
poorly-reasoned argument designed to infuriate people. Or for instance, it is
possible to
appropriate the rhetoric of "social justice" and construct a troll that is
indistinguishable from a legitimate statement, the only difference is the
context, e.g. used by an oppressor rather than the oppressed. Finally, in many
communities it is acceptable as part of the in-group to flame the out-group,
but not vice-versa. One person's "eat shit and die" is acceptable, the other
person's is not.

I suppose this could be mitigated by training your own Bayesian filter based
on your ingroup and your outgroup separately, based on who you designate your
ingroup via social network or something. Sort of like "if I flame an outgroup,
I won't be punished for doing so by people who consider me part of their
ingroup."

------
A_COMPUTER
Sounds to me like
[https://en.wikipedia.org/wiki/E-Prime](https://en.wikipedia.org/wiki/E-Prime)

You construct your sentences to be based on your perception rather than making
universal proclamations. "I didn't like this" versus "this sucks!"

And yet again, the character limit of Twitter means it's really hard to do the
right thing.

------
daveloyall
The only way Microsoft will get out of the bed they made for themselves is if
they can avoid killing new pappys.[0]

Attempting to embarrass the sons of slain pappys is not a good start.

0:
[https://news.ycombinator.com/item?id=8595905](https://news.ycombinator.com/item?id=8595905)

