
Eigenmorality - vinchuco
http://www.scottaaronson.com/blog/?p=1820
======
zimbu668
[https://news.ycombinator.com/item?id=7925375](https://news.ycombinator.com/item?id=7925375)

------
navait
My game theory teacher wrote a book, Global Collective Action, where he
modeled global warming as a game between n countries and arrived at an
n-person prisoner's dilemma. The game is essentially the same.

And I think this reveals the problem of global warming: the actual
consequences are far enough away to feel distant, but emitting carbon is easy
now, and each person's footprint is small enough. Denialism is a symptom
rather than the problem itself: a rationalization of our actions to justify
the now.
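
The n-country emissions game can be sketched with a standard linear
public-goods payoff (my parameter choices for illustration, not the model from
the book): each country either abates at cost c or emits freely, and each unit
of abatement yields benefit b/n to every country, with c > b/n but b > c.

```python
# Toy n-player prisoner's dilemma as a linear public-goods game.
# Parameters b (total benefit per abater) and c (cost of abating) are
# illustrative; the dilemma needs c > b/n (defection dominates) and
# b > c (universal cooperation beats universal defection).

def payoff(cooperates: bool, num_cooperators: int, n: int,
           b: float = 10.0, c: float = 3.0) -> float:
    """Payoff to one country, counting itself among cooperators if it abates."""
    return (b / n) * num_cooperators - (c if cooperates else 0.0)

n = 10
# Whatever the other n-1 countries do, emitting pays strictly more...
for others in range(n):
    assert payoff(False, others, n) > payoff(True, others + 1, n)

# ...yet everyone abating beats everyone emitting, for every country.
assert payoff(True, n, n) > payoff(False, 0, n)
```

Each country's dominant strategy is to emit, so the unique equilibrium is
universal emission, even though universal abatement would leave every country
better off.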

So, I guess the problem with the eigendemocracy system is that it has no
protection against mass self-delusion, though I think the same could be said
of any system. Could the eigendemocracy also break people into completely
unconnected networks, each thinking the other n networks are evil?

~~~
baddox
Did your game theory teacher have any ideas for a type of government or
society that could be expected to do a better job with climate change? Climate
change seems like the final boss of the public good problem, and I can't think
of any society likely to do a very good job with it (without some sci-fi
paradigm shift).

Functioning democracy, by which I mean a government whose policies closely
match the desires of its population with each individual weighted equally,
should be expected to do a bad job, because each individual accurately
estimates that he or she is better off voting to _not_ solve climate change.

Dictatorships or non-functioning democracies, by which I mean governments
whose policies are determined by a small group of people, should be expected
to do a bad job, for the usual and obvious reasons.

~~~
navait
He told me to not buy land near oceans.

~~~
baddox
Perhaps it would be prudent to buy land that _will_ be near oceans.

------
kurotetsuka
Sounds sort of like something I've been working on - Hyphaelia[0]. Users
publish both respect and trust ratings for other users, and just respect
ratings for content. This allows me to calculate individualized rankings of
content for any user, given their direct trustees or, alternatively, other
specified entry-points into the network.
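
A minimal sketch of one way such individualized rankings could be computed
(this is my guess at the general shape, not Hyphaelia's actual algorithm):
trust is propagated outward from a user's direct trustees with attenuation at
each hop, and content is then scored by trust-weighted respect ratings.

```python
# Hypothetical data shapes: trust[a][b] is how much user a trusts user b,
# respect[u][item] is user u's respect rating for a piece of content.
trust = {
    "alice": {"bob": 1.0},
    "bob": {"carol": 0.5},
    "carol": {},
}
respect = {
    "bob": {"post1": 1.0},
    "carol": {"post1": 0.2, "post2": 1.0},
}

def propagate_trust(source: str, damping: float = 0.5, depth: int = 3) -> dict:
    """Walk outward from `source`, attenuating trust by `damping` per hop."""
    scores = {source: 1.0}
    frontier = {source: 1.0}
    for _ in range(depth):
        nxt = {}
        for user, weight in frontier.items():
            for trustee, t in trust.get(user, {}).items():
                gain = weight * t * damping
                if gain > scores.get(trustee, 0.0):
                    nxt[trustee] = scores[trustee] = gain
        frontier = nxt
    return scores

def rank_content(source: str) -> list:
    """Rank items by the trust-weighted sum of respect ratings."""
    user_trust = propagate_trust(source)
    totals = {}
    for user, items in respect.items():
        for item, r in items.items():
            totals[item] = totals.get(item, 0.0) + user_trust.get(user, 0.0) * r
    return sorted(totals, key=totals.get, reverse=True)
```

From alice's entry point, bob's ratings carry more weight than carol's (one
hop versus two), so alice's ranking puts post1 ahead of post2.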

Anyways, pretty interesting - I'll be sure to hit up the author once I'm done,
to see what he thinks.

[0]
[https://github.com/kurotetsuka/hyphaelia](https://github.com/kurotetsuka/hyphaelia)

------
ErikAugust
I've been fascinated with the idea of applying PageRank to things. This is
much deeper than I would venture, very thought provoking!
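
For concreteness, the core mechanic the article borrows from PageRank
(circularly defined scores recovered as an eigenvector by power iteration) can
be sketched as follows; the graph and parameters here are illustrative only:

```python
# Minimal power-iteration PageRank: a node's score is the damped sum of its
# in-neighbors' scores spread over their out-degrees. Eigenmorality swaps
# "links to" for "cooperates with" but keeps the same fixed-point structure.

def pagerank(links: dict, damping: float = 0.85, iters: int = 100) -> dict:
    """links[u] is the list of nodes u points to (or 'cooperates with')."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if not out:  # skip dangling nodes in this toy version
                continue
            for v in out:
                new[v] += damping * rank[u] / len(out)
        rank = new
    return rank

# Two mutually linking nodes plus one node nothing points to: the pair
# ends up ranked above the unendorsed node.
r = pagerank({"a": ["b"], "b": ["a"], "c": ["a"]})
assert r["a"] > r["c"] and r["b"] > r["c"]
```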

It breaks down for me, however. Cooperation with others is by no means the
whole of morality, and PageRank is also more dynamic than he lets on in this
article.

Take Socrates: he cooperated with his death sentence, laid upon him by a
majority. For what? Devotion to truth?

Thought experiment: Morality Engine Optimization. Game's over. It breaks down.

------
MarkPNeyer
[https://github.com/neyer/respect](https://github.com/neyer/respect)

~~~
kurotetsuka
You might be interested in something I've been working on - hyphaelia[0].
Users publish both respect and trust ratings for other users, and just respect
ratings for content. This allows me to calculate individualized rankings of
content for any user, given their direct trustees or, alternatively, other
specified entry-points into the network.

[0]
[https://github.com/kurotetsuka/hyphaelia](https://github.com/kurotetsuka/hyphaelia)

~~~
MarkPNeyer
Yes, I very much am. My ultimate goal was along those lines, but more complex:

[https://github.com/neyer/dewDrop](https://github.com/neyer/dewDrop)

------
baddox
What a great article. I have so many random comments.

> If you have a few million dollars, you can even set up your own parody of
> the scientific process: your own phony experts, in their own phony think
> tanks, with their own phony publications, giving each other legitimacy by
> citing each other. (Of course, all this is a problem for many fields, not
> just climate change. Climate is special only because there, the future of
> life on earth might literally hinge on our ability to get epistemology
> right.)

And yet, the author presumably believes that only (or mostly) the climate
change deniers do this? How can an unbiased but completely uneducated
(regarding climate science) person possibly know?

> Now, would those with axes to grind try to subvert such a system the instant
> it went online? Certainly. For example, I assume that millions of people
> would rate Conservapedia as a more trustworthy source than Wikipedia—and
> would rate other people who had done so as, themselves, trustworthy sources,
> while rating as untrustworthy anyone who called Conservapedia untrustworthy.

In what sense is that "subverting," unless you're assuming that one source is
"truly" more trustworthy than the other?

> But here’s the thing: anyone would be able to see, with the click of a
> mouse, the extent to which this parallel world had diverged from the real
> one. They’d see that there was a huge, central connected component in the
> trust graph—including almost all of the Nobel laureates, physicists from the
> US nuclear weapons labs, military planners, actuaries, other hardheaded
> people—who all accepted the reality of humans warming the planet, and only
> tiny, isolated tendrils of trust reaching from that component into the
> component of Rush Limbaugh and James Inhofe.

How does that help? If there's a "truly true Truth" out there, why should an
unbiased, uneducated person trust the broad connected graph over the smaller
isolated one as the source of that Truth?
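
For what it's worth, the "see it at a click" check the quoted passage imagines
reduces to computing connected components of the trust graph and comparing
their sizes; a minimal sketch with made-up users:

```python
# Connected components of an undirected trust graph, largest first.
# The article's claim is that the mainstream and the parallel world would
# show up as one huge component and tiny separate ones.

def components(graph: dict) -> list:
    """Return connected components (sets of nodes), sorted largest first."""
    seen, comps = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(graph.get(u, []))
        seen |= comp
        comps.append(comp)
    return sorted(comps, key=len, reverse=True)

# Hypothetical symmetric trust edges:
trust_edges = {
    "laureate": ["actuary", "planner"],
    "actuary": ["laureate", "planner"],
    "planner": ["laureate", "actuary"],
    "pundit": ["blogger"],
    "blogger": ["pundit"],
}
main, *rest = components(trust_edges)
assert len(main) == 3 and all(len(c) < len(main) for c in rest)
```

Of course, this only measures divergence between components; it says nothing
about which component holds the "truly true Truth," which is exactly the
objection above.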

> As long as I’m fantasizing, the point would be that, once people’s
> individual decisions did give rise to a giant connected trust component, the
> recommendations of that component could acquire the force of law.

Oh. Crap. That's terrifying.

> In other words, Google takes a link structure that already exists,
> independently of its ranking algorithm, and that (as the economists would
> put it) encodes people’s “revealed preferences,” and exploits that structure
> for its own purposes.

If you like the idea of revealed preferences (I do), let's try a polycentric
legal system where organizations offer services like law enforcement and
dispute resolution and people can choose to pay for those services from
whichever provider they like best. This largely solves the biggest problem of
democracy, which is that good legal services in a democracy are public goods
and are thus likely to be underproduced. Good legal services in a polycentric
legal system are not public goods, which is a good thing. Of course, it has
its own problems, such as what happens to the very poor and the very rich.

