
Eigenmorality - bdr
http://www.scottaaronson.com/blog/?p=1820&re=1
======
derefr
As Aaronson points out, PageRank has a few edge cases when used for this
analysis, basically because it treats its graph as a closed, internally
solipsistic system--it has no definition of morality other than what each of
its nodes prefers of one another. This works if you have a diverse spectrum of
preference functions distributed among the nodes (the result tends toward a
"live and let live" meta-ethics), but if your analysis is aimed at a
preferentially homogeneous group (e.g. Nazi Germany), PageRank won't give you
the solution of "move the 'evil' majority toward the tenets of the good
minority." It'll instead suggest that the optimal system would have the 'good'
minority give up and become 'evil'.
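
A minimal sketch of that failure mode (Python/NumPy; toy numbers of my own,
not Aaronson's code): nine nodes cooperate only with one another, a tenth
dissenter refuses, and power iteration hands the bloc all of the 'morality':

    import numpy as np

    n = 10
    A = np.zeros((n, n))
    A[:9, :9] = 1.0          # a homogeneous bloc that cooperates internally
    np.fill_diagonal(A, 0)   # no self-cooperation; row 9 (dissenter) stays zero
    A = A / np.maximum(A.sum(axis=1, keepdims=True), 1)  # row-normalize

    v = np.ones(n) / n
    for _ in range(100):     # power iteration toward the dominant eigenvector
        v = A @ v
        v = v / v.sum()

    print(v.round(3))        # bloc members ~0.111 each, dissenter 0.0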

Scott Alexander suggests
([http://slatestarcodex.com/2014/06/20/ground-morality-in-party-politics/](http://slatestarcodex.com/2014/06/20/ground-morality-in-party-politics/))
that you could instead use DW-NOMINATE, the tool that does meta-cluster-
analysis to mathematically detect "party lines" in Congress (which are
basically just clusters in human-utility-function-space anyway), to find which
preference-subfunctions (e.g. helping old ladies cross the street, returning a
wallet you find lying on the ground) correlate together into a cluster (that
might be called 'goodness')--and then ground/normalize the PageRank analysis
with that, so that you can tell whether the system as a whole is in a 'good'
or 'evil' state.
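
Mechanically, that grounding could look like personalized PageRank. A sketch
under my own assumption (not Alexander's exact proposal) that the cluster
analysis yields a per-node prior vector g of 'goodness' scores, mixed in the
way a teleport distribution is:

    import numpy as np

    n = 10
    A = np.zeros((n, n))
    A[:9, :9] = 1.0                    # same bloc-plus-dissenter toy graph
    np.fill_diagonal(A, 0)
    A = A / np.maximum(A.sum(axis=1, keepdims=True), 1)

    g = np.zeros(n)
    g[9] = 1.0                         # hypothetical prior: node 9 is 'good'
    alpha = 0.85                       # weight on the graph vs. the prior

    v = np.ones(n) / n
    for _ in range(200):
        v = alpha * (A @ v) + (1 - alpha) * g
        v = v / v.sum()

    print(v.round(3))                  # the dissenter now keeps a nonzero score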

~~~
blauwbilgorgel
Agreed. Once a network becomes too homogeneous, its collective problem-solving
abilities go way down. Effective networks can adapt to and benefit from a
large number of different, fluid participants. That is a cheap way to get
complexity/variety from simple individual nodes, and to be more robust,
safeguarded against variance and overfitting.

The author aims to converge upon a group with a good morality (even though its
members may have been in the minority). But who is to say that an all-good
morality is good for a group's decision making as a whole? Would it be
sustainable from an energy viewpoint? Or would it collapse and destroy
everything? Won't we need a supply of villains for our moral heroes [1]? Might
we need wildly opposing views to converge on an optimal answer as a network?
Doesn't a stable ecosystem need variety, decay, and destruction?

More philosophically: is a program like this moral cooperation plan, based on
a pay-it-forward currency, even moral in itself (given that it clearly
discriminates)?

Attempts at a supermorality have stumped philosophers and logicians for ages.
Suppose a rich benefactor gave each of 1,000 people in a room $1,000, with the
rule that anyone who asks for more gets $1 extra while the other 999 each have
$2 deducted. People would leave that room with barely enough to buy a cup of
coffee: a million dollars wasted by the greedy, individualistic game theory
that seems to be in place in animals: I want energy for me and my family
first, forget the network. Or take contests that pay out a dollar amount equal
to the lowest unique number sent in. Perfectly rational players would roll a
die with as many sides as there are contestants: everyone submits a trillion,
and the one person who rolls a 1 submits a billion. Instead, such contests
receive bids as low as single cents within a matter of days.
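
For anyone who hasn't seen a lowest-unique-bid contest, here's a toy
simulation (parameters made up: 1,000 players bidding uniformly at random):

    import random
    from collections import Counter

    random.seed(0)
    bids = [random.randint(1, 1000) for _ in range(1000)]  # naive players
    counts = Counter(bids)
    uniques = sorted(b for b, c in counts.items() if c == 1)
    print(uniques[0] if uniques else "no unique bid")      # winning bid

The coordinated die-roll strategy above would extract vastly more; the point
is that real players don't coordinate.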

[1]
[http://www.sciencedirect.com/science/article/pii/S0022519311001639](http://www.sciencedirect.com/science/article/pii/S0022519311001639)
"The joker effect: Cooperation driven by destructive agents"

------
knowtheory
This is _very long_ but worth reading.

The modeling exercise herein basically uses a game-theoretic model to test
some really dumb/simplified models of cooperation, and to ask whether the
observed behaviors approximate anything our intuitions would call moral
behavior, up to and including an 'eigenjesus' and an 'eigenmoses' pitted
against tit-for-tat bots and the like.
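
For readers who haven't met these bots, a sketch of the kind of "really dumb"
strategies involved (my own illustration, not the post's code):

    def tit_for_tat(their_history):
        # cooperate first, then mirror the opponent's previous move
        return their_history[-1] if their_history else "C"

    def always_defect(their_history):
        return "D"

    def play(bot_a, bot_b, rounds=6):
        hist_a, hist_b = [], []
        for _ in range(rounds):
            move_a, move_b = bot_a(hist_b), bot_b(hist_a)
            hist_a.append(move_a)
            hist_b.append(move_b)
        return hist_a, hist_b

    print(play(tit_for_tat, always_defect))
    # (['C', 'D', 'D', 'D', 'D', 'D'], ['D', 'D', 'D', 'D', 'D', 'D'])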

~~~
JackFr
To each his own. I actually think it's overly long, wordy, rambling, poorly
conceived, and short on novel or important ideas.

~~~
sage_joch
He is exploring how the Internet might be used to save civilization. What
constitutes an "important idea" in your book?

~~~
prestadige
Every tyranny is predicated on saving civilization.

Also, every idea that actually helps civilization is incubated in a tiny
minority (perhaps in just one mind). Since that minority is engaged in
creative work, it is almost certainly an out-group. Adopting the morality of
the ruling class and building connections with it are the surest way to power.
But these are a full-time job.

I think the idea of quantifying morality might be improved by basing it not
on cooperation but simply on _communication_, e.g. how well do you know the
opinions of those you _disagree_ with? Note that this is almost the opposite
of the path to power.

~~~
sage_joch
I don't think "tyranny of the majority" applies here. The proposed system
makes minority opinions _more_ visible, if anything. There would even be an
incentive to have a minority opinion, if you truly believed the majority was
incorrect about something. In response to your last point: that sounds like an
interesting modification: letting every bot see every other bot's (possibly
evolving) code. But perhaps to avoid Skynet, bots should use the other bots'
published APIs (which could opt to include a "getCode" method), and judge each
other by their actions.
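
A sketch of that "published API" idea (a hypothetical interface of mine, not
anything from the article; run as a script so the source is inspectable):

    import inspect

    class OpenBot:
        def move(self, their_history):
            # judge me by my actions: cooperate until you've ever defected
            return "C" if all(m == "C" for m in their_history) else "D"

        def getCode(self):
            # opt-in transparency: expose this bot's own source
            return inspect.getsource(type(self))

    bot = OpenBot()
    print(bot.move(["C", "C"]))   # judge by actions...
    print(bot.getCode())          # ...or read the published source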

------
jonnathanson
Please note that what follows can be interpreted as criticism, but it's not
intended as such. I found this article quite interesting, and for me, it was
the starting point for a lot of different thoughts about game-theoretical
approximations of morality. So what follows is a somewhat tangential addition
to the article, and not a critique of it.

My problem is not with the "eigenmorality" concept, nor with the various takes
on playing it out across consecutive Prisoner's Dilemma sessions. That aspect
is extremely interesting. Rather, my problem is with the Prisoner's Dilemma as
a valid ground on which to test something like morality.

The Prisoner's Dilemma is a foundational, theoretical framework for evaluating
human behavior. And it's a wonderful, elegant framework. But it treats humans
as emotionless agents, and the "punishment" as an abstract, theoretical,
rationally navigable scenario. Place real human beings into the Prisoner's
Dilemma, with real-world consequences, and you get all sorts of unexpected
results. The Prisoner's Dilemma is notorious for holding up perfectly fine _in
vitro_, but less so _in situ_. Cultural conditioning plays a _huge_ role in
how real people act in the game. So do emotions, and irrational heuristics
such as loss aversion. (Tversky and Kahneman's work has a lot to say about the
latter.)
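
For reference, the abstract payoff structure being discussed: the whole
dilemma is that defection dominates for a rational, emotionless agent even
though mutual cooperation pays more. (Standard textbook values below; any
payoffs with T > R > P > S and 2R > T + S give the same dilemma.)

    PAYOFFS = {             # (my move, their move) -> my payoff
        ("C", "C"): 3,      # R: reward for mutual cooperation
        ("C", "D"): 0,      # S: sucker's payoff
        ("D", "C"): 5,      # T: temptation to defect
        ("D", "D"): 1,      # P: punishment for mutual defection
    }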

Using the Prisoner's Dilemma as a proving ground, I think you'd arrive at an
abstract model of morality -- but you wouldn't capture how morality actually
plays out with quasi-rational, emotional, circumstantially driven, human
agents. And, philosophically speaking, that's where morality actually counts
the most.

~~~
brenschluss
> But it treats humans as emotionless agents, and the "punishment" as an
> abstract, theoretical, rationally navigable scenario. Place real human
> beings into the Prisoner's Dilemma, with real-world consequences, and you
> get all sorts of unexpected results.

No, that's actually the entire point of the Prisoner's Dilemma. It's not a
framework for evaluation; the tension between the rational decision and actual
human action is exactly why the Prisoner's Dilemma is a prized example in game
theory.

------
MichaelDickens
This is an interesting idea. Aaronson may be joking when he says he's
"solv[ed] a 2400-year-old open problem in philosophy," but in case he's not,
this doesn't come anywhere close to solving ethics. Philosophically speaking,
it's still necessary to show why his definition of "moral" holds up. All he's
done is assess a certain quality and then call it "morality." I think it could
better be called "meta-cooperativeness" or something like that.

I think Aaronson realizes this, because he does talk about how Eigenjesus and
Eigenmoses don't accord with our moral intuitions in some cases. He also
addresses this somewhat in the section "Scooped by Plato." His major point--
that something like Eigenjesus can be useful, even if it cannot deduce
terminal values--still holds.

~~~
darkxanthos
Yep. I think what he's really trying to do is find a definition of morality
that's useful, not necessarily complete. As models go, it was useful enough to
change my thinking a bit.

------
andrewflnr
I think the definition of morality in the article is far too simplistic. In my
(Christian) view, it's an important aspect of moral maturity to be able to be
_nice_ to immoral people without _cooperating_ with their goals. Beyond that
dichotomy, the article already mentions that the model lacks critical
information: specifically, the actors don't know whether the other actors
they're [not] cooperating with are "good" or "bad".

That said, I find this approach to defining morality fascinating. Maybe if the
definitions are refined it will manage to tell us something we already know
(not entirely sarcastic; that would be legitimately impressive for a
mathematical construct regarding morality).

~~~
spiritplumber
Occasionally "being nice" (cooperating when tit-for-tat suggests defection)
dampens avalanche effects caused by defection-happy actors; it prevents the
"an eye for an eye makes everyone blind" ending. In that sense, it has value.

~~~
aruggirello
Thus forgiveness acts much like error correction in a data stream: by
preventing the propagation of defects, it limits damage.

~~~
jessaustin
Forgiveness also has great psychological benefits for those who practice it.
In some cases, people whose lives have been focused for years on some great
wrong done to them have only been able to reach their own personal goals
after forgiving the offending party.

Of course vengeance can have psychological benefits too.

------
MarkPNeyer
my friends and i had started on this already. i had a hard time explaining to
people why it was valuable; looks like scott has done it for us.

please help us!

[https://github.com/neyer/dewDrop](https://github.com/neyer/dewDrop)

right now all we have is a way to state which facebook users a person trusts.
there's a chrome extension to help with this. it's extremely basic.

i have a server running at
[https://dewdrop.neyer.me](https://dewdrop.neyer.me) - we need a lot more
help!

i'm just putting it on github now - so i'll update the readme in a few
minutes.

~~~
aruggirello
Cool... After reading that, I realised that Eigenmorality is to social
networks what PageRank is to search engines - great to see somebody already
working on this!

~~~
irollboozers
You would think someone at Facebook is already looking at this question...

------
minority
Considering a majority of people who agree with each other to be "moral" is
highly problematic. Even if everyone in the system is morally equal, this
system would automatically create and enhance differences between groups.

The author uses the example of climate-change deniers to express the opinion
that such a minority group has "withdrawn itself from the main conversation
and retreated into a different discourse."

Is this true of other minority groups--feminists? Homosexuals? Minority
ethnic groups? It seems highly awkward to claim the same thing.

A better system would be one which considers how to cater for individuals
rather than declaring a populist majority to be a special, protected ingroup.
There's enough of the latter already.

~~~
Fargren
The article does acknowledge this point, and offers a possible solution. In
the edit at the end, it suggests a system for distinguishing those who
initiate defection from those who respond to it with defection, and in
principle that could make defecting against a group that didn't do anything
"wrong" a wrong thing.

------
MichaelDickens
This seems related to the idea of coherent extrapolated volition
([https://intelligence.org/files/CEV.pdf](https://intelligence.org/files/CEV.pdf)).
Both have some of the same problems--in particular, setting up the system
requires making moral judgments about how to do so, so it's not actually
value-neutral.

(Aside: If I have two completely different thoughts about an article, should I
post them in two separate comments or in the same comment?)

~~~
jessaustin
(My preference is two separate comments. Better threads, better ordered.)

------
mrb
Wow: _" The mathematical verdict of both eigenmoses and eigenjesus is
unequivocal: the 98% are almost perfectly good, while the 2% are almost
perfectly evil."_ The author says this diverge violently from most people’s
moral intuitions, but actually this result is PRECISELY what moral relativism
predicts. See, there are 2 school of thoughts attempting to explain where
morality comes from:

- either morality is an absolute concept (things are inherently good or evil;
theists might say this good/evil is defined by a god or gods). This is
[http://en.wikipedia.org/wiki/Moral_absolutism](http://en.wikipedia.org/wiki/Moral_absolutism)

- or morality is relative, defined by people and by cultures (what one
culture considers immoral, another may consider moral, and nobody is
inherently right or wrong). This is
[http://en.wikipedia.org/wiki/Moral_relativism](http://en.wikipedia.org/wiki/Moral_relativism)

If moral relativism is right, it would be absolutely expected that the 98% are
"almost perfectly good", since they do things that the majority considers
good. What a fantastic essay...

~~~
Eliezer
That's... kinda not a very good description of the major contemporary schools
of thought on metaethics. I don't know if any respected analytic philosophers
take moral relativism seriously as philosophy; you can't ground the
foundational meaning of the word 'good' as 'different cultures think different
things are good', since there's no base case for the recursion. Well-known
mainstream positions in metaethics hold that moral language is not meant to
express statements which are either true or false, i.e., it is not semantic or
truth-apt; but I have no idea what it would mean for 'good' to be defined as
'different cultures think different things are good'. What's the difference
between 'good' and 'fzoom', then?

This appears both well-written and standard:
[http://cdn.preterhuman.net/texts/thought_and_writing/philosophy/An%20Introduction%20to%20Contemporary%20Metaethics.pdf](http://cdn.preterhuman.net/texts/thought_and_writing/philosophy/An%20Introduction%20to%20Contemporary%20Metaethics.pdf)

I'd refer you to my own writings on the subject, but I don't think they've
been very productive at producing understanding in practice, so I'll leave
you with a reference to the standard literature, and remark that the correct
analysis (using standard nomenclature, which is somewhat misleading) is
obviously moral cognitivism::strong cognitivism::moral realism::naturalist
reductionism.

~~~
baddox
> I don't know if any respected analytic philosophers take moral relativism
> seriously as philosophy; you can't ground the foundational meaning of the
> word 'good' as 'different cultures think different things are good', since
> there's no base case for the recursion.

Wouldn't the meaning of "good" be "considered to be good by the given culture,
group, or individual"?

> Well-known mainstream positions in metaethics hold that moral language is
> not meant to express statements which are either true or false, i.e., it is
> not semantic or truth-apt;

Do you have any data on the percentage of philosophers who subscribe to
various beliefs? It sounds like you're describing non-cognitivism, which I'm
fairly familiar with, although I didn't think it was a widely accepted view.

~~~
wfn
Take a look at PhilPapers Surveys, maybe:
[http://philpapers.org/surveys/](http://philpapers.org/surveys/)

the rough gist (from
[http://philpapers.org/archive/BOUWDP.pdf](http://philpapers.org/archive/BOUWDP.pdf))
seems to be

    14. Meta-ethics: moral realism 56.4%; moral anti-realism 27.7%; other 15.9%.
    17. Moral judgment: cognitivism 65.7%; non-cognitivism 17.0%; other 17.3%.

The results are also here:
[http://philpapers.org/surveys/results.pl](http://philpapers.org/surveys/results.pl)
(set response grain to "fine" as the note at the top suggests)

The correlation results should also be interesting.

------
jzwinck
"The deniers and their think-tanks would be exposed to the sun; they’d lose
their thin cover of legitimacy."

Don't we have the ability to do this now by visualizing or analyzing
citations? A set of "fake" think-tanks which promote bogus ideas should be
identifiable as a mostly-disconnected component of a graph today. We don't
need to get each think tank's explicit opinions about the others. Aaronson
points out this single-purpose inquiry would encourage gaming, but analyzing a
graph built for other incentives may give more "honest" results (at least for
a while).
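
A sketch of that detection idea with networkx (the graph and names are
entirely made up for illustration): the bubble cites outward for cover, but
no citations flow back in, so it falls out as its own strongly connected
component:

    import networkx as nx

    G = nx.DiGraph()
    G.add_edges_from([
        ("journal_a", "journal_b"), ("journal_b", "journal_c"),
        ("journal_c", "journal_a"),       # mainstream cites around itself
        ("thinktank_x", "thinktank_y"),
        ("thinktank_y", "thinktank_x"),   # the bubble cites itself
        ("thinktank_x", "journal_a"),     # cites outward, never cited back
    ])

    # weakly connected, everything is one lump; strong connectivity
    # separates the mainstream triangle from the think-tank pair
    print(list(nx.strongly_connected_components(G)))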

And we have had, for at least five years:
[http://arstechnica.com/science/2009/01/using-pagerank-to-assess-scientific-importance/](http://arstechnica.com/science/2009/01/using-pagerank-to-assess-scientific-importance/).
You can follow links from there to a project called EigenFactor, academic
research about the shortcomings of PageRank in this application, and more.

Results of such analyses should be used as input to human thought processes
and not some sort of legislative robot.

------
ipsin
I found the addendum about the time-sequence of bad acts to be the most
interesting, in that how you approach the problem leads to another wide spray
of outcomes.

Scott mentions the "forget the past" and "address root causes" sides, but how
do you deal with things in the middle?

Even being able to provide a model that allows for injustices from centuries
ago would be impressive, but how should such things decay? Again, the same
pressures come into play, based on the interests of the judged parties.

------
MakeUsersWant
It's probably no coincidence that the repeated prisoner's dilemma models
another phenomenon: willpower.

George Ainslie argues in "Breakdown of Will" that will is actually the result
of negotiations between past and future selves.

[http://www.picoeconomics.org/HTarticles/Bkdn_Precis/Breakdown_Will.pdf](http://www.picoeconomics.org/HTarticles/Bkdn_Precis/Breakdown_Will.pdf)

------
hhm
This Tolkien quote builds a similar circular definition of "worth", which
might be amenable to the same kind of analysis.
[https://twitter.com/JRRTolkien/status/480127254857400320](https://twitter.com/JRRTolkien/status/480127254857400320)

~~~
wodenokoto
“All have their worth and each contributes to the worth of the others.”

~~~
hhm
I now realize I should have copied the text into the comment in the first
place.

~~~
marvy
It's not too late to edit your post :)

------
bryan_rasmussen
It seems to me that this would only be of interest if it can be shown that an
immoral person is not someone who cooperates with other immoral people but
not with moral people.

~~~
Rangi42
Do you mean that "moral" should mean "someone who cooperates with others" and
"immoral" should mean "someone who does not cooperate with others"? Then
"moral" and "immoral" would just mean "cooperative" and "uncooperative," which
we already have words for. Plus, morality shouldn't require you to cooperate
with immoral people (although whether you should actively punish them or not
is the eigenJesus-eigenMoses question).

------
cma
Needy babies are moral monsters according to many of these models...

~~~
gjm11
Needy babies are also stupid according to many notions of intelligence, ugly
according to many notions of beauty, etc. That doesn't mean there's anything
wrong with those notions; it means they aren't designed for assessing babies.

~~~
existencebox
Maybe this is me being pedantic, but I'd disagree that they aren't designed
for assessing babies. I'd say the more elegant explanation is a second layer
of filtering that informs the validity of the first layer of assessment.
"Stupid" does describe a baby, but the definition carries contextually
different connotations, and that context is what makes there be "nothing
wrong with" those "traditionally bad" assessments.

Not sure where this (my rambling) came from off of the parent article, but it
spawned some interesting thoughts at least :)

------
tveita
His definition is much closer to "popularity" than to anything I would
recognize as "morality".

It's strange to exclude intent from your model when it's an important factor
in almost all systems of morality.

------
neotoy
Good read, but I can't help thinking that by the time all of this gets
figured out, our civilization will be long gone.

------
yason
There is no right or wrong, just acts with inescapable consequences and your
freedom to learn something from your choices.

~~~
oofabz
The concepts of right and wrong certainly exist, or we couldn't be talking
about them. But not everyone agrees what is right and what is wrong.

It sounds like you are saying that there is no absolute right or wrong, that
right and wrong are human inventions prone to variation, not some fixed
celestial law. That is exactly the stance which Aaronson took in his essay so
I believe you two agree on that point.

------
hyperion2010
You can't "solve" this problem in the same sense that you cannot develop a
universally consistent foundation for mathematics. Goedel is there preventing
you from EVER proving that one set of axioms is better than another.

I again wrote a longer response but have shortened it because the author seems
to have committed a rather grave error which is to assume that human moral
'intuition' is in any way consistent. There are heaps of evidence (cue the
trolley car) that human moral judgements really should not be considered a
guide for anything. The fact that we can capture the disasters of collective
morality observed under various regime's during the 20th century ought to tell
us that following those models as a universal foundation for human relations
is a terrible idea.

It might also be worth paying a visit to eigennicolo and not adhering to such
rigid systems.

~~~
mathgenius
Well, I would like to read your "longer" response. But I thought I would just
back you up on the Gödel connection: the key to that theorem is also
self-reference.

I would also throw in that financial systems in general suffer from this same
problem: we assign value to items that get assigned value. Where is the
objectivity? There is none.

It is quite ironic that I found your comment at the bottom of the HN comment
queue, and it is also by far the most penetrating, IMNSOO.

------
lohankin
I was following Scott's posts for a while. The most notable feature of those
posts: everything he says is predictable. The blog is designed to appeal to
the liberal academic establishment, which knows the answers to all important
questions and is never in doubt. I don't remember a single example of Scott's
opinion that could be deemed controversial in any sense. "Eigenconformism"
would be a better name for his blog.

~~~
oofabz
I don't think the blog is consciously designed to appeal to liberal
academics. As an MIT professor, Aaronson IS the liberal academic
establishment, so it is no mystery that his writing appeals to his peers.

I don't know about you, but I'm willing to admit Aaronson knows more answers
to important questions than I do.

~~~
lohankin
> Aaronson IS the liberal academic establishment, so it is no mystery that his
> writing appeals to his peers

I'm afraid you got cause and effect in reverse.

------
javert
Happiness is the only intrinsic value for a human being, and thus a moral
person is a person who pursues happiness effectively. (How to do that is
another story.) However, Aaronson's proposed definition of a moral person is
not the effective way to pursue happiness. Thus, it is immoral.

It's also immoral to call for all of us to sacrifice industrial output for
future generations to solve the supposed climate change problem. There is no
reason to presume that future generations are more important than the present
generation (in fact, it is demonstrably the case that they are not). Thus,
this position is profoundly immoral.

However, the implicit assumption that sacrifice is moral is common to most
world religions and also to altruism, which is probably where he imported it
from. All of them are morally bankrupt. A scientist should be able to be
skeptical and see such logical flaws, even if he is not able to propose the
correct solution.

~~~
PhasmaFelis
Are you trying to make some baroque point that I'm missing, or am I giving you
too much credit, i.e. you're actually just ranting incoherently about climate
change conspiracy and Randian selfishness-as-virtue?

Or, option 3, are you just trolling?

