
Eigenmorality (2014) - peter_d_sherman
https://www.scottaaronson.com/blog/?p=1820
======
woodruffw
> Ah, I thought—this is precisely where linear algebra can come to the rescue!
> Just like in CLEVER or PageRank, we can begin by giving everyone in the
> community an equal number of “morality starting credits.”

Two metaethical presuppositions stand out:

* That right and wrong (or right- and wrong-making action features) are discrete and quantifiable.

* That right and wrong (or ...) are expressible over communities as consequences to external individuals, rather than the properties of (potentially ineffectual!) actions themselves.

Written as counterclaims:

* We have no reason to believe that any right or wrong action corresponds to a _particular_ number of morality credits, or that actions are independent (consider: most of us have some sort of intuition about bad things being excusable or forgivable if done only once).

* We have no reason to believe that the outcomes of our actions are what make them actually right. Consider the murderer who cooperates with the "moral" group to find ideal victims, but fails to kill them out of cowardice or incompetence -- a reasonable intuition to have would be that the attempted murder _is_ bad, despite the external cooperation to the contrary.

~~~
narag
_That right and wrong (or right- and wrong-making action features) are
discrete and quantifiable._

The thought that such a thing was possible is so scary. Please, let me act
(mostly :) legally and keep your ideas of morality out of my life.

~~~
nine_k
What is legal is very much influenced by what is considered moral. See:
slavery, sex outside marriage, homosexuality, free speech, the very notion of
murder (stand your ground vs excessive self-defence) and of crime in general
(whether malicious intent is present).

Sorry, laws of society are not laws of nature.

~~~
narag
There are two opposite approaches to establishing rules for a society:
imposition and negotiation. I prefer negotiation. Morals tend to work in
absolutes, so morality should get into law by negotiation, with people of
different morals striking a deal. When majority morals get written unimpeded
into law, with no regard for minorities, the result is not pretty.

~~~
threatofrain
Negotiation at the level of the individual's relationship to society and law
is imposition with extra steps, but it is a desired sophistication of force.
Sovereign citizens come to mind, talking about social contracts and moral
law, but such things do not actually mediate their relationship to
government.

------
edflsafoiewq
> A moral person is someone who cooperates with other moral people, and who
> refuses to cooperate with immoral people.

One immediate property of this definition is that it is too symmetric:
assuming "honor among thieves" (immoral people cooperate with immoral people
and refuse to cooperate with moral people), it is unchanged under interchange
of "moral" and "immoral". That is, it can't tell you whether it's calling
good "moral" or evil "moral".
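
The symmetry is easy to check numerically; a minimal sketch with a made-up
4-player cooperation matrix (numpy assumed):

```python
import numpy as np

# Hypothetical cooperation matrix under "honor among thieves":
# players 0 and 1 cooperate with each other, players 2 and 3 cooperate
# with each other, and nobody cooperates across the divide.
C = np.array([
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Permutation that relabels the two groups ("moral" <-> "immoral").
P = np.array([
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
], dtype=float)

# Relabeling leaves the matrix unchanged, so any spectral score derived
# from it assigns both groups identical values: the construction cannot
# say which group is the "moral" one.
print(np.array_equal(P @ C @ P.T, C))  # True
```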

Anyway, the circular definition I'm interested in is Saussurean
structuralism: the meaning of a sign is determined by its place in a system
of signs. I suppose in linguistics the article's construction then leads to
eigenwords, but I am skeptical that eigenwords actually advance our
understanding of semantics. I'm not as optimistic as the author about the
power of this construction to advance our epistemology.

~~~
pickdenis
I think it's interesting _because_ we don't assume "honor among thieves".
Otherwise there's no useful way to distinguish morality from immorality.

"x cooperates with y" and "y is moral" shouldn't imply "x is moral".

~~~
nine_k
To me, there is a clear indicator: human suffering.

A group that strives to inflict suffering on the outgroup, or on unwilling
members of its own ingroup, are the "bad guys".

A group that strives to limit and reduce suffering, for the ingroup, the
outgroup, or both, are the "good guys", as long as they don't match the
previous definition.

Corollary: "good guys" normally stay away from aggression.

~~~
Jeff_Brown
> To me, there is a clear indicator: human suffering.

I like to divide morality into happiness ("utility") and rights. All else
equal, increasing aggregate happiness is better. However, if increasing (the
authority's measure of) aggregate happiness tramples a lot of people's
rights, that sucks.

Rights (and their implications) are easier to codify legally than happiness
maximization. This eigenmagic seems easier to compute (because it's easier to
encode) for happiness than for rights.

(One can frame rights as an implication of happiness maximization. Some
economists do this. A (properly bounded) right to private property, for
instance, increases net happiness by avoiding tragedies of the commons.
Thievery should be illegal because stolen goods are on average less valuable
to the thief than the victim. I don't know whether such analyses imply all the
rights we'd like, or how absolutely.)

~~~
nine_k
Indeed, "rights" is a reasonably good proxy for not increasing suffering. If
nobody is entitled to arbitrarily take your possessions or do bodily harm to
you, a number of the worst sources of suffering are removed, at least most of
the time.

Since both happiness and utility are strictly subjective (i.e. not directly
measurable or comparable), such proxies are our only hope of producing
formal, computable approaches to limiting suffering.

But the _intention_ to limit suffering for most of the outgroup seems a key
indicator for me.

------
kragen
Because the Eigenmoses matrix contains negative values, the Perron–Frobenius
theorem does not apply, so there is not necessarily a unique largest
eigenvalue. This can be observed in the form of holy wars. Moreover, I think
we can construct real niceness matrices none of whose eigenvalues are real.
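
For a concrete instance of the last claim, a real antisymmetric "niceness"
matrix already does it; a toy sketch (numpy assumed, the matrix itself is
made up):

```python
import numpy as np

# Toy 2-player "niceness" matrix with negative entries: player 0 is
# nasty to player 1, player 1 is nice to player 0. Perron-Frobenius
# requires a nonnegative matrix, so it offers no guarantees here.
N = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# This rotation-like matrix has eigenvalues +i and -i: none are real,
# so there is no real dominant eigenvector to read a ranking from.
eigvals = np.linalg.eigvals(N)
print(eigvals)
```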

------
peter_d_sherman
Excerpt:

"Back then, I was extremely impressed by a research project called CLEVER,
which one of my professors, Jon Kleinberg, had led while working at IBM
Almaden. The idea was to use the link structure of the web itself to rank
which web pages were most important, and therefore which ones should be
returned first in a search query. Specifically, Kleinberg defined “hubs” as
pages that linked to lots of “authorities,” and “authorities” as pages that
were linked to by lots of “hubs.” At first glance, this definition seems
hopelessly circular, but Kleinberg observed that one can break the circularity
by just treating the World Wide Web as a giant directed graph, and doing some
linear algebra on its adjacency matrix."
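
Kleinberg's construction (the HITS algorithm) can be sketched in a few
lines; the toy adjacency matrix below is made up, and the circular
definition is broken by power iteration:

```python
import numpy as np

# Toy link graph: A[i, j] = 1 if page i links to page j.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
], dtype=float)

hubs = np.ones(4)
auths = np.ones(4)
for _ in range(50):
    auths = A.T @ hubs              # authorities: linked to by good hubs
    hubs = A @ auths                # hubs: link to good authorities
    auths /= np.linalg.norm(auths)  # normalize so the iteration converges
    hubs /= np.linalg.norm(hubs)    # to the principal eigenvectors

# page 2, linked to by every hub, converges to the top authority score
print(np.argmax(auths))  # 2
```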

------
amznthrowaway5
(2014) old discussion:
[https://news.ycombinator.com/item?id=7925375](https://news.ycombinator.com/item?id=7925375)

~~~
dang
Also 2015:
[https://news.ycombinator.com/item?id=9378741](https://news.ycombinator.com/item?id=9378741)

------
fyp
Are there still ongoing iterated prisoner's dilemma tournaments that we can
submit code to online?

The idea that you can submit "code of ethics" to duke it out in a simulated
environment is pretty fascinating.
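
I don't know of a currently running tournament, but the mechanics are easy
to reproduce locally; a minimal head-to-head sketch in the spirit of
Axelrod's tournaments (the payoffs are the standard ones, the strategies are
illustrative):

```python
# Standard prisoner's dilemma payoffs: (my points, their points).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # cooperate first, then mirror the opponent's last move
    return their_hist[-1] if their_hist else "C"

def always_defect(my_hist, their_hist):
    return "D"

def play(s1, s2, rounds=10):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

print(play(tit_for_tat, always_defect))  # (9, 14)
```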

------
gfodor
Interesting how multiple people can have the same idea: about 10 years ago I
was working on the problem of having people on Mechanical Turk label terms
with meanings, and determined that applying PageRank to the connected graph
of nodes (where nodes were turkers, and edges were agreements in labeling)
would let me derive a trust metric for each agent, allowing me to easily
discard noise/bots from the dataset.
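
That trust metric can be sketched as PageRank over the agreement graph; the
weights and damping factor below are illustrative, not the original setup:

```python
import numpy as np

# Hypothetical agreement graph: W[i, j] = number of labels on which
# turkers i and j agreed. The suspected bot (index 3) agrees with
# almost nobody.
W = np.array([
    [0, 5, 4, 0],
    [5, 0, 6, 1],
    [4, 6, 0, 0],
    [0, 1, 0, 0],
], dtype=float)

d = 0.85                 # the usual PageRank damping factor
n = len(W)
M = W / W.sum(axis=0)    # column-stochastic transition matrix

trust = np.ones(n) / n
for _ in range(100):
    trust = (1 - d) / n + d * (M @ trust)

print(np.argmin(trust))  # 3: the bot gets the lowest trust score
```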

------
swagasaurus-rex
Eigenmoses has negative values, Eigenjesus is clamped to 0.

What about a leaky-ReLU-like function? Cooperation is noted and rewarded,
but uncooperative behavior is given some benefit of the doubt - to a point.
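
A sketch of that clamp, with an illustrative slope `alpha` on the
uncooperative side (numpy assumed):

```python
import numpy as np

def leaky_clamp(scores, alpha=0.1):
    """Leaky-ReLU-style clamp: cooperation passes through at full
    weight, while uncooperative behavior is discounted by alpha rather
    than zeroed out (Eigenjesus) or kept at full negative weight
    (Eigenmoses)."""
    return np.where(scores >= 0, scores, alpha * scores)

# +1 cooperation is kept, -1 defection is softened to -0.1, and so on
print(leaky_clamp(np.array([1.0, -1.0, 0.5, -0.2])))
```

The "to a point" part could then be a hard floor on top of this, e.g. via
`np.maximum`.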

------
marmaduke
> The players are bots, which do whatever their code tells them to do.

Interesting dualism (bot & code vs bot===code), which sums up the current
philosophy of mind despite the best efforts of neuroscience.

