Scott Alexander suggests (http://slatestarcodex.com/2014/06/20/ground-morality-in-part...) that you could instead use DW-NOMINATE, the tool that does meta-cluster-analysis to mathematically detect "party lines" in Congress (which are basically just clusters in human-utility-function space anyway), to find which preference subfunctions (e.g. helping old ladies cross the street, returning a wallet you find lying on the ground) correlate together into a cluster that might be called "goodness" -- and then ground/normalize the PageRank analysis with that, so that you can tell whether the system as a whole is in a "good" or "evil" state.
The author aims to converge upon a group with a good morality (even though its members may have been in the minority). But who is to say that an all-good morality is good for a group's decision making as a whole? Would it be sustainable from an energy standpoint, or would it collapse and destroy everything? Won't we need a supply of villains for our moral heroes? Might converging on an optimal answer as a network require wildly opposing views? Doesn't a stable ecosystem need variety, decay, and destruction?
More philosophically: is a program like this moral-cooperation plan, based on a pay-it-forward currency, even moral in itself, given that it clearly discriminates?
Attempts at a supermorality have stumped philosophers and logicians for ages. If a rich benefactor gave each of 1000 people in a room $1000, with the rule that asking for more gets you $1 extra while deducting $2 from each of the other 999, people would leave that room with barely enough to buy a cup of coffee: a million dollars wasted by the greedy, individualistic game theory that seems to be built into animals (I want energy for me and my family first, forget the network). A contest that pays out a dollar amount equal to the lowest unique number sent in would, with perfectly rational players, have everyone roll a die with as many sides as there are contestants, all submitting a trillion except the one person who rolls a 1, who submits a billion. Instead, such contests receive bids as low as single cents within a matter of days.
 http://www.sciencedirect.com/science/article/pii/S0022519311... "The joker effect: Cooperation driven by destructive agents"
The modeling exercise here basically uses a game-theoretic model to test some very simplified models of cooperation, asking whether the behaviors observed approximate anything our intuitions would call moral behavior, up to and including an 'eigenjesus' and an 'eigenmoses' pitted against tit-for-tat bots and the like.
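The core construction is easy to sketch. This is a toy version, not the article's actual code: the three strategies and the payoff values are standard textbook choices, and the "niceness" matrix and power iteration stand in for the article's more careful definitions.

```python
# Standard PD payoffs: (my_move, their_move) -> my score.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def always_cooperate(history): return 'C'
def always_defect(history): return 'D'
def tit_for_tat(history): return history[-1] if history else 'C'

def play(a, b, rounds=200):
    """Return the fraction of rounds in which a cooperated with b."""
    ha, hb = [], []  # each player's view: the opponent's past moves
    coop = 0
    for _ in range(rounds):
        ma, mb = a(ha), b(hb)
        ha.append(mb); hb.append(ma)
        coop += (ma == 'C')
    return coop / rounds

bots = [always_cooperate, always_defect, tit_for_tat]
n = len(bots)
# M[i][j]: how nice bot i was to bot j over the tournament.
M = [[play(bots[i], bots[j]) for j in range(n)] for i in range(n)]

# Power iteration: a bot is moral to the extent that it cooperates
# with bots that are themselves moral (the "eigenjesus" flavor).
v = [1.0] * n
for _ in range(100):
    w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    s = sum(w)
    v = [x / s for x in w]

for bot, score in zip(bots, v):
    print(bot.__name__, round(score, 3))
```

With these toy inputs the defector's score collapses to zero while the cooperator and tit-for-tat split the rest evenly, which at least matches intuition.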
What I took away from it was the point made late on: if we had a PageRank-type system for establishing trust on issues that drive policy (such as climate change), it would show where a group had left the consensus and retreated into a separate talking-shop that was trying to shout down the greater consensus.
I think this actually applies back to the Iterated Prisoner's Dilemma games. The 'morality' calculation might not be able to perfectly establish what is moral and immoral behaviour, for the reasons given in the article, but it ought to be able to establish where groups are not working to the moral codes of the majority, perhaps because they are discriminating or discriminated against, but also possibly because they are co-conspiring.
They are certainly a mathematically accurate way to represent a distribution of human faces, but no one would ever confuse one with an actual human face.
And this is one way I view political/policy compromises (which is ultimately where the rubber hits the road regarding morality/ethics). One example is middle-ground positions on immigration reform: http://www.vox.com/2014/6/12/5803912/americans-either-want-u...
There are circumstances where opinions fall into a multimodal distribution, and in those situations a policy position at the global average will not only anger everyone but also won't necessarily fix the policy problem at hand.
In the former, the eigenvectors are an orthogonal basis for representing the set of human faces: faces vary most significantly in this respect, then next most significantly in that respect, and so on.
In the latter, the primary eigenvector tells you which nodes are the most significant. The eigenvector not looking like a good site (or typical face) has little to do with whether it's informative about the goodness of a site.
Also, the point of using morality eigenvectors is to quantify relative morality of positions, not to find a compromise between currently-popular positions, so the bimodality issue is not a problem in this respect.
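The PageRank side of this distinction can be made concrete with a toy link graph. The graph below is invented for illustration, and this is the simplest damped power-iteration formulation, not anything from the article:

```python
# Toy PageRank: rank pages by the principal eigenvector of the
# damped link matrix. The graph is made up for illustration.
links = {            # page -> pages it links to
    'a': ['b', 'c'],
    'b': ['c'],
    'c': ['a'],
    'd': ['c'],      # d links out, but nobody links to d
}
pages = sorted(links)
rank = {p: 1.0 / len(pages) for p in pages}

damping = 0.85
for _ in range(100):
    new = {p: (1 - damping) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += damping * rank[p] / len(outs)
    rank = new

print(sorted(rank, key=rank.get, reverse=True))  # 'c' comes out on top
```

Note that the ranking vector itself looks nothing like a "good page": it is just a list of numbers, exactly as the comment above says. Its job is to be informative about goodness, not to resemble it.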
Thanks for pointing out the muddle I was in there.
Even if the individuals all agree that the collective made a bad decision, that is not to say that the group decision itself was bad: it may very well have been the least of all evils. A system can converge to an optimal solution while individual participants do not realize it. Likewise, a participant may spot a problem in need of a solution where in reality there is no such problem, or no manageable solution.
I do agree that, alongside making more optimal decisions, a committee can make poor collective decisions that no individual member would ever make alone. If the crowd is too frantically opposed, not willing to give in, then that crowd or system itself may be broken and dysfunctional, and will always produce inferior solutions. It has a bigger problem than any single circumstance.
Also, every idea that actually helps civilisation is incubated in a tiny minority (perhaps in just one mind). Since that minority is engaged in creative work, it is almost certainly an out-group. Adopting the morality of the ruling class and building connections with it are the surest way to power. But each of those is a full-time job.
I think the idea of quantifying morality might be improved by basing it not on cooperation but simply upon communication, e.g. how well do you know the opinions of those you disagree with? Note that this is almost the opposite approach of the path to power.
"Hey all, I have a new way of measuring how much resources computer programs take..."
If I remember correctly, he banned a professor from commenting on his blog over what seemed to be a routine academic debate.
Not really: for the record, Scott "sentenced" John Sidles repeatedly to a 3-month ban (later "commuted" to two weeks) due to his increasingly derailing and nearly trollish behavior, but soon he "dismissed" it for "time served".
An analogy: Would you trust a physicist who does not communicate often, or a creationist who writes popular science essays?
Before you rush off to study it (which I highly recommend, for anyone who has the chance/ability): it's not so much about learning better systems of ethics, or about the many ethical systems that have been thought up in the past, as about learning the arguments for why they are or were wrong, how they break down in particular situations, and how to carry those arguments forward. That's how a lot of fields in philosophy work, and why it can seem such a dry exercise in "arguing for argument's sake". The point is that through this practice you learn a lot of useful things along the way and sharpen your mind.
I do agree with the other poster that the featured article (while interesting to read) seems to lack a bit in how it connects with the actual philosophy of ethics.
I for one want to follow them too if they really exist.
One of my personal favorites: http://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspond...
I find the linked-to article to be a gem of lucidity amidst a barrage of mostly noise, busywork, and lottery-playing. Not only that, but the topic of the foundations of morality is going to become a central issue soon, with multiple strong global trends taking us towards that difficult issue.
The article you linked to is nice, but Curry-Howard isomorphism is neither novel nor as important as the topic covered by the original article.
As fragile agents inhabiting a ball of mass and fire organized in billions of partly-autonomous somewhat-intelligent resource-sharing systems flying through space and time, just recently coming up with scarily and growingly powerful new ways to rearrange control which may destroy existing controlling systems, including us, we should care very, very much about this.
And the article provides powerful new tools to tackle this.
For anyone who has studied philosophy or ethics -- even casually -- this article was terribly naive and hopelessly unaware of its own ignorance. Beyond that, the applied linear algebra was not interesting, no real game or decision theory to speak of and the whole 'eigenmoses' and 'eigenjesus' was too cute by half.
Iterated prisoner's dilemma? Not my idea of a powerful tool.
Was there more? I may have missed it.
So who are you working for?
EDIT: The answer to the rest is not generally interesting.
Yes, it's written like the draft of an academic research paper that might or might not make it to peer review. Unfortunately, that description also matches the bulk of published papers.
My problem is not with the "eigenmorality" concept, nor with the various takes on playing it out across consecutive Prisoner's Dilemma sessions. That aspect is extremely interesting. Rather, my problem is with the Prisoner's Dilemma as a valid ground on which to test something like morality.
The Prisoner's Dilemma is a foundational, theoretical framework for evaluating human behavior. And it's a wonderful, elegant framework. But it treats humans as emotionless agents, and the "punishment" as an abstract, theoretical, rationally navigable scenario. Place real human beings into the Prisoner's Dilemma, with real-world consequences, and you get all sorts of unexpected results. The Prisoner's Dilemma is notorious for holding up perfectly fine in vitro, but less so in situ. Cultural conditioning plays a huge role in how real people act in the game. So do emotions, and irrational heuristics like overemphasizing loss aversion. (Tversky and Kahneman's work has a lot to say about the latter.)
Using the Prisoner's Dilemma as a proving ground, I think you'd arrive at an abstract model of morality -- but you wouldn't capture how morality actually plays out with quasi-rational, emotional, circumstantially driven, human agents. And, philosophically speaking, that's where morality actually counts the most.
No, that's actually the entire point of the Prisoner's Dilemma: it's not a framework for evaluation. The tension between the rational decision and actual human action is exactly why the Prisoner's Dilemma is a prized example in game theory.
If you get humans to really play the PD and they are rational, nothing unexpected happens. The problem with these human experiments is not that the PD can't model decision making; it's the leaky implementation, where the payoff does not quantify the players' actual utility.
I think Aaronson realizes this, because he does talk about how Eigenjesus and Eigenmoses don't accord with our moral intuitions in some cases. He also addresses this somewhat in the section "Scooped by Plato." His major point--that something like Eigenjesus can be useful, even if it cannot deduce terminal values--still holds.
That said, I find this approach to defining morality fascinating. Maybe if the definitions are refined it will manage to tell us something we already know (not entirely sarcastic; that would be legitimately impressive for a mathematical construct regarding morality).
Of course vengeance can have psychological benefits too.
Please help us!
Right now all we have is a way to state which Facebook users a person trusts. There's a Chrome extension to help with this. It's extremely basic.
I have a server running at https://dewdrop.neyer.me - we need a lot more help!
I'm just putting it on GitHub now, so I'll update the readme in a few minutes.
Drop me a line: username @ gmail
The author uses the example of climate-change deniers to express the opinion that a minority group has "withdrawn itself from the main conversation and retreated into a different discourse."
Is this true of other minority groups - feminists? Homosexuals? Minority ethnic groups? It seems highly awkward to claim the same thing.
A better system would be one which considers how to cater for individuals rather than declaring a populist majority to be a special, protected ingroup. There's enough of the latter already.
(Aside: If I have two completely different thoughts about an article, should I post them in two separate comments or in the same comment?)
- either morality is an absolute concept (things are inherently good or evil, theists might say this good/evil is defined by a god or gods). This is http://en.wikipedia.org/wiki/Moral_absolutism
- or morality is relative, defined by people, defined by cultures (what one culture might consider immoral, another culture will consider it moral, and nobody is inherently right or wrong). This is http://en.wikipedia.org/wiki/Moral_relativism
If moral relativism is right, it would be absolutely expected that the 98% are "almost perfectly good", since they do things that the majority consider good. What a fantastic essay...
This appears both well-written and standard: http://cdn.preterhuman.net/texts/thought_and_writing/philoso...
I'd refer you to my own writings on the subject but I don't think they've been very productive in practice of understanding, so I'll leave you with a reference to the standard literature, and remark that the correct analysis (using standard nomenclature, which is somewhat misleading) is obviously moral cognitivism::strong cognitivism::moral realism::naturalist reductionism.
Well, many contemporary metaethicists argue that forms of moral relativism undergird, or best justify, non-cognitivism [1,2], for one. Also, Gilbert Harman and David Wong have proposed that forms of moral relativism are associated with naturalism (!), and their work overall is an excellent reference I strongly recommend you check out.
Wouldn't the meaning of "good" be "considered to be good by the given culture, group, or individual"?
> Well-known mainstream positions in metaethics hold that moral language is not meant to express statements which are either true or false, i.e., it is not semantic or truth-apt;
Do you have any data on the percentage of philosophers who subscribe to various beliefs? It sounds like you're describing non-cognitivism, which I'm fairly familiar with, although I didn't think it was a widely accepted view.
The rough gist (from http://philpapers.org/archive/BOUWDP.pdf) seems to be:
14. Meta-ethics: moral realism 56.4%; moral anti-realism 27.7%; other 15.9%.
17. Moral judgment: cognitivism 65.7%; non-cognitivism 17.0%; other 17.3%.
The correlation results should also be interesting.
Just because there is an absolute morality does not mean everyone has to agree on it. Why? Ask yourself this question. Physics is absolute. Do physicists have 100% agreement?
Nobody takes relativism seriously.
Don't we have the ability to do this now by visualizing or analyzing citations? A set of "fake" think-tanks which promote bogus ideas should be identifiable as a mostly-disconnected component of a graph today. We don't need to get each think tank's explicit opinions about the others. Aaronson points out this single-purpose inquiry would encourage gaming, but analyzing a graph built for other incentives may give more "honest" results (at least for a while).
And we have, at least five years ago: http://arstechnica.com/science/2009/01/using-pagerank-to-ass... . You can follow links from there to a project called EigenFactor, academic research about shortcomings of PageRank in this application, and more.
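A first pass at the "mostly-disconnected component" test needs nothing fancier than a breadth-first search over an undirected view of the citation graph. The graph below is entirely hypothetical, invented to illustrate the idea:

```python
from collections import deque

# Hypothetical citation graph: mainstream sources cite each other;
# the "think tank" cluster only cites itself.
cites = {
    'journal_a': ['journal_b', 'agency_report'],
    'journal_b': ['journal_a'],
    'agency_report': ['journal_a'],
    'think_tank_1': ['think_tank_2'],
    'think_tank_2': ['think_tank_1'],
}

def components(graph):
    """Connected components, treating citations as undirected edges."""
    adj = {v: set() for v in graph}
    for v, outs in graph.items():
        for w in outs:
            adj.setdefault(v, set()).add(w)
            adj.setdefault(w, set()).add(v)
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            v = queue.popleft()
            if v in seen:
                continue
            seen.add(v)
            comp.add(v)
            queue.extend(adj[v] - seen)
        comps.append(comp)
    return comps

print(components(cites))  # two clusters: mainstream, and the isolated think tanks
```

In practice you would want something weaker than strict disconnection (a sparse cut, or a low eigenvector score as in the EigenFactor work), since real bogus clusters still cite the mainstream occasionally.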
Results of such analyses should be used as input to human thought processes and not some sort of legislative robot.
Scott mentions the "forget the past" and "address root causes" sides, but how do you deal with things in the middle?
Even being able to provide a model that allows for injustices from centuries ago would be impressive, but how should such things decay? Again, the same pressures come into play, based on the interests of the judged parties.
George Ainslie argues in "Breakdown of Will" that will is actually the result of negotiations between past and future selves.
Not sure where this (my rambling) came from off of the parent article, but it spawned some interesting thoughts at least :)
It's strange to exclude intent from your model when it's an important factor in almost all systems of morality.
It sounds like you are saying that there is no absolute right or wrong, that right and wrong are human inventions prone to variation, not some fixed celestial law. That is exactly the stance which Aaronson took in his essay so I believe you two agree on that point.
I again wrote a longer response but have shortened it, because the author seems to have committed a rather grave error: assuming that human moral 'intuition' is in any way consistent. There are heaps of evidence (cue the trolley car) that human moral judgements really should not be considered a guide for anything. The disasters of collective morality observed under various regimes during the 20th century ought to tell us that following those models as a universal foundation for human relations is a terrible idea.
It might also be worth paying a visit to eigennicolo and not adhering to such rigid systems.
I would also throw in that financial systems in general suffer from this same problem: we assign value to items that get assigned value. Where is the objectivity? There is none.
It is quite ironic that I found your comment at the bottom of the HN comment queue, and it is also by far the most penetrating, IMNSOO.
I don't know about you, but I'm willing to admit Aaronson knows more answers to important questions than I do.
I'm afraid you got cause and effect in reverse.
It's also immoral to call for all of us to sacrifice industrial output for future generations to solve the supposed climate change problem. There is no reason to presume that future generations are more important than the present generation (in fact, it is demonstrably the case that they are not). Thus, this position is profoundly immoral.
However, the implicit assumption that sacrifice is moral is common to most world religions and also to altruism, which is probably where he imported it from. All of them are morally bankrupt. A scientist should be able to be skeptical and see such logical flaws, even if he is not able to propose the correct solution.
Or, option 3, are you just trolling?