
> They've rationalized their morality into some kind of pseudo-quantitative ethical maximization problem and then failed to notice that most people's moralities don't and aren't going to work like that.

To me, the point of this argument (and of similar ones) is to expose the deeper asymmetries that exist in most people's moral systems - to make people question their moral beliefs instead of just accepting their instincts. Not to say "You're all wrong, terrible people for not donating your money to this shrimp charity which I have calculated to be a moral imperative".




> to make people question their moral beliefs instead of accepting their instinct

Yes, every genius 20-year-old wants to break down other people's moral beliefs, because it's the most validating feeling in the world to change someone's mind. From the other side, this looks like, quoting OP:

> you'd be trying to convince them to replace their moral beliefs with yours in order to win an argument by tricking them with logic.

And feels like:

> pressuring someone into changing their mind is not okay; it's a basic act of disrespect.

And doesn't work, instead:

> Anyway it's a temporary and false victory: theirs will re-emerge years later, twisted and deformed from years of imprisonment, and often set on vengeance.


> Yes every genius 20 year old wants to break down other peoples' moral beliefs, because it's the most validating feeling in the world to change someone's mind

I may be putting my hands up in surrender here, as a 20-year-old (decidedly not a genius, though). But I'm defending this belief, not trying to convince others of it. Also, I don't think it's the worst thing in the world for people to question their preconceived moral notions. I've taken ethics classes in college and personally loved having mine challenged.


Ha, got one. Yes, it is pretty fun if you're in the right mental state for it. I've just seen so many EA-type rationalists out on the internet proliferating this worldview, often pushing it on people who a) don't enjoy it, b) are threatened by it, and c) are under-equipped to defend themselves rationally against it, that I find myself jumping in to defend against it. EA-type utilitarianism, I think, proliferates widely on the internet specifically through "survivorship bias": it is easily argued in text; it looks good on paper. The "innate" morality of most humans, by contrast, is based more on ground-truth emotional reality; see my other comment for the character of that: https://news.ycombinator.com/item?id=42174022


I see, and I wholly agree. I'm looking at this from essentially the academic perspective (i.e., the setting in which I was required to at least question my innate morality). When I saw this blog post, I read it in the same way. If you read it as "this charity is more useful than every other charity, so we should stop offering soup kitchens and redirect the funding to the SWP", then I disagree with that interpretation. I don't need or want to rationalize that decision to an EA. But it is a fun thought experiment to discuss.


IMO, the idea that "this kind of argument exposes deeper asymmetries..." is itself fallacious for the same reason: it presupposes that a person's morality answers to logic.

Were morality a logical system, then yes, finding apparent contradictions would seem to invalidate it. But that gets it backwards. At some level moral intuitions can't be wrong: they're moral intuitions, not logic. They obey different rules; they operate at the level of emotion, safety, and power. A person basically cannot be convinced by logic to stop caring about the safety of someone or something they care about. Even if they submit to an argument of that form, they're doing it because they're conceding power to the arguer, not because they've changed their mind (although they may say that they changed their opinion as part of the concession).

This isn't cut-and-dried; I think I have seen people genuinely change their moral stance on something because of a logical argument. But I suspect it's incredibly rare, and when it happens it feels genuinely surprising and bizarre. Most of the time when it seems to be happening, something else is actually going on. A common one is a person changing their professed moral stance because they realize they gain some social cachet by doing so. But that's a switch at the level of power, not morality.

Anyway, it's easy to claim to hold a moral stance when it takes very little investment to do so. To identify a person's actual moral opinions you have to see how they act when pressure is put on them (for instance, do they resist someone trying to change their mind on an issue like the one in the OP?). People are incredibly good at extrapolating from a moral claim to the moral implications that affect them (if you claim that we should prioritize saving the lives of shrimp, what else does that argument justify? What things that I care about does it then invalidate? Can I still justify spending money on the things I care about in a world where I'm supposed to spend it on saving animals?), and they will treat an argument as a threat if it seems to imply things that would upset their personal morality.

The sorts of arguments that do regularly change a person's opinion at the level of moral intuitions are of the form:

* information that you were hurting someone, or failing to help them, without noticing

* or, information that you thought you were helping someone or avoiding hurting them, but you were wrong

* corrective actions like shame from someone you respect or depend on ("you hurt this person and you're wrong not to care")

* other one-on-one emotional actions, like a person genuinely apologizing, acting selflessly toward you, or asserting a boundary

(Granted, this stance seems to invalidate the entire subject of ethics. And it kinda does: what I'm describing is phenomenological, not ethical; I'm claiming that this is how people actually work, even if you would like them to follow ethics. It seems like ethics is what you get when you try to extend ground-level moralities to an institutional level: when you abstract morality from individuals to collectives, you have to distill it into actual rules that obey some internal logic, and that's where ethics comes in.)



