
Btw, I'm arguing that the EAs' maths is wrong and you, and others here, are arguing that it's sound.

So I have to ask you, and the rest who think the EAs' maths is sound: have you, or are you planning to donate your kidney to a stranger?

Because either the EAs' maths is wrong, or you donate a kidney to a stranger, or you think your life is worth more than 4000 or 3000 or however many other people.
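(For the record, here is the arithmetic I'm disputing, as I understand the EAs to make it. A minimal sketch; the 1-in-3000 donor mortality figure is their commonly cited number, not mine:)

    # Expected-value sketch of the EA kidney argument, as I understand it.
    # The 1-in-3000 perioperative mortality figure is the EAs' commonly
    # cited estimate (4000 is another that floats around), not mine.
    risk_of_death = 1 / 3000   # donor's chance of dying from the surgery
    lives_saved = 1            # one recipient gets a kidney

    # Under their framework, refusing to donate implies:
    #   risk_of_death * value(own life) > lives_saved * value(stranger's life)
    # i.e. value(own life) / value(stranger's life) > 1 / risk_of_death
    implied_ratio = lives_saved / risk_of_death
    print(f"Refusing implies you value your life above {implied_ratio:.0f} strangers'")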

And sorry to be so lawyery about this, but I detect an undercurrent in this thread of "shut up, the maths have spoken" (see for example cjbprime's comment just below: "I'm confused, and the maths is sound"). Well, if the maths is sound then go do the right thing, or else quit arguing just for the sake of argument.

As for the two people up there who justified their decision to live-donate a kidney: at least they followed their principles, however misguided, and nobody can fault them for that. (I think their reasoning is wrong, and I only have one kidney anyway.) But who are you, and how do you live up to your morals?




There are several options here:

One could think the math is correct, but conclude that they don't value other people that much. That doesn't mean the math is wrong. It means the assumptions behind the math ("I don't value myself more than 3000 other people") aren't held by them right now.

Or one could think the math is correct, and be generally inclined towards kidney donation as a result, but think it would be better done at a different time, such as when they don't have very young kids at home to care for, or when they have a less demanding job that would tolerate them taking a month away from work at once.

Or one could believe in the general statistical inference, but think that the specific 3000 number is too high and insufficiently studied, and the actual risk to self is much higher. I don't believe this myself, but it sounds like you might. This is not a criticism of the method, just of the specific number being plugged in to it.

You can argue against these philosophical positions and assumptions without having to retreat to questioning what numbers mean.


No, that doesn't work as you say and it's not a matter of accepting philosophical positions.

That's because the EAs are not making a philosophical argument but a mathematical one. And if you think their maths are right then you must agree with their argument; otherwise, you must think the maths are wrong.

What the EAs' maths are trying to do is define an objective measure of morality. That's the point of attempting to quantify the value of a human life, and that's the purpose of using maths in general: maths is objectively true or false, while morality on its own is not. So if they get the maths right, their conclusions must apply to everyone and anyone, regardless of other assumptions.

That is the appeal that EA has to quantitatively-trained and generally mathematically-minded types. Let's do away with the subjectivity of moral philosophy and calculate the truth about morality. We face a question of morality? Calculemus!

So if you think their maths are right you must accept their conclusions, and you must donate one of your kidneys or accept that you are acting immorally. It doesn't matter when you choose to do it or how moral you think it is; what matters is that you accept it is the moral thing to do.

You can't have your cake and eat it: either the maths are wrong, or refusing to be a live kidney donor is wrong.


I disagree with everything you wrote, and I doubt you can find a prominent EA who doesn't.

The mathematics are there to benefit people who share moral assumptions like "the extreme suffering of other humans is bad" or "I don't value myself more than 3000 other people". There is no objective morality. But there are many people who share moral assumptions like the above ones, and the mathematical calculations are for their benefit.

EAs are actually incredibly thorough about writing down all of their subjective moral weights and comparing them -- GiveWell's staff, for example, has done this and published the results for many years. They've created spreadsheets where you can plug in your own moral weights to see how they affect GiveWell's giving suggestions. The fact that morality is necessarily subjective and individual is an extremely normal part of the EA conversation.
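To give a flavour of what those spreadsheets do, here's a toy sketch. Every weight and outcome figure below is made up for illustration; this is not GiveWell's actual model or data:

    # Toy "plug in your own moral weights" calculation, in the spirit of
    # GiveWell's spreadsheets. All numbers here are invented placeholders.
    moral_weights = {
        "death_averted_under_5": 100,  # your subjective value units
        "death_averted_adult": 80,
        "income_doubled_one_year": 3,
    }
    # Hypothetical outcomes per $100k donated to two made-up charities:
    outcomes_per_100k = {
        "charity_A": {"death_averted_under_5": 2.5},
        "charity_B": {"income_doubled_one_year": 120},
    }
    for charity, results in outcomes_per_100k.items():
        value = sum(moral_weights[k] * v for k, v in results.items())
        print(charity, value)  # change the weights and the ranking can flip

Two people can share the method, plug in different weights, and get different answers; the math is common, the morality isn't.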


That's just splitting hairs. The weights don't matter. What matters is that the EAs claim that their maths measure the morality of actions.

It doesn't matter if they disagree over the parameters, what matters is that they agree their maths quantify morality. That is the objectivity that they claim.

And they even have spreadsheets to do it, huh? Wow. But, what are these spreadsheets calculating then? I mean, how can you calculate something subjective? If a quantity is subjective, then why can't I calculate it any way I like? If I can calculate a quantity any way I like, then does that quantity really measure anything? Can I use E = mc² to calculate the morality of my actions? If not, why not?

That stuff just doesn't make any sense, sorry.

> I disagree with everything you wrote, and I doubt you can find a prominent EA who doesn't.

"Prominent", huh? Interesting hedging there. Why should I care that someone is "prominent"? Don't peoples' opinions count if they're not "prominent"? And what's "prominent" anyway? Like, X followers on Instagram?

You know, the more I'm having this conversation, the more it sounds to me like some weird kind of Silicon Valley roleplaying that's just out of touch with reality.





