This is a really good critique, but the "misery trap" section feels like it is describing a problem that EA definitely had early on, but mostly doesn't now?
In early EA, people started thinking hard about this idea of doing the most good they could. Naively, this suggests giving up things that are seriously important to you (like having children), that illegibly make you more productive (like a good work environment), or that provide important flexibility (like having free time), and the author quotes some early EAs struggling with this conflict, like:
> my inner voice in early 2016 would automatically convert all money I spent (eg on dinner) to a fractional “death counter” of lives in expectation I could have saved if I’d donated it to good charities. Most EAs I mentioned that to at the time were like ah yeah seems reasonable
I really don't think many EAs would say "seems reasonable" now. If someone said this to me, I'd share some of my personal history with this idea and talk about how, in practice, it works terribly for people: it makes you miserable, only very slightly increases how much you have available to donate, and massively decreases your likely long-term impact through burnout, depression, and short-term thinking.
I think it's not a coincidence that the examples the author links are 5+ years old? If people are still getting caught in this trap, though, I'd be interested to see more? (And potentially write more on why it's not a good application of EA thinking.)
In case people are curious: Julia and I now have three kids and it's been 10+ years since stress and conflict about painful trade-offs between our own happiness and making the world better were a major issue for us.
It seems like the basic trick here is setting boundaries. It’s broader than EA, but some formulations of EA (and utilitarianism in general) seem to advocate for not setting boundaries; consider Peter Singer’s drowning child thought experiment and how he advocates for treating everyone in the world as having an equal claim on your time and money.
I think that practically speaking, setting boundaries is important and a philosophy that doesn’t take that into account is a bad philosophy. Real world organizations set boundaries all the time; that’s what budgeting is.
But boundaries still seem philosophically suspect due to their arbitrariness: why give 10%, say, and not some other number? Does someone have a philosophical answer for people struggling with this?
The reason for 10% is because (a) tithing isn't that hard on most people's finances and (b) it's a nice round number, memetically strong and aesthetically satisfying. Sure, it's arbitrary -- but it's arbitrary in a way that was designed to appeal to as many people as possible without making them feel like they have a moral obligation to make arbitrarily large sacrifices.
That's it. There's no broader philosophical reason for why 10% is Objectively Correct. This is driven entirely by practicality. (If we had twelve fingers, the number would probably be 12%.)
Yes, I'm well aware. I agree that it's arbitrary, and it doesn't bother me much.
The problem is that this argument doesn't seem likely to succeed if you're talking to someone who is especially interested in philosophical or moral consistency. The arbitrariness of the boundary will bother them.
the characteristics of evolved human brains make not setting boundaries suboptimal. it's the same reason you "pace yourself" when running a race. better hardware would enable a more aggressive approach but we have to work with what we've got.
My counter-argument question is: why would sending 10% of your income to the other side of the world be a better use of that money than giving your dinner server a 10% higher tip (or other arbitrary numbers)? Both improve the quality of someone else's life. Both reduce your own spending power by equal amounts. But money also isn't some static thing that disappears when you spend it. It is a medium for representing the continuous exchange of work (net of losses due to taxes). This is what "The Wealth of Nations" and others around the time of Adam Smith started to teach, creating the modern field of economics and the basis for describing capitalism.
The article here seemed to suggest that being comfortable with the number is what matters, perhaps more so than its exact value or its arbitrariness?
Perhaps then the only losing move is not to play? So, even savings are good, since you are giving someone else access to your money, but completely removing yourself from society as a miser or hermit is expected to be a negative for society, since you are depriving others of your talents.
The cost to yourself is the same. However, the benefit to others varies drastically.
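One way to make "varies drastically" concrete is the standard diminishing-marginal-utility argument. The log-utility model below is a textbook simplification I'm adding for illustration, not something the comments above claimed; the incomes are made up:

```python
import math

# Toy illustration of diminishing marginal utility of income.
# Log utility is an assumed simplification; real welfare comparisons are harder.
def utility_gain(annual_income, transfer):
    """Increase in log-utility from receiving `transfer` on top of `annual_income`."""
    return math.log(annual_income + transfer) - math.log(annual_income)

tip = utility_gain(80_000, 100)   # $100 extra for someone earning $80,000/yr
aid = utility_gain(800, 100)      # $100 extra for someone living on ~$800/yr
print(aid / tip)                  # ~94x larger gain, under these assumed numbers
```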
I think EA is on firmer ground when the boundary has already been set. Let's say you have $10,000 that you've already decided goes to charity. (Suppose it's in a donor-advised fund.) What is the best way to spend it? EA says that it matters how you spend it, that you should try to do at least as good a job at this as you do when choosing an investment for your savings, that it's a hard problem, and that there should be charity advisers to help you make a good decision.
The assumption of EA is that you strive for extra income in order to donate it. If we simply assume that you only maximize income for your own well-being, then spending your money will also improve the well-being of others. The only thing that wouldn't improve it is keeping money in your bank account, forcing other people to wait for you to spend it.
It is not a zero-sum game. Some money spent on labor adds more jobs than others. For example, if I employ people building a road to nowhere, I have only employed the builders. If I build a road that is needed, more economic opportunities are unlocked.
> If people are still getting caught in this trap, though, I'd be interested to see more? (And potentially write more on why it's not a good application of EA thinking.)
A big problem for me is that the various huge income differences exist in large part due to systemic violence. A factory worker makes $N, a software developer makes $50N supporting ads selling the factory's product, and the profits are largely captured by the super rich making $100000N. Violence -- colonialism, authoritarianism, wage slavery, nationalism and so on -- is required to make this happen and keep it going.
I'm not involved with "effective altruism" as a movement, only as the general concept of giving money in a numerically effective way. So numerically the situation is that for every $1 a high income person in a developed country considers donating, they may have done more than $1 in damage to get it; or some part of that $1 was earned through some absolutely evil means; or both. This depends on the job but once you reach the bar of, let's say twice the U.S. personal median income, it's virtually certain. So there is no way to reach ethical parity: it would be digging your way out of a hole.
Existing people need to eat and live so they must participate in the system as best they can. But once the essentials are gotten along with some small treats like ice cream, things become much more difficult ethically. The extra money is blood money, and what can a person do with that?
This feels like a pretty standard critique though I don't know if it counts as a "misery trap" if that is a technical term. I'm sure it's been discussed before and would be interested in knowing how "EA" people resolve it.
that's my quote, just want to add a few notes because I originally typed that out quickly in a comment on twitter and didn't expect it to get traction
1. At the time iirc maybe 60% of people meant something like "that's a reasonable symptom to run into from EA thinking, I've had something like it too, it's hard to fix even if you know it's not ideal". Although maybe 20% of people did think it actually just made sense as a way to operate and didn't see it as a big problem
2. I agree it's much less common now. I think it's possibly because EA has shifted more towards longtermism. Although around 2016 I switched to working on AGI safety which is very longtermist, so it's possible that's my local bias
3. Luckily the symptom was short-lived, maybe ~4-6 weeks in total, while I was also experiencing startup founder anxiety separate from EA. I consider myself maybe a little hurt by EA, but not that traumatized, and I still think it has lots of merits even if I don't identify as an "EA person" anymore
A good companion to this piece would be John Stuart Mill's autobiography, where he describes his nervous breakdown after being raised as a utilitarian by his father: https://www.gutenberg.org/files/10378/10378-h/10378-h.htm. It raises some of the same issues:
> In this frame of mind it occurred to me to put the question directly to myself: "Suppose that all your objects in life were realized; that all the changes in institutions and opinions which you are looking forward to, could be completely effected at this very instant: would this be a great joy and happiness to you?" And an irrepressible self-consciousness distinctly answered, "No!" At this my heart sank within me: the whole foundation on which my life was constructed fell down. All my happiness was to have been found in the continual pursuit of this end. The end had ceased to charm, and how could there ever again be any interest in the means? I seemed to have nothing left to live for.
> All my happiness was to have been found in the continual pursuit of this end. The end had ceased to charm, and how could there ever again be any interest in the means? I seemed to have nothing left to live for.
Wow, this one hits me hard. I think for many years now I've reverted to focusing on the ends, and while I may find them charming for a while, I lose interest in them and scramble to find other ends, instead of just focusing on the means.
For example, I have struggled for years in terms of branding some of the work I do, because I don't know what's the overall narrative I want to tell. When I focus on just what the tools are and how they function, I honestly feel a lot happier and relaxed than trying to focus on the ideal outcome of the tools.
For those who haven't, I suggest visiting the link posted above, searching for the quoted passage, and reading more of the context around that paragraph. I really really appreciated it today, so thank you for posting it.
That's fantastic! Although personally, while I get a chuckle out of reading that excerpt, I can't actually understand the point of view I'm intended to chuckle at.
I don't want my life's work, or anyone else's, to be engaged in a struggle to change mucky 'institutions and opinions.' Eww! Gross! Honorable, necessary, maybe. (But only 'maybe', I wouldn't assume it about those who claim to be engaged in such a struggle). It's not completely implausible that realpolitik can be creatively fulfilling, worthwhile, and life's calling. But I'd chalk that up to humans being able to imbue even the worst situations with meaning, rather than it being particularly attractive as a life's work.
If I think of that question:
> Suppose [...] that all the changes in institutions and opinions which you are looking forward to, could be completely effected at this very instant: would this be a great joy and happiness to you?
I'd answer 'yes', and I think a lot of HN readers would too (aside from "Monkey's Paw"-style unintended consequences). I wish we had a safe, abundant world for everyone, where all respected each others' rights -- so we could get on with the best parts of human life.
The best parts of human life are limitless, so Mill's thought experiment can't touch them: creating, and sharing.
Creation is limitless, and there is no end-game, nor can there be. The question falls apart immediately: "imagine that everything that should be created has been created." It would be nonsensical -- whether in a cave, in the present day, in a 1950's sci-fi future or some (implausibly) transcendent Singularity, there are always things that you don't know, and thus new things you can do with the new knowledge once you acquire it. And thus there will always be new things you can experience and share with others.
Edit: and as a note about EA specifically, so as not to derail one of my favorite recent HN comment sections.
I'm a fan personally, and my experience is that 'expat' living is a good way to help accomplish it, without any particular misery traps (10 years out of the past 18).
Specifically, I align with the 'Our World In Data' take, about how global inequality is so much greater than inequality within rich countries -- so any 'sticky' way of funneling your money into poorer countries is good, and you don't even need to worry too much about getting scammed, as long as the scammer legitimately lives and spends in a poorer community.
And then if you're actually living in that poorer country, while working in a richer one, you're 'funneling money' there, by and large not competing with the locals (except in things in which there is already effectively global competition, like, say, oceanfront property/rentals). And you can also develop excellent 'local knowledge' in order to give more targeted help, which is rewarding -- but that kind of 'give a kidney' stuff isn't scalable, where remote work is.
> global inequality is so much greater than inequality within rich countries -- so any 'sticky' way of funneling your money into poorer countries is good, and you don't even need to worry too much about getting scammed, as long as the scammer legitimately lives and spends in a poorer community.
That is a great insight. It’s a big “if” though. A lot of the money skimmed from, e.g., foreign aid to Ukraine will go to billionaire oligarchs like Kolomoisky.
That's why the OurWorldInData essay mentioned that some use charities that focus on 'giving locally', even letting you personally contact and vet people.
An example of skimming I've seen firsthand: large-scale subsidized programs of house construction for poor folks.
The skimming that arguably occurred: the local program administrator's son, both of his partner's kids, and several of his ex-wife's kids in another province all got houses under this program. It's arguably skimming or corruption, because he and his family gained 6 houses from this program intended to help the least fortunate, out of maybe the 20 or so I know were built in the area. 'Arguably', because these people did qualify; they were adults living with other family who legitimately couldn't afford their own houses or rent, so it was only an injustice in the sense that it disproportionately benefited one family in the community.
But all of the money stayed in the area, and mostly in the country, because (in the words of the prefect of police in the movie Casablanca), he was 'only a poor corrupt official'; no one in their family had even attended local universities at that point. The construction and trade workers were local, the materials were local, and his now more-materially-secure family was engaged in economic activity locally.
It's possible, of course, for scammers to organize large-scale fabrication of 'sob stories' and I'm sure it's happened and will happen, but hopefully the scammers are 'sufficiently local'. :)
I think that's too simplistic, which is why I used a vague term like 'scammer'.
Giving money to, say, a powerful criminal mob would be bad.
But giving money to someone who is simply dishonest or exaggerating when seeking donations from people looking to give locally? Not such a problem.
Moving money from people in the richest country to people in the poorest country is probably good, if that money gets spent (instead of hoarded) via normal economic activity -- paying people in a community in that poor country -- I'd say, regardless of the morals of the person through which it moved. Relatively small amounts of money can be turned into durable wealth in a poorer country; for instance, it can be the difference between a young family buying/building a house or not.
I think this is overall a very well-written article; as someone not involved with EA, it seems like a good critique of EA.
But! This actually makes me more interested in EA, because the criticism only really chips away small caveats in the idea it presents. It points out some inconsistency in the central principle, but a lot of things are still worth doing while having some unclear edge-cases that seem inconsistent.
I think it's also easy to reject the dichotomy between "EA says you should sacrifice everything in the embrace of extreme utilitarianism" and "EA is just generic try-to-do-some-good-ism that brings nothing new".
I just started reading the 80000 hours website. Their argument against the former seems to be that your continued career and income is your most important asset (unless you inherited a fortune), and not being miserable or depressed STRONGLY supports building a sustained income and career, and continuing to give.
So you should allow yourself to eat ice cream sometimes not just because it's intuitively obviously okay, also because it supports your mental health, which is a continued asset that increases your capacity to keep giving.
If you burn out and start giving less or stop, obviously that's not effective altruism, and I think the 80000 hour website and EA more broadly must be very aware of that.
(The other side of the dichotomy seems even easier to rebut. It's just overly reductive for the sake of making a point.)
Overall this is a great article that has ruined my weekend by giving me way too many other interesting web pages to read :)
> I just started reading the 80000 hours website. Their argument against the former seems to be that your continued career and income is your most important asset (unless you inherited a fortune), and not being miserable or depressed STRONGLY supports building a sustained income and career, and continuing to give.
I have a fundamental problem with the argument of “mental health maximizes giving” which is that it doesn't do a good job of justifying self-care in many situations.
Fundamentally, the issue is that sustainability can't be an enduring argument when you have finite time horizons (as all humans do).
For example, if I ever retire or become irreparably sick, should I just donate all of my money and make my life miserable? That is, if I'm no longer productive, what's the point of caring for my mental health?
I believe one of the biggest problems in EA and my own explorations in basically trying to "optimize good" is that "good" can be a very fuzzy concept.
Good for what? For whom?
> For example, if I ever retire or become irreparably sick, should I just donate all of my money and make my life miserable? That is, if I'm no longer productive, what's the point of caring for my mental health?
Taking your example to the morbid extreme, if I'm irreparably sick and "good" is the number of healthy, living people, do I kill myself so others don't have to spend resources on me? But then this gets tricky, because is spending resources on someone else something that drives love, connection, and unity, which can be overall good for those individuals and also for society? What if "good" is defined as what's best for the plants, should we commit mass suicide to allow nature to restore itself or at least give up the concept of private property to perhaps align ourselves more closely with nature?
Personally, I think I've spent so much time on the rabbit-hole of what is "good" and more so, "which is better," that sometimes the only thing that gives me respite from the anxiety is saying "more loving, less saving"—in other words, appreciate things as they are instead of trying to constantly make things better.
Maybe for me, the underlying problem is trying to know with certainty which is "more good" ("better"), instead of accepting the uncertainty in it and going for it anyway. I dunno.
Your question is the source of its own answer: if we want EA to be the most effective force for good it can be, we should make it something people will be glad to be a part of.
It sounds like sleight of hand, to say that things aren't optimal purely as a result of people not wanting them to be optimal, but it tracks. An EA that forces people to give up the things that they love is not as successful, on any axis, as one that does not.
I was somewhat expecting this argument when I wrote the comment.
The thing is that argument only works at the policy and governance level of EA. If I'm an individual who is convinced on the philosophy behind EA, and I'm willing to place it before anything else, then it doesn't matter what's optimal to “ensure commitment” because I'm already committed.
That's what I mean when I say that sustainability isn't a good general argument: it can help you in some cases, but not all. Examining EA as a system of values, that either suggests that EA is insufficient or that self-care shouldn't be that important.
> came across some stats on how safe it was to donate and it totally changed my picture. I thought, 1/3,000 risk of death in surgery is like sacrificing yourself to save 3,000 people. I want to be the kind of person who'd do that, and you just have to follow these few steps.
What these noble gestures based on statistical thinking invariably omit is the fact that kidney donors have an elevated chance of suffering from kidney disease themselves. This risk is much higher than the 1/3,000 risk of death in surgery reported in the article. For example:
> Two studies report a higher risk of end-stage renal disease (ESRD) among donors than among healthy nondonors; however, the absolute 15-year incidence of ESRD is <1%. All-cause mortality and the risk of cardiovascular events are similar among donors and healthy nondonors, although one study provides evidence for a 5% increase in all-cause mortality after 25 years that is attributable to donation.
So it's not like you sacrifice yourself to save 3,000 people; it's sacrificing yourself for a chance to have two kidney patients where you only had one. And while "less than 1%" sounds like a small personal risk, it is a large risk at a population level. And as more people are convinced to donate a kidney while alive, it will affect more people.
In this, I'm speaking as a kidney patient and one who would never accept an organ from a living donor, exactly because of the above.
And that's before thinking of the trade-offs in quality of life that come with transplantation, and the potential effect of "freezing" research into finding true cures for kidney disease (which is not one disease) because we can keep people alive for more than 10 years with transplants and dialysis.
> So it's not like you sacrifice yourself to save 3000 people, it's sacrificing yourself for a chance to have two kidney patients where you only had one.
This is somewhat dismissive and feels unfair.
There is a huge QoL difference between an existence on Dialysis (CAPD being less miserable than haemo - but still hard) and post transplant life.
However, I agree with your end position. I don't think I would accept a live donor kidney from a loved one, so accepting from a stranger would feel unreasonable.
I keep hoping for the mythical "grown" organ to resolve issues - but I have concluded it will not be in my lifetime
Maybe I'm being unfair. I would feel more charitable if I saw a concerted effort to warn potential living donors of the risks they get themselves into. There is virtually no such effort and I believe this leads at least some well-meaning people to make a life-changing decision with less than complete information.
And that is what I think is the most unfair thing, the greatest injustice, of all. To deprive an independent adult from the freedom to make an informed decision about their own health.
I'm speaking from personal experience again and this is probably coloring my views. In any case, since I will probably need kidney replacement therapy sooner or later I can say with certainty that both options, dialysis (both kinds) and transplant, just suck.
> I can say with certainty that both options, dialysis (both kinds) and transplant, just suck.
You're not wrong, and you have my sympathy. I'm somewhat leery of over-sharing on here, so will leave there. Hope you're not overly weighed down by the inevitability.
I mean, it's pretty obvious that the expected additional kidney cases per donation is less than 1. We aren't talking about an exponential explosion of kidney transplants, it tends to a finite sum with fewer deaths.
Because it's an argument against the idea that kidney donation doesn't make sense since it causes more cases? I mean I guess there's a world where 25% of global GDP goes towards surgeons moving kidneys around, but between you and me I suspect that this particular function trails off a lot quicker than that.
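A rough sketch of why that chain converges. The per-donation risk figure is taken loosely from the study quoted upthread; the "every new patient gets a fresh living donor" assumption is a deliberately pessimistic simplification of my own, just to show the series stays finite:

```python
# Assumed expected number of new end-stage kidney disease cases caused per
# living donation; the study quoted upthread puts the excess risk under 1%.
p = 0.01

# Worst case: every induced case is in turn matched with another living donor,
# who again has probability p of developing the disease, and so on. The extra
# cases form a geometric series p + p^2 + p^3 + ... = p / (1 - p), which is finite.
total_extra_cases = p / (1 - p)
print(total_extra_cases)  # ~0.0101 extra patients per original donation
```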
> 1/3,000 risk of death in surgery is like sacrificing yourself to save 3,000 people
If I’m being honest with myself, I’m not entirely sure I would ever sacrifice myself to save 3,000 random strangers.
There are movies like (spoiler alert) Failsafe which explore how difficult the choice of self-sacrifice is to save even 3 BILLION people, let alone 3,000.
I've been part of Effective Altruism (EA) for over a decade (joining Giving What We Can (GWWC) over 10 years ago - donating at least 10% of my income; donating 50% during one of the years of my life - https://boldergiving.org/stories.php?story=Boris-Yakubchik_1... ).
I made a 5-minute video interviewing college students who joined GWWC. It features Nick Beckstead who later became a Program Officer for Open Philanthropy. https://www.youtube.com/watch?v=dZKh0M9x8s4
The article mentions Zell Kravinsky (not explicitly an EA), whom I invited to give a talk at Rutgers. You can hear his talk (and my intro at the beginning) here: https://www.youtube.com/watch?v=RvUcbcUMtXw
As a developer, I created an open source charityware Video Hub App, most proceeds from which go to charity; It's netted almost $15,000 to the GiveWell-recommended cost-effective Against Malaria Foundation (AMF) over the last 4 years: https://videohubapp.com/en/blog
I'm a strong proponent of Effective Altruism and would be honored to answer any questions. https://yboris.com/
I've now given close to $75,000 to Against Malaria Foundation (AMF) - my primary place of donations; and probably over $25,000 to other organizations (some deworming, but most of it focusing on animal welfare). In addition, I've organized some book sales for charity at my college and had some birthday-for-charity events which together netted some more donations. The events were great opportunities to talk to many people about this cause I'm passionate about. I know for sure I've helped inspire many people to make effective giving and doing good a bigger part of their lives, and that may be an even bigger positive than the direct donations I've already made.
According to GiveWell, AMF is a highly cost-effective charity. A rough summary: a $2 donation protects about 2 people for about 4 years (the cost of nets has fluctuated over the decade). I am confident that because of my donations I have prevented thousands of cases of malaria from happening (so children can go to school, parents can go to work, etc). As a bonus it probably prevented many needless deaths. https://www.givewell.org/charities/amf
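A rough back-of-envelope using only the figures quoted above (net prices and coverage vary year to year, so treat these as placeholders rather than GiveWell's actual model):

```python
donation_usd = 75_000    # total given to AMF, per the comment above
cost_per_net = 2.0       # "$2 donation" -- has fluctuated over the decade
people_per_net = 2       # "protecting about 2 people"
years_per_net = 4        # "for about 4 years"

nets = donation_usd / cost_per_net
person_years = nets * people_per_net * years_per_net
print(f"~{nets:,.0f} nets, ~{person_years:,.0f} person-years of protection")
# -> ~37,500 nets, ~300,000 person-years of protection
```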
Personally it makes me happy that I've been able to help avert misery and pain. If I have a bad day, it can be a very cheerful thought to remind myself of the good I've contributed to individuals (even though I'll never meet them).
From the sideline chorus of cynics: the Achilles' heel of Effective Altruism is that most alleged altruists are the usual fair-weather activists who want to be seen to be on the right side but are not actually interested in solving any real-world problems.
The only lens that works when viewing politics (and altruism is a form of politics) is to assume people are miscommunicating their intended outcome. We've had thousands of years of experience with cause and effect, hundreds of years experience with modern mathematics for evaluating evidence and all the resources in the world to run experiments if we cared to. If people aren't trying to deploy that beyond 3-word-slogans and "we all know what is going on here, it is obvious, no need to check the hypothesis!" style strategies then they aren't trying to improve the situation. The first thing anyone runs in to when they try to solve a problem is that nobody involved seems to change their mind after reviewing contradictory evidence and that is a huge barrier to change.
There is a clear gap between how people who are interested in solving a problem act and how the average person out there to "do good" acts. If people are trying to improve a situation the strategy & understanding of that situation should keep pivoting. And there should be a steady stream of minor discoveries as apparently obvious truth turns out to be false.
Why is "being interested in solving [...] problems" your metric? How often do big problems really get solved? If donating money can be shown to save lives, why isn't that a good thing in itself, even if it doesn't solve a huge problem like malaria?
I'm not sure it's true, though. Many EA discussions seem to be about solving big problems, which seems like an indicator of interest?
Tricky question to answer, particularly without picking specific examples and getting too political.
I suppose what I'm trying to say is there is pretty overwhelming evidence that the voting public doesn't understand cause and effect of the policies they are calling for. Eg, spending 20 minutes debating politics with anyone makes it apparent that at least one and often all sides of a debate are people who don't really understand the particulars of the situation, in most cases haven't read the policies involved or done much research beyond listening to some with charisma and maybe some credentials. They don't change their minds.
Then the actual politicians build up their rhetoric and positions, and they have to mirror what the voters are thinking to get votes. But the voters are often demanding incoherent or stupid policies that aren't at all aligned with expert advice. My personal favourite showcase is a combination like loose border control combined with a strong welfare state - no fair & feasible way to do both. The situation ends up with politicians who have to say things that clearly they don't believe in service of goals they cannot pursue, in order to hold office.
If they were serious about what they are communicating, we'd expect them to check the evidence & maybe run a couple of trials to see how feasible the whole plan is. And if they did they would change their mind quite quickly, because there is actually quite a lot of evidence out there about what works. But they won't, because they aren't serious & what they are actually doing is mimicking the large, oft-changing and schizophrenic opinion of a mass of voters. And the voters really don't care because big groups of people are not at all altruistic outside their in-group.
The only issue I have with "branded" stuff, like EA (which sounds awesome. Five Stars would eat there again), is that it often establishes a "metric" that is then used to "measure" others (and usually, find them wanting). This happens in pretty much every human endeavor. The urge to be better than others is powerful, and many people dedicate enormous resources to being "better than."
I've been involved with an outfit for my entire adult life, that has, as a Core Principle, "You must give it away to keep it."
It's kind of a "pass/fail" test. If we pass, we keep breathing. If we fail, we die twitching, so there's definitely a measure of self-interest, involved.
That said, some pretty amazing stuff gets done, with almost no fanfare, and we don't bother shaming each other with metrics.
I'm happy with a metric that encourages people to do good, as opposed to a metric which describes how much power one has over others, or how many yachts they can afford. Hopefully the EA community isn't terribly concerned with how they measure up against others (though, as an outsider, maybe I don't understand how important that is). By thinking about how to do good, they're already leagues ahead of 90% of humanity.
The concept of negative yields recognizes that owning capital can entail obligations to the public. It effectively turns money into a boomerang. If you have too much and don't want it, you ought to give it to someone who has too little.
The way the current economic system is structured fools people into believing money is their private property, as if the people who are dependent on that money are obedient servants. Hence governments must keep deficit spending for all eternity as the boomerang isn't returning.
I've always been curious about EA (and espoused ideas like aggressively making money so I could donate more to non-profits)... if I wanted to participate in EA discourse, where are those conversations happening now? effectivealtruism.org?
Effective Altruism discussion can be helped IMHO by introducing "Social Value Orientation" (SVO), which describes people's preferences for resource allocation (e.g. money) between the self and others.
Is there a conceptualization of SVO balancing? I keep thinking about finding the right blend of proself and prosocial, to the point of needing a new term like "pro-we": you stop weighing others vs. self and focus on enjoyment as an "us".
What strikes me about EA is the lack of values around family, community, or civics. If you value the life and happiness of every human on the planet equally, except maybe yourself, I’d venture to say you are not ok. (You have no friends/family/connections/support/village/etc.) Math and abstract virtue in the mind can’t replace that.
I don’t know that I’d sacrifice my life for any number of others. But I care about my friends and acquaintances. Some of them are not ok. I’d rather lift them out of poverty, or get them decent healthcare, than save a life on the other side of the planet. I’d rather try to influence voters and the political process in my own state and country, or help house the homeless in my town. I have about 400 Facebook friends, and some are billionaires, some are broke, maybe a few are homeless or on the edge of homelessness.
You can’t save the world if you can’t create a microcosm of the world you want to live in, around yourself, with real heartfelt connections.
When I first heard of Effective Altruism, it was all about vaccines and mosquito nets — low cost solutions with clear, positive outcomes. I found that convincing, and even donated some money to some of the recommendations.
When I checked back a few years later, the community seemed to be all about surviving the Rapture, I mean “Singularity”, summoning God, I mean the “Friendly AI”, and maximizing the happiness of the expected 10 to the trillion lives in Heaven, I mean the “Simulation”. I found all of that a lot less convincing.
There are several social reasons why this shift happened, but the root problem, to me, is utilitarianism taken to a dangerous extreme. Effective Altruism was arguably always rooted in some form of utilitarianism as a guiding principle. The problem is, the concept seems to have attracted some very logical, abstract thinkers who somehow didn’t realize when they started to lose contact with reality.
Note that all of this is just my opinion, but here’s my point. I find it always problematic to assign a numerical goodness value (“utilons”) to a human life, though I concede that it can work as a heuristic in small, non-extreme cases. What definitely doesn’t work is to take that number, multiply it by population sizes and claim to have solved ethics. No matter how much you wish it was, ethics is simply not a subbranch of mathematics. To put it very bluntly: That road has been traveled before, and it leads to genocide.
To be clear, this isn’t hyperbole. I won’t link any sources to limit exposure, but you can find articles by self-proclaimed altruists who argue that it’s better to lose a billion lives to climate change than to delay AI research by even one year. The suggested reason being that delaying the Ascent into Heaven — I mean the “Upload into the Simulation” — costs humanity a collective 10 to the gazillion utilons, compared to the mere billions of utilons potentially lost to the climate catastrophe. Just think about the implications.
EA seems very fragile. The opposite of robust. First of all you have the potential problem of it being reductive (if one only looks at statistics) by discarding things like lived experience. You might just end up sending a million malaria nets to Africa which might contaminate the wildlife. Then you have the problem that you talk about that high income, smart nerds can have absolutely unhinged theories about how the world is going to develop.
Ethical behavior doesn’t take much philosophy. Be good to your family. Be good to your friends. Be good to your community. Be good to people who you for some ideological reason think that you should be good to or serve (here philosophy reintroduces itself). Be decent to people who are reasonable.
It’s much simpler, intellectually speaking, to figure out if you are on the right track by using interactions as cues. Following through might be more difficult.
> Be good to your family. Be good to your friends. Be good to your community. Be good to people who you for some ideological reason think that you should be good to… Be decent to people who are reasonable.
That’s easy. More challenging is being good to everyone, always. People you don’t like. People who push your buttons or aggravate you. Your estranged family member. The foolish man. The obnoxious bore. The guy who has polar opposite political beliefs.
> You might just end up sending a million malaria nets to Africa which might contaminate the wildlife.
EAs have long since internalized this idea, and do actually look at the reality on the ground. For example, here's a citation-laden summary of the empirical evidence that those malaria bed nets are helping, both on net and on the margin:
But this doesn't get you to "be vegetarian" or "stop torturing the kids in that ritual with the bullet ants", both of which are pretty obviously right in any framework that doesn't overweight the actions and values of your peers and family.
That's the most controversial part of the movement, and the one that's most talked about. But most of EA is still focused on Global Health and Development, Scientific Research, Farmed Animal Welfare, and so on. There was a recent post exploring this, with some interesting comments:
https://forum.effectivealtruism.org/posts/LRmEezoeeqGhkWm2p/...
Helping people can be an incredibly thankless job. Take for example the Reddit user whose mom is a crack whore; to help her, he drives around one day a week to look for her and buys her something to eat, all the while not being sure if she still knows who he is. EA just pisses me off: where on your list is this poor woman or her son, and forgive me for asking, but where on your list am I? Do good things and try to get away with them.
You put my own discomfort really well. I immediately responded to efforts like GiveWell when I came across them, but all the abstract long-termism and especially the focus on AI just strikes me as smarty pants faffery and I really can't connect with it.
> When I checked back a few years later, the community seemed to be all about surviving the Rapture, I mean “Singularity”, summoning God, I mean the “Friendly AI”, and maximizing the happiness of the expected 10 to the trillion lives in Heaven, I mean the “Simulation”. I found all of that a lot less convincing.
AFAIK - one reason for this shift is because A) people who (want to) work in ML/AI want jobs, B) billionaires like to fund shit that strokes their ego (AI > mosquito nets), and C) people who work in the EA field need money too, and it turns out mosquito nets aren't super sexy to fund. So, inherently, people in EA will stroke the ego of billionaires with rapture-like projects because they get funding from those projects, and they'll probably say stuff that is a bit extreme (along with some psychos just being thrown into any crowd).
I know people who work in EA. It's complicated and difficult to balance. You have to realize - people (billionaires) contribute often just to stroke their own ego... not because they only want to help the world. At best - you can get them to do a split investment on their pet AI project and then a bit on real EA stuff. Otherwise, they'll just not contribute anything - and then you have the same capitalistic issue that you'll always have... too much wealth being concentrated into the hands of a few. The people I know in EA are often like, "Well if I take any money out of these guys - it's better than them having it. They have so much and no one person should have that much. It's inefficiently used by their hoarding."
> When I first heard of Effective Altruism, it was all about vaccines and mosquito nets [...] When I checked back a few years later, the community seemed to be all about surviving the Rapture, I mean “Singularity”
I feel exactly the same way - but having been present through a lot of it, perhaps I can add some context.
Back in ~2009, Toby Ord was at 'The Oxford Uehiro Centre for Practical Ethics' and started 'Giving What We Can' which is exactly what you remember: Demonstrably cost-effective interventions like mosquito nets and deworming tablets. Measuring charity impact by quality-adjusted life years. Membership by pledging to donate 10% of your income to such causes. Approving of earning to give.
The thing is, GWWC was basically complete when it launched. Their recommended most efficient charities today are the same ones they launched with.
Oh, there were still some things to discuss - is it even more efficient to spend money on lobbying government to spend the foreign aid budget on those highly efficient causes? - but not enough to create a thriving discussion community or keep philosophy professors employed.
But GWWC wasn't the only thing happening at Oxford's philosophy department at the time. GWWC, The Oxford Uehiro Centre, the Centre for Effective Altruism and the Future of Humanity Institute were all in the same place at the same time - and in some senses EA is a broader version of GWWC, sharing its spreadsheet-and-QALYs-based approach. Toby Ord's focus moved from global poverty to existential risks, and GWWC and EA just sort of got rolled up together.
Thus creating this strange organisation - where one branch of the organisation would say the benefits of donating to medical research aren't quantifiable enough; while another branch works on things that are a hundred times more speculative than that.
And as the 'existential risk' branch has a lot more to argue about - they're a much more visible part of the community.
Thank you for this little history. I think this would be a very interesting book to read. The history of the movement(s), summarizing the different lines of thought and debate at different times. Has that book been written?
I think there's a strong selection bias here on the arguments you're reading. It's pretty uncontroversial that vaccines and mosquito nets are generally good; although one could argue about exactly how good they are, there's not much room left for fruitful discussion about them. (Such discussion was done to death years ago.) What's left as the arena for discussion, where there may be new things to say, are the points which are hard to resolve. The fact that most talk is about controversial things does not mean most EA-affiliated people care about controversial things; it could simply mean that the uncontroversial things were fully decided years ago.
> It's pretty uncontroversial that vaccines and mosquito nets are generally good; although one could argue about exactly how good they are, there's not much room left for fruitful discussion about them.
But one of the main focuses of EA is arguing about "exactly how good they are", and in particular the assumption that a cause whose general good was "fully decided years ago" shouldn't be seen as a good destination for philanthropic dollars without the organization focusing on providing metrics and RCTs to back up the idea that it's more beneficial than other definitely-has-some-positive-impact causes
Applying more scrutiny to aid organizations is fine, and some of their scepticism of (e.g.) microfinance is completely justified. But saying "you can't prove that this money is spent as efficiently as it could have been" to sending stuff to poor people in South Asia, or "you can't prove this will ever work" to funding African entrepreneurs, isn't particularly compatible with recommending money be sent to more opaque organisations paying expensive developers to pursue passion projects because it's theoretically possible the organization will have a non-zero and non-negative impact on a possible future harm which might theoretically be big.
Obviously not all EAs do this, and many have very strong and consistent donation guidelines or are more interested in the "maximise my personal potential to contribute" side of EA than the "demand more from aid agencies" side. But it was the founder of GiveWell that recommended $30m of philanthropic cash be spent on buying a board seat at the non-profit subsidiary of the extremely opaque and extremely well-funded corporation OpenAI, which clashes quite hard with his earlier body of work. EAs who are interested in AI safety obviously aren't at all unusual in seeing moonshot research as a valid use of philanthropic dollars, but most people funding universities aren't simultaneously questioning the value of organizations that feed people for not providing enough metrics to prove they're feeding the right people.
(It cuts both ways too: the arguments for investing in "AI safety" - diversification in the face of uncertainty - are arguments against the EA principle of maximising the efficiency of philanthropic dollar based on rigorous analysis of different organizations' return)
But I will say that this doesn't seem unusual to me; there seems to be a "missing middle" in many situations. For instance, in business, it's easy to get capital for things that are a sure thing - "for the past decade, every dollar we have put into this initiative has generated three dollars of revenue" will get funded every time - and also often pretty easy to get funding for high risk but high reward initiatives - "this is unlikely to work but if it does, every dollar invested will return one hundred dollars" - but it's pretty hard to get funding for medium risk / medium reward initiatives - "this will probably work, and if it does it will be a stable small business".
I think the same thing is going on here, and maybe it even makes sense.
Oh I agree, there's definitely an overlap with the business world. It's not unusual for a business to pursue an expensive (and doomed) moonshot the execs hope will work while obsessing over comparatively minor productivity differences and cost savings in the divisions that actually build, maintain and sell the existing product; or for investors to rigorously audit growth metrics for a business that is actually growing and then chuck money at other founders despite their pitch because they have a hunch the founders are really good people (but didn't Reddit do well!). I just think it makes the whole thing a lot less "rational" and a lot more "we have heuristics that select for people and organizations we like" - like other forms of philanthropy - than claimed. And yeah, sometimes that works out just fine.
That subset is known as longtermism and it is just as horrible as it sounds. I just watched a bunch of dystopian video clips on YouTube and honestly I prefer extinction over that "end game".
I find longtermism to be dishonest because it has nothing to do with long-term thinking. The long-term nonsense is just there to crank up the utility score.
there is certainly a genocidal arrogance about EA which truly frightens me. The whole idea of Altruism itself is colonial and relies on a state where one group grovels and another either saves them or doesn't according to whim. It also reifies in-group out-group thinking. oh well.
> Quoting scientific studies that show the risk of dying as a result of making a kidney donation to be only 1 in 4,000, he says that not making the donation would have meant he valued his life at 4,000 times that of a stranger, a valuation he finds totally unjustified.
Well that's some real dodgy use of numbers, right there. In "1 in 4000", "1" is the number of people who died as a result of donating a kidney and "4000" is the number of people who didn't, counted over some sample of living kidney donors.
These two numbers, "1" and "4000" have no obvious relation to the value one places on one's life compared to the lives of others. For example, "4000" is not the number of others lives saved by donating one kidney. By donating one kidney one can "save" one other person's life at most (and it's not really "saving" as it is delaying the inevitable).
Equally dodgy is the calculation of "1/3,000 risk of death in surgery is like sacrificing yourself to save 3,000 people" earlier in the article.
Where this dodgy statistical thinking (like magical thinking, but with statistics) comes from, I know not; but anecdotally it's very common in discussions about doing good with numbers, and it seems to be designed to shut down debate by claiming "science says".
Btw, if I wanted to know how much I value my life over that of a stranger all I'd have to do is ask myself: how many people would I sacrifice to save my life? I am guessing that for the majority of people on the planet the answer is "0". Simple question, simple answer, and no dodgy "maths".
You've used a lot of value-laden words here (dodgy, magical) and made a lot of meta-claims about how bad this kind of thinking is, but I don't think you've shown any actual fallacy in the person's thinking.
These folks are saying they see an opportunity to save a life at very low risk to themselves. I don't see the specific problem with this reasoning. Yes, the donation is not guaranteed to save a life and everyone still dies, but these are not compelling objections from a starting point of 1 in 4,000.
> These folks are saying they see an opportunity to save a life at very low risk to themselves.
No, that's not what they're saying. One says that "not making the donation would have meant he valued his life at 4,000 times that of a stranger" and the other is saying that "1/3,000 risk of death in surgery is like sacrificing yourself to save 3,000 people".
"They see an opportunity to save a life at very low risk to themselves" is your interpretation of what they are actually saying. It might be true, but I'm discussing what was actually said. Do you want to discuss what was actually said, or your interpretation of it?
These claims are based on what would happen if many people donated kidneys and the statistics held true. If 4,000 people donated kidneys to people who needed them to survive, on average, one person would die and 4,000 lives would be saved.
From an expected value perspective, one person donating a kidney is like 1/4000th of this.
If you base moral decisions on expected value, as effective altruists try to do, this calculus is straightforward.
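A minimal sketch of that expected-value calculus, using the thread's numbers. Note it bakes in the assumption that one donation equals one life saved, which is exactly what the replies below dispute:

```python
p_donor_death = 1 / 4000   # surgical mortality figure quoted in the article
lives_saved = 1            # assumed: one recipient's life saved per donation

expected_lives_saved = lives_saved        # 1.0 per donation
expected_donor_deaths = p_donor_death     # 0.00025 per donation

# Ratio behind the "valuing your life at 4,000 times a stranger's" framing
print(expected_lives_saved / expected_donor_deaths)   # 4000.0
```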
The calculation is "not making the donation would have meant he valued his life at 4,000 times that of a stranger". That doesn't follow from the "expected value perspective".
The "4000" is swapped, slight-of-hand like, between unequal, orthogonal values. One is the number of people who donate a kidney, the other is the value of a human life. It's counting 4000 apples and then saying "I got 4000 oranges". It's the old switcheroo, and it means nothing.
Btw, you say "From an expected value perspective, one person donating a kidney is like 1/4000th of this."
"Of this" what? What is "one person donating a kidney" 1/4000'th of? I am asking because I honestly have no idea what you mean. Sorry if my tone sounds combative, but in this I'm just confused.
I think my post is pretty clear. There are four sentences. There is only one sentence the "this" can refer to, and it's the one that comes right before. Could you read it again?
The statements sound clear, and seem to pretty much match our intuition about how it should work. But perhaps the math is incomplete? You are not alone in saving 4000 people; rather, 4000 people are saving 3999 lives. Or: you are saving 3999/4000 of a person, roughly one person per donation. So the expected value of the benefit of a donation is approximately 1 person saved, while the expected cost is that 1 of 4000 donors is personally sacrificed. Taking the ratio of those, the expected social value over the personal cost is about 4000-to-1 (approximate, since this assumes the life lost is of equal value to the average value of the lives saved). Or that society values the donation at about 4000x the value of your life.
(your own personal utility function might disagree however)
A more precise model still might be to try to factor in the expected loss in healthy years for the donor, and the average lifespan gain of a kidney recipient.
If you pull a child out of the way of a speeding truck, you've saved their life without losing any quality of life for yourself or for them. That's a "perfect" life save. Organ transplant is more subtle.
If you already plan to donate your kidneys at death, donating one of them when younger seems less obviously good.
It'd seem worthwhile to build a more detailed model before going through with it.
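A minimal sketch of what such a model might look like. Every parameter here is a placeholder I've made up for illustration (the ~10-year recipient gain echoes a figure mentioned further down the thread), not a clinical estimate:

```python
p_donor_death = 1 / 4000    # surgical mortality quoted earlier in the thread
donor_remaining_qalys = 40  # assumed QALYs lost if the donor dies in surgery
donor_qaly_loss = 0.5       # assumed lifetime QALY cost to a surviving donor
recipient_qaly_gain = 10    # assumed QALYs gained by the recipient

expected_cost = (p_donor_death * donor_remaining_qalys
                 + (1 - p_donor_death) * donor_qaly_loss)
expected_benefit = recipient_qaly_gain

print(f"expected QALY cost to donor:     {expected_cost:.2f}")
print(f"expected QALY gain to recipient: {expected_benefit:.1f}")
print(f"benefit-to-cost ratio:           {expected_benefit / expected_cost:.0f}")
# With these made-up numbers the ratio comes out around 20:1, not 4,000:1 --
# showing only that the answer is very sensitive to how donor quality of life
# and recipient life-years are counted.
```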
Edit: I need to learn to summarise my thoughts better. What I'm saying below and above is that the expected value of live kidney donation to society is misapplied to calculate the value of live kidney donation to the donor. We tacitly accept as a society that some people will die so that others may get a new kidney. That has nothing to do with the value that one person donating a kidney places on their own life over another person's health (and not life, because you're not "saving" lives).
Although I understand your comment much better than the OP's (4000 to 1, not 1/4000), those maths don't make any sense at all to me. First of all, nobody saves anyone by donating a kidney. You certainly don't "save" an entire person that way. You add some quality-adjusted years of life to their life expectancy, but that's all. And they're not that many; the average is about 10. So at best you can say that each person donating a kidney is "helping" or "supporting" one other person.
So it makes no sense to say that 3999/4000th of a person is "saved" or "helped" or whatever by one's _personal_ sacrifice. If I was the person dying while donating my kidney, my death would not equal 4000 people living, or living longer. It would mean my life ending for one person's life lasting a little while longer.
One person! That's how many people are directly affected by one person's donation of a kidney. To claim that it's 4000 instead, is just sophistry with fractions. Or rather, it's an oversimplification because understanding the trade-off between one entire life and 10-ish quality adjusted years of life is too hard and does not easily support self-aggrandising statements.
Look at it the other way: you won't let 4000 people die by refusing to donate a kidney. But this is more or less what is claimed in the two opinions above. Everybody who does not donate a kidney is an asshole who values their own life more than that of 4000 strangers.
Btw, I'm arguing that the EAs' maths is wrong and you, and others here, are arguing that it's sound.
So I have to ask you, and the rest who think the EAs' maths is sound: have you, or are you planning to donate your kidney to a stranger?
Because either the EAs' maths is sound, or you donate a kidney to a stranger, or otherwise you think your life is worth more than 4000 or 3000 or however many other people.
And sorry to be so lawyery about this, but I detect an undercurrent in this thread of "shut up, the maths have spoken" (see for example cjbprime's comment just below: I'm confused and the maths is sound). Well, if the maths is sound then go do the right thing, or else quit arguing just for the sake of argument.
Because the two people up there, who justified their decision to live-donate a kidney, at least they followed their principles, however misguided, and nobody can fault them for that. And I think their thinking is wrong and I only have one kidney anyway. But who are you, and how do you live up to your morals?
One could think the math is correct, but conclude that they don't value other people that much. That doesn't mean the math is wrong. It means the assumptions behind the math ("I don't value myself more than 3000 other people") aren't held by them right now.
Or one could think the math is correct, and be generally inclined towards kidney donation as a result, but think it would be better at a different time, such as when they don't have very young kids at home to care for, or when they have a job which is less demanding and would tolerate them taking a month away from work at once.
Or one could believe in the general statistical inference, but think that the specific 3000 number is too high and insufficiently studied, and the actual risk to self is much higher. I don't believe this myself, but it sounds like you might. This is not a criticism of the method, just of the specific number being plugged in to it.
You can argue against these philosophical positions and assumptions without having to retreat to questioning what numbers mean.
No, that doesn't work as you say and it's not a matter of accepting philosophical positions.
That's because the EAs are not making a philosophical argument but a mathematical argument. And if you think their maths are right then you must agree with their argument; otherwise you must think the maths are wrong.
What the EAs' maths are trying to do is to define an objective measure of morality. That's the point of attempting to quantify the value of a human life and that's the purpose of using maths in general: because maths is objectively true or false, while morality is otherwise not. So if they get the maths right, their conclusions must apply to everyone and anyone, regardless of other assumptions.
That is the appeal that EA has to quantitatively-trained and generally mathematically-minded types. Let's do away with the subjectivity of moral philosophy and calculate the truth about morality. We face a question of morality? Calculemus!
So if you think their maths are right you must accept their conclusions, and you must donate one of your kidneys or accept that you are acting immorally. It doesn't matter when you choose to do it or how moral you think it is, what matters is that you accept it is the moral thing to do.
You can't have your cake and eat it: either the maths are wrong, or refusing to be a live kidney donor is wrong.
I disagree with everything you wrote, and I doubt you can find a prominent EA who doesn't.
The mathematics are there to benefit people who share moral assumptions like "the extreme suffering of other humans is bad" or "I don't value myself more than 3000 other people". There is no objective morality. But there are many people who share moral assumptions like the above ones, and the mathematical calculations are for their benefit.
EAs are actually incredibly thorough about writing down all of their subjective moral weights and comparing them -- GiveWell's staff has done this and published the result for many years, for example. They've created spreadsheets where you can plug in your own moral weights to see how it affects their giving suggestions. The fact that morality is necessarily subjective and individual is an extremely normal part of the EA conversation.
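To make that concrete, the "spreadsheet" idea is just a cost-effectiveness calculation in which the subjective inputs are explicit, named parameters. A toy sketch in Python (not GiveWell's actual model, and with invented numbers) might look like:

    # Toy cost-effectiveness comparison with explicit moral weights.
    # Not GiveWell's model; all numbers are invented for illustration.
    moral_weights = {
        "child_death_averted": 100.0,        # how much you value this outcome, in arbitrary units
        "year_of_doubled_consumption": 1.0,  # relative to the unit above; your call
    }
    programs = [
        {"name": "bednets",       "cost_usd": 5000, "outcome": "child_death_averted",        "amount": 1},
        {"name": "cash transfer", "cost_usd": 1000, "outcome": "year_of_doubled_consumption", "amount": 4},
    ]
    for p in programs:
        good_per_dollar = moral_weights[p["outcome"]] * p["amount"] / p["cost_usd"]
        print(f'{p["name"]}: {good_per_dollar:.4f} units of good per dollar')

Change the weights and the ranking can flip; nothing in the arithmetic claims the weights themselves are objective, which is the point being made above.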
That's just splitting hairs. The weights don't matter. What matters is that the EAs claim that their maths measure the morality of actions.
It doesn't matter if they disagree over the parameters, what matters is that they agree their maths quantify morality. That is the objectivity that they claim.
And they even have spreadsheets to do it, huh? Wow. But, what are these spreadsheets calculating then? I mean, how can you calculate something subjective? If a quantity is subjective, then why can't I calculate it any way I like? If I can calculate a quantity any way I like, then does that quantity really measure anything? Can I use E = mc² to calculate the morality of my actions? If not, why not?
That stuff just doesn't make any sense, sorry.
> I disagree with everything you wrote, and I doubt you can find a prominent EA who doesn't.
"Prominent", huh? Interesting hedging there. Why should I care that someone is "prominent"? Don't people's opinions count if they're not "prominent"? And what's "prominent" anyway? Like, X followers on Instagram?
You know, the more I'm having this conversation, the more it sounds to me like some weird kind of Silicon Valley roleplaying that's just out of touch with reality.
No, your comment is not clear at all. I read it again. Here's what it reads like if I replace "this" with the sentence that comes before it:
From an expected value perspective, one person donating a kidney is like 1/4000th of [if 4,000 people donated kidneys to people who needed them to survive, on average, one person would die and 4,000 lives would be saved].
Is there any chance you may want to try and help me become unconfused?
To play EA's advocate: if you assume that donating the kidney will "save" a life, then if you had 4,000 kidneys, you could, on average, sacrifice yourself to "save" 4,000 lives. Of course you don't have 4,000 kidneys, but the intuition is that the decision is the same whether you're sacrificing 1/4,000th of your life to save one person, or your whole life to save 4,000.
However, I agree with your point that this isn't really "saving" a life as much as "extending" or "delaying the inevitable". In terms of QALYs (Quality-Adjusted Life Years), the benefit may be much smaller while the risk remains the same. And I especially agree with your conclusion in the last sentence.
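To make the two framings concrete, here's a small sketch using the thread's 1/4000 figure and assumed QALY numbers (the QALY inputs are illustrative assumptions, not sourced):

    # Two framings of the same decision, using the thread's 1/4000 risk figure.
    # QALY numbers are assumptions for illustration.
    p_death = 1 / 4000

    # Framing 1: "lives" accounting, aggregated over 4,000 donors
    n_donors = 4000
    expected_donor_deaths = n_donors * p_death   # about 1
    recipients_helped = n_donors                 # one recipient per donor

    # Framing 2: QALY accounting for a single donor
    donor_qalys_at_risk = 40      # assumed remaining healthy years for the donor
    recipient_qalys_gained = 10   # assumed average gain for the recipient
    expected_cost = p_death * donor_qalys_at_risk   # 0.01 QALYs in expectation

    print(expected_donor_deaths, recipients_helped)
    print(f"one donor: expected cost {expected_cost:.3f} QALYs, benefit {recipient_qalys_gained} QALYs")

On the QALY framing the benefit per donation shrinks from "a whole life" to roughly ten years while the expected cost stays the same, which is the caveat above.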
> To play EA's advocate: if you assume that donating the kidney will "save" a life, then if you had 4,000 kidneys, you could, on average, sacrifice yourself to "save" 4,000 lives.
No, see, this is what I mean when I say that maths are being misapplied. We know how many kidneys you can donate before you die: two. Or three if you have three. Some people even have four. So what does the "4000" have to do with that? The "4000" is the number of different people who each donated one kidney, so it doesn't tell us anything about one person donating 4000 kidneys.
And anyway, if you need to imagine a person with 4000 kidneys before you can figure out the value of live kidney donation, then you don't have a good way to figure out the value of live kidney donation. Which is my point.
>These two numbers, "1" and "4000" have no obvious relation to the value one places on one's life compared to the lives of others.
If you think donating a kidney has a 1:4000 chance of killing you, and a 1:1 chance of saving someone else's life, then I do think deciding not to do it is saying that you value your continued existence 4000x someone else's. We can argue about the specific numbers, and about whether this is the best way to apply your life to helping others, but I don't think the interpretation is wrong.
> For example, "4000" is not the number of others lives saved by donating one kidney.
I don't see anyone claiming that?
> By donating one kidney one can "save" one other person's life at most
Often you're able to save more than one: it is reasonably common for altruistic donors to be able to facilitate donation chains that would not otherwise happen. The situation is that there are many people who are willing to give a kidney to help a specific person they know, but who are not compatible donors. One way that can work out is that I give my kidney to your friend, and you give your kidney to my friend: no altruistic donor needed. But often this doesn't quite work out in terms of compatibility, and having an altruistic donor can make multiple transplants happen that wouldn't otherwise.
>it's not really "saving" as it is delaying the inevitable
That's right. Especially considering that even post transplant the recipient is going to be on immunosuppressive drugs and is not likely to have the same duration or quality of life as the typical person. Often when talking about this, people use the concept of a QALY: how many quality-adjusted life years is it?
> Equally dodgy is the calculation of "1/3,000 risk of death in surgery is like sacrificing yourself to save 3,000 people" earlier in the article.
Why is this calculation dodgy?
(While I strongly admire people who have donated kidneys, I don't think it is a very good trade-off of cost to yourself versus benefit to others, and personally I still have both kidneys)
> "4000" is not the number of others lives saved by donating one kidney.
Why did you leave out the "For example" preceding that sentence in my comment? Leaving "For example" out completely changes the meaning of my comment.
It doesn't look like you're interested in an honest conversation.
> (While I strongly admire people who have donated kidneys, I don't think it is a very good trade-off of cost to yourself versus benefit to others, and personally I still have both kidneys)
I don't, and I suggest you keep both of yours because you are going to need them.
> Why did you leave out the "For example" preceding that sentence in my comment? Leaving "For example" out completely changes the meaning of my comment.
I interpreted your use of "for example" as indicating that what followed was a specific claim that backed up the more general claim in the previous sentence? I didn't include it in my quote, because I was responding to your two claims separately. While it's still not clear to me how you see it changing the meaning of your comment, I've edited mine now to include it.
I think it comes from the statistical expectation. Like, if you were able to donate your kidney any amount of times, you would on average expect to lose your life on the 3,000th donation. So the question is, do you want to be the kind of person who would do that at the cost of your own life?
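That intuition matches the mean of a geometric distribution, though the distribution is skewed enough that the framing can mislead. A quick check, assuming a 1/3000 per-donation risk of death:

    import math

    p = 1 / 3000                                   # assumed per-donation risk of death
    mean_donations_until_death = 1 / p             # 3000: the "on average" claim
    median = math.log(0.5) / math.log(1 - p)       # ~2079 donations
    p_survive_all_3000 = (1 - p) ** 3000           # ~0.37, roughly 1/e

    print(round(mean_donations_until_death), round(median), round(p_survive_all_3000, 2))

So "the 3,000th donation" is right as an average, but in this model you'd still have a bit better than a one-in-three chance of surviving all 3,000.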
To me, "effective altruism" would be ordinary altruism--which is an instinct, and not something you can teach or unteach--but with practices and guards for identifying and avoiding parasites. Skimming through this article, it would seem that they mean the opposite.
No, what you said in your first sentence is a pretty good description of EA as it's typically practiced. The article's emphasis on non-core stuff can give a misleading impression.
[Serious] Change my view: EA is moral laundering for accumulating wealth.
For the record, I am skeptical of philanthropy to begin with, but I am nonetheless interested in what followers of EA might have to say to counter this assertion ^.
I'm not a huge EA person, but, here is what I'd say:
1. You seem to be under the assumption that accumulating wealth is bad. I don't think it is. I don't know what the consensus view is among EA-people, but I assume at least some of them are fairly ok with capitalism in general, and likewise agree that most wealth accumulation is perfectly ok.
I'd even go one step further and say that wealth accumulation is usually a sign of doing good in and of itself - you're creating value for society, which is how you're accumulating wealth. (Again, for the most part - there are edge cases.)
2. Even if it is "moral laundering", is that a bad thing? EA's biggest focus is probably on the idea that what we give to matters. All things being equal, it's better for wealthy individuals to donate to the Against Malaria Foundation than to a new wing of a museum. Both can be a form of "moral laundering", but one is morally superior.
One problem with effective altruism is that it is highly utilitarian, which leads to the hijacking of utility metrics mentioned in the article. However, the author only talked about people making arguments in good faith, hoping to improve metrics. What about people acting in bad faith? Longtermism has hijacked its utility metric by assuming endless exponential growth of the human population in the future and therefore giving future lives more value, overriding all short-term concerns, which is then used to justify authoritarianism. It falls into the same trap as any ideology: people think it is going to save the world, so it must be adopted at any cost, even if people are going to starve and die. This led to the downfall of communism, for example. The height of irony is that longtermism is purely about short-term thinking.
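To spell out the arithmetic being criticized here (a caricature of the reasoning this comment describes, with invented numbers, not any actual longtermist's model): an enormous undiscounted future population lets even a minuscule change in extinction risk outweigh everyone alive today.

    # Caricature of the utility arithmetic the parent describes, with invented numbers.
    present_people = 8e9
    assumed_future_people = 1e16      # hypothetical count of potential future lives
    risk_reduction = 1e-6             # assumed tiny reduction in extinction probability

    value_present = present_people                       # helping literally everyone alive now
    value_future = assumed_future_people * risk_reduction

    print(value_future / value_present)   # ~1.25: the speculative future term already "wins"

Under these assumptions the conclusion is driven entirely by the assumed future population and the assumed risk change, which is the hijacking of the metric the comment is pointing at.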
I for one absolutely distrust self-styled altruistic do-gooder organizations, with good historical reasons, and always will.
One fundamental problem is their desire to project that image of 'goodness', which leads to their own internal corruption, as they don't deal with internal problems transparently, because exposure of such corruption tarnishes their image, and protecting that image then becomes their central goal.
The Catholic Church is a controversial example (as there are many Catholics, mostly nice decent people), with its legacy of child abuse of many forms. Exposure of this activity, conducted largely by individuals within the Church without official Church sanction, would clearly tarnish the image of this altruistic organization, so the leadership covered it up, thereby allowing the abuse to persist. Image rapidly becomes more important than substance.
Secondly, the members of such organizations, convinced of their moral rightness, tend to seek to seize political power in order to enforce their vision of goodness on the overall human population. Many just hop on the bandwagon because they see it as a route to wealth and power - as long as they sing the song of moral purity, they can become rich and influential, and live lives of luxury and pleasure.
Thirdly, their 'moral stance' is typically influenced by low-quality information. The author mentions the Nazis as an extreme example, but they had indeed absorbed fraudulent claims of genetic superiority of ill-defined racial groups, and so to them ethnic cleansing and genocide were the responsible thing to do, for the long-term health of the Aryan people. The author believes they knew what they were doing was morally wrong, but I doubt this - at most, they'd have called it a short-term necessary evil for the long-term greater good.
So, what's the right thing to do? My view is, accurate information should be the basis of all decision making, and the only way humans have found to come up with accurate information (about things like risks of global warming, causes of disease, etc.) is the rational scientific process, so that's what we should be spending our time and energy on developing, more than anything else. Others may disagree and wish to devote themselves to the creation of art, literature, music, architecture, etc. and that's fine too. Maybe they want to sit around and do nothing, like monks in a monastery. That doesn't bother me either, as long as they're not trying to control society in the name of their moral purity.
As far as moral behavior goes, this great quandary has the simple solution that you should simply treat others with the same consideration and respect that you'd like them to treat you with, assuming you're not some kind of masochist who craves abuse, right?
> My view is, accurate information should be the basis of all decision making, and the only way humans have found to come up with accurate information (about things like risks of global warming, causes of disease, etc.) is the rational scientific process, so that's what we should be spending our time and energy on developing, more than anything else.
While I agree in the importance of the scientific process, I disagree with the rationale. Accurate information should be the basis of decision making, but it can't make the decision. Deciding requires values, requires knowing "what is good". Science can tell us exactly where we are and how we got here, but it can't tell us why we're here. Science can tell us the range of outcomes if we do or do not stop carbon emissions, but it can't tell us whether or not we should. That's a moral decision.
And that's why (in my value system) it's important to have some people devoting themselves to the creation of art, literature, music, architecture, etc. By exploring the ranges of human emotion, and by learning how we emotionally respond to various tragic, comedic, or romantic scenarios, we can better understand our own values on which to found our decision making. Informed, of course, by scientifically accurate information.
You absolutely can. Is this actually an issue that EAs have to grapple with? It seems so obvious to me. You just have to stop thinking about art in the fuzzy, "good because other people say it's good" sense of the word. You have to actually come to grips with the function that art serves, which I gather is difficult for a lot of people because the answer doesn't gel with the cultural pacifism that they believe they believe in. Marx of all people got this right.
Art is cultural warfare.
>I suspect no society, ever, has been healthy that didn't invest significant time and resources in the arts.
Obviously. A culture that does not invest in its art is going to be overtaken by a culture that does. Just as a nation that lacks a military is not going to be its own nation for long. The purpose of art is to define, defend, and spread the cultural values of the people that create it. Doing so ensures that those values survive to the next generation, ensuring the long-term success of the culture and movement. Thus, justifying investment and engagement with art, or raising children, is easy. Quantifying the benefits of that investment is much harder because, in contrast to mosquito nets and vaccinations, culture spread is going to be opposed by other cultures. It's PvP, rather than PvE.
For those that don't believe that art affects cultural values, please consider this thought experiment: imagine you have a young child that wants to watch a new movie. You take a look at it, and the movie's story is of a bully and cheater winning at some competition, the entire community siding with that antagonist, and the main character eventually accepting that this is fine and normal and must simply be tolerated. The moral is clearly that liars and cheaters win, violence is a valid way of getting what you want, and everyone who disagrees will be unhappy and defeated in the end anyway. Would you want your child to watch that movie? If not, why?
Of course, this view means that most forms of "high" art - the kind that exists so that people in the know can look down their nose at people who just lack taste and class[0], or so that they can commit tax evasion[1] - are not actually good art. They do not effectively communicate cultural values, or defend them. The attitude of the art world in defending this excreta widens the rift between them and the general public that they should be trying to communicate to. It lowers people's interest in engaging with art on an intellectual and philosophical level, allowing more and more pandering nonsense to be shovelled into them. The modern world has no more interest in "artists" making "art"; it wants "content creators" creating "content". The ongoing destruction of people's ability to understand or appreciate art - where it actually exists - at a high level serves only to lower natural defenses against competing cultures.
"All poets are soldiers. We fight our wars across centuries."
> You absolutely can. Is this actually an issue that EAs have to grapple with?
No, not particularly.
If you've committed to donating 10% of your income to the most effective charities, you can still spend the remaining 90% at your own discretion. Buy a house, have children, donate to a ballet company, buy yourself a Ferrari, whatever.
I see how that justifies art in general, but that doesn't justify new art. If you have values that have already been expressed by other people, what's the use of producing something new? Do we need this many people working in art?
This is where my point about culture being PvP rather than PvE is important to understand. Cultures are in active conflict with other cultures. It's like marketing; do you need $X dollars to inform people about your brand? No, but if you don't spend it, your competitor will, and take you to the cleaners. You can't rely on fixed targets like "supplying a mosquito net for every family" or "delivering 100,000 doses of the MMR vaccine to this region." You are playing an iterative game, and new strategies will be discovered, exploited, countered, and become obsolete. The language of art and how it must engage with people is in constant flux; no one views Shakespeare's plays as low-brow entertainment any more.
Further, art builds on art. Less so now that copyright has strangled the ability of artists to expand on each others' work, but a community of artists working and improving alongside one another does produce better art. Sturgeon's Law is in full effect; 99% of art produced is crap, but it needs to be supported so that the 1% that actually matters ever gets made. And artists often improve best when they have bad examples to analyze; of cinema, it is said that it is easier to understand the rules of film-making from a bad film than a good, because the mistakes are in full view.
This is a pretty restricted idea of what art is and can be. Creating art has been a human impulse probably since the beginning of our species, painting on cave walls, adding flourishes to pottery, tattooing and other body modification. Today you have people writing poetry no one else will read, singing songs no one else will hear, and that's art too.
Obviously art can be propaganda. Batman and Iron Man are capitalist billionaires with hearts of gold. Of course there's a message there. That's only a small sliver of what art is.
I suppose you can crunch some numbers and decide that higher arts funding at the high school level will result in an X% increase in romantic poems exchanged between lovers with a concomitant increase in worker productivity and the general birth rate. Maybe the animal paintings in the Lascaux caves reinforced cultural or religious bonds that improved the local group's success. Looking at things this way is definitely a choice though, and I don't think it's a healthy one generally, though I'm not sure how I could justify that to someone who doesn't already agree. What is the point of being alive other than to care for others and be happy? Making art makes people feel good.