I respect the British organizers’ willingness to sacrifice their reputation on the altar of doing what was actually good instead of just good-looking.
Surely their reputation is a factor in their ability to do good? Optics sometimes necessitate sacrificing the "mathematically superior choice" for one that's worse on paper, in the name of not pissing off a ton of people whose support you rely on.
Reputation has a value, but that value is not infinite. Presumably they made a guess at how bad the hit would be. (Also, with the right audience, being willing to make tough choices like this would enhance their reputation.)
The problem here is that, in order to believe this, we would have to accept that "the castle was the cheapest choice". There are near-infinite ways to present (and omit) numbers in a way that makes your point and your biased choice look like the best one, especially for smart people with a messiah complex. Unless there is someone on the other side actively interested in refuting it, they can massage the numbers as much as they want. It is reasonable for an average person to doubt that buying a castle was indeed the cheapest choice they had.
Nobody really objects to them holding conferences. (At least not too many people do.)
Renting venues for the conferences vs buying a venue seems like a straightforward financial calculation one can make and decide on fairly objectively.
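For illustration, here is a rough sketch of that rent-vs-buy calculation in Python. Every figure (rental rate, event count, purchase price, upkeep, resale fraction) is made up for the example, not the actual Wytham numbers:

```python
# Rough rent-vs-buy comparison. All figures are invented for illustration;
# none are the real Wytham Abbey numbers.

def rent_cost(events_per_year, cost_per_event, years):
    """Total cost of renting a venue for every event."""
    return events_per_year * cost_per_event * years

def buy_cost(purchase_price, annual_upkeep, years, resale_fraction=0.9):
    """Net cost of buying: price plus upkeep, minus the assumed resale value."""
    return purchase_price + annual_upkeep * years - purchase_price * resale_fraction

years = 10
renting = rent_cost(events_per_year=12, cost_per_event=30_000, years=years)
buying = buy_cost(purchase_price=15_000_000, annual_upkeep=200_000, years=years)

print(f"Renting over {years} years: {renting:,.0f}")
print(f"Buying over {years} years:  {buying:,.0f}")
```

Under those made-up assumptions the two options come out roughly comparable; the real answer obviously hinges on which numbers you plug in (especially the assumed resale value), but the structure of the comparison really is that simple.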
If you want to have a moral discussion, we should probably talk about what kind of venues conferences should be held at, not about whose balance sheet the assets sit on.
Sure. I don’t object to them doing anything they want, but if you tag yourself as an “altruist” (effective or not), then other people’s opinions and perspectives are at the very core of who you are (or pretend to be). The idea is that they “don’t care” because they believe the numbers, and the numbers only; if the numbers say you did the right thing, then it doesn’t matter what people think, right? Well yes, if you’re naive enough to believe that the numbers can be infallible (in which case you’re being used), or if you’re smart enough to massage them for your own good (in which case you’re using others). Seriously, in the end it’s just a bunch of people in a big circle jerk looking for social media points, like so many others. “Oh look at how objective we are,” give me a break.
> I don’t object to them doing anything they want, but if you tag yourself as an “altruist” (effective or not), then other people’s opinions and perspectives are at the very core of who you are (or pretend to be).
I don't understand that. You can, e.g., measure relatively easily and relatively objectively how many people die from malaria and how many life-years that costs.
Yes, but you can’t measure objectively the context around these people and what causes that to happen. Why do so many people die of malaria in certain places but not in others? That points to the fact that it is actually possible to avoid these deaths, with the right effort. Then you might be tempted to think: well, they don’t have the money to handle it, or the know-how; then you send them money and you send them specialists and you realize it doesn’t change anything. At this point you might be thinking: if they get money and support to deal with the problem, why are they not doing it? Jumping from this to some sort of racist/Darwinist conclusion is then almost inevitable, if you are naive enough to think that the numbers explain everything. There is a very complex social-historical-cultural system of multiple intertwined factors that goes way beyond the numbers when it comes to “altruism”. It is human. Relying on the numbers is just a way for people to a) think of themselves as better than other “non-effective” altruists, b) mask their prejudice with “it’s not me, it’s the numbers saying you’re inferior”, and c) just circle-jerk and find themselves two or three girlfriends at the same time. It’s not that different from right-wing, NGO-hating bigots, really.
For example, East Germany and North Korea had and have vastly inferior outcomes to West Germany and South Korea. No racism or Darwinism required to see that.
I'm not sure why you are so confident that racist conclusions are inevitable? You also seem to suggest that the facts are racist, but that we have to blind our minds to that reality?
> [...] if you are naive enough to think that the numbers explain everything.
Our knowledge, including our quantitative knowledge, is very limited, but it is not zero either. There are some areas where numbers explain enough.
And, of course, just because numbers don't explain everything perfectly doesn't mean that our personal non-numeric pet theory is automatically any better.
1. I guess blaming the government is one alternative to blaming DNA, but what happens if the government changes and the problem remains? Over and over again?
2. You are right, I can’t be sure that racist conclusions are inevitable, but that’s what I have experienced from people who fail to analyze the context alongside the numbers and to accept that a) you can’t explain everything and b) the numbers very often lie, are biased, manipulated, and/or simply incomplete.
3. The facts are not racist; your answer is exactly what I mean. You can’t get a causal conclusion from a post hoc analysis. Have you heard of eugenics?
It's easy to say in retrospect, but it's not like all the movement ever did in its entire existence was buy a single castle. Do you really think they were expecting EA to explode in the public consciousness after the SBF thing and to have all of their line-items exhaustively audited for what could make the best outrage-bait article? That's quite a black swan event.
>and to have all of their line-items exhaustively audited [...]?
That part they should have expected (and I'm sure they did). Scott claims they expected negative social reactions, just maybe not to this extent.
Even in the absence of journalist attention, EAs love to exhaustively audit line-items. Something as big as Wytham Abbey was clearly not going to escape commentary.
I don't follow why SBF somehow excuses this. It seems you're suggesting buying the abbey would have been fine if only it could have been kept quiet, but because of this unforeseeable black swan event people heard about the Secret EA Castle and now we have all this outrage.
I heard about it from the EA Adjacent forum, and I don't see what SBF has to do with the argument that this burned a lot of social capital and reputation within the community for very nebulous gains (compared to buying a counterfactual property). The abbey might be relatively cheap for what it is, but not absolutely cheap. And it turned out to be very expensive in other ways.
My point is that you could take an action which seems reasonable and justified and thought out within the community that you're in (e.g. EA), because you know everyone's working off the same set of axioms and can follow your reasoning. But when you suddenly get the entire world watching, that's not true any more, and you realize you can't explain the axioms to them, because it's a lot easier to say "haha castle stupid EA people" than to actually think it through, and besides, they're already scrolling past on their feed to the next bit of outrage-bait.
Thanks. I didn't experience that part, but I can definitely see that this would happen. My position is still that it was a very questionable idea even without outside attention, but this definitely didn't help.
William MacAskill wrote and promoted a book in August 2022, in an effort to popularize and justify the sorts of “longtermist” utilitarian views held by (some, increasingly dominant) EA folks. Allegedly this was backed by a multi-million-dollar PR budget, which is why it got so much press at the time, and also why so many people were talking about EA philosophies last summer, even before FTX.
I think your response is strange. The EA/longtermist folks have been making a very deliberate and considered effort to promote and popularize their movement. This wasn’t something that “just happened to them.” And FTX blowing up was a major event in the course of that debate, since it starkly illustrated the weaknesses of a moral philosophy that centers numbers and dollars over the kind of traditional ethical judgement practiced by other charitable movements.
This piece, in turn, feels like more of the same promotional effort. Nominally it’s about the author’s kidney donation, but it immediately and prominently shifts to arguing about how EAs are great people who will donate their kidneys to strangers and how unfair the world is to criticize them over castles, which incidentally were the best possible use of money. It’s not subtle or incidental at all, and it felt like I was reading a piece of promotional religious material or something.
> And FTX blowing up was a major event in the course of that debate, since it starkly illustrated the weaknesses of a moral philosophy that centers numbers and dollars over the kind of traditional ethical judgement practiced by other charitable movements.
Honestly, I think this just doesn't make sense (and I have no ties with EA whatsoever). You've written it nicely, but it just doesn't follow. It makes no sense to judge a movement by its worst possible member, and it doesn't make sense to say that the overall philosophy doesn't work when one guy obviously didn't follow the philosophy and then had it explode in his face.
The argument, to me, feels akin to "well, vegetarians think they're morally superior, because they don't kill animals, but just look at PETA! PETA does this horrible stuff where they kill animals in shelters[1]." And then perhaps follow it up with "this shows the weaknesses of a moral philosophy that centers around saving animals lives..." but it doesn't. PETA doing shady stuff doesn't illustrate any philosophical failures any more than SBF doing shady stuff. If you want to address the philosophical failures of EA, you may, but I don't see any of that in your comment.
> but it immediately and prominently shifts to arguing about how EAs are great people who will donate their kidneys to strangers
Is this so surprising? Look at the immediate and negative response that EA receives today. Of course any mention of EA would want to come with a caveat that "hey, EAs do some good things too, you know - we're not all SBF!"
But I question if your interpretation of the article is even correct. Simply Ctrl-F "effectiv" gives a bunch of hits in section IV, and then no more hits for the rest of the article, except for a stray one in section VII, which was essentially my impression when reading the first time. He talked about it enough to address the controversy, then moved on. Like a reasonable person, not an author of "promotional religious material".
I like the general idea of EA. But it's a human movement, and thus vulnerable to the mismanagement and corruption that's characteristic of distributed organizations that manage large sums of money. On that note, I've observed four worrying trends in the EA movement over the past couple of years. They are (in no particular order):
1. To focus EA efforts on donations from high-net-worth individuals, often at the cost of giving these individuals massive influence over organizational priorities.
2. To shift (more) towards a longtermist philosophy, wherein "effectiveness" is determined by the wellness of hypothetical future beings, rather than measurable near-term impacts like "lives saved, bed nets distributed." This measurability was supposed to be the bedrock of EA, what kept it from becoming like other wasteful organizations.
3. As a consequence of (1) and (2), to shift the balance of internal priorities away from practical and measurable efforts, towards work like "AI alignment"; spending millions on book tours to promote EA/longtermist ideas; and spending on charities that provide facilities to help EA organizations "come up with ideas about the future."
4. To close ranks against outside criticism of EA's priorities, and to refuse any efforts for a community-wide re-evaluation of these new priorities, or to pose tough internal questions about donors or spending.
In this new regime, spending on luxurious meeting facilities ("castles") sits on equal footing with malaria nets. Because perhaps the ideas developed therein will save billions of future lives, or the facilities will encourage big new donations, and that's an organizational priority now. In any case, there's no way you can prove this won't happen, because nothing is empirically measurable. Also: castles are awesome.
None of these priorities represent the entirety of EA, but it's obvious that the opinionated people who control these (huge!) purse-strings are gradually gaining organizational control of the movement, to the (I suspect) long-term detriment of the "let's spend effectively on Malaria nets" wing. It's quite sad, but it's also exactly what I'd expect of an organization that is insufficiently defended against this kind of drift.
Far from "everything is great but SBF couldn't have been predicted," you see evidence of all these mismanagement in the events that I mention. First, there's the well-funded McCaskill book, which attempted to mainstream EA/longtermist priorities. This would not have been possible without (1) [and quite possibly, without stolen FTX deposits.] Then there's the presence of obvious grifters like SBF within the inner-circles of the community, and the fact that nobody with power was asking the obvious question about whether these people should be such an important part of the movement. (It did not take a lot of looking, apparently.)
And finally, you see it in the orgy of undisciplined, poorly-justified spending by EA organizations that occurred right at a time when they were deliberately courting increased prominence, including two different castles. All of this would be perfectly normal in a cult like Scientology, but has no place whatsoever in a mass-scale effective altruist movement.
As someone in the "malaria nets wing", I think you're directionally correct, but overstating things.
> it's quite obvious that the opinionated people who control the (huge!) purse-strings are gradually gaining organizational control of the movement
This is to some extent true of Open Philanthropy, although the effect looks larger than it is because they're consciously committed to not throwing all of their resources behind whatever they think is the best option. I'm not a fan in principle, but it's not insane. See https://www.openphilanthropy.org/research/worldview-diversif... for their take.
GiveWell remains firmly on the global health side, and I don't see that changing. Here's the first my-screen-worth of organizations they've funded in the last year, with approximate numbers:
- 87 million to the Malaria Consortium
- 77 million to Helen Keller International (Vitamin A supplementation)
- 42 million to New Incentives (infant vaccination)
- 17 million to Sightsavers (parasitic worms)
- 8 million to PATH (malaria)
- 6 million to Nutrition International (Vitamin A)
- 5 million to IRD Global (healthcare infrastructure)
Like I said, I don’t think EA is bad or that the situation is irreparable. I just think there are people within “the movement” who are taking it in a worrying direction. And by “taking it” I don’t mean they’ll brainwash everyone in the org, but I do believe they might succeed in capturing the EA brand and a lot of its organizational capacity for their own priorities.
I think in the medium/long term, the Givewell wing of the EA movement will either need to (1) develop better organizational strategies to defend against this kind of takeover and keep priorities balanced, or (2) consider breaking off from the rest of the EA movement and recognizing that the brand now means something different. But that new movement will also need to develop some defenses to prevent the same thing from happening.
To use an excellent rationalist phrase: there’s a “Chesterton’s fence” that I think a lot of EA folks have torn down in their attempt to refactor charity and make it more efficient: namely, that the intentions of leadership really matter. And in any human organization that manages large sums of money and power, you have to have sharply-enforced defenses against charismatic leaders who say they’re your friends and share your priorities, but actually want to take the movement in a very different direction.
> consider breaking off from the rest of the EA movement and recognizing that the brand now means something different.
Yes, this seems fairly likely to happen.
> the intentions of leadership really matter. And in any human organization that manages large sums of money and power, you have to have sharply-enforced defenses against charismatic leaders who say they’re your friends and share your priorities, but actually want to take the movement in a very different direction.
I don't think this is exactly the issue. Openphil's leadership is, as far as I can tell, sincerely not trying to dominate the movement. The problem is that they're such an important funding source for charities that they can't not do so: even if they would never actually withdraw funding to punish people, the mere fact that they could creates a chilling effect.
In principle the same dynamic could apply to GiveWell and global health charities, it's just that there aren't the same sorts of deep ideological differences there: e.g. maybe Alice thinks parasitic worms are the most important problem, and Bob thinks it's malaria, but they're always going to agree that both are extremely bad and should get significant funding.
I think the notion of “sincerity” should be viewed very skeptically here. Not because I believe you’re wrong about anyone’s intentions: but because intentions don’t matter. If I sincerely believe issue A is the most important issue in the field, and I sincerely believe my “obtain donor funds at all costs” strategy is the best strategy to pursue it, then I can end up dominating the movement without ever intending to do so. It takes a strong and explicit effort to prevent this from happening, and that defense won’t happen if everyone is more concerned about being amicable than about vociferously defending the mission.
And of course, once one branch of the movement dominates it, then you’re at the mercy of their continued sincerity. This means you have to assume they’ll always continue to behave sincerely, and their organization won’t be captured by opportunists in the future.
I am an EA of long standing, though not very active on the movement side of things. I think SBF is a big deal, at least insofar as it prominently exposed a moral weakness in the movement. A heavy use of Bayes' rule coupled with a focus on the best uses of dollars led, in my eyes, to SBF being seen less as a successful EA celebrity and more as a moral exemplar. We're trying to effectively improve the world! SBF has developed a magic money-printing machine which will make the world awesomely better by generating dollars for EA causes! Earn-to-give is proven the best strategy because of unlimited upside! We should be more like SBF!
I had two problems with this: firstly, that a movement focused on extracting money from the megarich rather than 10% tithes from the public more generally may potentially generate more cash (good!), but will probably do so most effectively with thought leadership and fundraising teams and donor care and castles for conferences of important people. This loses a distinctive simplicity and non-hierarchy that feels important.
The second is that I had much less sympathy with the 'longtermism' sub-sect than SBF and many of the richer and increasingly more prominent Californian types do[0]. And that the Good Old-Fashioned EA focus on cheap but unglamorous interventions (malaria, cash transfers, etc.) was being increasingly overshadowed in the public eye.
So I don't think it reflects badly on EAs that SBF turned out to be shady (as you say, all groups have a worst member). But it should prompt some awkward questions about the extent that the movement was taken in by the smoke-and-mirrors act. And ideally a reconsideration about whether chasing the money and interests of SBF-types is the right direction for the movement.
[0]: I find it suspicious that the equations demonstrating the infinite importance of fairly recherché concerns on the specifics of AI safety, for example, just happen to line up with the research interests of some EA-adjacent people. That suggests people aren't discounting sufficiently for their own group biases.
>to have all of their line-items exhaustively audited
Historically, yes, that's the entire point of the movement. Audit everything in the goal of doing the most good and not wasting money on frivolities. The auditing approach faded as the movement has grown and became more longtermist, though I (somewhat) expect it to come back now that we're post-ZIRP, post-SBF.
I'm a bit taken aback at the replies saying that EA never takes that into consideration. That was exactly the kind of discourse I was seeing back in 2019 in the EA sphere. Actually, the reason this changed was precisely to avoid another SBF disaster: if you focus too much on consequentialism and convoluted reasons for why what you're doing is the highest EV, you are incentivizing very unlawful and unwanted behaviours. EA pivoted to a more deontological framework at that point to avoid those kinds of dangerous reasoning.
This stuck out to me too. If Effective Altruism is about using math to figure out how to do the most good, they seem to have ignored the higher-order effects of their actions. What if buying the castle hurt Effective Altruism's reputation, so fewer people got into EA, so fewer people donated money to buy mosquito nets for malarial regions in Africa, resulting in less overall good?
It's hard to trust a community that claims to focus on effectiveness but then, it turns out, puts great effort into looking good. That kind of deception is more damaging than a bit of bad press.
There's plenty of charities with great marketing. EA doesn't need to be another one.
But the point is, you're just asserting that. I think the parent poster was observing that, as effective altruists, they might attempt to quantify the pros and cons of such reputational factors (some game-theoretic calculation, perhaps?) and include them in their determinations.
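As a toy illustration of what that might look like (all figures hypothetical, not anything EA actually computed), an expected-value comparison that carries a reputational term could be as simple as:

```python
# Toy expected-value comparison that includes a reputational term.
# Every number here is invented for illustration only.

def expected_net_impact(direct_value, backlash_probability, donation_loss_if_backlash):
    """Direct value of an option, minus the expected loss of future
    donations (and hence impact) if the choice triggers a backlash."""
    return direct_value - backlash_probability * donation_loss_if_backlash

# Option A: buy the venue -- cheaper on paper, but some chance of bad press.
buy = expected_net_impact(direct_value=1_000_000,
                          backlash_probability=0.3,
                          donation_loss_if_backlash=5_000_000)

# Option B: keep renting -- more expensive on paper, negligible reputational risk.
rent = expected_net_impact(direct_value=600_000,
                           backlash_probability=0.0,
                           donation_loss_if_backlash=0)

print(f"Buy:  {buy:,.0f}")   # 1,000,000 - 1,500,000 = -500,000
print(f"Rent: {rent:,.0f}")  #   600,000 -         0 =  600,000
```

The specific numbers are invented; the point is that a plausible reputational term can easily dominate the comparison, which is also why the replies below note how easy such second-order estimates are to fudge in either direction.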
Not when you factor in how this strategy affects those who employ it. Sure, compromising your ethics and lying to people in order to secure more donations will bring in more money for the good cause - short-term. Longer term, how long until you start thinking, since you're already lying to the donors anyway, why not also lie about the whole charity thing and start pocketing the donations for yourself?
"Ends don't justify the means" isn't true on paper, for perfectly rational actors - but is true for actual humans.
There's a danger here, of which I imagine they're aware: higher-order effects are very hard to estimate in chance and magnitude, or in how your actions specifically contributed to them. This makes them perfect for justifying whatever you want, intentionally or accidentally. Overemphasize some positive second-order effects, fail to notice some negative second-order effects, and suddenly your first-order selfish choice looks like a charity. Societies and markets are dynamic systems, so predicting outcomes is less like predicting the path of a rocket from Newton's laws and more like predicting the weather.
The previous owners were using it as a single family home (according to Wikipedia). Make of that what you will. It's definitely a bigger 'waste' of resources than using it for conferences.
No-one cares about the waste of resources though. A castle is a symbol of power. It's the combination of lots of big, bold ideas, lots of public speaking, giving money out, THEN buying a castle. People begin to worry that these people want to rule them.
EA and e/acc have a lot of overlap in being free-market, pro-technology, urban, fairly online groups. The "doomer" and charity parts are the difference. EAs mostly believe that the current trajectory of AGI will "paperclip-maximize" humans, regardless of our suffering, while e/acc believe that ushering in AGI is the next frontier of tech, and that more tech is always good, that tech = positive progress. EAs like to donate towards helping humanity; e/acc believe in getting really rich through tech and markets, which helps other people too, via Randian market prosperity.
The thing is, perception and empathy are often included in their "mathematical calculations", or at least some kind of simulation or guess at them is. They weren't in this case, and that's very strange.
The way this post leaves no room for debate about how the decision was good and paints all of the people who disagree as ignorant outsiders feels very disingenuous.
There was a lot of criticism coming from inside the EA community, too. It became taboo to criticize it, with multiple EA figureheads (the author of this article included) making definitive statements that any critics were wrong and the decision was unequivocally right.
> The way this post leaves no room for debate about how the decision was good and paints all of the people who disagree as ignorant outsiders feels very disingenuous.
Agreed, I went to look for these thinkpieces to see what the arguments were since the guy with footnotes longer than some articles didn't cite any. Searching Google News for '"effective altruism" ( "castle" OR "Wytham" OR "£15m" )' netted 10 total results:
- Three articles about a DIFFERENT castle in the Czech Republic also bought by effective altruists (What's the marginal utility of castles?)
- One article about EVF (formerly CEA) claiming FTX funds weren't used to buy Wytham Abbey
- A smattering of anti-EA or anti-SBF pieces that make single-line references to the castle
Even just googling the same terms, you get one New Yorker article that makes an offhand mention, then mostly forum threads and a couple of blogs that are debating (pretty fairly, imo) whether it was an effective use of money.
I'm sure I could have missed some pieces but this looks like the classic and all too common "got mad about some random hyper-niche tweets but can't admit it"
I agree. It seems that EA forgets to factor PR into its mathematics. Really, the mathematically superior choice should take optics, and the effects of good and bad optics, into account.