Sam Bankman-Fried is a feature, not a bug (joanwestenberg.com)
97 points by jlpcsl on Nov 5, 2023 | 164 comments



> Effective altruism is a philosophical and social movement that advocates "using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis".

I'm not sure how anyone could argue that what SBF was doing fits in any way with that. He's just been found guilty of fraud on multiple counts, so clearly the whole "Effective altruism" thing was just an image he was trying to present, while acting completely against it in private.


Yes, he admitted multiple times that it was all just an act for public perception. Can't get much clearer than this:

  KP: you were really good at talking about ethics, for someone who kind of saw it all as a game with winners and losers
  SBF: ya
  SBF: hehe
  SBF: I had to be
  SBF: it's what reputations are made of, to some extent
  SBF: I feel bad for those who get f***** by it
  SBF: by this dumb game we woke westerners play where we say all the right shiboleths [sic] and so everyone likes us



Fraud was certainly a feature of the industry as a whole, e.g. https://web3isgoinggreat.com/


I think this attitude goes way beyond just crypto/web3. It seems like corporate greed and environmental destruction are a-ok as long as the company tweets in support of woke stuff.


Corporate greed is the goal. The means vary; whatever works.


That’s what happens when one tries to say “shibboleth” and mispronounces it.


They lose their head? (:


Based.


> I'm not sure how anyone could argue that what SBF was doing fits it any way with that.

That's pretty easy, actually. He seems to see himself as some kind of Robin Hood-esque figure, taking money from the rich and distributing it to the poor (or, more specifically, distributing the money of the rich to the places where it's most effective).

The argument is about whether EA naturally leads to this "the ends supersede the means" type of action, which is what the article argues.


I'm not even sure if he saw himself as redistributing money vs. creating value from scratch, from the productivity increases that cryptocurrency would usher in when it became mainstream - and if he was the one to not "re-distribute," but simply distribute, some portion of those newfound gains, why, he would be the best possible steward because of his EA ideals!

But, of course, what he didn't do was get the consent of his depositors to have their deposits rerouted and gambled with in the way that he did. Even if he had been right, if he had been able to ride the wave and cryptocurrency became everything he thought it could be - his actions were legally found to be fraudulent.

In terms of a broader pattern with regard to consent and power dynamics, there's another inescapable data point: the EA movement's problems with internally handling sexual harassment allegations. I won't link to the time.com article on the EA movement, as it horrifyingly orientalizes and conflates polyamory with harassment, but assuming that the article's quotes from primary sources are accurate, it suggests a culture of protection for those who would abuse their power.

I think that the EA community needs to grapple with the notion that just because one believes a certain pathway to be optimal, it does not absolve them of the need to gather explicit consent, given freely, with information discrepancies and power dynamics taken into account, from those who would be impacted. Optimization under constraints can still be optimization, and people's freedom to choose the extent to which they participate is the most important constraint of all.


> it horrifyingly orientalizes and conflates polyamory with harassment

I mean, Caroline Ellison herself is the one who said her relationship goals (which was to a large degree coterminous with company structure) was "best characterized as something like 'imperial Chinese harem'" no? It's not Time "orientalizing", it's one particular fucked up polycule. Which also happened to be one of the highest visibility ones in "the movement" (whether you take that as EA, crypto, Rationalism, Bay Area rich kid libertinism, whatever).

Polyamory takes on a distinctly awful flavor when imported into EA (and before that Rationalist, which I would assume is where it reached EA from) communities. I would assume abuse and codependence before healthy relationships in these cases, because frankly I see 99% of this movement as at best emotionally immature.


Oh, no doubt - I'm certainly not trying to defend EA's particular flavor of polyamory. But from Time's perspective: when you have a term like polyamory that the general public is not likely to be familiar with, and you quote only extreme and stereotyped viewpoints in defining it, without reference to any other definitions or statistics or lived poly experiences outside of EA's particular interpretations, you contribute to a culture of fear that all non-monogamous relationships are equally toxic. Perhaps "horrifying" was an exaggeration - but the omission in the reporting is irresponsible at best, and in the current culture, can cause real damage even to unrelated communities.


IDK, if I wanted to avoid a culture of fear around non-monogamous relationships I would want the toxic ones to be outed and shut down ASAP - like long before they grew into continent-spanning pseudointellectual movements handling hundreds of millions of dollars. Asking for the article to dig into "but some are healthy too" is just "not all men" writ (only mildly) larger.

I also think this is really a non-issue, millennials and later mostly don't care at all and the people who don't like them aren't going to be convinced by anything ever, let alone some more carefully-crafted magazine article.


Sam wasn't really a crypto guy. He knew some things about finance and trading, and set up a prop shop offshore.


If I remember the stories correctly, Robin Hood lived in the forest with his merry men, not in a palace in the Bahamas :)


I struggle to think of a single head of a political regime in the last century who based his government on taking money from the rich and giving it to the poor and who didn't end up with a palace. Well, apart from the ones that got killed in military coups and such.

And I doubt the idea that they were all liars that never believed in what they preached would withstand scrutiny.

Suppose your ethics are unworkable or irrational and you are unwilling to question them. If you think about it, by necessity, you must become more amoral as time goes by. At the very least, the luxury of a palace probably helps silence one's moral instincts.


Also, in the original stories, Robin was more of a rob-from-the-Church-and-give-to-the-gentry figure. He was very friendly with the "right sort" of noblemen; he just really didn't like rich priests from the Church. So we (in our modern age) have kind of perverted the idea of Robin Hood to fit our times, but he never really was some kind of hero for the poor.

See here: https://en.wikipedia.org/wiki/A_Gest_of_Robyn_Hode#Summary


> taking money from the rich and distributing it to the poor

There is zero evidence that this was his intent.

There is a lot of evidence that he intended to maximize trading profits and use those profits to help people.

The fact that he failed does not mean that he intended to steal from the rich.


> There is zero evidence that [stealing] was his intent.

The federal jury seems to have disagreed, finding him guilty on all charges, all of which require intent. As do his co-conspirators who testified to their fraud.

This is such a bizarre take; this was one of the most lopsided financial prosecutions of all time and you're out here saying it had no foundation in fact at all?


Your replacement in brackets is completely different from what the parent said. “And distributing it to the poor” was a key part of what they were responding to.


The last sentence is literally "The fact that he failed does not mean that he intended to steal from the rich".

The commenter was very clearly disputing SBF's _intent_ to steal. _You_ may doubt his good intentions for that theft (quite reasonably), but that's not the point that we are debating.


Not only did panarky clearly dispute intent to steal, they specifically said that there's a lot of evidence of intent to help people: "There is a lot of evidence that he intended to maximize trading profits and use those profits to help people."

Juxtaposed with zero evidence of intent to steal.

I'm sorry, but I believe you have misread the thread completely.


How is he defining "rich" - because AFAIK his algorithm didn't just take money from the whales to give to Alameda - and when things are settled, it's almost assuredly the whales that will get paid out first. So he's almost assuredly taking from the middle-class-at-best to give to his own personal pet projects.


> So he's almost assuredly taking from the middle-class-at-best to give to his own personal pet projects.

He's still (for the largest part) taking investment money from the top 1% or at least top 10% of the world and is (in his opinion) putting it to work saving actual lives. I don't think it's that hard to see how he could justify his actions to himself.

> and when things are settled, it's almost assuredly the whales that will get paid out first.

This is pure speculation on my part, but I'm pretty sure that "how is the remaining money re-distributed once my company implodes" didn't make it into his calculations. Also, where the remaining funds go is pretty irrelevant to the calculation anyway, as it's not going to be funneled towards EA causes.


> He seems to see himself as some kind of Robin Hood-esque figure, taking money from the rich and distributing it to the poor

Well, and to himself. At one point, didn't he own three private jets?


the problem seems twofold. on the one hand there is a seemingly objective definition of what Effective Altruism is supposed to be. that definition is one that i would like to subscribe to as well. on the other hand it appears that the idea of EA was popularized by someone whose actions didn't actually fit the definition which discredits the whole movement. that's the first problem.

the second problem is that we tend to redefine what things are based on the actions of those that promote them. it's the same thing that happens with religions. a lot of people now and in the past did things in the name of those religions that were a discredit to them.

the difficulty is to keep these things apart.

the consequence for me is that i cannot associate myself with things like EA or blockchain development, because in the eyes of the public these now do more harm than good. and for the same reason many not only leave their particular religion but disassociate themselves from the concept of religion itself.


As near as I can tell, “Effective Altruism” is just the latest lame label for what is essentially “trickle down economics”, where some overly entitled individuals condescendingly try to justify some notion that they know how to spend money better than the plebs, so they must clearly be the obvious beneficiaries of any/all proceeds of any given transaction. It isn’t new, and it certainly isn’t helpful. It’s cynical and needs to be called what it is—a framework for extending and perpetuating income inequality while making those who benefit the most feel good about themselves.


i don't see the connection. trickle down economics is about reducing taxes for high earners so that they can spend the money on things that will eventually benefit lower earners. that has nothing to do with effective altruism as far as i can tell.

at best i can see that high earners claim effective altruism for themselves, and are discrediting it with their actions, just like i said.

if you talk to people promoting effective altruism on the ground (which i had the opportunity to do recently) then you get a different picture.


It's often used as an excuse justifying short term greed and deception for the sake of theoretical long term benefits.


It's actually very common for companies doing terrible things like fraud, environmental damage, bait and switches, patent abuse etc. to lean hard on promoting social work they're doing.

This is so common that short sellers like Kyle Bass actually look for it as an additional red flag when hunting down fraudulent companies.


SBF cannot even manage "ineffective" altruism. He puts other people through hell without a hint of empathy. How would anyone expect he could do "effective" altruism? In his own words, he "fucked up". But he is not sorry, he offers no apology, he's "not guilty". He is a compulsive liar.

It will be interesting to see how he fares in prison. Martin Shkreli wants to be his friend. Perhaps they can be pen pals. Maybe he'll get special treatment or early release.

Even if SBF was following the EA playbook to the letter, it did not work. The attempt was ineffective. He failed.

Michael Lewis shared some SBF story where the scenario (the bet) presented to SBF was a chance to make the world better or, if it failed, the entire world would be destroyed. SBF allegedly would take that bet. All or nothing. Where have we seen this type of thinking before.


Ellison also shared the coin flip story on the stand, something like "heads the world is twice as good, tails the world is destroyed."


Somehow our society puts non-altruistic people into the positions with the most impact. I guess the ineffective altruist knows it's almost impossible to get to such a position without wronging others. And it's the ruthless fool with a savior complex who puts himself above others "to save them".


What would be the difference between SBF actually being an effective altruist (committing fraud to hoard wealth until his dying day, when he would turn it all into a public good) and merely saying he was one while doing everything up until the last step?

The fraud is still illegal. He would still have gone to jail.


His brother was heading some “Guarding against pandemics” non-profit that he was involved with. I believe there was one California proposition they were heavily funding related to it.

I don’t know if it was started in earnest or if there was some ulterior motive.


Non-profits are just money laundering machines for the rich with sometimes unintended positive side effects


Not always, but too often. Humane Society International does all sorts of verifiable charitable work, for example.


Good doesn’t cancel out the bad


Even further: debatable unproven good doesn't cancel out the immediate unmistakable bad. Sponsoring a regulation with stolen money is also self-serving (political influence), plus we don't have any data available to say that the regulation resulted in any societal good. We do know, though, that the stolen money negatively affected millions of people.


Equally: indentured servitude as restitution isn't a valid rehabilitation strategy. It neither deters criminal activity nor constrains one's thinking about engaging in it. The only thing we can do is weigh punishment against the severity of the crimes. SBF will soon find out what that value is.


This sounded like more bad, actually.


Yes, but the article talked about EA and how giving to causes while blindly ignoring the means is kinda the M.O. of it all. So "good" (giving) doesn't cancel out the "bad" (robbing customer funds).


I realize this; I was just saying the "good" also seemed just as bad as the bad. Specifically, the cause itself seemed opportunistic and fear-driven.


Good and bad are subjective, based on human judgment and guardrailed by laws. What one considers good, another can consider bad. There are arguably objective goods and bads, but in the case of behavior, it's subjective. Polluting the planet is objectively bad. Funding a political cause is subjectively so.


> He's just been found guilty of fraud on multiple counts, so clearly the whole "Effective altruism" was just a image he was trying to present, while acting completely against it in private.

I don't think that's right. In his head, it wasn't fraud. He was moving money around to backstop losses, sure, and OK, technically it wasn't his. But it was all going to be OK in the end and no one would know. No one was going to lose any money, so no fraud. QED.

In the real world, criminals don't think they're criminals. Everyone's got a good reason for doing what they do.


I think the key thing is that everything he's ever said and done (including the stuff that harmed only Sam Bankman-Fried, as well as the stuff that benefited him and the stuff that was illegal and lost his customers' money) screams gambling addict.

It's debatable whether the EA stuff was rationalisation of those impulses or reputation laundering, but he was quite explicit about the fact that his idea of "saving humanity" involved making massive financial bets whilst dismissing the idea of hedging against the downsides, even in fawning interviews published by VCs that had invested in him. In that respect, his public image was entirely in keeping with the sort of person that wouldn't be bothered by a few regulations about customer funds, because what if he could double them by ignoring the regulation?...


I can only buy this logic if one of the following is true:

1) SBF is a crook

2) SBF is an idiot


I agree. Given his professional history I believe the first option is likely. At the same time, he clearly isn’t that smart. He left an enormous trail of evidence in his wake, and his fraud wasn’t even that sophisticated.


He was smart enough to get into MIT. Probably not as smart as Goethe.


I think anything SBF did to benefit others was a byproduct or side effect of what he was actually doing, enriching/empowering himself, like you say. Or, at best, a hedge in the form of insurance against bad PR.


Does that change the point? Even if not in earnest, he was able to deflect suspicion wearing a cloak of EA. That, in and of itself, is a problem. Nothing is above reproach, no matter how pious or nerdy.


Is it just me or do a whole lot of EA people just love to keep talking but never do anything actually altruistic?

I mean I get the idea of trying to optimize the value of charitable donations. But donating something is infinitely better than chatting all day and donating nothing.

Even on a small scale, the number of times I've seen non-EA people do something nice is way, way more than the number of times I've seen an EA person come out of their "circle" and do something nice.


This is how EA works. It wouldn’t be maximally effective to donate the money now, would it? Compounding interest says you should hoard it until your dying day and only then invest in a charitable cause.


This is the smartest thing for the 99% though. While we're alive, we need the money now. Donate your estate once you're dead.

It has a side benefit of discouraging accelerated inheritance schemes.


What if you have children? You could say they "need the money now" more than you do, and never donate. You could even write a recursive will that defines N=5 and requires your daughter or son, as a condition of inheriting the money, to (a) never donate it during their lifetime, (b) compound it for a lifetime in the S&P 500, (c) write their own will with the exact same text, changing only N to N-1, and (d) donate only once N=0. Only by agreeing to fulfill (a), (b), and (c) do they even get to have the money.
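
For the curious, here's a toy sketch of that recursive-will arithmetic in Python; the 5% real return and 30-year generations are invented assumptions, not claims about markets or anyone's actual doctrine:

  # A toy model of the recursive will above. The 5% real annual return
  # and 30-year generations are invented assumptions for illustration.
  def recursive_will(principal: float, n: int, annual_return: float = 0.05,
                     years_per_generation: int = 30) -> float:
      """Compound the estate across n generations; donate only at N=0."""
      if n == 0:
          return principal  # condition (d): donation finally allowed
      # conditions (a)-(c): never donate, compound for a lifetime,
      # then pass on the same will with N decremented
      grown = principal * (1 + annual_return) ** years_per_generation
      return recursive_will(grown, n - 1, annual_return, years_per_generation)

  donate_now = 1_000_000.0
  donate_later = recursive_will(donate_now, n=5)
  print(f"Donate now:                 ${donate_now:,.0f}")
  print(f"Donate after 5 generations: ${donate_later:,.0f}")  # ~$1.5B at 5%/yr

Of course, as the replies below point out, this math only wins if doing good now has no compounding returns of its own.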


No, you'd need to account for several things (curing someone of malaria has compounding returns, high-impact opportunities may dry up, etc).

There's substantial discussion:

https://forum.effectivealtruism.org/posts/7uJcBNZhinomKtH9p/...

I do agree that utilitarianism provides too much room for mental gymnastics which bad actors can use to feign benevolence.


> keep talking but never do anything actually altruistic?

all sorts of interesting slicing and dicing of charitable activities are tracked, and it's really curious.

https://www.philanthropyroundtable.org/almanac/statistics-on...


The argument is akin to saying bank robbery is a feature, not a bug, when the thief donates his or her take to charities or soup kitchens or what have you.

It’s a ridiculous high school level argument.


His actions made perfect sense from his utilitarian Effective Altruist worldview. He was stealing from rich people, and giving the money to what he saw as "worthy causes."

He was pretty open even before the collapse about how he decided to get as rich as possible, as fast as possible, at all costs, as long as (in his own moral calculus) the net benefits were positive.

EAs are arguably a 'cult' obsessed with AI risk, which they mostly believe will end the world in the next few years. So to them, that pretty much justifies anything that could help mitigate that risk. He would see it as immoral not to become a criminal in order to fund AI risk research.

Personally, I think these AI risk concerns are legitimate, but I don't agree with these methods.


> His actions made perfect sense from his utilitarian Effective Altruist worldview.

They don't. Everyone in EA (AFAICT) has been pretty clear about this. Lying and undermining trust and institutions does tremendous lasting harm.

I am also tired of "people are very concerned about X and think that it's important, so they're basically a cult".


I've been involved (or at least following) the rationalist and EA communities since the beginning. I call it a cult somewhat tongue in cheek, but it certainly has a lot more cult-like aspects beyond just "pretty concerned about AI risk." I mean, they have a charismatic leader whose unusual ideas about everything from AI risk to sexuality are more or less carbon copied, and practiced in group houses, etc. in some pretty creepy ways.

Literally anyone from the outside would easily be convinced they were pretty much your standard apocalyptic sex cult, just from an accurate description of it.

I don't really care if people on EA forums don't all agree with SBF, but his type of thinking is standard if you use utilitarianism to make decisions in the real world, and it leads to some pretty horrific stuff. Consequentialism / utilitarianism are widely accepted in EA, and if you take that to the extreme, it can justify things like this.


You won't see me defending EA often. Or ever, actually.

But the fact that a few extremists would take an idea way beyond any measure of reason is a human effect, not an EA effect.

I can't think of any good ideas that haven't been distorted by a few. That's no reason to abandon them.

My personal opinion is that EA is mostly worthless navel gazing, but that doesn't mean that we should dismiss it over the SBFs.


Institutionally, EA was defending SBF up until he was arrested, not up until the point he started saying stupid shit. He was doing that for a long time before he was arrested.

https://time.com/6262810/sam-bankman-fried-effective-altruis...


I disagree with this because I was vocally against SBF in the EA forums prior to him being arrested, as were many others.


Who are you?


Someone who used to be vocal on EA Forums & was around for a couple years. I got less involved for other reasons after the whole SBF stuff.


So I got "guy who posted on forum" on one side and William MacAskill and Nick Beckstead (and, well, SBF himself - he left CEA voluntarily, right?) on the other - which of these would you say has the institutional support?


Will was slow to come out against SBF, sure, but the majority of people were asking why he was being silent WAY before the arrest. Are you involved in any way, out of interest? As in, were you there, or are you going off of media reports? I don't know many people involved in EA who'd say it was institutionally pro-SBF, just that a couple of people were silent too long. Interesting if there are some.


I have no connection to CEA or SBF, I'm just someone who's desperate to recover moral philosophy from financialization by wannabe tech bros.

Edit for reply:

> around October/November 2022

i.e. after the run and it's clear he's going to be arrested... Usual EA bullshit.

> Nice moving goal posts :)

This isn't the gotcha you think it is. The relevant time is the ~4y period from 2018 to 2022 while he was running an obvious fraud and totally insane, not the few weeks between the run, collapse, and formal arrest. If you think he only looked bad from late 2022, that's just more proof you EAs are dumb as rocks.


Really?

> Institutionally, EA was defending SBF up until he was arrested,

Nope, here's why & evidence

> i.e. after the run and it's clear he's going to be arrested... Usual EA bullshit.

Nice moving goal posts :)


What is an extremist with respect to an ethical system? An ethical philosophy is not some scripture you can interpret and misinterpret.

The existence of extremists implies the existence of moderates. What are they? When their moral code says "Fraud, in these particular circumstances, is good", are they the ones that can't shake off the vice of honesty?

Or are they the ones who use motivated reasoning to bring their moral philosophy to conformance with their instinctual moral feelings?


> His actions made perfect sense from his utilitarian Effective Altruist worldview. He was stealing from rich people, and giving the money to what he saw as "worthy causes."

Clearly this is a false assumption. He was not stealing from rich people (as if stealing from rich people were a justifiable EA practice), and he was not spending the majority of the stolen money on worthy causes (worthy by EA standards). He was buying property for himself and his circle, signing deals with stadiums, covering Alameda Research losses, etc. Even for a deluded man like SBF, I don't think it's possible to interpret that as an improvement of the world's conditions.


> He was stealing from rich people

I am not familiar with all of the intimate details, but I did not think that the scam was targeted at "rich people"; rather, it was advertised generally and preyed upon get-rich-quick crypto attitudes. Am I mistaken?


I share your assessment, and have further observed that the financially insecure are over-represented among the get-rich-quick crowd. Rich people are by and large good at not losing money.


> He was stealing from rich people

Are you sure? My gut says that he mostly damaged not-rich people. I don't know too much about the specific case, but I know enough about the general crypto space to have an opinion, although not sure how to provide data to support it. You don't provide data either, though.


I have a growing concern that by engaging with this argument we are making it more acceptable. It's not OK to steal from "rich people". It's not OK to blame everything on the 1%. Online platforms are filled with garbage about the 1%, and most posters are Americans, all of whom actually are in this 1% globally.

If you own a car and/or a home - are you rich? Is it acceptable to steal your $1K investment and spend it on pseudo-EA causes? An argument so ethically bankrupt, I can't believe people even type it on a public forum.


My point is not that it's ok to steal from the rich people.

My point is that I suspect that SBF stole mostly from non-rich people, which is also NOT ok.


My frustration about engaging with this argument comes from a place of agreement with you; sorry if I wasn't clear. The parent commenter made it seem like FTX only defrauded some abstract rich people, which, according to them, would somehow make it more acceptable.


You still think he was actually going to give away his money?


I know for a fact he did give away money. I did some volunteer data analysis work for a charity aggregator he used and saw the records; he absolutely donated sizeable amounts to charity (as he said he did).


he gave away other people’s money to charity, specifically his users’ money that he was entrusted with.


I'm very familiar with the case, I was saying simply that there were donations to EA-aligned charities


In which case the word you're looking for is closer to "laundered", not "donated".


Oh, I didn't know that. He also got a lot of praise for saying he'd give all his fortune away someday. They called him the most generous man. Idk, maybe he would have.


I think he definitely went off the rails with spending on himself, but he absolutely donated the way he said he would, too. There were a lot of questions from charity founders who received funding wondering if they should give it back, and there were much larger organisations that probably had similar discussions, just not in the open, so I didn't see them.

Edit: for context, this was a large aggregator for “effective charities” used by other billionaires, and if you sorted by maximum donation, he was on top for the years I did data analysis.


Awesome. Well, I'm still going to rag on him for every other reason.


He did give a lot of money, just not in a way that fit the image of EA.


> I'm not sure how anyone could argue that what SBF was doing fits it any way with that. He's just been found guilty of fraud on multiple counts,

EA strikes me as the same sort of ambiguous slime as "breast cancer awareness."

You read the words and think hey, that sounds right. Benefit others as efficiently as possible. Guy's a Robin Hood type, stealing from the rich to benefit others. Good for you, buddy. Godspeed.

Except you look at what he's actually doing and see he's not stealing from the rich and giving to the poor. He's stealing from the foolish and giving to "others," who turn out to be his friends and associates. $500m to Anthropic? $5b for Twitter? This shit isn't charity.

It's kleptocracy masquerading as charity. I can't see his charitable causes as anything more than an ephemeral funds-parking scheme storing funds in a chain of IOUs.


This leads into a whole "theory vs practice" argument that shows up whenever people start talking about communism. If anybody doing Effective Altruism in the real world fails, we are told that they were not doing EA correctly, and it's simply unfortunate that nobody's done it correctly yet. Thus the movement itself can never be discredited by mere experimental evidence.


Most EA adherents focus on process, not outcomes. So from an EA position, the fraud outcome doesn't matter. Imagine if SBF's gambles had paid off, they might say; that potential outcome must be probabilistically weighed against the negative one. Since crypto is viewed in most of EA as a sin industry, the negative impact is minor relative to the positive impact of SBF's political and charity donations, so even a slim chance of success should be taken.

However EA logic is wrong because utilitarianism is wrong. It doesn’t matter whether stealing and fraud create good outcomes or not. Theft and fraud are evil in themselves irrespective of outcomes. To put it in an extreme sense, even if it would save the planet you should still not steal nor defraud others. The fact that some actions are in essence evil is enshrined in the legal system, is commonly accepted, and philosophically sound.


That description of EA is not even remotely accurate.


Theft or fraud would definitely be preferable to the loss of the planet. Utilitarianism aside, sometimes doing the wrong thing actually is the better thing to do. Anyone who helped slaves escape via the Underground Railroad or smuggled Jews out of Nazi Germany would not be considered evil for their actions, and those could easily be construed as theft (in a gross way) or fraud.


Because humans have a natural right to movement, helping slaves is not theft even if an evil social institution falsely claims it is theft. Similarly regarding eugenics, humans have a natural right to reproduce and to live, so someone assisting someone to escape that situation is in the right no matter what society they live in.

Extreme moral relativism like you’re mentioning here is just as wrong as utilitarianism.

It’s not preferable to steal than to lose the planet. Some things really are worse than death, and committing an evil action is wrong no matter what the justification the evildoer makes.


Evil is not a 100% settled matter. For example, some people believe that saving another person's life if you can is a moral obligation and to not do so would be an evil act. I gather from your refusal to even consider stealing something in order to save the planet you are in the other camp.


For most utilitarians, refusal to do good is evil because they view actions as having a certain number of positive or negative utils based on their outcomes.

For many other ethical systems, good and evil aren’t weighed against each other. Instead evil deeds may be forbidden flat out. Good deeds are optional but commendable. Since evil is forbidden and good is optional, you cannot justify good with evil - you cannot justify the unjustifiable.

Why is deontology a better system than utilitarianism? There’s a lot of reasons, but here’s two.

First, deontology solves a lot of thought puzzles much more cleanly than utilitarianism does, and avoids weirdness such as the conclusion that torturing one person is OK if it makes everyone else happy (see the short fiction story “The Ones Who Walk Away from Omelas” for an empathetic example). Deontology has a strong philosophical basis outside of theology, especially beginning with Kant.

Second, it prevents those in power from justifying evil deeds by their good outcomes. Take politics, where politicians justify evil actions to their followers by arguing they're necessary to prevent the other side from winning. If people accepted a better theory of ethics, those types of arguments wouldn't take, and politics would be forced to a higher standard.


> On November 11, FTX fell apart and was revealed as a giant scam. Suddenly everyone hated effective altruists. Publications that had been feting us a few months before pivoted to saying they knew we were evil all along. I practiced rehearsing the words “I have never donated to charity, and if I did, I certainly wouldn’t care whether it was effective or not”.

From Scott Alexander’s post on why, as an EA, he donated a kidney. https://www.astralcodexten.com/p/my-left-kidney


This guy has a huge persecution complex, remember how he reacted to a largely positive press profile that was going to publish his name?


I remember! He was happy about it -- until he found out that the NYT was going to doxx him and publish his name, which would've likely had highly negative effects for himself and his psychiatric patients. The NYT didn't care, of course -- and they attempted to cover him a lot more negatively, as a result of the backlash they received.


That's an interesting take. An alternative take is that he desperately sought to become a public intellectual for personal reasons, while simultaneously believing this would hurt his patients, and declared this the New York Times's problem.

I thought he regularly preached the presumption of good faith, even when discussing some of the most radical people in the world; interesting how that's not extended to the NY Times for the sin of... using a real name policy?


Doxxing is not good faith


As someone who creates data and analysis that get used in setting policy, I find a lot of EA spreadsheet analysis of measured "good" to be very naive about the nature of measurement and classification.

That being said, I think this piece is a bit of an overreaction, and there seem to be many earnest actors in the EA community really thinking about how they can do good in the world. SBF is very unfortunate for EA, but to jump from his example to saying all EA practitioners care exclusively about the ends over the means is a bit of a leap, imo.


It's just a bunch of privileged armchair humanitarians who have never left the confines of their fancy circles, let alone been confronted with the things they're trying to fix. They think they can fix issues better than NGOs that have had boots on the ground for decades, just because they know Python and Excel, as if people actually working on humanitarian causes were benevolent r**ards. Of course, it allows for great intellectual masturbation and self-congratulation, as if fixing complex social/ecological issues were just about "cracking a problem" and presenting a neat 12-page PPT presentation before moving on to the next problem.

If any of these people actually walked the talk, we'd see a lot more one-way tickets to Africa for them to finally be able to employ their beautiful minds on real problems.


The best comment I’ve read in HN for a really long time.


For someone outside the space (like me), what’s the big innovation of Effective Altruism? I assume when the rubber hits the road, most people doing big donations have people to look at the effectiveness of that donation.

I guess I’m just suspicious of any community or movement that labels itself as “effective,” because it is hard to believe that they were the first ones to think of the idea of not being ineffective, haha.


Most people doing big donations aren't particularly interested in effectiveness. The Susan G. Komen foundation, still the largest breast cancer charity in the US, had a big controversy about this around the time that Effective Altruism started to get big. According to their annual reports (https://www.komen.org/wp-content/uploads/fy19-20-annual-repo...), if you go to their site and donate $100 towards their promise of "ending breast cancer":

* $5 goes towards breast cancer research. (IIUC, cancer researchers are somewhat skeptical of the idea that cancer could be "ended" as such, but that's a minor quibble.)

* $8 goes towards treatment and screening. Not exactly what was promised, but still saving lives, so close enough.

* $14 goes towards administering the Susan G. Komen foundation.

* $22 goes towards raising funds for the Susan G. Komen foundation.

* $51 goes towards "education". They say this includes patient support services, not just telling people about the Susan G. Komen foundation, but don't offer a further breakdown.

And my understanding is that, in non-EA philanthropic circles, this breakdown isn't considered particularly egregious. At least they're doing something! An ineffective charity would be something like One Laptop per Child, which raised money and press attention from a fake crank-powered laptop and accomplished nothing of note before technological innovation outpaced them.


In the absence of any substantive allegations of misappropriation, be fair to OLPC. They had the challenge of engineering and logistics for a tangible product, not vague categories like "education."

To my neighbor, SGK's efforts yielded as much as OLPC's vaporware. As a career nurse, she's well-educated about the breast cancer she has, that she will soon die from because she can't afford to treat it.

SGK amounts to little more than a goddamn fortune-teller. Not one cent of that $8 has bought her a single extra minute of life.


Yeah, I should be clear, I don't mean to be particularly hard on OLPC. They tried to do a cool-sounding thing, it didn't happen to work out despite real efforts towards it, and as far as tech demos go the crank laptop wasn't egregious. But Susan G Komen isn't really being dishonest either - those numbers are from a nicely designed pie chart in their 2020 annual report! They're just responding to the donor demand for cool events and soaring rhetoric that makes them feel like part of a movement. People who are interested in effectiveness instead donate to organizations like the Breast Cancer Research Foundation, which puts over 70% of donor money into research grants.


EA seeks to measure and compare altruistic endeavors, however imperfectly. For example, measuring the good created by donating to your kids' school, to the 9/11 fund, or to bed nets in Africa. An EA would likely say that the good for society created by donating to your kids' school is less than the good provided by donating the same amount of money to bed nets. They might quantify that in lives saved, such as a $1000 donation to bed nets saving about 1/5th of a life in Africa. But maybe the $1000 donation to the school improves the lives of 100 students by 1% each, or something like that.

It really forces us to have hard conversations about how we use our collective effort to help each other, based on more than just feelings in the moment. Feelings are an important part of the end goal, but feelings about some particular intervention are not a good way to evaluate it. We're also forced to be clear about what good we think an intervention will provide, and to whom.
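
To make that concrete, here is a minimal sketch of the back-of-the-envelope calculus being described. Every number is an illustrative assumption: the ~$5,000-per-life bed net figure is just what "1/5th of a life per $1000" implies, and the school conversion factor is precisely the contestable part:

  # A sketch of the back-of-the-envelope comparison described above.
  # All numbers are illustrative assumptions, not real estimates.
  DONATION = 1_000  # dollars

  # Bed nets: ~1/5 of a life per $1,000 implies ~$5,000 per life saved.
  bednet_value = DONATION / 5_000  # 0.2 "lives saved"

  # School: 100 students improved by 1% each. Whether a 1% better life
  # counts as 0.01 of a life saved is the contested conversion; here we
  # simply assume it does.
  school_value = 100 * 0.01  # 1.0 "life-equivalents"

  print(f"bed nets: {bednet_value:.2f} life-equivalents per ${DONATION:,}")
  print(f"school:   {school_value:.2f} life-equivalents per ${DONATION:,}")

Note that the ranking flips entirely depending on that one conversion factor, which is much of what the replies below object to.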


Your answer sums up perfectly why I am so strongly against EA. The idea that you can quantify your donation to bed nets in terms of something so arbitrary as “lives saved”, without taking into account the extremely complex social environment at which that donation is arriving, is completely opposite to my beliefs. It is basically the same as measuring something like “what’s the benefit of an individual vote?”

The idea that you really believe that as an EA you are having “harder” conversations and you are more “forced” to be clear about what you do than other non-effective altruists is just baffling to me.


It seems to make sense to me. You evaluate whether your actions are doing the most good in the world according to your metrics. For example, should I work at a soup kitchen for a day, or work in my day job in HFT for a day and donate it all to a soup kitchen?

With the latter I could support quite a few people at the soup kitchen for that day.

You can take that to whatever extent you desire. Should I donate to guineaworm eradication or local libraries? And so on and so forth.

And the general EA community rates lives highly. So max lives saved usually trumps everything else.
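
As a minimal sketch of that soup-kitchen-vs-HFT comparison (all wage and meal-cost figures invented purely for illustration):

  # Earning-to-give arithmetic, with invented numbers.
  HFT_DAY_PAY = 2_000   # assumed one-day take-home pay in HFT
  VOLUNTEER_WAGE = 20   # assumed hourly value of kitchen labor
  HOURS = 8
  MEAL_COST = 5         # assumed cost to serve one meal

  meals_from_labor = (VOLUNTEER_WAGE * HOURS) // MEAL_COST  # 32 meals
  meals_from_donation = HFT_DAY_PAY // MEAL_COST            # 400 meals

  print(f"Volunteer for a day: ~{meals_from_labor} meals' worth of labor")
  print(f"Work and donate:     ~{meals_from_donation} meals funded")

Under those assumptions the donation wins by an order of magnitude; change the assumptions and the gap changes with them.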


Some political/social communities simply kidnap words and terms and use them as if their solution is the only solution to a problem. You are absolutely right to be suspicious; the fact that they are oblivious enough to reality that they really believe themselves to be more effective than others (simply because they hide behind arbitrary numerical computations) is reason enough to suspect that their numbers aren’t really covering everything.

It is a bit like the way feminists think of themselves as the only line of fight for women’s rights, or right-wing extremists and populists keep labelling themselves as “freedom fighters”. All of a sudden if you’re opposed to them you become a woman-hater, or a freedom-hating socialist (because they can’t understand that there are other alternative ways to defend the same ideas). These are just political groups with one specific ideology who are marketing themselves as the solution. Thankfully nowadays with the fall of SBF people are dismissing EA as the fad that it is, but there was a time when opposing EA would elicit reactions such as “oh so you’re against effectiveness/transparency?” or “so you are in favor of corruption?” Sigh.


> I guess I’m just suspicious of any community or movement that labels itself as “effective,” because it is hard to believe that they were the first ones to think of the idea of not being ineffective, haha.

What do people donate money to charity for? It's certainly not all to the poorest or most desperate people amongst us. It gets donated to a church, or to an art museum. Beyond a certain point, they don't really need the money.

Meanwhile, halfway around the world, people live in abject poverty or are dying of famine or war.

I certainly don't act like an effective altruist. My money goes to things I care about, like open source projects, but not necessarily to people who need the money to live another day, or to people who could help other people live another day.

Let's put it this way: is it wrong not to save people's lives, especially when it is of no inconvenience to you? I am not talking about donating so much money that I end up a beggar on the street, but donating a substantial amount while still retaining a 'middle class' lifestyle.

Then the next question is whether what you're doing is effective or counterproductive. I think it should be no surprise that a large number of people don't give such thought to these questions. Imagine the vast scientific illiteracy that pervades our world, like anti-vaxxers asking for money to help spread their message.


> Is it wrong to not save people's lives, especially when it is of no inconvenience to you?

I used to think so. Later events then taught me that proactively helping people doesn't necessarily keep their knife away from your throat. You see people's true colors when you try to disengage.

I only save the lives of animals these days. People are scum. Animals never did me wrong.


It’s not just donations. It’s living your entire life according to “expected value”, or how many “utility” units (utils) can be created through every action and relationship. It’s an extremely inhuman way of living that goes against ethical norms established since the dawn of civilization. Effective altruists are dangerous, and you should not be friends with them, hire them, or associate with them at all if you can possibly avoid it. They put your wealth and life at risk.


That is silly. People argue about whether cars are good for the environment, whether we should create a walkable society, what policies to adopt on climate change, and so forth. They are always arguing about the consequences of their actions.

At the end of the day, people don't follow some sort of pure moral philosophy like some sort of platonic ideals.


This article makes a fundamental mistake that many who have written about EA make - by treating the philosophical and real-world application of EA as the same thing. EA is such a new philosophy and movement that the philosophy and application of EA are not sufficiently divorced from one another, and the people at the core of "philosophy EA" are also involved in "application EA". So this is an easy mistake to make.

There are people in rooms discussing whether "the ends justify the means" (though I don't think anyone is seriously arguing in favor of SBF-type means). BUT THESE ARE PHILOSOPHICAL DISCUSSIONS.

If you asked 1,000 effective altruists whether they think what SBF did was acceptable (or gave them a hypothetical ends-justify-the-means scenario at 10% of the severity of SBF's), I would wager that 0 would say it was acceptable. SBF used EA as a shield to hide his fraudulent behavior, and EA (both the philosophy and application sides) has taken a hard look at what EA argues for. Those who think that EA (even philosophy EA) would approve of SBF's behavior do not understand EA at all.

---

I study EA and so I am loosely connected to the movement, but I don't consider myself an effective altruist.


This article misrepresents what EA is about, and unfairly links SBF's criminal behavior to that philosophy.

SBF is a numerically oriented crook.

EA is about attempting to measure and compare different philanthropic approaches in order to optimize where we spend our money, effort and time to benefit humanity. The author incorrectly implies that EA isn't concerned with ethics, or that EA will justify any means to achieve some perceived benefit - but this is the opposite of true. Ethical and moral behavior are required by EA, and in fact are an important part of the utility measured for some philanthropic activity. That is, ethics and morals are worthy goals (or aspects of worthy goals) for EA in and of themselves.


> unfairly links SBF

This is some grade A No True Scotsmaning.

Sam Bankman-Fried was about as high profile an EA as ever existed, with his personal wealth counted as the bulk of their finances, his FTX Future Fund employing both Nick Beckstead and his old friend William Macaskill, and his political action committee throwing money around Washington to promote crypto and longtermism.

Macaskill himself is probably the most famous EA of them all and was in lockstep with SBF for years: dismissing claims of unethical behavior, vouching for him, hooking him up with other rich people like Elon Musk, cashing his checks for the charities he controlled, and of course enjoying the finer things in life that FTX money could buy, things neither of these famously ascetic utilitarians could ever have imagined paying for themselves.


When Oppenheimer witnessed the first explosion of a nuclear weapon, he quelled his ethical reservations over the destructive power of his creation with a verse from the Bhagavad Gita [1] often mistranslated as the deity stating "I am death, destroyer of worlds", but more accurately - "I am time, and I will destroy these people with or without your involvement".

Had the scientists of the Manhattan project (Oppenheimer, Fermi, Szilard, etc) subscribed to the EA philosophy, they would have been unlikely to work on nuclear weapons development, and millions more would have likely perished in a land invasion of Japan. However, millions of Southeast Asians and South Americans did perish in the subsequent "proxy wars" of the Cold War era, so you can make a convincing historical "what if" either way.

Effective altruism is not a very useful philosophy if you don't actually know what is best for humanity. Oppenheimer's philosophy (the Gita philosophy) was to simply do his job without being attached to the outcome.

1. https://www.holy-bhagavad-gita.org/chapter/11/verse/32


That's incorrect. The translation quoted by Oppenheimer is actually more accurate than yours. The two other, more popular translations are:

"The Supreme Lord said: I am mighty Time, the source of destruction that comes forth to annihilate the worlds. Even without your participation, the warriors arrayed in the opposing army shall cease to exist." [0]

"Bhagavān Śrī Kṛṣṇa said: Time I am, the mighty destroyer of worlds, and I come to vanquish all living beings. Even without your participation, all the warriors on the opposite side of the battlefield will be killed" [1]

[0] - https://www.holy-bhagavad-gita.org/chapter/11/verse/32

[1] - https://asitis.com/11/32.html


To my ear, the two translations you listed sound very similar to primitivesuave’s “I am time…” translation and dissimilar to Oppenheimer’s “I am death…” translation. Can you explain why you disagree?


I understand the point you're making, but the statement that

> they would have been unlikely to work on nuclear weapons development, and millions more would have likely perished in a land invasion of Japan

despite being constantly repeated, is not reflected by contemporary documents and later historical analysis of decision making among Pentagon and White House officials.

The threat of an impending land invasion was not a consideration at the time it was decided to attack Japanese civilian centres with nuclear weapons. The primary factor in the decision was the risk of Stalin joining the fight on the eastern front and thus securing a claim for territory following the inevitable Axis surrender, as well as a desire for US power projection from the demonstration of an atomic weapon in war. The primary delay in Japanese surrender was the question of the fate of Emperor Hirohito, whom the US ended up protecting anyway.


This essay is a mess. I won't flag it, but I doubt with such poor definitions it'll make much of a useful conversation on HN.

I counted four topics in the first few paragraphs that the author defined in a poor, self-serving way. Any one of these topics and associated definitions would be interesting to talk about. Put them all together and it's just too much to clean up (for folks taking any kind of issue at all with the thesis or conclusion.)

It was well-structured and cogent, though. Kudos to the author for that. That puts them well above other essays of this type.


The author lost me at the title, tbh.


What fucking moral boundaries am I overstepping if I donate via EA to get some mosquito nets put up in Africa??


One of the problems with effective altruism (and consequentialism more generally) is that it's quite hard to look at SBF and say definitively that his actions had net negative consequences.

Maybe his donations saved lives. Maybe Anthropic (which he famously funded) will save the world. Maybe, by discrediting EA, SBF saved the world from EA fanatics. You could enumerate hypotheticals like this forever, positive and negative. It's for this reason that we have to rely on intuitive moral feelings, or there's no way to confidently say that anything is good or bad.

That said, I view EA as a call to think more carefully and analytically about our actions and how they affect the world. There's certainly nothing wrong with that as long as it's not taken to bizarre extremes.


This is mentioned in passing in TFA but the fundamental problem of EA, and all charity in general, is that it ignores and often even perpetuates the societal structures that create most of the problems that charity tries to patch up.

Of course if it would try to address the structural problems, it wouldn't be charity but politics. And politics are bad because it could change the structure.


I disagree with this - traditional philanthropy does ignore systemic/societal problems; for many reasons it is simply unable to address them.

EA tries to look at the bigger picture of effectiveness, and many within EA do believe that political solutions are a good use of resources. For example, many of the new charities created by Charity Entrepreneurship spend their time lobbying governments. Relative to traditional philanthropy, I think EA has a real shot at the systemic changes necessary to make real change.


The biggest logical flaw of effective altruism is valuing potential future lives the same as the lives of existing people. Taking that logic to the extreme, it would be okay to kill a person so that two additional people could be born.


It's way easier to point at SBF as a fraudster within the larger group of crypto fraudsters than a fraudster within the larger group of effective altruist fraudsters.


In the beginning, SBF probably believed in EA. It helped him recruit the executives of AR and FTX.

As FTX experienced unprecedented growth to a fantastical scale, SBF was at the center of it. I strongly suspect he felt deified by it, felt that the market was giving him unqualified approval for his every thought and method.

Somewhere in his ascendancy, I suspect that EA became merely a vocabulary of stock responses that he used to explain his decisions and to frame his public image.

The immorality began when he chose to ignore his fiduciary duty to his depositors, and instead used their funds as if they were VC money available to fund his ideas. The immorality continued when he gave false financial statements to the AR lenders. It culminated when he tweeted "everything is fine" when the withdrawal rush began.

Was he using EA theory to justify these unethical choices? Caroline Ellison thought he was but that was because she was in thrall to his personality.

I would be immensely surprised if EA goals ever crossed his mind when he made these decisions. I suspect he was in empire building mode aiming to enter the pantheon of SV tech titans.

The WSJ had a chart of "where did the money go" showing that only a minuscule slice of the $16B was donated to philanthropic organizations. It was less than $100M.

You are correct that EA has been unmasked as a philosophy unburdened by ethics. However, my view is that SBF only used EA as a convenient label for his motives, when his goals were consolidating his power.


I think this article assigns a lot of blame to Effective Altruism that really belongs in classic narcissism and power tripping.

SBF wasn’t even an idealist’s version of an effective altruist, he basically lied and told everyone that he was one, probably in a vain attempt to explain where all the money went.

That’s not to say that EA doesn’t deserve its own criticism, but SBF was only pretending to be one on TV.


The author would've had a point if SBF actually donated all the stolen money to effective charities. But he didn't, he just appropriated it.

So his actions are in no way consistent with EA, and shouldn't be considered indicative of it.


> Michael Lewis, along with a cadre of others, have astonishingly aligned themselves with the EA bamboozlers, steadfastly standing by their erstwhile idol

I read his book and some interviews and this is hyperbole. And poor use of “erstwhile”.


Zvi has an extremely long review of the Michael Lewis book which makes it clear to me that you're right, this is hyperbole.

https://thezvi.wordpress.com/2023/10/24/book-review-going-in...


Yeah, the way I read it, Lewis comes down on EA fairly hard rather than aligning with it. His conclusion boils down to seeing EA and SBF's actions as misguided/naive, not intentionally fraudulent (which I disagree with).


Stated without argument: "EA, in its cold calculus, can justify the unjustifiable in pursuit of an ill-defined 'greater good'." I'd love to hear an actual argument for that. What sort of cold calculus can justify the unjustifiable? Isn't that a contradiction in terms?

I don't want to think that Joan Westenberg (whoever that is) is a purveyor of twisted words.

(There are more examples in the article, I picked one because I like examples.)


EA’s primary philosophical foundation is utilitarianism. Utilitarianism in its standard form only cares about outcomes that maximize global utility for humanity. It does not accept any deontological ethics.

Utilitarianism is evil because it permits evil actions as long as they maximize utility, thus justifying the unjustifiable. Utilitarianism is wrong because it is still evil to commit fraud even to save human lives. However, utilitarianism is popular with people in power, because they can use it to justify their evil actions as being for the greater good. Even if they are sincere, their actions are still evil; even if SBF had succeeded instead of failing, and sincerely wanted to stop climate change, he would still, and rightfully, be a criminal.


Aren't you saying that the unjustifiable is justifiable given the right cost/effect tradeoff? But at the same time saying that doing so is wrong because the unjustifiable is unjustifiable?


No, I’m contrasting utilitarianism with deontology and explaining why deontological ethics are preferable. Utilitarianism is concerned with effects while deontology is concerned with actions and/or intents.

That said, there are several arguments from within utilitarianism that support deontology. For instance, the Bentham mugging indicates a utilitarian should pretend to support a deontology.


God, this "X is a feature, not a bug" trope has become so cringe


> But in neither camp are we seeing the degree of self-reflection that the immolation of their champion should call for.

If you watch a herd of gazelles get one of their own munched by a lion, they get the hell out of there.

It seems as if these ones are saying "Well, she's got her fill with poor old Sam, over there, so we're all right to keep eating this sweet grass."


At what point was the "feature not bug" argument defended? I missed it on first read and can't be bothered to spin through again. I'm all for the condemnation of fad ideology on the basis of strong arguments, and SBF fucked up, and EA seems dubious.

But this article seems like it isn't achieving anything.


"I work with companies that are $1,000,000+ in revenue."

https://joanwestenberg.com/about

Why not $1,000,000+ in profit?


Joan has become something of a buttcoiner (if you understand the reference) in recent years after not achieving the success she was looking for with NFT projects.


This is actually good for bitcoin^Weffective altruism.


The true feature was the political donations making their way back into the accounts of politicians.


Most philosophies fail in the execution, typically because the execution is handled by humans.


> promising to solve the world's woes through a calculator.

SBF || !SBF, that was the problem


I will never cease to wonder at how so many people can blame so much on people trying to take a rigorous approach to world improvement, up to and including "a narcissistic con-man claimed to be trying to do X, and I can imagine a scenario where someone could justify doing the shitty things he did in the name of X, so therefore everyone trying to do X must also suck and be complicit in fraud and assorted sins".


If the altruistic narrative had any truth, SBF et al. would have been outraged at any one of the ten-odd acts, behaviors, thefts, practices, and underlying attitudes portrayed in his unmasking. He even set his parents up to steal, and they complained when the checks were late; that was a tiny but diagnostic part of the scam. More like Madoff II than anything else.


Effective Altruism is nth-degree utilitarianism, which even a college freshman can tell you has very unsatisfying answers to all kinds of fairly trivial ethical puzzles.

Not even remotely surprising that a sociopathic fraudster subscribed to these ideas. It's the same kind of ethical framework that early 20th century eugenicists were quite fond of.


In other news, real communism has never been tried


A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not overwell built at the first; that she had seen many seas and climes, and often had needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind, and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him at great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections. He said to himself that she had gone safely through so many voyages and weathered so many storms that it was idle to suppose she would not come safely home from this trip also. He would put his trust in Providence, which could hardly fail to protect all these unhappy families that were leaving their fatherland to seek for better times elsewhere. He would dismiss from his mind all ungenerous suspicions about the honesty of builders and contractors. In such ways he acquired a sincere and comfortable conviction that his vessel was thoroughly safe and seaworthy; he watched her departure with a light heart, and benevolent wishes for the success of the exiles in their strange new home that was to be; and he got his insurance-money when she went down in mid-ocean and told no tales.

What shall we say of him? Surely this, that he was verily guilty of the death of those men. It is admitted that he did sincerely believe in the soundness of his ship; but the sincerity of his conviction can in no wise help him, because he had no right to believe on such evidence as was before him. He had acquired his belief not by honestly earning it in patient investigation, but by stifling his doubts. And although in the end he may have felt so sure about it that he could not think otherwise, yet inasmuch as he had knowingly and willingly worked himself into that frame of mind, he must be held responsible for it.

W.K. Clifford

https://human.libretexts.org/Bookshelves/Philosophy/Philosop...


"[Stockton] Rush said in a 2021 interview, "I've broken some rules to make [the OceanGate Titan]. I think I've broken them with logic and good engineering behind me. The carbon fiber and titanium, there's a rule you don't do that. Well, I did.""

"Numerous industry experts had raised concerns about the safety of the vessel. OceanGate executives, including Rush, had not sought certification for Titan, arguing that excessive safety protocols and regulations hindered innovation."

"Rush replied that he was "tired of industry players who try to use a safety argument to stop innovation ... We have heard the baseless cries of 'you are going to kill someone' way too often. I take this as a serious personal insult"."

- https://en.wikipedia.org/wiki/Titan_submersible_implosion

"In particular, he was critical of the Passenger Vessel Safety Act of 1993, a United States law which regulated the construction of ocean tourism vessels and prohibited dives below 150 feet, which Rush described as a law which "needlessly prioritized passenger safety over commercial innovation"."

- https://en.wikipedia.org/wiki/Stockton_Rush


> needlessly prioritized passenger safety over commercial innovation

There's a phrase that should go down in infamy. "My desire to make money is more important than your life".


A prescient example of the thought exercise from The Ethics of Belief.


This is a good pull. I think about The Ethics of Belief a lot.


When reading teardowns of Effective Altruism, I always find them light on substance and short on preferable alternatives.

I see two typical failure modes. Half of the time, they fall into a

> cannot separate the man from the idea

trap of “someone with a corrupted moral compass once believed this, ergo the idea itself is bad”, which doesn’t follow.

The other half does engage with the idea, but seemingly only ever half of it. They tend to make the first step successfully,

> People can use a veneer of charity and rationality to ignore ethical issues

Which is definitely true, but they don't seem to actually take on the idea's purported main thrust, that "if one intends to give charitably, one ought to bias one's choices towards causes that use money as effectively as possible at meeting one's goals", which frankly seems pretty airtight.
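A minimal sketch of what that thrust cashes out to in practice (the charity names and cost figures below are entirely invented; real EA evaluations are far more involved):

  # Toy cost-effectiveness ranking in the EA style: compare giving
  # options by cost per outcome. All numbers are made up.
  charities = {
      "Charity A": {"cost": 100_000, "outcomes": 40},  # e.g. lives saved
      "Charity B": {"cost": 100_000, "outcomes": 4},
      "Charity C": {"cost": 250_000, "outcomes": 50},
  }

  def cost_per_outcome(c):
      return c["cost"] / c["outcomes"]

  for name, c in sorted(charities.items(), key=lambda kv: cost_per_outcome(kv[1])):
      print(f"{name}: ${cost_per_outcome(c):,.0f} per outcome")

On these made-up numbers, equal donations buy ten times the outcome at Charity A versus Charity B, which is the whole argument for comparing before giving.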

I think the core is that a lot of people have a problem with admitting someone they dislike had a pretty sensible idea.


The points that you (and other commenters) make are ultimately related to the fact that when a person being shunned belongs to an outgroup, they are treated as a representative sample of the group. In other words, the entire group is assumed to consist of homogeneous individuals, all exhibiting roughly the same behavior as the one being discredited.

However, if said person were to belong to an ingroup, then said person is categorized as an outlier by its members.

We can see lots of such examples in politics, such as people casting similar aspersions about how the recent action of a single person in an outgroup proves the larger aspirations of said group, about how "they're out to destroy society", and the like.


> I think the core is that a lot of people have a problem with admitting someone they dislike had a pretty sensible idea.

He had a very twisted interpretation of what is, arguably, a pretty sensible idea.

Or maybe not so sensible. EA could be a sensible idea in principle, but in practice it may be too easy to twist into some really insane places, and therefore not so sensible after all.


Simplest teardown of EA: it's heavily derived from/based on utilitarianism, a moral philosophy that very few people actually believe in, and one that leads to monstrously evil results if followed to its logical conclusion in certain circumstances.


Utilitarianism, the principle of maximizing good for the greatest number of people, commonly informs social norms and policy-making.

It's a guideline, but not an absolute rule. We can acknowledge its value as a general approach yet also recognize situations where a strictly utilitarian choice would be abhorrent.

Life is not so black-and-white.



