Launch HN: Effective Altruism Funds (YC W17 Nonprofit)
178 points by tmacaulay on Mar 16, 2017 | 66 comments
Hi HN! We’re the founders of the Centre for Effective Altruism, a nonprofit from Oxford, UK, in the YC W17 batch. We’re the creators of Effective Altruism (EA) Funds (https://app.effectivealtruism.org/funds/): high-impact, individual giving portfolios managed by expert researchers. It's like Vanguard for charity.

We created EA Funds because we became frustrated with how difficult it was to find the best giving opportunities as an individual donor. Using EA Funds, you choose a problem area, and our fund managers find the best giving opportunities within it. Our initial funds are all managed by Program Officers at the Open Philanthropy Project, a $10B private foundation. In the future, we hope to expand the number of funds, building a competitive marketplace for giving in which the amount of funding a charity receives is directly correlated with the amount of good it does.

We started out 7 years ago, when 23 of our friends pledged to give 10% of their income to the most effective charities they could find. Since then, 2,700 people have joined us, donating $33M, including $17M last year. We’ve found that some of the most cost-effective charities can avert the loss of a year of healthy life (a DALY) for as little as $80, while others have no measurable effect at all. One US study found that $10B was spent each year on social programs that had been proven to have no effect on outcomes, and a further $1B on programs that were actively harmful. We built EA Funds to replicate the success of groups like the Gates Foundation and GiveWell: by pooling our resources and utilising the expertise of our research community, we can give individual donors the same impact per dollar as billion-dollar foundations.
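To make that comparison concrete, here's a back-of-envelope sketch in Python (the $80/DALY figure is ours from above; every other number is purely illustrative, not any specific charity's audited costs):

    # Rough illustration: DALYs averted per donation at different
    # cost-effectiveness levels. Only the $80 figure comes from our
    # estimates; the others are hypothetical, for comparison.
    donation = 1_000  # USD

    cost_per_daly = {
        "top charity (estimate above)": 80,
        "typical charity (hypothetical)": 500,
        "program with no measured effect": float("inf"),
    }

    for name, cost in cost_per_daly.items():
        dalys = donation / cost
        print(f"{name}: {dalys:.1f} DALYs averted per ${donation:,}")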

We’d love to hear your feedback, answer your questions about effective altruism and donating effectively, and hear your stories about the charity sector!

(Edit: originally said QALY instead of DALY, but that was a typo, as commenters pointed out.)




Congrats on the launch!

> We’ve found that some of the most cost-effective charities can buy one year of perfect health (a QALY) for as little as $80

Which was that? IIRC GiveWell only claim $100/QALY for the AMF - have you found a charity you believe is 20% more efficient than that, or is this just a difference in how you measure cost per QALY?

[Edit] another question -- there's recently been criticism[0] of the poor quality of data and effectiveness research in animal welfare EA, compared with human welfare EA. Would the animal welfare fund be doing things like actively commissioning new research into intervention effectiveness? Or would it be more hands-off, just limited to picking charities based on the data that's out there at the moment?

[0] https://medium.com/@harrisonnathan/the-actual-number-is-almo...


It's just that it's an estimate that hinges on a few judgement calls (e.g. whether or not to cost services that AMF receive pro bono) — we're in agreement with GiveWell's estimate. For comparison, the WHO think that LLINs work out to around $30/DALY, and AMF run a pretty tight ship.

Given the tricky nature of cost-effectiveness estimates, I don't think the numbers are hard enough that it'd be wise to read the difference as implying anything as exact as one intervention being '20% more efficient'. It's more about getting a reasonable ballpark for comparisons between interventions.
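To sketch how much a single judgement call can move the number (all figures below are invented, not AMF's actual accounts):

    # Whether to count services a charity receives pro bono is one
    # judgement call that visibly moves the estimate. Numbers invented.
    cash_costs = 900_000      # USD actually spent (hypothetical)
    pro_bono_value = 300_000  # USD of donated services (hypothetical)
    dalys_averted = 12_000    # estimated DALYs (hypothetical)

    excluding = cash_costs / dalys_averted
    including = (cash_costs + pro_bono_value) / dalys_averted
    print(f"excluding pro bono: ${excluding:.0f}/DALY")  # $75/DALY
    print(f"including pro bono: ${including:.0f}/DALY")  # $100/DALY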

(Also, seems there was a typo in the OP, should be DALY not QALY).


QALY and DALY are different: a QALY counts years of healthy life gained by an intervention, while a DALY counts years of healthy life lost to disease and disability (so interventions avert DALYs).

http://www.qalibra.eu/tool/support/page8.cfm


.. which doesn't preclude someone having typoed one for the other, right?


Thanks for pointing that out - AMF is indeed one of the charities that our fund will support. GiveWell staff cost-effectiveness estimates vary widely[0]; you can play around with their model, put in your own parameters, and see what figures you end up with.

To answer your second point: the state of evidence in the effective animal activism community is indeed poor. Our fund is likely to support programs which have a good track record or seem like promising bets. We don't want to be limited to projects which have rigorous data to support their activities, but we will encourage new projects to test their approaches and collect data in order to prove their impact.

The Animal Welfare Fund is managed by Lewis Bollard, Farm Animal Welfare Program Officer at the Open Philanthropy Project. Lewis has made grants in the past to organizations working on corporate cage-free campaigns and clean meat initiatives. Some of these initiatives are inherently more speculative but have high expected value. You can read more about Lewis' grant history and his reasoning behind the grants on the Animal Welfare Fund page[1].

In terms of new research, Animal Charity Evaluators has created an Animal Advocacy Research Fund[2] which funds research into the cost-effectiveness of social and behavioural interventions. They've made a few small grants, and are looking to scale up this year. We're excited to see the animal welfare community become more evidence-based, though we also want to make sure we fund the activities that are most likely to help animals, given what we know right now.

[0] http://www.givewell.org/how-we-work/our-criteria/cost-effect...

[1] https://app.effectivealtruism.org/funds/animal-welfare

[2] http://researchfund.animalcharityevaluators.org/


This is really interesting and a great idea; however, I'd almost trust charities more if they listed how they had failed each year, where delivery and services went wrong, and what mitigation has been put in place.

It would be good to have a system where charities were allowed to fail like bad companies as well, but it's difficult to be this honest.


Thanks, we completely agree!

Right now, the incentives in the charity sector are totally screwed. The charities which grow the biggest are the ones with the best marketing, not the ones with the most impact. That means that big charities often hide their mistakes and focus on maintaining a wholesome image.

In contrast, many of the organizations we work with regularly publish their mistakes and lessons. By sharing this kind of information, the whole sector can learn and improve. In particular, GiveWell (http://www.givewell.org/about/our-mistakes) has a great page listing their mistakes.

We've been especially impressed with charities like New Incentives, who realised that the original target population they were trying to serve (pregnant women with HIV) wasn't big enough. They pivoted to incentivizing mothers to vaccinate their children, so they could gather more evidence and have an even bigger impact.

With EA Funds, donors pick which problems they want their donation to solve, and we find the best giving opportunities. We will fund both new start-up style charities and larger more validated approaches. We will fund charities which have run failed programs in the past, provided that they have updated their approach.


So New Incentives changed what they wanted to do because they would get paid more that way?

I know this sounds harsh, but this is the incentive you create: pushing people who initially wanted to help where they see need to instead focus on helping where they get more money for doing so.

And a sufficient amount of free money is mostly available to a very small group of people.

This does not mean it’s necessarily bad. It just means that its incentives are skewed, too: the charities are pushed to become interest groups of the rich (to some degree this is also the case today, but stronger quality assurance also means more pressure to follow the largest donors' wishes).


I think that's exactly wrong, I'm afraid.

In traditional charity, the incentives that charities have are to do whatever is going to fundraise the most. So the charities that get biggest are those that are best at looking good, rather than doing good.

The solution is to have a set of donors who really care about funding whatever does the most good. That means that a charity's fundraising incentives line up with what's actually best for the world. And that set of impact-motivated donors is exactly what we're trying to create with the effective altruism movement.

New Incentives is a great example of that working. Just as a startup will pivot if it thinks it could be working on something more profitable, New Incentives, because of the existence of the EA community, was able to pivot to a different approach that it thinks will do more good per dollar, knowing that if it succeeds, it will be able to grow more.


Thanks for your feedback; the dynamic you're describing is exactly the problem we're trying to solve. We want the best charities, the ones which target the biggest needs in a cost-effective way, to get the most funding.

New Incentives changed their focus because their initial program proved less effective than hoped. Initial results from their study showed that because they could only reach a small number of people, the program didn't hit the cost-effectiveness threshold they were targeting.

Right now charities are incentivised to skew their programs towards areas in which they can get the most funding. We're trying to fund programs based on effectiveness, and build a community of donors who will donate to whatever programs are proven to be the most effective. If people founding non-profits know that there is a community of donors who will fund programs that work, we fix these incentives, and we hope to see many more effective programs launched. This should make it easier for effective charities to get the funding they need to grow.

We don't focus on special interest groups, and we don't fund whatever our largest donors are most interested in; we only fund programs that are highly effective.


The charities themselves rarely do that directly (unfortunately, as the OP said, the incentives are really messed-up so they would never do so), but GiveWell does it "by proxy" every year, so if you donate to GiveWell's "general fund" you can be sure that they're iterating and correcting for failures.

In terms of other feedback, I have to say I really like the clean separation of "ends" and how you can pick which one(s) you want. TBH, I've been getting increasingly annoyed at how the X-risk people are gradually taking over EA, but I don't want to just quit or blow up at them either (they mean well even if I think they're misguided). This looks like a way that everyone can be happy. Please don't let them needle you into adding that stuff into the Global Development or Animal Welfare funds.


I agree with you about X-risk, but I'm not sure the separation is so clean. For example looking at the EA Community fund page it isn't at all clear to me whether or not some of the money will end up in the hands of the AI Safety people. To be fair this is openly disclosed as a risk, which is great transparency.

> Second, donors may choose not to support the Movement Building Fund if they do not wish to indirectly support all of the problem areas that the EA community is likely to support. This includes areas like Global Health and Development, Animal Welfare, Long-Term Future, and any future problem areas that the community deems effective to address. For example, Dylan Matthews criticized the EA community in 2015 for being overly self-promoting and overly concerned about risks from advanced artificial intelligence (one response to this criticism here). Those with strong views about which problem areas they do not wish to support might avoid movement building as a result.

However, admirable transparency or not, the only fund I'd really be interested in allocating money to would be the Global Health and Development Fund. Given that, for me personally, the paragraphs about differentiating vs GiveWell are critical.


It's really great to hear this view. While lots of donors split their donation between our funds, many people choose to allocate 100% of their donation to a single fund. At a later date, we want to allow people to share their fund allocations, so donors can compare allocations and discuss differences in cause prioritization.

Right now, our Global Health and Development Fund has 61% of all donations by value, with the Long-Term Future Fund coming in at 22%. It will be exciting to see how this changes over time, and whether there are differences in fund allocation by geographic area or demographics.


For sure, we want to represent a broad range of views within areas that are potentially high impact. We chose these as our initial funds partly because they seem like cause areas that reasonable people can disagree on (both on questions of values, and on empirical questions about relative risks and how to solve problems).


Yeah! GiveWell does exactly that:

http://www.givewell.org/about/our-mistakes


Since GiveWell has come up a number of times here, the obvious question to me, as someone completely unversed in how these things work, is: how does contributing to your fund compare to contributing to GiveWell? It seems like your goals are similar. Is your claim that dollars contributed to your fund will ultimately be more effective? Or are you differentiating yourselves in a different way (different priorities, improved transparency, something else)? I realize the model is somewhat different, but the comparison I would ultimately care about is in the end results - the effects that my contributions would ultimately have.


Our baseline for effectiveness (specifically with regard to the Global Health and Development Fund, which is managed by GiveWell founder Elie Hassenfeld) is GiveWell's top charities. So it's entirely possible that Elie will choose to grant to these, in which case your donation will be exactly as effective as donating through GiveWell, but you'll get the advantages of the EA Funds platform — e.g. a single tax receipt (donating to one org instead of 3-4), easy recurring donations, the ability to change your donation preferences later, allocating to causes outside of global health, donation tracking, etc.

However, we do think there's a good chance of getting higher returns. GiveWell recommendations skew towards what makes sense for an average individual donor with a particular risk profile. With access to a larger pool of funds, you can seed new high-expected-value charities that wouldn't necessarily make it onto GiveWell's recommendation list, but are nonetheless potentially high impact (recent examples in this space: New Incentives, Charity Science Health, etc.).
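To illustrate the expected-value logic behind seeding riskier charities (all numbers below are invented, not from GiveWell's models):

    # A proven charity vs. a speculative seed grant, compared by
    # expected DALYs per dollar. All figures are hypothetical.
    proven = {"p_success": 1.0, "dalys_per_dollar": 1 / 100}
    seed = {"p_success": 0.3, "dalys_per_dollar": 1 / 25}  # if it works

    def expected_value(option):
        return option["p_success"] * option["dalys_per_dollar"]

    print(f"proven charity: {expected_value(proven):.4f} DALYs/$")  # 0.0100
    print(f"seed grant:     {expected_value(seed):.4f} DALYs/$")    # 0.0120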


    single tax receipt (donating to one org instead of 3-4)
GiveWell offers this as well: https://secure.givewell.org/

You can donate to GiveWell for distribution to their top charities, and you can either choose the breakdown or ask them to allocate it as they think best.


That's true. You can donate to GiveWell's top charities directly through GiveWell. For many donors, especially those who only want to donate to proven charities with a good track record, this is a great choice.

With our Global Health and Development Fund, we hope to also make small, seed grants to promising new initiatives, to help them build evidence to support their program, or to replicate a promising intervention in a new geographic region, to figure out if the program can scale. Many of these donation opportunities are small, and individual donors won't necessarily hear about them. With Elie managing this fund, we hope to be able to quickly fund promising new projects, so they spend less time on PR and fundraising, and more time doing good work.


Good point — a nice feature, but not a point of differentiation from GiveWell's donation page. I think the others still stand — in general you gain additional flexibility with the Funds without compromising effectiveness.


Why is there a "long term future" fund addressing the theoretical risk of AI but not the very well understood and documented future risks of climate change?


The Long-Term Future Fund is designed to be fairly broad, and at some point it may well support climate change initiatives, alongside work addressing potential risks from advanced artificial intelligence and other potential global catastrophic risks. Nick Beckstead has previously expressed interest in funding more research to quantify the tail risks associated with runaway climate change in particular.

Having said that, one of the reasons that EA has not focused as heavily on climate change historically is that climate change is not as neglected as potential risks from AI, or biotechnology. We are happy to see a lot of funding going into climate research and modelling, and a lot of grassroots activism. While climate change is far from solved, and remains one of the most important problems of our time, many EAs think that we should first focus on problems which receive less media attention but are plausibly just as serious.


To add some numbers to this, according to http://www.climatefinancelandscape.org/ $392 billion was spent on various aspects of tackling climate change in 2014. In contrast, under $10 million per year is being spent on potential risks from AI (https://80000hours.org/problem-profiles/artificial-intellige...).

That doesn't mean there's nothing neglected in climate change (e.g. negative emissions technology) and it would be good to see some more investigation into this.


> many EAs think that we should first focus on problems which receive less media attention but are plausibly just as serious.

How is that effective? How could you even measure the effectiveness of focusing on problems that don't currently exist but that you happen to consider "plausible"? (Global warming, in contrast, is quite measurable.)

I'm looking at the list of recipients of that fund and it looks like self-dealing: people on the futurist side of EA would like to encourage people to donate to people on the futurist side of EA.


The Long-Term Future Fund is for ensuring the continued survival and flourishing of future generations. Climate change is one future risk, but there are many others — including ensuring that the risks posed by smarter-than-human artificial intelligence are understood and mitigated, but also e.g. mitigating pandemic risk from genetically engineered pathogens.

As with all the Funds, choosing organisations to grant to will involve asking questions about scale, crowdedness, and tractability (how big is the problem, how many people are already working on the problem, and how hard is the problem to solve). For more on this see [1],[2].
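As a toy version of that comparison (the scores below are invented for illustration; they aren't CEA's or anyone's official ratings):

    # Toy scale / crowdedness / tractability scoring (1-10 each).
    # All scores are invented; this is not anyone's official model.
    causes = {
        #                 scale, crowdedness, tractability
        "climate change": (9, 9, 4),
        "AI safety":      (9, 1, 3),
        "pandemic risk":  (8, 3, 5),
    }

    for name, (scale, crowded, tractable) in causes.items():
        # Bigger scale and tractability raise priority; more crowding
        # lowers it (neglectedness is 10 minus crowdedness).
        priority = scale * (10 - crowded) * tractable
        print(f"{name}: priority score {priority}")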

Climate change is clearly an important future risk, but it's a very hard problem to solve (requiring massive international cooperation), and there are already lots of people working in this space. While I wouldn't necessarily agree that all future risks of climate change are 'very well understood', given the nonlinearities involved in modelling complex systems, part of the reason it's well documented is that considerable research effort has been expended over the last 20-30 years. By contrast, there are many future risks that receive less attention, but may have similarly severe consequences for future generations. It's important we aren't blindsided by one of these risks just because we have a better understanding of one particular area.

To be clear, none of the above precludes that the Fund could donate to orgs working on climate change research/mitigation/prevention, just that there might be reasons to look at other risks as well, even if they're less well understood.

For more on research into effective giving opportunities in climate change, see https://www.givingwhatwecan.org/cause/climate-change/

[1] https://www.youtube.com/watch?v=67oL0ANDh5Y

[2] https://80000hours.org/2014/01/which-cause-is-most-effective...


How would you tell when you are mitigating these far-future risks, instead of taking irrelevant actions or making them worse?

What makes you think that AI risk is a tractable problem, or even a well-defined one?


I disagree with just maximizing QALYs as the main measure of effective altruism. See my post:

http://magarshak.com/blog/?p=216


Hi, thanks for your post. I think that many EAs would agree with you here. Effective altruism is not just about maximizing QALYs: it's about figuring out what doing good even means, figuring out how to improve the world, and then actually doing it. We're not just educating people about one narrow conception of doing good. We know that doing good is hard and complex, so we're trying to build a community of people focused on figuring it out. Our community debates what it means to do the most good endlessly, and there is a lot of nuance, and that's what we love about it!

You mentioned some interesting issues in population ethics in your post; it appears you take the person-affecting view. Many EAs who take this view prefer to support global health or animal welfare charities, as they do not think it is beneficial to maximise the number of happy people in the world. Other people in our community think that a world with more people is better, provided that adding more people does not reduce the overall total happiness. In our summary of the Long-Term Future Fund, you can read Nick's take on the person-affecting view[0] and find more links to discussion of this issue. We love getting into these debates and seeing lots of different perspectives.

[0] https://app.effectivealtruism.org/funds/far-future


Congrats on your venture!

Will you give more detail on how you address the following four steps mentioned in your website:

1. Which problem areas are most important?

Metrics used for each fund (DALYs, etc.)? Sources used? etc.

2. Which interventions are likely to make progress in solving the problem?

Sources used? How do you evaluate interventions with less predictable outcomes (such as clean energy research)? etc.

3. Which charities executing those interventions are most effective?

Indicators used? Type of due diligence? Do you rely only on reports or do you also regularly check on-the-ground reality? etc.

4. Which charities have a funding gap that is unlikely to be filled elsewhere?

Methodology (funding inflow/outflow models and/or constant dialogue with organisations ...)?

Again, really great initiative; would be interested to understand more in detail.


Thanks for the enthusiasm!

I think a lot of your questions can be answered here (including the other pages that these pages link to): http://www.openphilanthropy.org/research/our-process and http://www.givewell.org/how-we-work/process

Because our fund managers are all GiveWell and Open Philanthropy staff, we inherit their methodology. If in the future we move beyond GiveWell and Open Philanthropy fund managers, we'll need to have pages on their process too (and how it potentially differs from GW / OP).

A more accessible introduction to how to assess which problem area is most important is here: https://80000hours.org/career-guide/most-pressing-problems/


I'd add that these things differ between cause areas, so it's hard to give a neat roll-up answer. DALYs and NPV income estimates work fairly well in the global health space, but other cause areas may need to make more speculative comparisons because they deal with things that are harder to measure.

In all cases, when a Fund makes a grant, the fund manager will provide a write-up outlining their rationale. For now, each fund manager has a list of previous grants they've made, so you can get a sense of the organisations they are likely to grant to (though by no means does that imply that they necessarily /will/ grant to these specific orgs in future).

One of the key advantages of the Funds model is not locking you in to a particular charity up front, which means that as more information about funding gaps, intervention effectiveness etc comes to light, fund managers can respond accordingly.


Hi Tara, great work on the fund - and I'm a big fan of EA / GiveWell / Doing Good Better and all this "ecosystem". I'm curious about the scalability of EA funds. From what I know, some of the major areas that EA and GiveWell focus on are neglected causes. If enough people are giving to neglected causes, would the movement lose its essence?

Also, individual donors in general are impulse givers who make a donation because their friends told them about a non-profit, or they paid for a gala-dinner ticket for social reasons. How do you plan to transform the general public from being impulse givers into "effective altruists"?


Congrats!

My genuine feeling - I'm not helping children in Africa, because that would mean I'm not helping rebuild Nepal after the earthquake.

Instead I'm focusing on healing the planet - global warming - leading to famine, migration to cities, civil warfare, refugee crisis etc...

Oxford, UK - come to North Wales - https://astralship.org - read more about our MISSION + FLOW + VOYAGES... Because our chapel is located in secluded nature, we plan intense voyages - total immersion, no distractions, peak performance, flow state.

Looking forward to hearing from you. Namaste.


Great idea and I wish you all the best.

Assuming this becomes very successful, what effect will this have on new non-profits if most people adopt this approach? Will it make it difficult for them to raise funds?


Thanks! Ideally this makes it significantly easier for effective non-profits to raise money. We're not locking in the set of organizations we fund; being flexible about which cause you support at any given time, given the available facts, is a core part of effective altruism.

We're especially excited about funding new nonprofits that seem like they're going to have a big impact, because at the moment they can spend 30-40% of their time fundraising (which, especially when they're new, would be time better spent validating their programs and scaling up).


Caveat: I know little about this area, so my question may be rooted in naivete.

Q: Why are you operating as a nonprofit? I would be willing to pay a reasonable management fee to you in exchange for the assurance that my money will likely have a greater social impact. If you were able to charge management fees, wouldn't that give you a greater ability to grow and attract talent?


Haha, this is what the YC partners ask us as well!

We see this service as a public good: we want anyone to be able to donate effectively, whether they are donating $10 or $10 million. We prefer to offer our service completely free of charge, but let people optionally tip us, or donate to us directly, if they find the work we do valuable. Right now, we have sufficient funding for our operations to support us until the end of the year, which will let us grow and attract top talent. If this product really does take off, we might play around with different funding models, including asking for an optional 'tip'.

As well as building EA Funds, we run local effective altruism community groups and conferences. Many of our members give back to support our activities because, historically, we've been able to generate between $10 and $100 of donations to effective charities for every dollar we spend on our operations. I should also note that there is some chance that donations to the EA Community Fund may be regranted to us, if Nick Beckstead decides that we are a good giving opportunity, so one way to support us is to allocate some of your donation to the EA Community Fund. We like this funding model because it incentivises us to focus on high-impact activities: if the community evaluates our work positively, we'll get funding, and if not, we'll be forced to reassess our priorities.


Congratulations on your launch! Well done. A great idea that needed to happen. Looking forward to kicking the tires on it.


Thanks, please do let us know if you spot any issues, want to suggest any new features or have any other feedback!


+1000 for animal welfare - the animals that can't blog, tweet, or complain. Wildlife preservation and care for animals is as important as other common non-profit and foundation goals, but often neglected.


We agree! Effective altruism tries to focus on problems that are important, tractable, and neglected, and animal welfare fits squarely into this bucket for all the reasons you mentioned. In particular, we often focus on farm animal welfare, as this is an area that is particularly neglected. I love this post from Animal Charity Evaluators[0], which explains that while farmed animals account for 99% of animals killed and used by humans in the US, the vast majority of donations go to animal shelters, with only 1% going to charities which help reduce the suffering of farmed animals.

[0] https://animalcharityevaluators.org/blog/why-farmed-animals/


Actually, most "effective altruists" support habitat destruction because it reduces the number of animals, and therefore also the amount of suffering.

I find this very disturbing. If you want details, see my other comment on this item: https://news.ycombinator.com/item?id=13886954

Of course, EAs won't tell you this up front, as they know it would be bad PR. (cf. https://medium.com/@jacobfunnell/a-year-in-effective-altruis...: "Some effective altruists hold views that would be strongly controversial to most people outside of EA. EA does a pretty good job of either not mentioning (or actively avoiding) these conclusions in its public-facing literature. Examples include the moral imperative to destroy habitat in order to reduce wild animal suffering, the need to divert funds away from causes like poverty relief and towards artificial intelligence safety research, or the extreme triviality of aesthetic value.")


While there are people who identify as effective altruists and hold controversial views, I think it's important not to paint with such a broad brush. As your linked article notes, there is wide disagreement on a range of thorny philosophical issues across a range of cause areas, but the scare quotes/selective quoting make it seem like these views are unquestioningly accepted by a plurality of people in the community.

Effective altruism is a broad church. The unifying themes are trying to make the world a better place, using reason and evidence as tools for making decisions, expanding our circle of compassion, and having epistemic humility (i.e. knowing that we could be wrong about things and being open to changing our minds in proportion to the strength of new evidence). The conclusions people draw can hinge on deep and ultimately irreducible value judgements — a strength of the community is that we can (in general) have these disagreements respectfully and work together productively where there is common ground.


It's not letting me edit my comment, so I'll post this separately.

I can't help but feel that accusing me of "selective quoting" is just a tactic used to deflect a true observation. As far as I can tell, the most well-known EAs who have commented on wild animal suffering have uniformly come out in support of Tomasik-like conclusions; e.g. CEA's CEO William MacAskill has written an article called "To truly end animal suffering, the most ethical choice is to kill wild predators (especially Cecil the lion)", and 80,000 Hours director Rob Wiblin has written a blog post called "Why improve nature when destroying it is so much easier?" (https://archive.fo/HbE2a). This has also been my experience in online EA communities, such as the EA Facebook group. I can think of dozens of articles written by prominent EAs supporting habitat destruction, but only one opposing it (http://effective-altruism.com/ea/14l/the_unproven_and_unprov...).

Since there's no robust evidence (e.g. surveys) on EA views of wild animal suffering, this is the best evidence available.


Thank you for the response and I'm sorry for the tone of my earlier comment.

> As your linked article notes, there is wide disagreement on a range of thorny philosophical issues across a range of cause areas, but the scare quotes/selective quoting makes it seem like these views are unquestioningly accepted by a plurality of people in the community.

There is disagreement within the EA community on ethics. However, almost all the disagreements are between different 'denominations' within the church of consequentialism -- questions like population ethics (total vs. person-affecting vs. negative), theories of well-being (hedonistic vs. preference), and distributional justice (utilitarian vs. prioritarian). The fundamental theory of consequentialism is taken for granted by most EAs. As my linked article says, "ultimately, only people who have a good majority of utilitarians in their moral parliaments are going to be able to get on-board with EA." I think this is true to some extent. While there are non-consequentialist EAs, it's hard to deny that the culture of EA is extremely consequentialist.

Even though I like the abstract idea of effective altruism, my value disagreements make me hesitant to trust certain EA organizations. I'm personally a deontological vegan and very concerned about animals, but if I donate to ACE (see section 7 of https://medium.com/@harrisonnathan/the-actual-number-is-almo...) or the CEA Animal Welfare fund, how do I know the money won't go to something I ethically oppose (like pro-habitat destruction advocacy)?


Yeah, I agree that it's much easier to get on board with the fundamental proposition of EA if you're of a consequentialist disposition, because the 'most' in 'do the most good' implies a maximising view. I don't think that it's inherently antithetical to other value systems, but agree that because there are more consequentialists, it's more culturally consequentialist.

The reason we publish fund manager grant history/writeups is so that you can have a sense of what their values are, and can make some calls about whether that accords with your views. Without presuming to speak for him or pre-empt any decisions, I strongly suspect that Lewis is unlikely to grant to anything on the more speculative/controversial side of animal welfare (in general I think it's more likely to focus on corporate cage-free programs and meat replacement tech). We think there are a lot of good reasons not to use the Funds[1], and if you're worried that you're going to end up funding something that's harmful, you shouldn't donate to that Fund.

[1] https://app.effectivealtruism.org/funds/faq?tag=why-use-ea-f...


Just to add my $0.02, my impression is that while many EAs enjoy discussing the thorny philosophical issues like whether we should be concerned about insect-suffering, or wild-animal suffering, very few would advocate that we actually support habitat destruction or massive interventions in nature. Even groups like FRI, who are heavily focused on suffering, promote the idea of moral uncertainty. They believe that we should avoid drastic actions based on a narrow ethical view, due to uncertainty about which ethical views are more valid.

Like with everything, more controversial issues are more likely to be picked up by the media and blown out of proportion, relative to the actual level of support they receive. I would be extremely surprised if any money from the CEA Animal Welfare Fund went to support habitat destruction to reduce wild-animal suffering. I would be less surprised if money from the fund went to support research into animal consciousness, to help us better compare different types of animal welfare interventions.


I'm not from the media and I am campaigning against "effective altruism" because it promotes eco-terrorism, habitat destruction, call it what you want. I am compelled to do this to protect public safety.

There's no point in denying that a large portion of self-identified EAs are for eco-terrorism; it's all over the internet. There is no gray area here: you are either with the terrorists (strong negative utilitarians) or against them.


What's the process to propose a charitable organization for inclusion in one of your funds? Who would be the right person at CEA to talk to about that, and what information would you need?


Hi, thanks for your interest!

Our fund managers work full-time at the Open Philanthropy Project, finding the very best donation opportunities. They look for charities working in their key focus areas which have a good track record and a robust evidence base to support their chosen intervention. If the program you'd like to recommend is within one of their key focus areas, run some quick calculations to check whether the charity is within the same range of cost-effectiveness as the previous grantees. If so - great! We'd love to hear about it, and I'd recommend you get in touch with the program officer in charge of the relevant program area. We recommend running this quick test because while many non-profit programs have a plausible story for impact, very few surpass our stringent bar for effectiveness.

GiveWell is especially excited to see non-profits working on these intervention areas: http://blog.givewell.org/2015/10/15/charities-wed-like-to-se...


Oh boy! I'm going to say some unpopular things here, so please remember you asked for feedback.

I immediately click on Funds->Animal Welfare since my company, Pembient, deals in wildlife and I've had a lot of (negative) interactions with the conservation industry. I see that the fund targets "farmed animals" and even then carries a warning:

"Risk-averse donors might choose not to focus on animal welfare because the evidence base for the most effective interventions is not yet as strong as in areas like global health and development."

Excellent! But then I look at the manager's background. It seems he has extensive dealings with The Humane Society of the United States (HSUS). Further, at least one of the grants from the fund goes towards a HSUS project on broiler chicken welfare. OK, that's fine; however, digging deeper I find that at least 5% of the grant covers administrative costs (i.e., $50k). Now, money being fungible, that means money has been freed up for other activities. From my perspective, that would include HSUS's interference in The Black Rhino Genome Project:

https://experiment.com/u/yHldOQ

And that's the problem with donating to large non-profits! They have so much going on that it is hard to track all the externalities.

Let me add that this issue, to me, is the biggest problem with Effective Altruism (EA). It says, "You're 'smart,' go out and become a wealthy hedge fund manager. That's the best use of your time, and then donate the proceeds to things that matter." I would counter and say that if you're truly 'smart,' you should be directly working on problems that matter. Doing something ancillary and then giving to a group of people who might be viewed as less 'smart' because they didn't follow the EA path is internally inconsistent. I think it reveals what EA is: A moral salve to justify forgoing work on hard problems to accumulate wealth for selfish purposes. Not that there is anything wrong with that per se, I just don't like the marketing on top of it.

I said I would be harsh. Apologies, if I've been too harsh.


Thanks for the comment — we welcome feedback, even if it's harsh, so no worries!

That said, I think some of this hinges on a naïve interpretation of what effective altruism is about. The idea of 'earning to give' is counterintuitive ('do more good by working in finance - whut?'), and so has been one that the media has run with. Accordingly, I think it's considerably overrepresented in many people's minds compared to how most people in the EA community actually think about things[1]. We're always looking for talented people to do direct work, and we see one of our missions as finding and attracting people to work on important issues — whether that be in animal welfare, global development, politics, research etc. etc. Indeed, nearly everyone working on this project left significantly higher-paying jobs to come work at CEA because we think it offers the best chance for us to have a positive impact.

Agreed that donating to large non-profits does have the problem of money displacing unrestricted funds (in effect, subsidising other projects within the org). That's part of the reason most of the non-profits we end up supporting are fairly small and tightly focused on a specific, well-validated intervention. We put previous grant history up so that donors can make an informed decision about whether they think a fund manager will represent their values, but it's not necessarily an indication of where grants will be made in future.

[1] E.g. see https://80000hours.org/2015/07/80000-hours-thinks-that-only-...


Thanks for the reply. It does sound like you're thinking these things through. Good luck!


Do you think that rather than considering smartness as a linear scale, it might make more sense to consider different people as having aptitudes in different areas? I could see one person being an excellent and effective animal welfare activist and another being a top performing hedge fund manager; it's unlikely the two could switch roles and perform equally well.

Regarding HSUS, I'm not sufficiently well versed to have an opinion there, but it is theoretically possible for an organization to do things you don't like, but still be more of a net positive than the available alternatives, especially when the bulk of the donation will go toward a specific initiative. Maybe they believe that's the case here?


We do need many more talented people to found and work for effective non-profits. At CEA, we've found it hard to hire extremely talented people, as many of the people we want to hire want to continue donating instead! Non-profits often find it hard to attract top talent, in part because they tend to pay lower salaries, working at a non-profit is less prestigious, and talented people tend to want to work with other talented people.

We have been impressed with Y Combinator's approach here: they're trying to incentivize talented people to found really effective non-profits, and then help them scale. We're hoping that if donors are willing to fund these promising new projects, many more people will be drawn to the non-profit world.

On the other hand, just like founding a company is not for everyone, neither is working at a non-profit. We should each consider our comparative advantage. As tempestn mentioned, some people might be extremely good at their day job and well compensated for it, but might not make excellent activists. In that case, they might best help the causes they care about by donating to effective charities. We need all kinds of people!


Is there an EA job board somewhere? Maybe people aren't interested due to pay, but I doubt many people are aware the jobs exist in the first place. I was under the impression it was a small pool of applicants but a much smaller pool of job openings.


80,000 Hours has a very simple job board (https://80000hours.org/job-board/)

While there aren't many job openings posted, many effective organizations are open to hiring talented individuals at other times throughout the year. It's definitely worth dropping organizations a quick email to let them know you're interested in future opportunities. If any of you are interested in working at CEA in the future, head to our website and ask to be added to our recruitment email list. We'll be hiring for 4-6 positions later in the year.


There's this facebook group (https://www.facebook.com/groups/1062957250383195/) where EA orgs tend to post their jobs, along with other jobs from orgs that are interesting to EAs even if not explicitly EA.


I'm not going to deny the existence of comparative advantage. At the same time, I'm not going to claim we live in a passionless universe either. If a hedge fund manager desires to beat the market, she is going to study the financials, etc. If she is also passionate about broiler chickens, why would she outsource that to someone else? Unless, of course, her passion for the market is greater than her love for broiler chickens. And if that's the case, can you say she cares for broiler chickens, or only wishes to care for broiler chickens? There is a huge difference. As they say, "The road to hell is paved with good intentions."

As far as HSUS is concerned, I'm viewing the issue from my context. EAF wanted feedback, and I gave it. More than likely I'm wrong, and perhaps the greater good is being served, although it would help if that were being measured in some way.

Finally, from what I've seen, I do believe many non-profits could use a hand from people with quantitative backgrounds. If those people subscribe to EA, they'll never join a non-profit.


FWIW, about 11% of people at our last conference identified themselves as following an earn-to-give path, and we think that's about the right proportion. We don't think everyone should earn-to-give, but we do think it's a path that some people should consider.

I spent about a year earning to give as a pharmacist myself before I realised that I could probably do much more good by joining CEA and doing direct work. I gained lots of valuable, real-world experience working in hospitals and at big organizations like the Red Cross; this experience not only helped me donate to charity in the short term, but also feeds into my direct work now. I hope that many people will follow a similar path.


> Let me add that this issue, to me, is the biggest problem with Effective Altruism (EA). It says, "You're 'smart,' go out and become a wealthy hedge fund manager. That's the best use of your time, and then donate the proceeds to things that matter."

I think there's kind of a spectrum in EA, where you can choose your preferred intensity level. There's "soft EA", which is the basic "I'm going to evaluate these different charities and donate to the most effective ones, so I'm doing the most good with my money". The things like "I'm going to choose my career such that I make the most money with which to donate" are more "hard EA", which is an option but certainly not a requirement. I'm more on the soft side myself and never really found it lacking, nor have I ever felt criticized for that position.


I'm glad to hear this! We want to encourage people to incorporate EA into their life as much as they feel comfortable with, and everyone's circumstances are different.

And note you can be a 'hard EA' without earning to give. Research, policy/politics, founding or working for a valuable organisation - these are all great career paths, depending on your particular circumstances and skills. See https://80000hours.org/career-guide/ for more!


What's the overhead? What are the management fees?


We don't charge any management fees: you can donate through EA Funds, and we'll pass on 100% of the money we receive to the charities.

We don't consider a charity's overhead when evaluating effectiveness, in the same way that you don't consider how much Tim Cook gets paid when you decide whether to buy an iPhone. Historically, groups like Charity Navigator have looked at overhead ratios for one simple reason: they're much easier to measure. Unfortunately, overhead ratios are simply not a useful measure of charity effectiveness, and they are easily gamed by unscrupulous charities trying to raise funding.

Instead, we look at the total costs a charity incurs, add in any costs that they don't include in their budget but that are necessary to deliver their program, and then look at the outcomes the charity achieves for that funding as a whole. I'd check out Dan Pallotta's TED talk if you want to know more about the overhead myth: https://www.ted.com/talks/dan_pallotta_the_way_we_think_abou...
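A quick sketch of why the two measures can point in opposite directions (both charities below are invented):

    # Two hypothetical charities: one "wins" on overhead ratio, the
    # other on what actually matters, total cost per outcome.
    charities = {
        #            program $, overhead $, outcomes achieved
        "Charity A": (950_000, 50_000, 2_000),
        "Charity B": (700_000, 300_000, 10_000),
    }

    for name, (program, overhead, outcomes) in charities.items():
        total = program + overhead
        ratio = overhead / total
        cost_per_outcome = total / outcomes
        print(f"{name}: overhead {ratio:.0%}, ${cost_per_outcome:.0f} per outcome")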


Would YC fund a centre for egoism, for-profit?


Why does CEA openly support the Foundational Research Institute even though their fringe negative utilitarian value system is dangerous?



