
Launch HN: Effective Altruism Funds (YC W17 Nonprofit) - tmacaulay
Hi HN! We're the founders of the Centre for Effective Altruism, a nonprofit from Oxford, UK in the YC W17 batch. We're the creators of Effective Altruism (EA) Funds ([https://app.effectivealtruism.org/funds/](https://app.effectivealtruism.org/funds/)): high-impact, individual giving portfolios managed by expert researchers. It's like Vanguard for charity.

We created EA Funds because we became frustrated with how difficult it was to find the best giving opportunities as an individual donor. Using EA Funds, you choose a problem area, and our fund managers find the best giving opportunities. Our initial funds are all managed by Program Officers at the Open Philanthropy Project, a $10B private foundation. In the future, we hope to expand the number of funds, building a competitive marketplace for giving, where the amount of funding a charity receives is directly correlated with the amount of good it does.

We started out 7 years ago, when 23 of our friends pledged to give 10% of their income to the most effective charities they could find. Since then, 2,700 people have joined us, donating $33M, including $17M last year. We've found that some of the most cost-effective charities can buy one year of perfect health (a DALY) for as little as $80, while others have no effect. A study in the US found that $10B was spent each year on social programs that had been proven to have no effect on outcomes, and a further $1B was spent on programs that were actively harmful. We built EA Funds to replicate the success of groups like the Gates Foundation and GiveWell. By pooling our resources and utilising the expertise of our research community, we can give individual donors the same impact per dollar as billion-dollar foundations.

We'd love to hear your feedback, answer your questions about effective altruism and donating effectively, and hear your stories about the charity sector!

(Edit: originally said QALY instead of DALY, but that was a typo, as commenters pointed out.)
======
SEMW
Congrats on the launch!

> We’ve found that some of the most cost-effective charities can buy one year
> of perfect health (a QALY) for as little as $80

Which was that? IIRC GiveWell only claim $100/QALY for the AMF - have you
found a charity you believe is 20% more efficient than that, or is this just a
difference in how you measure cost per QALY?

[Edit] another question -- there's recently been criticism[0] of the poor
quality of data and effectiveness research in animal welfare EA, compared with
human welfare EA. Would the animal welfare fund be doing things like actively
commissioning new research into intervention effectiveness? Or would it be
more hands-off, just limited to picking charities based on the data that's out
there at the moment?

[0] [https://medium.com/@harrisonnathan/the-actual-number-is-almo...](https://medium.com/@harrisonnathan/the-actual-number-is-almost-surely-higher-92c908f36517#.jgeac5y3n)

~~~
colophonemes
It's just that it's an estimate that hinges on a few judgement calls (e.g. whether or not to cost services that AMF receive pro bono); we're in agreement with GiveWell's estimate. For comparison, the WHO think that LLINs work out to around $30/DALY, and AMF run a pretty tight ship.

Given the tricky nature of cost-effectiveness estimates, I don't think the numbers are hard enough that it'd be wise to read the difference as implying anything as exact as one intervention being '20% more efficient'. It's more to get a reasonable ballpark for comparisons between interventions.

(Also, it seems there was a typo in the OP; it should be DALY, not QALY.)
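To make that concrete, here's a toy sketch (all numbers hypothetical, not GiveWell's actual model) of how a single judgement call, like whether to count pro bono services, moves a cost-per-DALY figure:

```python
# Hypothetical bednet programme: $800k of direct spend, $200k of donated
# (pro bono) logistics, averting 10,000 DALYs. Whether the donated
# services are counted as a cost moves the headline figure by 25%.

def cost_per_daly(direct_costs, pro_bono_costs, dalys_averted,
                  include_pro_bono=True):
    """Total programme cost divided by DALYs averted."""
    total = direct_costs + (pro_bono_costs if include_pro_bono else 0)
    return total / dalys_averted

print(cost_per_daly(800_000, 200_000, 10_000))                          # 100.0
print(cost_per_daly(800_000, 200_000, 10_000, include_pro_bono=False))  # 80.0
```

Neither figure is "the" number, which is why an $80-vs-$100 gap shouldn't be read as one charity being 20% more efficient.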

~~~
MattHeard
QALY and DALY are different.

[http://www.qalibra.eu/tool/support/page8.cfm](http://www.qalibra.eu/tool/support/page8.cfm)

~~~
cjbprime
...which doesn't preclude someone having typoed one for the other, right?

------
andy_ppp
This is really interesting and a great idea; however, I'd almost trust charities more if they listed how they had failed each year, where delivery and services went wrong, and what mitigation has been put in place.

It would be good to have a system where charities were allowed to fail like
bad companies as well, but it's difficult to be this honest.

~~~
Analemma_
The charities themselves rarely do that directly (unfortunately, as the OP said, the incentives are really messed up, so they would never do so), but
GiveWell does it "by proxy" every year, so if you donate to GiveWell's
"general fund" you can be sure that they're iterating and correcting for
failures.

In terms of other feedback, I have to say I really like the clean separation
of "ends" and how you can pick which one(s) you want. TBH, I've been getting
increasingly annoyed at how the X-risk people are gradually taking over EA,
but I don't want to just quit or blow up at them either (they mean well even
if I think they're misguided). This looks like a way that everyone can be
happy. Please don't let them needle you into adding that stuff into the Global
Development or Animal Welfare funds.

~~~
bradleyjg
I agree with you about X-risk, but I'm not sure the separation is so clean.
For example looking at the EA Community fund page it isn't at all clear to me
whether or not some of the money will end up in the hands of the AI Safety
people. To be fair this is openly disclosed as a risk, which is great
transparency.

 _Second, donors may choose not to support the Movement Building Fund if they
do not wish to indirectly support all of the problem areas that the EA
community is likely to support. This includes areas like Global Health and
Development, Animal Welfare, Long-Term Future, and any future problem areas
that the community deems effective to address. For example, Dylan Matthews
criticized the EA community in 2015 for being overly self-promoting and overly
concerned about risks from advanced artificial intelligence (one response to
this criticism here). Those with strong views about which problem areas they
do not wish to support might avoid movement building as a result._

However, admirable transparency or not, the only fund I'd really be interested
in allocating money to would be the Global Health and Development Fund. Given
that, for me personally, the paragraphs about differentiating vs GiveWell are
critical.

~~~
tmacaulay
It's really great to hear this view. While lots of donors split their donation
between our funds, many people choose to allocate 100% of their donation to a
single fund. At a later date, we want to allow people to share their fund
allocations, so donors can compare allocations and discuss differences in
cause prioritization.

Right now, our Global Health and Development Fund has 61% of all donations by value, with the Long-Term Future Fund coming in at 22%. It will be exciting to
see how this changes over time, and whether there are differences in fund
allocation by geographic area or demographics.

------
tempestn
Since Givewell has come up a number of times here, the obvious question to me,
as someone completely unversed in how these things work, is how does
contributing to your fund compare to contributing to Givewell? It seems like
your goals are similar. Is your claim that dollars contributed to your fund
will ultimately be _more_ effective? Or are you differentiating yourselves in
a different way (different priorities, improved transparency, something else)?
I realize the model is somewhat different, but the comparison I would
ultimately care about is in the end results - the effects that my
contributions would ultimately have.

~~~
colophonemes
Our baseline for effectiveness (specifically with regard to the Global Health and Development Fund, which is managed by GiveWell founder Elie Hassenfeld) is GiveWell's top charities. So it's entirely possible that Elie will choose to grant to these, in which case your donation will be exactly as effective as donating to GiveWell, but you'll get the advantages of the EA Funds platform: a single tax receipt (donating to one org instead of 3-4), easy recurring donations, the ability to change your donation preferences later, allocating to causes outside of global health, donation tracking, and so on.

However, we do think there's a good chance of getting higher returns. GiveWell's recommendations skew towards what makes sense for an average individual donor with a particular risk profile. By having access to a larger pool of funds, you could seed new high-expected-value charities that wouldn't necessarily make it onto GiveWell's recommendation list, but are nonetheless potentially high impact (recent examples in this space: New Incentives, Charity Science Health).

~~~
cbr

        single tax receipt (donating to one org instead of 3-4)
    

GiveWell offers this as well:
[https://secure.givewell.org/](https://secure.givewell.org/)

You can donate to GiveWell for distribution to their top charities, and you can either choose the breakdown or ask them to allocate it as they think best.

~~~
tmacaulay
That's true. You can donate to GiveWell's top charities directly through
GiveWell. For many donors, especially those who only want to donate to proven
charities with a good track record, this is a great choice.

With our Global Health and Development Fund, we hope to also make small, seed
grants to promising new initiatives, to help them build evidence to support
their program, or to replicate a promising intervention in a new geographic
region, to figure out if the program can scale. Many of these donation
opportunities are small, and individual donors won't necessarily hear about
them. With Elie managing this fund, we hope to be able to quickly fund
promising new projects, so they spend less time on PR and fundraising, and
more time doing good work.

------
beatpanda
Why is there a "long term future" fund addressing the theoretical risk of AI
but not the very well understood and documented future risks of climate
change?

~~~
tmacaulay
The Long-Term Future Fund is designed to be fairly broad, and at some point it may well support some climate change initiatives, as well as addressing potential risks from advanced artificial intelligence and any other potential global catastrophic risks. Nick Beckstead has previously expressed interest in funding more research to quantify the tail risks associated with runaway climate change in particular.

Having said that, one of the reasons that EA has not focused as heavily on
climate change historically is that climate change is not as neglected as
potential risks from AI, or biotechnology. We are happy to see a lot of
funding going into climate research and modelling, and a lot of grassroots
activism. While climate change is far from solved, and remains one of the most
important problems of our time, many EAs think that we should first focus on
problems which receive less media attention but are plausibly just as serious.

~~~
richardbatty
To add some numbers to this, according to
[http://www.climatefinancelandscape.org/](http://www.climatefinancelandscape.org/)
$392 billion was spent on various aspects of tackling climate change in 2014.
In contrast, under $10 million per year is being spent on potential risks from
AI ([https://80000hours.org/problem-profiles/artificial-intellige...](https://80000hours.org/problem-profiles/artificial-intelligence-risk/)).

That doesn't mean there's nothing neglected in climate change (e.g. negative
emissions technology) and it would be good to see some more investigation into
this.

------
EGreg
I disagree with just maximizing QALYs as the main measure of effective
altruism. See my post:

[http://magarshak.com/blog/?p=216](http://magarshak.com/blog/?p=216)

~~~
tmacaulay
Hi, thanks for your post. I think that many EAs would agree with you here. Effective altruism is not just about maximizing QALYs. It's about figuring out what doing good even means, figuring out how to improve the world, and then actually doing it. We're not just about educating people on one narrow conception of what doing good means. We know that doing good is hard and complex, so we're trying to build a community of people focused on figuring that out. Our community endlessly debates what it means to do the most good, and there is a lot of nuance; that's what we love about it!

You mentioned some interesting issues in population ethics in your post; it appears you take the person-affecting view. Many EAs who take this view prefer to support global health or animal welfare charities, as they do not think it is beneficial to maximise the number of happy people in the world. Other people in our community think that a world with more people is better, provided that adding more people does not reduce the overall total happiness. In our summary of the Long-Term Future Fund, you can read Nick's take on the person-affecting view[0] and see some more links to discussion of this issue. We love getting into these debates and seeing lots of different perspectives.

[0] [https://app.effectivealtruism.org/funds/far-future](https://app.effectivealtruism.org/funds/far-future)

------
EduardHL
Congrats on your venture!

Will you give more detail on how you address the following four steps
mentioned in your website:

1. Which problem areas are most important?

Metrics used for each fund (DALYs, etc.)? Sources used? etc.

2. Which interventions are likely to make progress in solving the problem?

Sources used? How do you evaluate interventions with less predictable outcomes
(such as clean energy research)? etc.

3. Which charities executing those interventions are most effective?

Indicators used? Type of due diligence? Do you rely only on reports or do you
also regularly check on-the-ground reality? etc.

4. Which charities have a funding gap that is unlikely to be filled elsewhere?

Methodology (funding inflow/outflow models and/or constant dialogue with
organisations ...)?

Again, really great initiative; would be interested to understand more in
detail.

~~~
willmacaskill
Thanks for the enthusiasm!

I think a lot of your questions can be answered here (including the other
pages that these pages link to):
[http://www.openphilanthropy.org/research/our-process](http://www.openphilanthropy.org/research/our-process) and [http://www.givewell.org/how-we-work/process](http://www.givewell.org/how-we-work/process)

Because our fund managers are all GiveWell and Open Philanthropy staff, we
inherit their methodology. If in the future we move beyond GiveWell and Open
Philanthropy fund managers, we'll need to have pages on their process too (and
how it potentially differs from GW / OP).

A more accessible introduction to how to assess which problem area is most important is here: [https://80000hours.org/career-guide/most-pressing-problems/](https://80000hours.org/career-guide/most-pressing-problems/)

~~~
colophonemes
I'd add that these things will differ between cause areas, so it's hard to give a neat, roll-up answer. DALYs and NPV income estimates are fairly good
in the global health space, but other cause areas may need to make more
speculative comparisons because they deal with things that are harder to
measure.

In all cases, when a Fund makes a grant, the fund manager will provide a
write-up outlining their rationale. For now, each fund manager has a list of
previous grants they've made, so you can get a sense of the organisations they
are likely to grant to (though by no means does that imply that they
necessarily /will/ grant to these specific orgs in future).

One of the key advantages of the Funds model is not locking you in to a
particular charity up front, which means that as more information about
funding gaps, intervention effectiveness etc comes to light, fund managers can
respond accordingly.

------
abdabsi
Hi Tara, great work on the fund - and I'm a big fan of EA / GiveWell / Doing
Good Better and all this "ecosystem". I'm curious about the scalability of EA
funds. From what I know, some of the major areas that EA and GiveWell focus on
are neglected causes. If enough people are giving to neglected causes, would
the movement lose its essence?

Also, individual donors in general are impulse givers who make a donation because their friends told them about a non-profit, or because they paid for a gala-dinner ticket for social reasons. How do you plan to transform the general public from being impulse givers into "effective altruists"?

------
stefek99
Congrats!

My genuine feeling - I'm not helping children in Africa, because that would
mean I'm not helping rebuild Nepal earthquake.

Instead I'm focusing on healing the planet - global warming - leading to
famine, migration to cities, civil warfare, refugee crisis etc...

Oxford, UK - come to North Wales -
[https://astralship.org](https://astralship.org) - read more about our
MISSION + FLOW + VOYAGES... Because our chapel is located in the secluded
nature we plan intense voyages - total immersion, no distractions, peak
performance, flow state.

Looking forward to hearing from you. Namaste.

------
a_w
Great idea and I wish you all the best.

Assuming this becomes very successful, what effect will this have on new non-
profits if most people adopt this approach? Will it make it difficult for them
to raise funds?

~~~
colophonemes
Thanks! Ideally this makes it significantly easier for effective non-profits
to raise money. We're not locking in the set of organizations we fund. Being flexible about which cause you support at any given time, given the available facts, is a core part of effective altruism.

We're especially excited about funding new nonprofits that seem like they're
going to be able to have a big impact, because at the moment they can spend
north of 30-40% of their time fundraising (which, especially when they're new,
would be time better spent on validating their programs and scaling up).

------
jrysocarras
Caveat: I know little about this area, so my question may be rooted in
naivete.

Q: Why are you operating as a nonprofit? I would be willing to pay a
reasonable management fee to you in exchange for the assurance that my money
will likely have a greater social impact. If you were able to charge
management fees, wouldn't that give you a greater ability to grow and attract
talent?

~~~
tmacaulay
Haha, this is what the YC partners ask us as well!

We see this service as a public good: we want anyone to be able to donate effectively, whether they are donating $10 or $10 million. We prefer to offer our service completely free of charge, but let people optionally tip us, or donate to us directly, if they find the work we do valuable. Right now, we have sufficient funding for our operations to support us until the end of the year, which will let us grow and attract top talent. If this product really does take off, we might experiment with different funding models, including asking for an optional 'tip'.

As well as building EA Funds, we run local Effective Altruism community groups and conferences. Many of our members give back to us to support our activities because historically, we've been able to generate returns of between $10 and $100 for every dollar we spend on our operations. I should also note that there is some chance that donations to the EA Community Fund may be regranted to us, if Nick Beckstead decides that we are a good giving opportunity. One way to support us is to allocate some of your donation to the EA Community Fund. We like this funding model because it incentivises us to focus on high-impact activities. If the community evaluates our work positively, we'll get funding; if not, we'll be forced to reassess our priorities.

------
wcgortel
Congratulations on your launch! Well done. A great idea that needed to happen.
Looking forward to kicking the tires on it.

~~~
tmacaulay
Thanks, please do let us know if you spot any issues, want to suggest any new
features or have any other feedback!

------
DodgyEggplant
+1000 for animal welfare: the animals that can't blog, tweet, or complain. Wildlife preservation and care for animals is as important as other common non-profit and foundation goals, but it is often neglected.

~~~
deontologizt
Actually, most "effective altruists" support habitat destruction because it
reduces the number of animals, and therefore also the amount of suffering.

I find this very disturbing. If you want details, see my other comment on this
item:
[https://news.ycombinator.com/item?id=13886954](https://news.ycombinator.com/item?id=13886954)

Of course, EAs won't tell you this up front, as they know it would be bad PR.
(cf. [https://medium.com/@jacobfunnell/a-year-in-effective-altruis...](https://medium.com/@jacobfunnell/a-year-in-effective-altruism-observations-and-criticisms-and-tofu-d6af9f7ecb39): "Some effective altruists
hold views that would be strongly controversial to most people outside of EA.
EA does a pretty good job of either not mentioning (or actively avoiding)
these conclusions in its public-facing literature. Examples include the moral
imperative to destroy habitat in order to reduce wild animal suffering, the
need to divert funds away from causes like poverty relief and towards
artificial intelligence safety research, or the extreme triviality of
aesthetic value.")

~~~
colophonemes
While there are people who identify as effective altruist who hold views that
are controversial, I think it's important to not paint with such a broad
brush. As your linked article notes, there is wide disagreement on a range of
thorny philosophical issues across a range of cause areas, but the scare
quotes/selective quoting makes it seem like these views are unquestioningly
accepted by a plurality of people in the community.

Effective altruism is a broad church. The unifying themes are trying to make the world a better place, using reason and evidence as tools for making decisions, expanding our circle of compassion, and having epistemic humility (i.e. knowing that we could be wrong about things and being open to changing our minds in proportion to the strength of new evidence). The conclusions
people draw can hinge on deep and ultimately irreducible value judgements — a
strength of the community is that we can (in general) have these disagreements
respectfully and work together productively where there is common ground.

~~~
deontologizt
Thank you for the response and I'm sorry for the tone of my earlier comment.

> As your linked article notes, there is wide disagreement on a range of
> thorny philosophical issues across a range of cause areas, but the scare
> quotes/selective quoting makes it seem like these views are unquestioningly
> accepted by a plurality of people in the community.

There is disagreement within the EA community on ethics. However, almost all
the disagreements are between different 'denominations' within the church of
consequentialism -- questions like population ethics (total vs. person-
affecting vs. negative), theories of well-being (hedonistic vs. preference),
and distributional justice (utilitarian vs. prioritarian). The fundamental
theory of consequentialism is taken for granted by most EAs. As my linked
article says, "ultimately, only people who have a good majority of
utilitarians in their moral parliaments are going to be able to get on-board
with EA." I think this is true to some extent. While there are non-
consequentialist EAs, it's hard to deny that the culture of EA is extremely
consequentialist.

Even though I like the abstract idea of effective altruism, my value
disagreements make me hesitant to trust certain EA organizations. I'm
personally a deontological vegan and very concerned about animals, but if I
donate to ACE (see section 7 of [https://medium.com/@harrisonnathan/the-actual-number-is-almo...](https://medium.com/@harrisonnathan/the-actual-number-is-almost-surely-higher-92c908f36517)) or the CEA Animal Welfare fund,
how do I know the money won't go to something I ethically oppose (like pro-
habitat destruction advocacy)?

~~~
tmacaulay
Just to add my $0.02, my impression is that while many EAs enjoy discussing
the thorny philosophical issues like whether we should be concerned about
insect-suffering, or wild-animal suffering, very few would advocate that we
actually support habitat destruction or massive interventions in nature. Even
groups like FRI, who are heavily focused on suffering, promote the idea of
moral uncertainty. They believe that we should avoid drastic actions based on
a narrow ethical view, due to uncertainty about which ethical views are more
valid.

Like with everything, more controversial issues are more likely to be picked
up by the media and blown out of proportion, relative to the actual level of
support they receive. I would be extremely surprised if any money from the CEA
Animal Welfare Fund went to support habitat destruction to reduce wild-animal
suffering. I would be less surprised if money from the fund went to support
research into animal consciousness, to help us better compare different types
of animal welfare interventions.

~~~
ForresterA
I'm not from the media and I am campaigning against "effective altruism"
because it promotes eco-terrorism, habitat destruction, call it what you want.
I am compelled to do this to protect public safety.

There's no point in denying that a large portion of self-identified EAs are for eco-terrorism; it's all over the internet. There is no gray area here: you are either with the terrorists (strong negative utilitarians) or against them.

------
JoshTriplett
What's the process to propose a charitable organization for inclusion in one
of your funds? Who would be the right person at CEA to talk to about that, and
what information would you need?

~~~
tmacaulay
Hi, thanks for your interest!

Our fund managers work at the Open Philanthropy project, where they work full-
time, finding the very best donation opportunities. They look for charities
working in their key focus areas, which have a good track record and a robust
evidence base to support their chosen intervention. If the program you'd like to recommend is within one of their key focus areas, run some quick calculations to check whether it is within the same range of cost-effectiveness as the previous grantees. If so, great! We'd love to hear about it, and I'd recommend you get in touch with the program officer in charge of the program area your non-profit targets. We recommend
running this quick test because while many non-profit programs have a
plausible story for impact, very few non-profits surpass our stringent bar for
effectiveness.

GiveWell is especially excited to see non-profits working on these intervention areas: [http://blog.givewell.org/2015/10/15/charities-wed-like-to-se...](http://blog.givewell.org/2015/10/15/charities-wed-like-to-see/)

------
matthewmarkus
Oh boy! I'm going to say some unpopular things here, so please remember you
asked for feedback.

I immediately click on Funds->Animal Welfare since my company, Pembient, deals
in wildlife and I've had a lot of (negative) interactions with the
conservation industry. I see that the fund targets "farmed animals" and even
then carries a warning:

"Risk-averse donors might choose not to focus on animal welfare because the
evidence base for the most effective interventions is not yet as strong as in
areas like global health and development."

Excellent! But then I look at the manager's background. It seems he has
extensive dealings with The Humane Society of the United States (HSUS).
Further, at least one of the grants from the fund goes towards a HSUS project
on broiler chicken welfare. OK, that's fine; however, digging deeper I find
that at least 5% of the grant covers administrative costs (i.e., $50k). Now,
money being fungible, that means money has been freed up for other activities.
From my perspective, that would include HSUS's interference in The Black Rhino
Genome Project:

[https://experiment.com/u/yHldOQ](https://experiment.com/u/yHldOQ)

And that's the problem with donating to large non-profits! They have so much
going on that it is hard to track all the externalities.

Let me add that this issue, to me, is the biggest problem with Effective
Altruism (EA). It says, "You're 'smart,' go out and become a wealthy hedge
fund manager. That's the best use of your time, and then donate the proceeds
to things that matter." I would counter and say that if you're truly 'smart,'
you should be directly working on problems that matter. Doing something
ancillary and then giving to a group of people who might be viewed as less
'smart' because they didn't follow the EA path is internally inconsistent. I
think it reveals what EA is: A moral salve to justify forgoing work on hard
problems to accumulate wealth for selfish purposes. Not that there is anything
wrong with that per se, I just don't like the marketing on top of it.

I said I would be harsh. Apologies, if I've been too harsh.

~~~
tempestn
Do you think that rather than considering smartness as a linear scale, it
might make more sense to consider different people as having aptitudes in
different areas? I could see one person being an excellent and effective
animal welfare activist and another being a top performing hedge fund manager;
it's unlikely the two could switch roles and perform equally well.

Regarding HSUS, I'm not sufficiently well versed to have an opinion there, but
it is theoretically possible for an organization to do things you don't like,
but still be more of a net positive than the available alternatives,
especially when the bulk of the donation will go toward a specific initiative.
Maybe they believe that's the case here?

~~~
tmacaulay
We do need many more talented people to found and work for effective non-profits. At CEA, we've found it hard to hire extremely talented people, as many of the people we want to hire want to continue donating instead! Non-profits often find it hard to attract top talent, in part because they tend to pay lower salaries, working at a non-profit is less prestigious, and talented people tend to want to work with other talented people.

We have been impressed with Y Combinator's approach here: they're trying to incentivize talented people to found really effective non-profits, and then help them scale. We're hoping that if donors are willing to fund these promising new projects, many more people will be drawn to the non-profit world.

On the other hand, just like founding a company is not for everyone, neither is working at a non-profit. We should each consider our comparative advantage. As tempestn mentioned, some people might be extremely good at their day job and be well compensated for it, but might not necessarily make excellent activists. In that case, that person might be able to help the causes they care about most by donating to effective charities. We need all kinds of people!

~~~
cgag
Is there an EA job board somewhere? Maybe people aren't interested due to pay,
but I doubt many people are aware the jobs exist in the first place. I was
under the impression it was a small pool of applicants but a much smaller pool
of job openings.

~~~
tmacaulay
80,000 Hours has a very simple job board ([https://80000hours.org/job-board/](https://80000hours.org/job-board/)).

While there aren't many job openings posted, many effective organizations are
often open to hiring talented individuals at other times throughout the year.
It's definitely worth dropping organizations a quick email to let them know
you're interested in future opportunities. If any of you are interested in
working at CEA in the future, head to our website and ask to be added to our
recruitment email list. We'll be hiring for 4-6 positions later in the year.

------
65827
What's the overhead? What are the management fees?

~~~
tmacaulay
We don't charge any management fees: you can donate through EA Funds, and we'll pass on 100% of the money we receive to the charities.

We don't consider a charity's overhead when evaluating effectiveness, in the same way that you don't consider how much Tim Cook gets paid when you decide whether to buy an iPhone. Historically, groups like Charity Navigator have looked at overhead ratios for one simple reason: they're much easier to measure. Unfortunately, overhead ratios are simply not a useful measure for evaluating charity effectiveness, and they are easily gamed by unscrupulous charities trying to raise funding. Instead, we look at the total costs a charity incurs, add in any costs that are necessary to deliver their program but don't appear in their budget, and then look at the outcomes the charity achieves for that funding as a whole. I'd check out Dan Pallotta's TED talk if you want to know more about the overhead myth.
[https://www.ted.com/talks/dan_pallotta_the_way_we_think_abou...](https://www.ted.com/talks/dan_pallotta_the_way_we_think_about_charity_is_dead_wrong)
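As a sketch of the difference between the two measures described above (all figures invented), a charity can look "lean" on overhead ratio while being far worse on total cost per outcome:

```python
# Overhead ratio: easy to measure (and to game), says nothing about impact.
def overhead_ratio(program_spend, admin_spend):
    return admin_spend / (program_spend + admin_spend)

# Total-cost view: all costs needed to deliver the programme (including
# costs outside the charity's own budget) divided by outcomes achieved.
def cost_per_outcome(program_spend, admin_spend, external_costs, outcomes):
    return (program_spend + admin_spend + external_costs) / outcomes

# Charity A: 5% overhead but weak results; Charity B: 20% overhead plus
# $100k of externally borne costs, yet 5x cheaper per outcome.
print(overhead_ratio(950_000, 50_000),
      cost_per_outcome(950_000, 50_000, 0, 2_000))          # 0.05 500.0
print(overhead_ratio(800_000, 200_000),
      cost_per_outcome(800_000, 200_000, 100_000, 11_000))  # 0.2 100.0
```

Ranking by overhead ratio would pick Charity A; ranking by cost per outcome picks Charity B.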

------
mkempe
Would YC fund a centre for egoism, for-profit?

------
deontologizt
Why does CEA openly support the Foundational Research Institute even though
their fringe negative utilitarian value system is dangerous?

