
Open Philanthropy Project awards a grant of $30M to OpenAI - MayDaniel
http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support
======
hawkice
There's lots of concern about the bizarre relationship disclosure. But perhaps
even more bizarre is that this deal has a structure closer to a strategic move
than actual philanthropy. Am I massively misreading this?

This page details how their main goal with the $30M isn't to increase OpenAI's
pledged funds by 3%, thereby reducing the marginal "AI Risk" by less than 3%.
The goal is to have a seat on the board (basically -- they use a lot more
words to say this in the announcement). What on earth is going on where a
charitable organization with Open in its name feels it needs to buy its way
onto the board of a prominent non-profit in order to:

"Improve our understanding of the field of AI research"

"[get] opportunities to become closely involved with any of the small number
of existing organizations in “industry”"

and "Better position us to generally promote the ideas and goals that we
prioritize"

Isn't the whole point of "open philanthropy" that you can direct funds to
organizations more open about what's going on?!

------
idlewords
Scroll to the end. This is a $30M grant to the guy's roommate and future
brother-in-law.

Unbelievable.

~~~
jaibot
I want to make sure I fully understand the accusation here.

You're saying that the Open Philanthropy Fund - which is funded by an $8.3
billion grant from Dustin Moskovitz and Cari Tuna, also close associates - is
funneling $30M to an organization that pays below market rates
([https://www.quora.com/What-is-compensation-like-at-the-non-profit-OpenAI](https://www.quora.com/What-is-compensation-like-at-the-non-profit-OpenAI)),
run by people who have dedicated their professional careers and
millions of dollars to philanthropic causes despite being surrounded by way
more lucrative opportunities for anyone with their skillsets.

If this were the scheme, there are countless better ways to do it.

They could just give them the money without any pretense. Dustin and Cari
didn't have to tie up this money in OPP. They could have skipped the years of
working with Holden and others to identify the best giving
opportunities, avoided any blowback, and just used their money the way every
other billionaire does. Or, instead of just giving it away, they could have
made him an absurdly compensated CEO of a new startup.

And none of that would have attracted any attention, no sneering condemnation,
just business as usual.

But that's not what they did.

They've spent years painstakingly identifying the best causes they could find
- anti-malarial nets, poverty relief via direct cash transfers, biosecurity,
intestinal worm treatment, schistosomiasis, prison reform, and yes, AI safety.
They've oriented their entire lives around this project, so of course many of
the people they're close to are working on similar projects. So it really,
really shouldn't be a _shocking twist_ that one of the people they're close to
might be in a position to use a small fraction of their available funds for a
lot of potential good. There are fewer than 100 people working full-time on AI
safety today. If you've concluded that it's an important cause area, there
really aren't many options.

And even then, they didn't have to disclose their personal connection. They
really could have just left well enough alone. But because they're dedicated
to transparency even in the face of stupidity, they made their personal
connection prominent and obvious. So now anyone on the Internet can cruise on
by and - ignoring the millions donated to third-world poverty and health
causes, ignoring the multitude of ways the money could have been quietly and
selfishly used, ignoring the fact that non-profits invariably pay below-market
rates, ignoring the copious public writing and research that's gone into these
decisions - can simply gawk and say "unbelievable".

When people say "No good deed goes unpunished", this is what they're talking
about.

~~~
_benedict
I don't know - it certainly seems to me that this seriously tarnishes the
credibility of GiveWell, whose stated aim is to improve _everyone's_ (not
just Dustin's) charitable resource allocation.

The likelihood that this $30m is the best possible use of that money, and
that this personal connection occurred by chance? Pretty much zero. Of course
all opportunities in life come down to your network, but this is pretty
cut-and-dried nepotism.

If this were a totally unrelated personal investment by Dustin in a friend, it
would not be seen as problematic. By investing through these supposedly
impartial organisations that aim to influence everyone's behaviour, their
credibility in this mission is clearly harmed.

(At least, this is my initial response, while allowing that it may change if
a more detailed analysis shows it to be misplaced. But without this expression
of mistrust, such an analysis is highly unlikely to take place, and I do not
immediately see how it could fully alleviate the concern.)

~~~
nilstycho
This personal connection did not occur by chance, but the causality you assign
is reversed. Holden did not support OpenAI because his housemates work there.
Rather, it is because of their similar worldviews that they live together in
the first place. It is unsurprising that people who think safe AGI is a
critically important investment end up in the same social circle.

~~~
notahacker
Holden isn't an AGI researcher though, he's a person who's made his name
arguing that some charities are much more efficient uses of money than others.
Indeed when asked to review the Singularity Institute, as well as criticising
the organisation itself he gave long and detailed arguments for why he _didn't_
think unfriendly AGI was a threat, was sceptical about trying to combat it
through AI research and dismissed the general form of arguments about the
crucial importance of donating to it as "Pascal's mugging". At best you could
say he was more open-minded towards the possibility his mind might be changed
on the issue than the average person.

It would be difficult to imagine that two people with very close relationships
to him working for OpenAI haven't influenced his apparent change of heart;
whether or not they've converted him to the cause by sheer force of
intellectual argument, it doesn't look great.

------
vpontis
That's awesome! Open Philanthropy reminds me of
[https://80000hours.org/](https://80000hours.org/).

In their relationship disclosure:

> OpenAI researchers Dario Amodei and Paul Christiano are both technical
> advisors to Open Philanthropy and live in the same house as Holden. In
> addition, Holden is engaged to Dario’s sister Daniela.

This is so tangled. I don't mean it as a criticism, as I'm sure a lot of SV
investments would have much longer Relationship Disclosure sections. So props
to them for including this.

~~~
rspeer
Well, I would use it as a criticism. This is a tangled web of like-minded
people giving money to each other and calling it charity.

Some people conflate this with Effective Altruism, which I think sucks.
Compared to the rigorous work done by GiveWell, there's no way to tell if this
is effective, or even altruistic.

It's just people assuming that the world will be a better place if more people
who think like them have money, an assumption held by basically everyone
everywhere.

~~~
jaibot
Holden's dedicated a huge chunk of his career to moving hundreds of millions
of dollars to alleviating poverty and disease. He's one of the founders of the
effective altruism movement. You may disagree with his decision here, but to
dismiss his efforts, saying it "sucks" and isn't "effective, or even
altruistic", while ignoring the extensive public writing he's done on the
subject that led him to these views and strawmanning his position as nepotism,
is just awful.

"I disagree with the arguments presented, for these reasons" -- cool. If you
think the grant isn't a good idea, make an argument for that.

"This person who has dedicated their life to doing as much good as possible is
close to other people who also want to do as much good as possible, and their
work has led to convergent viewpoints, therefore this isn't altruism" is cheap
character assassination.

~~~
hawkice
> strawmanning his position as nepotism

So, you obviously feel strongly about this, but let me explain why your
comments are less persuasive for those of us outside this subculture:

The non-profit they donated to is (by any reading of their mission statement)
an organization designed to create new technology that "will be the most
significant technology ever created by humans" according to their own
statements. It doesn't disburse cash or benefits to _anyone_, and actually
pledges to keep some of the research secret, and "we expect to create formal
processes for keeping technologies private when there are safety concerns" --
a situation the organization claims will happen, presumably regularly!

Creating influential technology is typically done for-profit, and research is
typically funded in ways much less open to individual favoritism (review
boards are a great anti-corruption tool), and the results of that research are
typically available to (among others) the people that fund it. There is a lot
about this situation that a reasonable person would describe as unusual.

In addition, all of these changes -- introducing more direct funding with less
oversight, lack of access to results, lack of expectation of benefit to the
targets of the charity -- all lend themselves to obscuring a fraud. That
doesn't mean a fraud is present, but I'd be extremely aggressive about
oversight.

What kind of oversight are we getting? Well, right now they list one of their
major goals as the "tricky" goal of figuring out if they're making any
progress at all.

I would not give this organization money. Dismissing these critiques as
"character assassination" ignores the fact that I've only described aspects of
the organization, not of the people involved, whom I have little information
about.

~~~
notahacker
Moreover, to add further context, the whole basis of Holden's effective
altruism work has been around the idea that philanthropic dollars _ought_ to
be focused on charities with extremely rigorous proof behind how much they
improve people's lives per dollar donated, and how much they need the money.

That context makes advising a donor to direct an "unusually large" sum to an
organisation with an extremely vague goal and no tangible measure of progress
towards it, little of the transparency demanded of other charities and
existing funding commitments well in excess of their spending plans look like
an extremely strange decision long before you read the disclosure statement.

~~~
jaibot
> Moreover, to add further context, the whole basis of Holden's effective
> altruism work has been around the idea that philanthropic dollars ought to
> be focused on charities with extremely rigorous proof behind how much they
> improve people's lives per dollar donated, and how much they need the money.

This isn't quite true; SCI, a charity that treats parasitic disease in the
third world, is the subject of massive uncertainty and conflicting reports of
effectiveness. It might turn out that it has very little impact at all. But
it's still a recommended EA charity because it looks like there's a decent
chance they're doing a ton of good. GW has written extensively about this.

------
dilemma
Two organizations that exploit the implications of the word "Open" as it is
used in the world of technology to market their own private companies and
organizations.

~~~
richardbatty
The Open Philanthropy Project uses the word 'Open' to mean
([http://www.openphilanthropy.org/what-open-means-us](http://www.openphilanthropy.org/what-open-means-us)):

"Open to many possibilities ... instead of starting with a predefined set of
focus areas, we’re considering a wide variety of causes where our philanthropy
could help to improve others’ lives." and "Open about our work ... Very often,
key discussions and decisions happen behind closed doors, and it’s difficult
for outsiders to learn from and critique philanthropists’ work. We envision a
world in which philanthropists increasingly document and share their research,
reasoning, results and mistakes to help each other learn more quickly and
serve others more effectively."

This all seems pretty useful so I don't get what your criticism is.

~~~
dilemma
My criticism is that it isn't open; openness is more or less absolute. Every
organization is "a little" open, sharing the information that benefits them.
And that seems to be what OpenAI intends to do: be open about that which suits
them.

------
jonmc12
When OpenAI was announced, they mentioned having $1B in funding. Why the
additional $30M?

~~~
pjscott
See section 2, "Case for the grant".

~~~
MichaelGG
I went through it but didn't see anything addressing why OpenAI needed more
money.

------
frik
Can someone explain why both orgs contain the word "Open"? I would say it's
pretty misleading.

OpenAI hasn't released any open code or anything open.

And is OpenAI even about A.I.? (As several others here mentioned, it's not AI.)

~~~
tlb
OpenAI has released several open-source packages (see
[https://github.com/openai](https://github.com/openai) and
[https://openai.com/systems/](https://openai.com/systems/)) and several open
research publications (see
[https://openai.com/research/](https://openai.com/research/))

------
itchyjunk
If some of the comments I read on other AI related articles here on HN are
correct,

$1 mil / year per expert * 10 experts * 3 years = $30 mil

Maybe $30 mil isn't as much as we think it is in AI business?
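That back-of-the-envelope arithmetic can be sketched as follows (the salary and headcount figures are the commenter's assumptions, not confirmed compensation data):

```python
# Rough cost model using the figures assumed above.
cost_per_expert_per_year = 1_000_000  # USD per expert per year (assumed)
num_experts = 10                      # headcount (assumed)
years = 3                             # funding horizon

total_cost = cost_per_expert_per_year * num_experts * years
print(f"${total_cost:,}")  # prints "$30,000,000"
```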

~~~
sillysaurus3
_1 mil / year per expert_

Is this realistic?

I don't think it's impossible for a dev to be pulling $1M/yr in total comp,
but it seems more likely to happen at Google or FB than at an AI non-profit.

~~~
joshuamorton
Note that this says "per expert". I'm not sure what exactly you'd consider an
"expert", but I think a reasonable definition of "expert" could result in your
average expert being a million-dollar-a-year engineer.

~~~
riffraff
Does such a thing as a million-dollar-a-year engineer exist?

I feel like this number would make more sense for a pool of researchers with a
strong lead than a single person.

~~~
joshuamorton
While I obviously don't know for sure, I'm quite confident that there are a
nonzero number of employees at any very large tech company who are engineers
(i.e. do development work, commit nontrivial code) and whose compensation is
over $1,000,000 USD. I doubt it's common, and I would expect that many of those
employees are not "just" engineers (i.e. at the point where you are doing work
that is that valuable, it's almost a certainty that you are leading a team and
designing things), but I'm confident they exist.

------
t3io
Assume for a minute that AGI is being developed, and that in no way, shape,
or form does it function or take form in the manner that mainstream AI
efforts focus on...

That hypothetical could very well be the reality on the horizon.

What of safety/control research that has fundamentally nothing to do with such
a system, or even with the philosophy that the broad majority of these
institutions and ventures are centered on? What of deep-learning-centric
methodologies that are incompatible?

Safety/control software and systems development isn't a research topic. It's
an engineering practice best suited to well-qualified and practiced engineers
who design the safety-critical systems present all around you.

Safety/control engineering isn't a 'lab experiment'. If one were aiming to
secure, control, and ensure the safety of a system, they'd likely hire a
grey-bearded team of engineers with proven careers doing exactly that. A
particular system's design can be imparted to well-qualified engineers. This
happens every day.

Without a systems design or even a systems philosophy these efforts are just
intellectual shots in the dark. Furthermore, has anyone even stopped to
consider that these problems would get worked out naturally during the
development of such a technology?

Modern-day AI algorithms and solutions center on mathematical optimization.

AGI centers on far deeper and more elusive constructs. One can ignore this
all-too-clear truth all they like.

So... if one's real concern is the development of AGI and the understanding
thereof, I think it's high time to admit that it might not come from the race
horses everybody's betting on. As such, it is much more worth one's penny to
start funding a diverse range of people and groups pursuing it who have sound
ideas and solid approaches.

This advice can continue to be ignored, as it currently is and has been for a
number of years. The ignorance can persist across rather narrow hiring
practices...

The closed/open door will or won't swing both ways.

------
WillyOnWheels
Reading Bloomberg news too much makes me think AI research will only be used
to classify ads and make more efficient securities trading algorithms.

I would love to be proven wrong, though.

------
seagreen
"When OpenAI launched, it characterized the nature of the risks - and the
most appropriate strategies for reducing them - in a way that we disagreed
with. In particular, it emphasized the importance of distributing AI broadly;
our current view is that this may turn out to be a promising strategy for
reducing potential risks, but that the opposite may also turn out to be true
(for example, if it ends up being important for institutions to keep some
major breakthroughs secure to prevent misuse and/or to prevent accidents).
Since then, OpenAI has put out more recent content consistent with the latter
view, and we are no longer aware of any clear disagreements."

Really, really happy to see this being carefully considered. Good job to the
Open Philanthropy folks!

EDIT: That Slate Star link is amazing: "Both sides here keep talking about who
is going to “use” the superhuman intelligence a billion times more powerful
than humanity, as if it were a microwave or something."

~~~
cbr
[https://slatestarcodex.com/2015/12/17/should-ai-be-open/](https://slatestarcodex.com/2015/12/17/should-ai-be-open/)

------
mankash666
I think there are more important causes than "reducing potential risks from
advanced AI". Honest to god, $30M will go a long way in saving lives TODAY.
Flint, MI anyone?

~~~
davmre
There are always going to be more important causes, under any particular
person's view of "more important", which depends very strongly on both
subjective values and (in the case of AI alignment work) on precise
probabilities of far-future outcomes. A dollar given to the Against Malaria
Foundation will do a lot more good, in QALY terms, than the same dollar spent
in Flint. And both dollars will do more (direct) good than a dollar given in
funding to, say, algebraic geometry research.

Yet somehow we think it's important to fund all these things, and articles
announcing new NSF grants for math research are not typically met with this
kind of whataboutism.

~~~
knowtheory
Nothing about OpenAI actually addresses any real-world problem, so I have a
problem with their rhetoric as much as their research agenda.

Nothing they're writing about addresses any of the real world problems with
how AI can or might be applied in society. They're a non-profit research lab
with no clear agenda and no clear connection to how they plan to interrogate
_the world_, which seems like an important part of the equation if you care
about outcomes.

So, irrespective of subjective judgements, please explain to me how any of
this is supposed to help anyone?

Or, alternatively, how isn't this just free R&D for industry unshackled and
unconnected to ethics or society?

~~~
auganov
I think the thesis is that most cutting-edge work is siloed in R&D departments
of big players. OpenAI hopes to ensure the power of AI will be out there for
any kind of organization to benefit from. Under the assumption that a more
democratized AI capability is less likely to lead to an adverse outcome than a
highly concentrated one.

I'm not sure I buy it, but that's what I think it is.

