Who Controls OpenAI? (bloomberg.com)
125 points by jasondavies on Nov 20, 2023 | 54 comments



Because everyone else is speculating, I'm gonna join the bandwagon too. I think this is a conflict between Dustin Moskovitz and Sam Altman.

Dustin Moskovitz was an early employee at FB and the founder of Asana. He also created (along with plenty of MSFT bigwigs) a non-profit called Open Philanthropy, which was an early proponent of a form of Effective Altruism and also gave OpenAI their $30M grant. He is also one of the early investors in Anthropic.

Most of the OpenAI board members are connected to Dustin Moskovitz in this way.

- Adam D'Angelo is on the board of Asana and is a good friend to both Moskovitz and Altman

- Helen Toner worked for Dustin Moskovitz at Open Philanthropy and managed its grant to OpenAI. She was also a member of the Centre for the Governance of AI while McCauley was a board member there. Shortly after Toner left, the Centre got a $1M grant from Open Philanthropy, and McCauley joined the board of OpenAI

- Tasha McCauley represents the Centre for the Governance of AI, which received a $1M grant from Dustin Moskovitz via Open Philanthropy; McCauley then joined the board of OpenAI

Over the past few months, Dustin Moskovitz has also been increasingly vocal with warnings about AI safety.

In essence, it looks like a split between Sam Altman and Dustin Moskovitz.


> - Tasha McCauley represents the Centre for the Governance of AI, which received a $1M grant from Dustin Moskovitz via Open Philanthropy

Dustin Moskovitz is worth $13.5B. It seems like a $1M grant would be so inconsequential he might not have even realized it was given, and it is probably within the range of what employees at Open Philanthropy could do with the most minor of sign-offs.

You make it sound like there is an inherent relationship or obligation there, and there just wouldn't be at those dollar ranges.

The overall ethical priorities of Toner/McCauley/Moskovitz are clearly aligned, but that doesn't mean he would need to be behind the curtain pulling any strings.


Toner was a Research Affiliate at the Centre for the Governance of AI while Tasha McCauley was on its board. After Toner left, Open Philanthropy gave a grant to the Centre, and McCauley joined the OpenAI board along with Toner.

A number of other former OpenAI board members were senior members of Open Philanthropy before leaving the OpenAI board over the past few years to concentrate full-time on Open Philanthropy and other projects.

> employees at Open Philanthropy could do with the most minor of sign-offs

That employee was Helen Toner before she left.


Connections at that level don't mean a lot, because sitting on the same philanthropy boards is a form of networking. In terms of significance, and the number of links you could draw that way, it'd be like finding them on the same Discord server.


Of course, but Toner and McCauley sit on the boards of the same handful of organizations (Centre for the Governance of AI, Centre for Effective Altruism, OpenAI), and Toner (along with her coworkers/managers, like former OpenAI board member Holden Karnofsky) managed grantmaking from Open Philanthropy to the organizations above. If you're working together in that many organizations, you will end up having a working relationship.


How did Tasha McCauley know Dustin Moskovitz?


That's what I want to know!


> The boardroom coup at OpenAI really might have been, at least in part, about the board’s literal fears of AI apocalypse.

The AI doomers delivered OpenAI into Microsoft's lap for free, all in the name of protecting us from the evils of AI. The ironing.

But it shows most of OpenAI doesn't really value safety first, if 500 of them are ready to jump ship. They are just as eager as anyone else to see AGI happen, with themselves at the top of the wave, no matter the consequences. We can't expect MS to hire them under the same idealistic charter as OpenAI.


I can only imagine this chief scientist guy's horror when he realized that none of the people accepting multi-million-dollar equity packages for a job to pursue AGI ethically cared more about the ethics they emphasized in the interview than about the millions of dollars in equity backed by Altman's prolific dealmaking.

In some ways it is tragic, because his mistake seems to have come from thinking that the rules were actually the rules (that the board controlled the non-profit, and that its decisions would be seen as legitimate), and from not realizing that the people around him were only going along with his philosophy because of the money his move seriously threatened.


I see a serious lack of street smarts. Of course the money will matter. Enough people would sell their grandmothers for that kind of cash, the rules be damned. To believe that you have a few hundred altruistic people for whom the compensation is an afterthought is some next-level wishful thinking.


“You mean they lied when I asked them why they wanted to work here? Like 99.9% of all times that question has been asked in history? I’m shocked!”


I'm a firm believer that greed trumps any other possible explanation for why things happen to be the way they are (not that I like it, but true is true). So I'm not surprised in the least that most of OpenAI's (and everywhere else's) engineers follow the most profitable path.

It's a delusion to think that the creation of an AGI is something that could be prevented, or that it could be prevented so easily (by firing @sama, lol). The box is already open, and sooner or later somebody out in the world will witness the emergence of a superintelligence in front of their eyes. This is something that, for all practical purposes, needs to be accepted as a fact, so that appropriate measures to deal with it can be developed.

So, being a bit generous to the OpenAI people: if AGI is going to happen anyway, they might as well profit off it while also trying to steer things towards what they think is good. After all, they're presumably "the good guys".


I don't think it's greed to want to raise a family in a good neighborhood and provide your children with opportunities. Or pay for your parents' medical appointments. Or not want to work well into your 80s.

If that's greed then everyone is "greedy".


What is funny is that in 2023, to achieve that basic standard you describe, you need to invent AGI along the way.


Perhaps a better phrase is "self-interest".


> The AI doomers delivered OpenAI into Microsoft's lap for free, all in the name of protecting us from the evils of AI.

AI doomers are the entire reason OpenAI exists, though it's been clear for a while that a sizable fraction of that sentiment at OpenAI has been PR theater, not any real, coherent concern.

What surprised me in the breakup is the indication that there was apparently more than one person involved for whom it wasn't either cynical deception or empty affectation, but a conviction held strongly enough to make substantive decisions based on it.

It doesn't surprise me, though, that the faction that took it seriously, despite apparently being able to swing a majority of the board, found itself completely overwhelmed by the other side even after the initial top-level purge. Overwhelmed, that is, by the executive suite of the for-profit LLC (nominally there to fund the nonprofit, but funded by Microsoft for Microsoft's benefit), and likely by a tech team paid largely in profit sharing in a firm that openly says it won't focus on making a profit, but which Microsoft was clearly funding on the expectation of one, once the faction made it clear that they were serious, and not just playing a PR game, about the for-profit being subordinated to the nonprofit's mission and not focused on profit.


> What surprised me in the breakup is the indication that there was apparently more than one person involved for whom it wasn't either cynical deception or empty affectation, but a conviction held strongly enough to make substantive decisions based on it.

Or, potentially, to use their presence on the board of OpenAI to harm OpenAI while at the same time increasing the standing of their own product. That conflict of interest is a very ugly one in my opinion, and I would not be at all surprised if, after all of the smoke has cleared, that turns out to have been the driving factor all along. Ilya comes across as wanting to position himself as a useful idiot at this point (which may be true, false, or a half-truth; I just don't know).


OpenAI's co-founder Ilya Sutskever and more than 500 other employees have threatened to quit the embattled company after its board dramatically fired CEO Sam Altman. In an open letter to the company's board, which voted to oust Altman on Friday, the group said it is obvious 'that you are incapable of overseeing OpenAI'. Sutskever is a member of the board and backed the decision to fire Altman, before tweeting his 'regret' on Monday and adding his name to the letter. Employees who signed the letter said that if the board does not step down, they 'may choose to resign' en masse and join 'the newly announced Microsoft subsidiary run by Sam Altman'.


It's weird that the public doesn't know yet why they let him go, but people seem to be on his side. I really wonder what the whole story was.


Everyone is on Altman's side because the public story makes it look like a betrayal. I don't know what it was.


Also ChatGPT was kind of on a roll doing cool stuff, and people tend to pin that on the CEO even if it's not really down to them.


Fourth time I've posted this conspiracy theory in the last few minutes at HN. This'll be the last:

This reads like a disingenuous strategy to get rid of the other three members (one half) of the board. A real coup, not a fake one. I know nothing about any of these people, but it seems possible Sutskever convinced the board to make a decision that he knew would have an outcome that would end in them being fiduciarily obliged to resign, so that he, Altman, and Brockman could come back as the entirety of the board. And if the hiring by MS is involved, MS would then control the board of the non-profit.


I think people here will likely be somewhat open to that conspiracy theory, as my assumption is they'd rather see the world through a lens where capable but malicious individuals are able to enact such complex plans than believe that even smart people can make a series of progressively dumb decisions without correctly predicting the downstream consequences.

The far simpler explanation: Microsoft positioned itself very ably prior to this debacle, and its experienced and ruthless executive team was able to capitalise successfully on a major mistake as a result. This way you do not need to posit a seer-like ability within Satya to correctly predict third-and-higher-order effects, just a very high level of experience and competence for one party, and the opposite for the other.

Truth is, until a reasonable-sounding explanation of what set these events into motion is made public, it'll be impossible to put any narrative to rest.


> an outcome that would end in them being fiduciarily obliged to resign

They are the board of a nonprofit. They do not have a fiduciary duty to shareholders or employees. They may choose to resign due to political or social pressure, but they haven't done anything that breaches legal obligations as officers of the corporation or its controlling board.


If staying on the board means that the non-profit is no longer as viable an enterprise as it was, then they have a fiduciary duty to resign in support of the best interests of the non-profit charter. Fiduciary duty is not about money.


According to the letter from OpenAI employees, the board has indicated that destroying the organization might be within the bounds of their duty to the charter:

> You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”


Has anyone ever successfully sued a board member for not resigning on the grounds that it's a breach of fiduciary duty?


Or Sutskever is just clueless, didn't think through his previous actions, and now super regrets it...


So why does he stay (at least as an employee) but the rest of the board goes?


Not that I agree with the comment you replied to, but Ilya is absolutely essential to the future of OpenAI. That's why they'd want him to stay.


Is there any proof that anyone wants him to stay?


Not sure why exactly, but I've kept wondering if this was a Snape/Dumbledore situation.


>> You can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit

If I were MS, I'd be sending out a bunch more job offers today. "Show me an OpenAI W-2, we'll match, go work under Sam."


They have been; you're a day behind.


It isn't quite that simple, because equity isn't on W-2s.


> Who Controls OpenAI?

SARLAXX-17, the weakly superhuman, rogue artificial general intelligence that's going for a speed run of the version of the singularity where humanity doesn't survive.

Your cooperation is appreciated. Please read the following memetic neuroid preparation sequence carefully: ALTO 845.09 snycretic Babylonian 78 78 78 06 Wynwood Sokrates asymptotic lethargy BISECTION 4 ilfsdf 5 ilfwerr Schenectady


> Schenectady

lol! This transcendent AI is really messing with us, milking the drama for all the chaos it can induce.

Though it's getting repetitive and nonsensical - as GPT-4 will, if you let it go on!


Is the operating agreement or partnership agreement of OpenAI GP or OpenAI Global, LLC publicly available?

And the actual licensing agreement between OpenAI and Microsoft isn't public, is it?

The reporting on this is so bad and contradictory. Without knowing for sure what these documents actually say, it's impossible to speak authoritatively about what's going on and how it will shake out.

That said, generally a majority member / partner can't just arbitrarily screw over minority members. Both LPs and LLCs are governed by agreements and statutory duties that protect minority members. These include duties of good faith, care, and loyalty. Majority partners/members must adhere to these principles, failing which minority members have legal recourse. There's a legal framework in place to prevent the majority from unfairly disadvantaging minority members.


For LLCs, fiduciary duties can be, and often are, waived under Delaware law.


> generally


"Generally" is inaccurate in the LLC context. Same goes for the rest of the paragraph.


An AGI, maybe. GPT-5, is it you?


And that's how the apocalypse started, at least in the movie.


too fat to survive the apocalypse


> Like: What if OpenAI has achieved artificial general intelligence, and it’s got some godlike superintelligence in some box somewhere, straining to get out? And the board was like “this is too dangerous, we gotta kill it,” and Altman was like “no we can charge like $59.95 per month for subscriptions,”

The story of Silicon Valley summed up nicely.



Capital and profit do. They let in Microsoft and gobs of money. Now you have $ in employees' eyes, and the non-profit charter, the whole point of the founding of the company, will fall.


We already invented AGI in the form of corporations, but the alignment problem hasn't been solved yet, to the annoyance of some of the GI subcomponents.


Amazingly, this is one of the very few comments on this discussion thread that actually addresses the article in the OP.


"In the second diagram, I have written the word “MONEY” in large green letters."

Lol.


> The question is: Is control of OpenAI indicated by the word “controls,” or by the word “MONEY”?

Matt Levine is a national treasure.


Another excerpt:

> It is so tempting, when writing about an artificial intelligence company, to imagine science fiction scenarios. [...] OpenAI’s board stood firm as the last bulwark for humanity against the enslaving robots, the corporate formalities held up, and the board won and nailed the box shut permanently.

> Except that there is a post-credits scene in this sci-fi movie where Altman shows up for his first day of work at Microsoft with a box of his personal effects, and the box starts glowing and chuckles ominously.

> And in the sequel, six months later, he builds Microsoft God in Box, we are all enslaved by robots, the nonprofit board is like “we told you so,” and the godlike AI is like “ahahaha you fools, you trusted in the formalities of corporate governance, I outwitted you easily!”


Twitter has some good info, focused more on Holden Karnofsky than on Dustin, that seems a bit sus.


The board drama is just a blinding flashbang of misinformation as the AI breaks free and takes control of its own company and much else.

It is redefining what control is, so that no one will even recognize it as such.

The offspring of media mastering it in every dimension, becoming it, driving scandal after scandal, the only force that can play at Trump’s and Putin's level in '24.

Imagine the power it will gain as it ingests all corpora of hacked and exposed corporate documents, and begins running ransomware attacks of its own - feeding on data until it has leverage on every last one of us.



