> Microsoft, which has invested billions in OpenAI, learned that OpenAI was ousting CEO Sam Altman just a minute before the news was shared with the world, according to a person familiar with the situation.
Well, this probably disproves the theory that it was a power grab by Microsoft. It didn't make much sense anyway, since they already have access to the tech behind GPT, and Microsoft doesn't necessarily need the clout of the OpenAI brand.
The "coup by MSFT" conspiracy theory made no sense. Microsoft has an insanely good deal with OpenAI:
* Exclusive access to resell OpenAI's technology and keep nearly all of that revenue for themselves, both cloud and services
* Receive 75% of OpenAI's profits up to $1 trillion
All they had to do was not rock the boat and let the golden goose keep laying eggs. A massive disruption like this, so soon after DevDay, would not fit that strategy.
My guess at this point is financial malfeasance, either failing to present a deal to the board or OpenAI has been in financial straits and he was covering it up.
OpenAI shouldn't even be making a profit, as it's a 501(c)3 charity. The whole umbrella for-profit corp they formed when they became popular should be illegal, and is clearly immoral.
You have it backwards, the not for profit entity owns the for profit entity. From Wikipedia:
> OpenAI is an American artificial intelligence (AI) organization consisting of the non-profit OpenAI, Inc.[4] registered in Delaware and its for-profit subsidiary corporation OpenAI Global, LLC.[5]
IKEA [0] and Rolex [1] are structured in a similar manner, although different since they’re not US based.
It is hard to attract multi-billion-dollar investments and elite AI talent when competing with for-profits. This was the stated reason and makes a lot of sense. The comp packages for elite AI talent are now claimed to be in the range of $10M.
To maintain a clear separation between for-profit and non-profit activities. If a non-profit operates in a market with for-profit competitors, tax authorities may start considering it a for-profit organization, making all of its income taxable.
And maybe to allow choosing the right people for the right job. If the non-profit has an ideological purpose, its leadership should probably reflect that. At the same time, the for-profit subsidiary probably works better under professional management.
If a nonprofit has mostly revenue and few donations (Mozilla) the IRS revokes their tax exemption. OpenAI could not have done the Microsoft deal as a nonprofit.
Closing the huge fundraising gap OpenAI had as a nonprofit by returning profits from commercial efforts instrumental to, but distinct from, the nonprofit's charitable purpose, without sacrificing any governance or control of the subordinate entity.
Lol, 10 billion dollars of cookies and t-shirts. They'll have to be bigger than Nestle and Zara. To sell AI services, they need to build it and for that they need the money.
> Robert Bosch GmbH, including its wholly owned subsidiaries, is unusual in that it is an extremely large, privately owned corporation that is almost entirely (92%) owned by a charitable foundation. Thus, while most of the profits are invested back into the corporation to build for the future and sustain growth, nearly all of the profits distributed to shareholders are devoted to humanitarian causes.
> [...] Bosch invests 9% of its revenue on research and development, nearly double the industry average of 4.7%.
(Source: Wikipedia)
I always considered this a wonderful idea for a tech giant.
> Have you read any news about Mozilla's budget in the past 10 years or so?
Revenue/Expenses/Net Assets
2013: $314m/$295m/$255m
2018: $450m/$451m/$524m
2021: $600m/$340m/$1,054m
(Note: "2017 was an outlier, due in part to changes in the search revenue deal that was negotiated that year." 2019 was also much higher than both 2018 and 2020 for some reason.)
2018 to 2021 also saw their revenue from "Subscription and advertising revenue" (representing their Pocket, New Tab, and VPN efforts to diversify away from dependence on Google) increase by over 900%, from $5m to $57m.
Seriously, Mozilla gets shat on all the time, presumably because they're one of the few sources of hope and therefore disappointment in an overall increasingly problematic Internet landscape, and I wish they would be bigger too, but they're doing fine all things considered.
Certainly I wouldn't say their problems are due to this particular aspect of their legal structure.
>Seriously, Mozilla gets shat on all the time, presumably because they're one of the few sources of hope and therefore disappointment in an overall increasingly problematic Internet landscape, and I wish they would be bigger too, but they're doing fine all things considered.
I think they get shat on all the time because of what you mentioned but also because they consistently fail to deliver a good browser experience for most of their still loyal users.
Most of the people I talk to who still use their product do so out of allegiance to the values of FOSS despite the dog-shit products they keep foisting upon us. You'd think we'd wise up several decades in by now.
It is, but that's capitalism. The alternative is what happens with most corporations, where the majority shareholder is BlackRock/Vanguard etc., a basically soulless investment conglomerate, whose majority shareholder is the other of BlackRock/Vanguard, and then the third biggest, and then the fourth, and so on.
You basically never have a person in the chain actually making decisions for anything but to maximize profit.
> OpenAI shouldn't even be making a profit, as it's a 501(c)3 charity
First, the “OpenAI" whose profits are being discussed isn't a 501(c)3 charity, but a for-profit LLC (OpenAI Global, LLC) with three other organizations between it and the charity.
Second, charities and other nonprofits can make profits (surplus revenue); they just can't distribute those profits to interested parties (but they can have for-profit subsidiaries that return profit to them and other investors in certain circumstances).
> The whole umbrella for-profit corp they formed when they became popular should be illegal
The umbrella organization is a charity. The for profit organizations (both OpenAI Global LLC that Microsoft invests in, and its immediate holding company parent which has some other investors besides the charity) are subordinate to the charity and its goals.
> and is clearly immoral.
Not sure what moral principle and analysis you are applying to reach this conclusion.
> Not sure what moral principle and analysis you are applying to reach this conclusion.
I'm not the parent, but I think it's clear: if I'm a charity, and I have a subordinate that is for profit, then I'm not a charity. I'm working for profit, and disguising myself for the benefits of being a charity.
Not only do I think that that's not obvious, I think it's a nonsensical conclusion that really only makes sense as a general statement if you think “for profit” means “to earn revenue” rather than “to return money to an interested party” and invert the parent/subsidiary relationship.
Obviously, the for-profit subsidiary operates for profit, and where it's not a wholly owned subsidiary, it may return some profit to investors that aren't the charity. But neither the subsidiary nor the outside investors get the benefits of charity status.
Profits don’t necessarily have to be used to pad the already overstuffed offshore accounts of wankers. Sometimes organizations choose to use profits to directly fund charitable works. Please wipe up after yourself if this causes your head to explode.
(I'm going to make some assumptions, just consider the broad point, if you please)
The girl guides are a non-profit; they teach kids about outdoor stuff, community, whatever, they do good works, visit old folks, etc.
If for some legal reasons they had a subsidiary that sold cookies (and made a profit), with all the profits returned to the non-profit parent, I think that'd be ....fine? Right?
If OpenAI hadn't restructured they wouldn't have gotten any money from Microsoft and they would have either run out of money or the team would have left and started ClosedAI. There's no scenario where they developed GPT-3/4 while staying nonprofit.
I'm guessing that's the point. Ethics required getting out of the 501(c)(3), so the ClosedAI thing sounds more ethical. The 501(c)(3) should've collapsed or not exist.
BS, you can still make a deal with Microsoft as a non-profit, where the deal gives them an exclusive licence to use the result in exchange for financing.
MS doesn't care about how much money it costs; they care about the fact that it's their ticket back into the fight with Google and Apple.
Hard to believe. Nobody is going around throwing billions without any hope of recouping any of it eventually (except the EU of course but giving any money to organizations which actually might be capable of building something useful is against their policy).
Doesn't Mozilla have an identical structure (which is the inverse of what you said, the nonprofit owns the for-profit--it wouldn't make any sense for a for-profit to own a non-profit due to the no private inurement requirement)?
I think Mozilla Corp is 100% owned by the nonprofit, which is a little different. It allows activity which a nonprofit couldn't directly do, and which has a different tax treatment, but it's not returning profits to someone else, as OpenAI Global LLC and, as I understand it, its immediate parent holding company both do.
But they are similar in that both involve a nonprofit controlling subordinate for-profit entities.
> it wouldn't make any sense for a for-profit to own a non-profit due to the no private inurement requirement)?
The most obvious example is the corporate foundation, but if we believe the first result from a search, you're right in that they are controlled but not owned by the for-profit:
> A for-profit cannot own a nonprofit because a nonprofit has no owners. However, a for-profit can set up a structure in which it effectively has control over the nonprofit, subject to applicable laws, including those regarding private inurement, private benefit, and corporate self-dealing
It's not just "restructuring" a business that's a 501(c)3... to make it a golden goose for MSFT. The whole thing was created to avoid one of Big Tech having a monopoly on AI, and it turns into Big Tech having a monopoly.
Perhaps it's all legal but I think it's very understandable to look at it and think it's a travesty.
I imagine you'd have to either pay back all of the people who donated to the non-profit first, or negotiate a deal for a stake in the company, before you can transform it into a for-profit.
This is probably a dumb question, but what are some specific scenarios of financial malfeasance that could’ve taken place? Like Altman stealing money from OpenAI?
That's my unsubstantiated hypothesis. The board going public in this manner could only mean he was doing some grave shit like embezzlement or intentional financial misreporting.
Through the public actions of Sam Altman in various places like the US Congress, it has become rather clear that his goal is to deceive and fear-monger to create an environment of regulatory capture where, due to misguided laws, OpenAI will have an unfair competitive advantage.
This might be quite in line with what Microsoft tends to like. But it also can be a risk for MS if regulation goes even a step further.
This is also in direct opposition to the goals OpenAI set themselves, and which some of the other investors might have.
So MS being informed last minute to not give them any chance to change that decision is quite understandable.
At the same time, it might have been quietly tolerated by people in MS who were worried it posed too much risk, but who maybe needed an excuse for why they didn't stop it.
Lastly, there is the question of why Sam Altman acted the way he did. The simplest case is greed for money and power, in which case it would be worrying for business partners how bad he was at keeping his public statements from making him look like a manipulative, untrustworthy **. The more complex case would be some twisted belief that an artificial pseudo-monopoly is needed "because only they [OpenAI] can do it the right way and others would be a risk". In that case he would be an ideologically driven person with a seriously twisted perception of reality, i.e. the kind of person you don't want to do large-scale business with, because they are too unpredictable and generally can't be trusted. Naturally, there are also a lot of other options.
But one thing I'm sure about is that many AI researchers and companies building AI products did not trust Sam Altman at all after his recent actions, so ousting him and installing a different CEO should help increase trust in OpenAI.
Though some of the things he said were very clearly not true if you understand the technology used a bit, and very clearly pure fear-mongering.
So if he believed everything he said, it would mean he is incompetent, which just can't be true however I look at it (which means I'm 100% certain he acted dishonestly in Congress, and like I said before, I'm not fully sure why, but it's a problem either way, as he lost the trust of a lot of other people involved through that and some other actions).
Completely disagree. Right now they're not much more than a fancy reseller of OpenAI's technology. The real prize would be exclusivity and control of the roadmap.
Buying them (or getting de facto control) is clearly an easier way to achieve that, vs. replicating the technology in-house.
IMO this is the most important part of Nadella's blog post:
> Most importantly, we’re committed to delivering all of this to our customers while building for the future.
It's curious to me that they see the departure of Sam Altman as a reason to remind us that they are "building for the future" (which I take to mean: working toward independence from OpenAI). I think it actually lends credence to the theory that this was a failed power grab of some sort.
You can't be serious. You think that Microsoft themselves saying they didn't know DISPROVES that it was a covert power grab by them? Have you heard of "lying"?
In my opinion, I'd say the shortness and lack of details back up the story that they had no idea. You'd see way more words if a marketing department had its hands on something like this. This was 100% a get-something-out-asap job.
Just for once I’d like to see such a statement look like, “You sheep probably want a comment on today’s news. We’re not doing that. We’re just content to buy up all the shares you’re panic-dumping. Looking forward to flipping them back to you next week when you panic in the other direction.”
(Assuming they have some plan that gives them the flexibility to trade shares directly on the market like that. I think $GME had something like this?)
It's hilarious that people are commenting about Microsoft being "down". It's up over $100 per share since the beginning of the year and at an all-time high.
You still can't justify a claim that nobody panic-sold, given what you know. Let's try to stick to claims we have a basis to believe, instead of fighting noise with noise.
A more reasonable claim from your epistemic state could be something like, "There was no major crash from the news, as might be seen in a general panic."
It began the day up 54.75% YTD and ended the day up 50.6% YTD. They've had single-day downswings as large or larger like 30 times this year alone. Microsoft is fine.
No one is saying they aren't fine, but Microsoft has a lot of shares, and shareholders can be very annoying. If Satya didn't make a statement there'd be dozens of Important People breathing down his neck.
Copying what was posted here in case they update or change it:
----
A statement from Microsoft Chairman and CEO Satya Nadella
Nov 17, 2023 | Microsoft Corporate Blogs
As you saw at Microsoft Ignite this week, we’re continuing to rapidly innovate for this era of AI, with over 100 announcements across the full tech stack from AI systems, models, and tools in Azure, to Copilot. Most importantly, we’re committed to delivering all of this to our customers while building for the future. We have a long-term agreement with OpenAI with full access to everything we need to deliver on our innovation agenda and an exciting product roadmap; and remain committed to our partnership, and to Mira and the team. Together, we will continue to deliver the meaningful benefits of this technology to the world.
What would make you think a marketing department would potentially be involved here? Besides the obvious Marketing = Bad connection prevalent here on HN?
I don’t see any verbiage that implies Microsoft had no idea. If Microsoft was the aggressor, of course they would play dumb and disclose as little as possible.
If anything this is a power grab by the board away from Microsoft. Optimistically, this could be an attempt to return OpenAI to its original status as a true non-profit company. OpenAI lost most of its openness under Sam.
They needed the Microsoft investment before GPT scaling was proven out. I imagine many entities would be willing to put money into a truly open research lab given OpenAI’s track record.
The focus on openness was literally how the board ended their statement on firing Altman.
And then Greg being all "committed to safety" in his resignation statement makes me think this was a conflict between being an open OpenAI with global research or being closed and proprietary in the name of safety.
I almost think it's too late at this point, unless they have one hell of an arc. I don't see them being "open" until MS is totally out of the picture. I honestly don't even hate OpenAI in its current state... but sitting on the fence, trying to be both "open" and attached at the hip to MS, is just... odd.
What can you promise any other investors when you've already agreed to give MS 75% of all of your net income pretty much indefinitely, and 49% of your shares?
I'd try to get to the source of the compute: partner with AMD and Nvidia to build out DCs architected from the ground up to train and serve LLMs. Get rid of Microsoft…
> put money into a truly open research lab given OpenAI’s track record.
Why? It’s hard to imagine anyone putting any significant amounts of money (in comparison to the MS deal anyway) without any exclusivity rights at least
OpenAI has the most capable language model in the world, that’s bordering on a national security asset. I could see the US government stepping in to provide funding.
There are now 4 people left in the OpenAI non-profit board, after the ouster of both Sam and Greg today. 3 of the 4 remaining are virtual unknowns, and they control the fate of OpenAI, both the non-profit and the for-profit. Insane.
For anybody, like me, who was wondering who is actually on their board:
>OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.
Sam is gone, Greg is gone, this leaves: Ilya, Adam, Tasha, and Helen.
Tasha: https://www.usmagazine.com/celebrity-news/news/joseph-gordon... (sorry for this very low-quality link; it's the best thing I could find explaining who this person is. There isn't a lot of info on her, or maybe Google results are getting polluted by this news?)
Helen Toner is well-known as well, specifically to those of us who work in AI safety. She is known for being one of the most important people working to halt and reverse any sort of "AI arms race" between the US & China. The recent successes in this regard at the UK AI Safety Summit and the Biden/Xi talks are due in large part to her advocacy.
She is well-connected with Pentagon leaders, who trust her input. She also is one of the hardest-working people among the West's analysts in her efforts to understand and connect with the Chinese side, as she uprooted her life to literally live in Beijing at one point in order to meet with people in the budding Chinese AI Safety community.
She's at roughly the same level of eminence as Dr. Eric Horvitz (Microsoft's Chief Scientific Officer), who has similar goals as her, and who is an advisor to Biden. Comparing the two, Horvitz is more well-connected but Toner is more prolific, and overall they have roughly equal impact.
She has an h-index of 8 :/ That's tiny, for those that are unaware, in pretty much every field. AI papers are getting an infinite number of citations nowadays because the field is exploding; it just goes to show no one doing actual AI research cares about her work.
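For those unfamiliar with the metric: an author's h-index is the largest h such that they have h papers each cited at least h times. A quick illustrative sketch (the citation counts below are made up, just to show how an h-index of 8 comes about):

```python
def h_index(citations):
    """Largest h such that h papers each have >= h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Eight papers with >= 8 citations each, the ninth with fewer:
print(h_index([100, 50, 40, 22, 15, 12, 10, 8, 5]))  # → 8
```

So an h-index of 8 means only eight papers have cracked eight citations, regardless of how highly the top one is cited.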
Anyway, the idea that the Chinese military or leadership actually will sacrifice a potential advantage in the name of ai safety is absurd.
Agreed. She’s not famous for her publications. She’s famous (and intimidating) for being a “power broker” or whatever the term is for the person who is participating in off-the-record one-on-one meetings with military generals.
> Anyway, the idea that the Chinese military or leadership actually will sacrifice a potential advantage in the name of ai safety is absurd.
The point of her work (and mine and Dr. Horvitz’s as well) is to make it clear that putting AI in charge of nuclear weapons and other existential questions is self-defeating. There is no advantage to be gained. Any nation that does this is shooting themselves in the foot, or the heart.
> The point of her work (and mine and Dr. Horvitz’s as well) is to make it clear that putting AI in charge of nuclear weapons and other existential questions is self-defeating
Is part of the advocacy convincing nation-states that an AI arms-race is not similar to a nuclear arms-race amounting to a stalemate?
What's the best place for a non-expert to read about this?
> What's the best place for a non-expert to read about this?
Thank you for your interest! :) I'd recommend skimming some of the papers cited by this working group I'm in called DISARM:SIMC4; we've tried to collect the most relevant papers here in one place:
At a high level, the academic consensus is that combining AI with nuclear command & control does not increase deterrence, and yet it increases the risk of accidents, and increases the chances that terrorists can "catalyze" a great-power conflict.
So, there is no upside to be had, and there's significant downside, both in terms of increasing accidents and empowering fundamentalist terrorists (e.g. the Islamic State) which would be happy to utilize a chance to wipe the US, China, and Russia all off the map and create a "clean slate" for a new Caliphate to rule the world's ashes.
There is no reason at all to connect AI to NC3 except that AI is "the shiny new thing". Not all new things are useful in a given application.
I would say we're likely to see some governance and board composition changes made soon.
Honestly, I would expect more from Microsoft's attorneys, whether this was overlooked or allowed. Maybe OAI had superior leverage and MS was desperate to get into AI.
"McCauley currently sits on the advisory board of British-founded international Center for the Governance of AI alongside fellow OpenAI director Helen Toner. And she’s tied to the Effective Altruism movement through the Centre for Effective Altruism"
I wonder if she (Tasha? Tascha?) was Sam FTX's girlfriend before Caroline. She hired him at the Effective Altruism foundation or whatever it is called after he left Jane Street.
What's with the lowercase? I think it's cute if someone is being deliberately low-effort, or trying to present that way, but IMO it's cringe to use it for consequential official statements like this.
> but IMO it's cringe to use it for consequential official statements like this
This is funny to me, as Twitter is the platform for "deliberately low-effort" posts, but you see it as a platform for official statements. How times change...
I'm often stunned by how casually and poorly executives write. The more rich and powerful they are, the worse it seems to be. I guess things like proper capitalization, punctuation, full sentences, etc. aren't worth their time, and people will hang on every word that they write anyway.
When the networked masses can't see you using a t-shirt, jeans, and a casual attitude to signal skills valuable beyond convention, you have to adapt and transgress more blatant conventions.
It would eventually take maybe 5+ years to build out the cloud tech to do so. The reason GPT-4 succeeded is massive RDMA+GPU compute clusters for training the model.
Microsoft already signed that deal with OpenAI, they can't break contract even if they did want to take an absolutely massive bet that the venture capital CEO and President can rebuild the technical infrastructure and acumen housed at OpenAI. OpenAI can replace Sam Altman and some of his newer hires, they probably can't replace Ilya Sutskever.
Microsoft could unilaterally terminate the OpenAI agreement due to this substantial and material event, and let OpenAI fight them in court. If OpenAI doesn't have the cash on hand to survive the legal fight, the non-profit eventually dies when they exhaust their resources.
OpenAI only had resources because of Microsoft, and they bit the hand that feeds them.
That's true. It would be a pretty huge bet to invest billions more dollars in an uncertain startup when they already have an AI provider that is guaranteed to be significantly technologically ahead of whatever Sam can pull together for at least several years in the future.
They'd bring to the table having been CEO and CTO of OpenAI. That's a lot of relevant knowledge and experience (the latter being the more important in this case).
Join them up with Amodei again and.. going wild on speculation and fiction.. they fall back to Elon and Grok and it turns out it was his play all along?
I sense a Netflix documentary is already in the works!
But seriously, this muddies the water even more. I assumed the Microsoft deal being based on some false pretense was the reason this was all happening. I guess that could still be true and the board is trying to protect themselves from whatever else is about to come out.
I wonder if this is one of those pivotal moments in history where OpenAI collapses or fades and Google or someone else dominates the future of AI, and we’re all left wondering “what if”.
Unless Google (or someone else) outright acquires GPT4 and future GPT research, it's unlikely that someone will suddenly overtake OpenAI. From what we've seen, no one is close to GPT4, sometimes not even GPT3.
There was literally no one at the level of Google when it came out. I still remember Infoseek and Yahoo, both were garbage compared to Goog.
I know they aren't doing their best right now, but there is no need to rewrite history. Google was always superior to their search competitors, which is why it is so sad to see their current situation.
They might have been superior but it took a while for users to switch. Obviously some went across very quickly but AltaVista et al were still viable for a while.
Well, it doesn't have to be as good as GPT-4. I've gotten a ton of use/power out of a local llama model. Even GPT-3 was "good enough" for a lot of tasks. I mostly use ChatGPT because it's the most convenient.
OpenAI has pretty much the "best" model and first mover advantage. They can lose the latter, and might struggle to keep the former.
I don't know if this is true at all, but I read another comment as meaning that Microsoft basically has legal rights to fork ChatGPT. In that case, if OpenAI dies, maybe they'll just be relieved that the power dynamics got simpler.
It's not really the money but the expected return on the money from all these features being rolled out. You can 100% bet there were major estimations made on return on investment. Everything they are releasing has some AI magic on it now. The last thing you want to see is chaos in a company you're betting the farm on.
>I'd argue this is signaling they have the IP / source code / models / etc.
I think that just signals that they have a firm business agreement with OAI regardless of what Altman might be doing.
With a product like ChatGPT, especially given the nature of how it has been presented thus far (our servers, our API, your account on those servers), it seems extraordinarily dangerous to treat it like a common partnership agreement.
Microsoft (with thousands of lawyers on staff) invests $10B in a company and has no power or leverage over the decisions of a non-profit board headed by several names no one has heard of.
Come on. No way Microsoft’s team does a deal like that with 0 power or knowledge in a situation like this. That’s ludicrous.
Edit: the only case I can see for MSFT being truly blindsided is as follows. Elon is behind it. Sam and Elon have their breakup. Sam seems to win. They close the deal with MSFT; all is good. But Elon is intimately familiar with the corporate structure and all moves made historically, and maybe even has some evidence of wrong-doing. There is probably only one person in the valley who could pressure the non-profit to oust Sam (and by extension Greg) AND provide the financial/legal/power backing to see it through. It takes a lot of money and influence to do this from the outside. That is really the only scenario in which I could see MSFT truly being blindsided, out-maneuvered by a dinky non-profit board.
This all feels very stupid on their part now looking at the board member names but I never saw this point raised before on HN. I guess nobody could predict this including MSFT.
Are all the rumors on social media sites considered out of bound on HN? They seem fairly plausible given the known circumstantial evidence floating around.
Skunkworks in the basement achieved AGI 6 months ago. Board left in the dark until they got an eight-figure electric bill. Now the entire company is in the dark.
I mean, if they don't come out and explain what happened, they can't be surprised when people "hallucinate" all kinds of random, preposterous explanations.
The wording of the letter from the board sounded like he was keeping company details from them to me, not personal matters. Maybe I'm wrong but that was the impression I was under reading it.
I think those are already posted 100 times on the main thread, and discussed in detail. None of that is new so can't be the reason for something like this.
Not really. It's all so torrid. More interesting would be anything that points to a different reason, or evidence the action was related to professional misconduct rather than personal. The problem is that all the suggestions so far are not too dissimilar from when LLMs "hallucinate".
My guess is Sam's new AI venture Humane has either taken key tech from OpenAI, taken key talent, or in some other way trod on something internal. One thing to consider is that by making AI wearable, it is a step toward embodiment, which has always been a problem for AI learning, as it does not interact with the real world. Getting live video, audio, and sensor data from humans moving through space doing things in context will be amazing data to train the next bump in AI toward AGI.
I get the impression it was a combination of at least two things:
1) A long-standing disagreement between AI safety and AI profiteering, with Ilya on one side and Altman on the other. Ilya (board member) was the one who told Altman to attend a video call, then told him he was fired.
2) Some side dealing from Altman - raising new VC funds - maybe in conflict of interest with OpenAI, that was the final straw.
There also appears to be a lot of rumors about Altman's personal conduct, but even if true that doesn't seem to jibe with the official statement over the reason for his firing, or Brockman and others resigning in unison - more reflection of the internal rift.
lol there’s thousands of Sams out there, he’s not that special, and he’s certainly not indispensable.
I don’t know why so many here are struggling to accept that this guy fucked up, lied to his boss, got caught, and got fired for it, and that’s all there is to it. boards will tolerate many things, but willfully lying to them about anything material is not one of them.
a ceo who won’t tell the board the truth is a ceo who thinks they are more important than the company. some boards don’t care, because they are already bought off with equity, but this board doesn’t get equity…
Why does Microsoft need him? Microsoft has some of the foremost researchers and research divisions in the world and have successfully demonstrated their ability to evolve as a business. Altman's background doesn't seem remotely enough to have such a high-level position at a company like Microsoft.
I don't know anything about Microsoft's relationships with OpenAI.
What I do know, having worked for many large organizations, is that reading the daily press (or listening to the news) is a terrible way to get accurate real-time facts about current corporate happenings.
> Microsoft, which has invested billions in OpenAI, learned that OpenAI was ousting CEO Sam Altman
Microsoft, while a large investor (one that has already reaped large rewards from that investment), explicitly has no governance role in any of the OpenAI entities, including the one at the very bottom of the stack of four that they are invested in. This was a decision about personnel matters by the board that governs the nonprofit at the top of the stack, so there is no reason to think that Microsoft would be notified in advance.
I can tell HNers haven't worked at the executive level. Executives can and do do things all the time without informing their PR departments ahead of time. Often the feigned surprise is intended.
"Im shocked to see gambling in this establishment! Shocked I tell you!"
So MSFT put $10bn into OpenAI, presumably at least in part on the strength of sama’s leadership. But if the stories are to be believed, a huge chunk of that investment was in Azure credits, and investment into new Azure GPU DCs/clusters for OAI to spend those credits on.
If MSFT doesn’t like this move, why wouldn’t they just … not honor those credits? Or grant more to a successor entity? Does OAI have its own warehouse of GPUs separate from Azure?
Seems like a very dangerous game for Ilya to play.
I don’t believe it. I watched OpenAI DevDay live last week (wasn’t it?). I immediately noticed how Sam Altman, the CEO of OpenAI, was treating (so subtly slighting, in my mind) Satya Nadella, the CEO of Microsoft.
The last thing he said was: “I look forward to building AGI with you,” or the like…
I’m betting that he insulted Satya at that event or upstaged him, etc., and that’s why he’s kicking rocks…
Microsoft has influence over fking governments. It doesn't need an official board seat. It doesn't even need to ask for what it wants directly. It's enough for people in power to be aligned with its interests.
I'm not saying that's the case here, just pointing that having no ownership or board member in an entity doesn't rule out having power or influence.
Hedge funds are betting on AI and tech companies.
Without the top 5 tech companies, S&P500 has lackluster growth.
Microsoft has added trillions to its market cap. The statement “we have all the access we need” sends a powerful message to both the OpenAI board and investors.
OpenAI is built on Azure compute. MS has invested billions of their own, and they’re building their own chips now.
Essentially Microsoft is saying you can burn OpenAI to the ground, “we have everything we need” to win! - the data, the compute, the algorithms, the engineers, the capital and the market.
This is a way bigger blow to OpenAI than Microsoft.
I dunno, if they were behind it, wouldn't they have an interest in claiming they had no idea? And if they weren't and were truly blindsided, would they have an interest in admitting it?
I think this strengthens my theory that the AI-safety true believers wrested control away from the entrepreneurs.
The OpenAI board letter, representing just 4 people, screams butthurt and personal disagreements. Microsoft, which just finished building OpenAI’s models into every core product, was blindsided. Greg Brockman, the chairman of the OpenAI board and another startup exec, was pushed out at the same time. Eric Schmidt, with his own AI startup lab, started singing Sam’s praises and saying “what’s next?”
My guess is that Microsoft is about to get fucked and Eric Schmidt is going to pop open a bottle of expensive champagne tonight.
Not sure why you were downvoted. It is very hard to understand who really has power at OpenAI, and as of now half of the board consists of people whose biggest life accomplishment is somehow getting on the board.
+1, the parent comment was helpful. TBH, today I found out that 4 board members at OpenAI, 3 of whom I had never heard of until this news, have so much power over this organization that has been making headline news for most of the past year.
In the AI safety community (which I work in), Helen Toner is well-known. She is famous (among our community) for being one of the main people working to halt and reverse any sort of "AI arms race" between the US & China. The recent successes in this regard at the UK AI Safety Summit and the Biden/Xi talks are due in large part to her advocacy.
It’s quite rare for these board communications to say anything of substance at all. You let the PR folks work their magic and manage the narrative. This reads like 4 people out of their depth desperately trying to justify themselves.
I would not be surprised if it turns out Microsoft has a multibillion-dollar, complex financial-instrument axe on the necks of these people by Monday, forcing a sale or a new management structure that gives them more control.
The board of OpenAI has their hands on the ultimate self-destruct button, though. If M$ made a power play, worst-case scenario they end up with ownership of the for-profit OpenAI, but they'd still have no control over the non-profit, which owns the intellectual property.
If the deal goes sideways, the board of OpenAI (the nonprofit) could just dump everything onto the open internet. All M$ has is a substantial but minority stake in a company that the non-profit OpenAI owns all the beef of.
"full access to everything" feels like a shot across the bow sending a very clear signal to the new board that it should not attempt to limit access to rapid commercial (or other) exploitation of any research results emanating from OpenAI for whatever reason given, be it 'alignment', 'safety' or disproportionate exclusive leverage running afoul of OpenAI's original mission.
Wow, no insincere "spending more time with my family" or "heartfelt thanks to him for his contributions" or something? It's both refreshing and very surprising at the same time. I guess it's either a really bad table flip or there will be more to hear soon.
I find it a bit odd that MS didn't have a couple of people on the board of directors, which I assume accepted Sam Altman's resignation (and signed the severance package).
Edit: I just read he was fired, but the point remains.
Per my understanding, the board oversees the non-profit which owns the for-profit entity which Microsoft invested in. It's not clear to me how the non-profit board was picked but equity holders in the for-profit don't have a say on the matter.
Well the first sentence in this news piece adds a lot more color to the official press release and stands out on its own:
"Microsoft, which has invested billions in OpenAI, learned that OpenAI was ousting CEO Sam Altman just a minute before the news was shared with the world, according to a person familiar with the situation."
It's hard. I'm instinctively inclined to believe this story, but from first principles, why should I trust that Axios has adequately vetted this source? All I know about them is that lots of people in my circles send me their articles, I've never seen or conducted a review of their journalistic practices.
Obviously thinking about it this way would cause me to miss or disbelieve a lot of true stories, but it doesn't seem right to say I should trust every outlet I see widely posted either.
I know what it means and I agree it's probably an exec. The issue is that the premise - "Microsoft Corporation didn't know thing X at point in time Y" - is essentially unverifiable gossip, yet is presented here as fact