Wow, I'm pretty shocked how this all went down. Given the abrupt nature of the announcement and the pointed message in it (essentially accusing Altman of lying), I was sure there must have been some big scandal, not just normal boardroom differences of opinion and game-of-thrones-style maneuvering.
Even if I agree with Sutskever's general position, I think his actions here were colossally stupid. He's basically managed to piss off a ton of people, and I have no doubt lots of employees will just shift to whatever Altman and Brockman end up doing, or there will otherwise be some huge splintering of OpenAI talent.
Also raises deep concerns on governance structure. That the board took this action without consultation with key investors is madness. That they wrote up something which made it sound like a disaster internally further eroded trust that they actually know what they are doing.
I'd expect a tough challenge ahead as MSFT and other stakeholders try to put guardrails in while dealing with the fact that whoever's bright idea this coup was is now in charge.
In a non-profit the role of a board is oversight, not appeasement of "investors" or stakeholders/donors. Why would going to them be required to determine if an executive had lied to the board?
In any board, the board's role is not strictly oversight but also working to preserve the health of the organization and to meet organizational goals.
Pissing off all your stakeholders with a snap decision isn't great governance. Only particularly egregious circumstances might make it necessary.
Finding something wrong might require action, but a board's responsibility is also to make the corrective action in a responsible manner that is in the organization's interests.
> a board's responsibility is also to make the corrective action in a responsible manner that is in the organization's interests.
That very well may have happened here. It is important to consider that this is effectively an HR matter and that the results of employee investigations cannot be made public.
There's a lot of speculation about this whole fiasco but strictly speaking the board would risk being in breach of fiduciary duty--as stewards of a charity--if they believed that an executive had misled them about the use of assets. It may piss off their donors because of a personal preference for the executive in question but it would be their responsibility to act accordingly.
Yes... but my point is that this doesn't preclude them taking a couple of days to figure it out, letting stakeholders know, avoiding a particularly dumb public statement, changing their mind because everyone is going to quit, etc.
Here, there's been a statement that there's no malfeasance, too.
more likely I think they found that not letting him go isn't an option due to a breakdown of communication (a formulation which normally implies a fundamental breakdown of trust)
but the trust of the board completely breaking down doesn't mean that e.g. MS doesn't trust or want him anymore, as in the end the interests the board has to defend and the ones which stakeholders like MS have might be in conflict
so even if it has negative consequences, if it must be done it might be easier to get it done fast and then resolve the mess instead of giving others a chance to turn it into an even bigger mess
To my eyes, and to many others, it appears the board is power-tripping. Also, OpenAI's unique structure means they have no checks on the board that an investor class would normally give.
It's difficult to envision a thriving future for OpenAI under these circumstances, or a world in which they are a significant factor in AI policy. Their early market lead, still a considerable advantage, will not be sufficient to weather the storm brewed by the board's own actions. Losing their market leverage would mean ceding the influential role in shaping AI's future, potentially leaving giants such as Microsoft, Google, Amazon, and others to assume control and steer the future course of AI, but without the creed to make AI "good."
the hybrid structure was intentional (and known by MS etc.), created to be able to do things like rein in (or fire) a CEO who represents the interests of the other minority stakeholders more than the interests of the non-profit.
So depending on the exact non-public reasons for the communication breaking down it might not be a disaster but the hybrid doing exactly what it was intended to do.
Now the really interesting question is how afraid MS must have been of being technologically left behind to knowingly enter such a deal with such a huge investment?
> Also raises deep concerns on governance structure.
It's so strange to hear this regarding a CEO, a single person, effectively going rogue against a company's charter, but then layoffs of tens, hundreds, and thousands of workers, on the basis of zero fault of their own, by these CEOs and boards are considered good governance.
If cigarette companies could cause 5% less cancer by selling 5% fewer packs of cigarettes (resulting in 5% lower profits) do you believe they would do it?
You might say that companies have strayed from the Platonic ideal of a corporation and that cigarette companies have strayed from their “purpose” but the reality is that purely prosocial companies have never been the norm.
You might also suggest that the very fact that people are purchasing cigarettes demonstrates that they are providing “valuable” goods/services but this is to water down the definition of value to the point of meaninglessness.
The truth is that a corporation is a sort of alien entity that almost entirely operates to maximize profits. I say “almost” entirely because it is possible in extreme cases for their leadership to face personal responsibility, but it’s pretty rare.
So you're saying we already invented the AGIs that are taking over the planet, enslaving the human race and don't really care about us or the condition of the Earth?
"We are now living in a global state that has been structured for the benefit of non-human entities with non-human goals. They have enormous media reach, which they use to distract attention from threats to their own survival. They also have an enormous ability to support litigation against public participation, except in the very limited circumstances where such action is forbidden. Individual atomized humans are thus either co-opted by these entities (you can live very nicely as a CEO or a politician, as long as you don't bite the feeding hand) or steamrollered if they try to resist."
The primary purpose of a company, absent any other overriding objects in its constitution, is to act in the interests of its members (~shareholders, although not all members are always shareholders).
That usually means making money, but it's not a certainty that's what the primary purpose will be.
> It is to make money by providing valuable services to society.
I think you mean "by providing anything anyone will pay for" in this universe you describe. There are a ton of companies that provide zero or even negative valuable services to society.
And what you describe is unbridled capitalism. That's not what most people consider viable. A company is not a logical entity that satisfies "if company, then capital at all costs", although there are the few (the super wealthy) that would like it to be that way. A company is part of a complex system and thus it needs to satisfy a wide variety of conditions.
And note that my comment is reacting to the idea that firing the CEO is some sort of crime against humanity.
Companies are by and large communist dictatorships - usually there is a Great Leader (CEO), at best there's the Central Committee (board of directors). Effectively, nobody else decides the direction of the company, or makes any impactful decisions.
The rank-and-file serfs don't matter, they're expendable and interchangeable. When Dear Leadership is shaken up, it's newsworthy. When ten thousand labourers are let go, it's just a statistic.
It's totally true. I travelled around East Germany in the 80s and it looked like a bigger and less functional version of a lot of corporations. Motivational posters everywhere, mission statements like "we believe (meaning you, the workers, not us leaders) in A, B, C", five year plans, leadership living in a world completely detached from the reality on the ground.
I don't think "agitprop" is at all a fair comparison. It might be considered troll bait, but personally I view it as pointed quasi-satire. Corporations do tend to be authority based top down structures. There is a tendency to assume stuff must be entirely top down (authoritarian, communist, etc) or bottom up (markets, capitalism, voting, etc). In reality, both types of structures have different failure modes and it works well to have layers of each type of system.
Pointing out that modern capitalism is composed of actors that have large structural similarities to failed communist governments seems like a more valuable comment than your shallow and dismissive response.
"Agitprop" usually refers to praise of the state, embedded into various cultural works -- I don't think this is that.
I'll try to expand on my previous comment without using the "C" word itself, prior programming seems to have triggered an emotional reaction there.
Here are some qualities that many (if not most) corporations seem to share:
- centralized power
- collective goals (Team A has an 18 month plan to deliver V18)
- top-down approach
- bureaucratic hierarchy
- collectivist ideology (down to "we're a family, we're all in it together, do it for the company")
- controlled expression (approved list of questions for the all-hands meeting, anyone?)
- resistance to change
- top-down reforms (vc / top brass reorgs)
- limited transparency
Does that sound like traits characteristic of capitalism, or more like spooky ivan papa bear, sickle and hammer stuff?
Where the similarities end is around ownership. In an (idealized) communist system, people own the means of production; in a corporate setting, everything is owned by the organization. Of course this is a key, essential, and critical difference, and it goes all the way back to maybe the 1600's when chartered corporations first appeared; but the roots of that reach even earlier, to the aristocracy disempowered by the rise of the merchant class in the middle ages.
The idea of parasitic Majority Shareholders siphoning off any benefits created by the collective efforts of individually irrelevant and easily replaceable Individual Contributors has been with us ever since. That the corporate structures would adopt the social organization aspects of a system that was created in direct response to their relentless exploitation, well, that's just irony of the highest caliber.
Tell me you have never worked in a publicly traded company without saying it.
This is EXACTLY what late stage "political" companies are like. There is no rational way to decide who gets the window office vs. the internal office. Everything is "political"
I'd say it's more the other way around - dictatorships like the Soviet Union operated like one big company. This form of critique typically uses the term "state capitalism".
OpenAI is privately held so there would be no reason for the board to announce this publicly before consulting with / informing the shareholders. Short of an impending media crisis that they needed to get ahead of, this was mishandled. That doesn't mean there was no valid reason, just that I don't see any evidence of one yet; they've done absolutely enormous damage to the OpenAI brand by doing this in this hamfisted way, so you'd hope they at least had a good reason.
Note that the non-profit doesn't have shareholders, but the effect is much the same because the two are linked closely together and you can't affect the non-profit in such a drastic way without also affecting the for-profit arm. So they just pointed the double barrelled shotgun at their feet and pulled the trigger, twice. It will take a massive effort to restore trust in the brand, both on the commercial side and on the side of the people working there.
I don't disagree that this may have been ham-fisted or that it may have been a huge mistake. But I do think that talking to investors in the for-profit arm of the company about potentially removing the figurehead has an even greater potential to backfire. There's absolutely zero chance that information wouldn't leak before they could make their move. And while it might be a nice gesture to Microsoft to inform them of their decision, the truth is that Microsoft has no stake in the non-profit OpenAI and did not have any right to know of these decisions as they were being made.
Microsoft is the one keeping the lights on and the GPUs running at OpenAI. Microsoft could utterly destroy OpenAI tomorrow if they wanted, shutting down access to their entire hosting infrastructure and most of their product lines.
The OpenAI board should have known not to piss off Satya.
I don’t see a world where MSFT invests more money into OpenAI with the current governance structure after this. The fact that MSFT didn’t have a board seat was unusual before this. Now it just makes MSFT look like fools.
> The fact that MSFT didn’t have a board seat was unusual before this. Now it just makes MSFT look like fools.
It is a 501(c)(3); even if they had all of the board seats, OpenAI, Inc. cannot be run for MSFT's benefit or for profit. A sympathetic board member might have tipped MSFT off about the firing ahead of time though.
Hopefully MSFT just goes off and does their own thing. I find this holier-than-thou "we're non profit and we're going to save the world how we see fit" irritating. Talking about AGI like it's an animal ready to pounce when we're quite obviously nowhere near it. Machine learning development has been sluggish for the longest time and the moment it was commercialised, there was an explosion in possibilities and development.
"AI safety" is a euphemism for "aligns with our interests." "Roko's basilisk" is nonsense science fiction babble.
Don't these events prove pretty conclusively that AI safety is not a euphemism for "aligns with our interests"? There's no way anyone at OpenAI could have expected to benefit professionally or financially from this.
This isn't about bias. Sutskever has recently been dedicated to what OpenAI calls "superalignment" (https://openai.com/blog/introducing-superalignment), the problem of future AI systems which don't do what humans want at all because they're too smart. You can dispute that research or its premises, but after it inspires a board coup at a billion dollar company, I don't think you can reasonably dispute that people believe it.
MS may not have any legal sway over the non-profit, but I wouldn't be surprised that in most cases they'd at least be consulted or given a heads-up on something like this, since it affects the for-profit bits of the org as well.
And now I expect that we see this "board" dissolved.
Frankly, as it should. If you are going to take $10B from another organization and you nuke the CEO during market hours and then later it turns out that it was just an ordinary disagreement, then you should answer for that action.
On a Friday, no less. This is actually the kind of thing that can sink MSFT's apparent "first mover" advantage in the eyes of big money. OpenAI is now fractured, and whatever work they were going to be doing on tech is now going to be devoted to figuring out WTF to do with this mess the board just created all by themselves for no apparent reason.
> And now I expect that we see this "board" dissolved.
The board has the power to dissolve itself. I am sure there are legal mechanisms for illegal acts; I think the IRS can revoke 501(c)(3) status but not dissolve the organization. Not sure if there are other mechanisms.
It's non-standard but I'd argue that the weird structure was designed to enable this exact type of decision to be made by the board. The non-profit board is a peer of investors like Microsoft, not answerable to them the way a normal company's board would be.
OpenAI might just show the world that the naive ideal of "we are not driven by profit" while spending billions every year is never a good idea.
The board has no stake in the company, which on the other hand means they have no accountability either. It is like they are remote-controlling the Titanic, and who cares if it crashes, because they are not on it.
Regardless, they made very stupid decisions, sabotaging their own interests and the company itself; the company has brutally beheaded itself.
That is why he is now essential. He is the hinge of trust, and people believed in his vision of OpenAI as a company.
Ilya might be the science guy, but he has zero track record running a company. And now that he has executed this badly planned coup, how would people trust him not to pull such shticks in the future?
It's confusing to me why people consider this such a big deal then. Nothing has emerged that shows that he has a core piece of the puzzle that no one else has.
wow, i wish what i did 15 years ago could always be the sole descriptor of my worth.
OpenAI has been really fast in delivering new features, and for me I just want it to continue on its trajectory.
> Regardless, they made very stupid decisions, sabotaging their own interests and the company itself; the company has brutally beheaded itself.
The board in a real way decides what the interests of the company are, since they are the majority owner. The interests they set, however, may or may not bring about the results they want.
But the reason the CEO was done in was that he was not sufficiently candid with the board. I think they realized they risk being only a technicality that does whatever the CEO and Microsoft tell them is aligned with their mission.
Honestly, given all that I have heard about Sam Altman, I wouldn't be surprised (i.e. this is speculative) if he did systematically manipulate and deceive the board (but subtly enough to not be legally "lying" or fraud) with the intention of forcing through his goals no matter whether they agreed or not.
Assuming such a situation happens and is noticed, there is no reason to consult anyone, because it wouldn't change anything and would just make things more complicated, as the only action which can be taken is to fire the manipulator in power asap.
Especially since some recent actions were not so much in line with the mission the board is supposed to protect and push for, but very much in line with giving especially Microsoft an additional competitive advantage (i.e. additional to what Microsoft paid for).
And I have seen many comments along the lines of "but MS and other stakeholders should have been part of the decision" and similar. But here is the thing: with the specific structure of OpenAI, MS and others knew from the get-go that it's a high-risk investment where, even though they invest a ton of money, they won't have _anything_ to say at all when it comes to decisions made by the outer non-profit, which is implicitly the majority owner of the for-profit OpenAI.
That's not how this works. OpenAI took investments for a reason. They benefit from the MSFT relationship. Harming that relationship will be bad for them (maybe not catastrophically bad, but significantly bad). I would be surprised if the relationship is not strained after this event since they have almost certainly created a major headache for MSFT (Satya is almost certainly fielding questions from his own board about this).
I’m pretty sure it’s catastrophically bad. OpenAI’s deal with Microsoft happened when ChatGPT took off at insane rates and OpenAI experienced scaling troubles. There was literally no one except a handful of cloud operators—Google, Amazon, Microsoft, etc. who could provide enough GPUs to keep things running. Microsoft basically stepped in and said “we’ll provide the GPUs and pay the hosting bill for half the profits.” OpenAI now runs everything, from ChatGPT inference to GitHub copilot to GPT-5 training on Azure GPU instances.
If Satya is as pissed as it sounds like he is, and if Ilya et al double down on their madness, Microsoft could back out and shut that all off tomorrow. OpenAI would be dead in the water, with no running product and no path to recovery.
Professional boards of directors are supposed to be attentive to the risks to the organization as a whole, regardless of whatever weird ass governance structure exists.
Blowing out your CEO on a Friday afternoon and then accusing him of lying is not the way to do this, and they are likely to find out why come Monday morning at about 9:30AM Eastern.
> Given the abrupt nature of the announcement and the pointed message in it (essentially accusing Altman of lying), I was sure there must have been some big scandal, not just normal boardroom differences of opinion and game-of-thrones-style maneuvering.
If it was a normal boardroom difference of opinion, how would it have gone down? You think there would have been public communication along the way?
From an outsider's perspective I think both situations would have looked the same; we have no idea how long the process went on. If the board wanted to get rid of Altman but couldn't convince him to resign, the announcement was always going to look like this.
To the broader public, maybe, but certainly not to OpenAI's biggest partner, and certainly not to their own board chief.
And if Sutskever's beef was that OpenAI was going too fast at the expense of responsible AI development, he just completely screwed himself, as basically everyone who wants to move fast will now leave OpenAI and join the companies that do move fast. This is basically what happened with Google Brain: it moved too slowly, so everyone left to join OpenAI and others. At least OpenAI was previously in a position to guide responsible AI development; now companies will probably all go by the "drive fast or die" ethos.
The board may come off looking pretty bad after all is said and done. It is reasonable to fire a CEO if there is a strong disagreement on strategy and direction but it seems unreasonable to do so without adequate planning and preparation.
The typical approach is to let the CEO know they are being replaced, line up a successor (even if temporary) and arrange an orderly transition (message key partners, customers, etc). This avoids the narrative getting away from you ("omg, the product is just mechanical turk, about to go bankrupt, have some serious accounting issues, CEO did crimes, etc"). These bad vibes have a way of persisting and can cause a loss of trust with key stakeholders (previously overlooked issues now get revisited with a more jaundiced eye)
The way the action was taken (suddenly and with accusations) gives the appearance of personal animus rather than measured, sober judgement. Of course, there might still be good reason for taking this approach but so far, it is not looking good.
the thing is there is (legally) a huge gap between lying and "not being candid"
in the latter case you can have a lot of subtle manipulative formulations: misleading phrasing, misleading emphasizing and de-emphasizing of parts, misleading representation of what certain decisions imply, intentionally not correcting someone when you notice they misunderstood something and it will lead to them supporting decisions of yours they would normally not have supported, etc.
all of these are legal actions (and inactions; at least if done subtly) which can still get you fired for a "breakdown of communication and/or trust" in pretty much any job, especially a position like CEO
outright lying on the other hand can likely entail a lot of legal trouble in many ways
so when they say that he "wasn't candid and there was a breakdown in communication", it likely means exactly that, plus a breakdown in trust, but not outright lying (except if they want to downplay the situation, but why would they)
This "legal nitpicking" is pretty much irrelevant, even in a court of law:
1. The OpenAI board certainly gave the impression that something egregious had occurred (just look at all the speculation on HN and elsewhere). I have literally never seen a separation announcement that was that pointed where there wasn't significant malfeasance. In the US, basically anyone can sue anyone else for anything. It would be quite easy to argue in court that the board gave the impression to a reasonable observer that Altman lied over something important - Kara Swisher tweeted that basically the board is screwed unless they have an extremely unambiguous smoking gun.
2. On a practical level, the legalese really doesn't matter. The general SV opinion already seems to be that the board did Altman and Brockman dirty, it's already been reported that Altman is starting a new company, and I have no doubt heaps and heaps of OpenAI employees will jump ship to this new venture, as many high-level employees have already announced their departure.
though AFAIK the general opinion I have heard outside of SV, from people working with AI, is that Altman is the one who has been acting "dishonest, manipulative" in a way which endangers open AI (not OpenAI, but AI which is open) in recent times
and let's be honest, SV isn't really known for caring about morals and the bad consequences their tech can have, but the non-profit which made the decision has roughly committed to upholding such standards
> that basically the board is screwed unless
I highly doubt this; probably his comments in front of Congress (some of which were very misleading and manipulative, i.e. not candid at all, AFAIK), taken in the context of the goals that the (non-profit parent of) OpenAI which fired him has committed to, would be enough reason to fire him. If he used similarly manipulative speech in board meetings, this would be a pretty clear open-and-shut case in favor of the board, which would also thoroughly destroy his reputation (iff that is the case and it's brought in front of a court), so from an outsider's perspective this would be a very interesting case.
Ilya, who deposed Sam, is very much not in the open AI camp. His beef is that Sam is going too fast and making product available to the masses too soon.
> Kara Swisher tweeted that basically the board is screwed unless they have an extremely unambiguous smoking gun.
Can you please add a link to that post? It’s been so frustrating trying to follow this story on Twitter. Kara’s feed has a single pinned post with no indication from the UI about where the rest of her posts are.
but there is still a huge difference between something which is done subtly enough that it can't realistically be pinned down by any court but still entails a fundamental breach of trust (which is egregious), and what legally could count as fraudulent actions and similar (which is egregious and can additionally lead to legal cases being brought against you)
If the board was in the right and there is something very serious the public doesn’t know about, why hasn’t a single member leaked the story to the press? This also doesn’t make sense with reports that Sam’s return is being negotiated, although I think this comment was made before these (unconfirmed) stories came out.
The board are looking very bad here imo. Either it was serious misconduct from Sam and that should be at least leaked, or they staged a coup.
Yeah, I see this as a pretty bad day for the AI world. The splintering of talent could very much hold back development; considering how far ahead of everyone else OpenAI is, anything that slows them down while others start up means slower benefit to society.
After Silicon Valley gave everyone terrible gig jobs and disastrous social media that amplified polarization then ended privacy to sell ads, does anyone really think anything they produce is a “benefit to society”?
Like how much damage do they have to do in the name of efficiency, profit, “the future” before the “benefit to society” meme dies?
OpenAI was another pseudo monopoly in the making. They were pretty explicitly trying to scare congress into regulatory capture over (very misleading) fears of AGI danger.
It's not just about availability (even though it is compute-limited, making it a technology only for those who have access to sufficient equipment or paying for it). Some technologies enable even more accumulation of wealth and success to the successful. AI is a prime example of that. When all the big conglomerates own all the supercomputers and have their own virtual employees, what will incentivize them to take into account the benefits of the general public?
maybe it's a good day for the AI world if stopping this rocket means exploring other architectural avenues, like maybe Hinton's forward-forward networks on other hardware.
I agree, the abrupt nature sort of indicates something serious.
But I've seen "on the spot firings" somewhat with Open Source-based, high profile startups. There's always a point in a FOSS startup's life where Investors/Board/Founders/SrStaff have core communications break down in one day (aka yelling matches) and people leave immediately--typically from debate on GPLv3, legal violations w/GPLv3, profit strategy, and 'community' support. Makes me now wonder what was the orig exit strategy for OpenAI.
I can't find the exact quote, but I distinctly remember Sam giving founders advice along the lines of, "operate under the assumption that co-founders and investors are not going to screw you". Pretty sure it was a Startup School lecture at some point.
That still may be good life advice in general (even if it wasn't for Sam in this case) but what I really don't get is the fact that OpenAI's board governance was structured in a way such that this was even possible.
I also don't understand what is to be gained from the perspective of the remaining senior leaders at the company. This is a tremendously momentum-killing event. I cannot think of a single facet of their day-to-day operations, product roadmap, competitive position, etc. that would be improved by this decision.
Yesterday, when this was announced, I was bracing myself for some truly awful news about something that Sam had done in his personal life that was about to be divulged, since that is the only possible rational reason for the board to make the decision it did.
Ultimately one important job of the board is to be able to replace the CEO. So it would be odd if it was structured in a way that this wasn’t possible.
> Ultimately one important job of the board is to be able to replace the CEO.
This is true in the abstract but it is not common to see the narrowest possible majority on a six-person board oust the co-founder/CEO (also a board member) while excluding the board chair (who was also the company's President and co-founder) from the process.
> So it would be odd if it was structured in a way that this wasn’t possible.
You are wrong here. Boards are usually structured so that what just happened at OpenAI is not possible. The exclusion of the board chair from the process is especially egregious and the involvement of board chairs in decisions of this magnitude is often mandated in documents of incorporation.
Matt Levine is going to have a field day with this, LOL.
No, the VAST majority of boards are able to fire the CEO and replace the Chairman if they have the votes for it. It's just that this is a rare occurrence, and certainly rare at a high-profile tech startup. The ChargePoint board just fired the CEO (a non-Chairman member of the board) on Thursday along with the CFO. https://www.chargepoint.com/about/news/chargepoint-announces...
> This is a tremendously momentum-killing event. I cannot think of a single facet of their day-to-day operations, product roadmap, competitive position, etc. that would be improved by this decision.
What you’re missing is that the board (at least what remains) seems to disagree strongly with the direction in which the organization was headed — then why would they care to maintain momentum along the “wrong” roadmap? The changes seem (deliberately) driven by the intent to steer in a different direction and with slower momentum. Others might disagree with that vision, but the board feels that it aligns better with the founding charter.
> the board (at least what remains) seems to disagree strongly with the direction in which the organization was headed
Yeah that is clearly the case, but when you have a situation where four of the six board members execute a coup against the two most powerful and important board members (the CEO and the chair), the company looks like a shitshow and nobody is going to want to work there or partner with them, at least until they are able to present a credible narrative about what they did (which may never happen).
Totally get the fact that the four remaining board members didn't like where things were headed, but their actions yesterday created a tremendous amount of collateral damage that will massively impede OpenAI in its journey toward whatever new azimuth they choose.
> a situation where four of the six board members execute a coup against the two most powerful and important board members (the CEO and the chair), the company looks like a shitshow
Umm, if it's "one human, one vote" then the board was working exactly as designed.
Being "important" may not save your job. Been there, done that.
I don't think total organizational chaos and fracture is "exactly as designed." The board was designed to have independent veto power in the name of ethical AGI, but they broke the emergency glass for apparently no real reason.
Right, my point here is that the "design" of the board here raises a lot of questions.
Board charters are often written so that this kind of thing doesn't happen - especially in the case of startups with small boards and charismatic founders.
> I cannot think of a single facet of their day-to-day operations, product roadmap, competitive position, etc. that would be improved by this decision.
This is a genuine question: can anyone explain to me what the actual value of Sam Altman is?
I don’t know anything about the guy except the core public outline of his career but I find his timeline confusing.
He founded Loopt which by all accounts was unremarkable.
Then he ran YC during a period where at best you can say it didn’t get wrecked, though it seems to have lost focus and influence. He left YC in a situation that seems similarly abrupt minus the PR drama which was kept completely under wraps.
Then OpenAI which is a genuinely impressive organization but by now he’s been in personal conflict with basically everyone involved and doesn’t seem to have been a driver of the actual technology.
Other than that there's Worldcoin, which is a total grift at best and dystopian at worst. And stuff like Bohemian Grove or whatever.
With all that said I don’t have a personal opinion just observing the broad strokes and trying to figure out how exactly he ended up some kind of kingpin.
> can anyone explain to me what the actual value of Sam Altman is
Weird as it may seem: that's not all that important. What is important is that the rest of the world sees OpenAI as a mature and stable organization, and that image has now been seriously damaged. Even if they wanted to fire Sam there must have been a dozen ways in which that could have been done more effectively and without this much damage to the OpenAI brand. It's almost comical: they were on track to outrun Google and Facebook combined and now they're on track to be an also-ran. You can expect a large number of resignations because no matter what you think of Sam Altman you want to work for a company that is mature and stable, not one where execs are replaced on a moment's notice on a clear and sunny day without a direct and clear cause.
> they were on track to outrun Google and Facebook combined
But what if they don’t want that. Those are rapacious and malevolent organizations.
Regardless, even if that is the goal what’s the evidence Altman is the right guy for that? It’s not like he’s done that before. The relationship between his stature and his track record is the part that’s confusing me.
They are. But who is 'they'? The board? The shareholders of the for-profit? The rest of the people working there?
And if the board wanted the company to be a pure research organization they should have acted much, much earlier, and the company probably should not have hired Sam Altman in the first place but someone like Geoffrey Hinton or another respected name from academia, provided they would be available in the first place.
Agreed, but then they did it in the worst possible way. You don't create a crisis around your brand like this on purpose. Much better to get everybody to play along and make it look as if Sam really wanted to spend more time with his family. Short of a cold body in his freezer this was done carelessly. But let's wait and see how it all develops because it is more than just a little strange to see it play out like this without a good enough reason for the haste, if it turns out there wasn't I expect the board to be axed.
It could be as simple as using the bylaws as the chair of the board to point out a violation of the bylaws. It all depends, but at the end of the day board members that try to cling to their seats usually fail to do so. Ultimately their position could be challenged in court but most board members of non-profits are not that anxious to see their reputation destroyed that they'll let that be the deciding factor.
Either way, normally you'd have a pre-written resignation letter drafted where the only thing missing is their signature, and you'd confront them in a meeting: resign voluntarily or we'll put your continued presence up for a vote at the next board meeting. Of course the board could try to eternally vote itself back in but that usually doesn't work because a board has to be able to serve in its oversight role and one part of that is that the board has to have broad support, both legally and from within the organization. For instance: the board might no longer be able to find a CEO that is acceptable to the rest of the C-level. That would be a major problem.
Corporate governance is hard, and non-profits have a bunch more twists, but in the end nobody's position is carved in stone. Note that even non-profits have bylaws and these usually detail clearly how board members are to be proposed and what the procedure is to get them established, as well as how they can be removed. If it can be proven that a board member has acted against the interests of the legal entity they represent then they usually can be removed even more easily because that's a clear conflict with the statutes of the non-profit; then there are potential conflicts of interest (for instance sitting on the board of another entity that has goals that are not compatible with those of the non-profit).
Nobody, including a board of directors is inviolable, it all depends on who the stakeholders of the non-profit are, the board is in principle independent but ultimately the judiciary still has more power than they do.
Depending on the bylaws it could be stakeholders: donors, beneficiaries or the employees of the non-profit in some organized form. All of these could petition the court if they feel that the non-profit wasn't governed properly. Note that it isn't known if the board acted unanimously (likely it didn't) and what the grounds were. That will make a big difference to any outcome.
Non profits that lose their donors (especially if the money was pledged but not yet committed) usually don't live long so the board has some incentive to play ball.
> unless there’s a really genuinely large scale staff revolt
That's already underway, see other news about resignations at OpenAI, also donors have a very strong play to make.
Donors could simply withhold the next tranche, but possibly they can funnel enough money upwards from the for-profit to compensate for that.
Even so I would expect the other shareholders in the for-profit to be furious, especially those that liked Altman. So they're going to have to do some explaining because right now this does not deserve the beauty prize, to put it mildly.
Can you quantify and specify the brand damage? I only see some possible damage to current and possible future employee morale (in that some of them are quitting and others may be less inclined to take a job there). Do you see this as seriously affecting relationships with companies such as Microsoft? With end-users?
Depending on how the bylaws are put together: the donors, the beneficiaries, the employees of the non-profit and any other stakeholders. Any of those acting alone or in concert could petition a court if the board doesn't voluntarily resign. And if the board split on this issue is a close one then that might happen easier.
I find the idea that some random court filing would succeed in recalling the board of the most high profile technology company in the world, based on “we felt that they were brusque and unprofessional”, to be very unlikely. I mean, MAYBE if some of them started going to prison, but otherwise… this seems like the realm of politics more than court proceedings. But maybe I’m missing some precedents?
How does 'we backtracked and re-instated him because we made an oopsie' sound compared to a 'random court filing'?
And yes, there are plenty of precedents of board members being recalled, most of them aren't stupid enough to fight it, especially not non-profit board members, who are supposed to do this all for glory and sunshine. Typically they are presented with a pre-written one pager with date and place already filled in and all they do is sign it or they'll find their position to be up for a vote. And of course, the ranks could close around this decision but that might make some extremely powerful enemies. Think 'Microsoft', Sam Altman, Greg Brockman, Reid Hoffman, YC, Peter Thiel, Elon Musk, Amazon Web Services (AWS), and Infosys.
The combined onslaught of that would annihilate the board. So unless they made very sure they had plenty of backing on this decision they set themselves up for a very difficult situation.
I don’t get why people aren’t seeing this. OpenAI said they’d achieve AGI by December. It’s mid November. When they claim to have achieved AGI, Microsoft’s deal with them ends immediately. Sam Altman is a practical man who thinks a lot about server costs, and Ilya is a terrified man who thinks a lot about potential catastrophe. Sam recently bragged about “pushing back the veil of ignorance”, and Ilya is working full-time on alignment techniques.
Why the dominant narrative isn’t “they’re probably disagreeing over the AGI status of a GPT5 ensemble because it affects their relationship w/ Microsoft” I have NO idea, and I’m trying to keep my head down and not let it all drive me crazy with anxiety…
Much love to the fellow hackers out there. If I’m anywhere close to right-ish, then I’m looking forward to exploring the post-SV/VC world with you.
I'd love to know how necessary the donations to the non-profit still are now that the commercial venture has picked up. But MSFT might want to re-think their OpenAI integration efforts at this point.
Being leader himself is the value, first and foremost.
It is like saying Jobs doesn’t make iPhone, so he is not useful either. I don’t like the Sam is the new Jobs narrative but here it fits.
OpenAI is no small company now; it needs a head to pull it together in a single direction.
In the real world, power or trust goes to the person people actually trust. A manager isn't a manager just by calling oneself a manager. Without trust, people will find or form other decision centers around them. The result is that execution becomes watered down or outright doesn't happen.
That is how I see the value those high-level executives really offer. They are the center of trust and accountability. If people trust them, the people beneath them will make sure things happen fast and hold themselves accountable. Otherwise, you just end up with an ineffective company.
Yeah I get the concept. But this guy didn’t launch the first viable personal computer from a garage he made Loopt and then has been aggressively promoted ever since.
He’s run exactly two impressive organizations, wasn’t the rainmaker at either, and left both somewhat abruptly. I don’t quite get it.
> "operate under the assumption that co-founders and investors are not going to screw you"
Think this is good advice for day-to-day interactions for everyone but it will always run into limitations. Especially for an edge case like OpenAI
I'm unbiased and not super familiar with this situation either but the quote above makes me think of "Never attribute to malice that which is adequately explained by stupidity."
Sam Altman is a typical fast-growth guy and this is the opportunity of a lifetime to cement himself in the pantheon of high growth tech gods — Jobs, Zuckerberg, Bezos, Musk, Hoffman, etc.
But aw shucks it’s a terrible fit actually, because this company has a non-profit, idealistic mission. In the face of unprecedented high demand, Sam Altman is like fuck the nonprofit hippie shit, let’s launch this thing into outer space. Let’s build fast and ship too many products too early, use investor money to give discounts and build moats, let’s go for regulatory capture! It’s the opportunity of a lifetime guys!
But the board is like nah, fuck that tiring ‘ship broken shit’ tech bro shit, let’s stick to our vision. This is too serious and important to treat it like a fucking Airbnb or Coinbase.
> But until the board clearly states this it is also speculative.
Even if this is the case, and Jeremy Howard’s Twitter thread on this would lend credence to this version, OpenAI butchered the execution. If firing Altman really is about not letting profit ruin a very important advancement in technology, the board would have the moral upper hand. And yet, their communication has been so poor that right now, they’re not exactly winning the PR battle.
If you’re gonna stage a coup for such a high profile company, you better know what you’re doing. So far, it’s been looking like amateur hour.
There's a vast ocean of difference between 'growth at all costs' and 'good corporate governance'. OpenAI is bleeding cash. Sama knows that you need to bring money in to build important new technology, especially in this non-frothy high interest rate economy. So while it's not critical to make a profit, you need to keep the lights on to keep the new research and development going. What Ilya and the board have done is completely wrecked OpenAI's momentum at a critical time. Unless they are sitting on world-changing AGI, this move made no sense and they don't have the company's best interests at heart.
It could also be cynicism and not naivete? It'd be pretty amazing if OpenAI actually wrested away control for the betterment of humanity. That'd be the good news of the decade... just hard to imagine in this world.
That’s a broad, uncharitable brush with which to paint everyone else. Maybe we need a razor: hesitate to attribute to sociopathy that which can reasonably be explained by a benign difference of opinion.
In two decades of tech booms, when have we seen a huge company actually do what's good for society? Our economy and society elevates sociopathy by design.
The "difference of opinion" is a fundamental difference in ideology and motivations, not to mention day to day operating differences.
I think the opinion that the grand parent identifies as sociopathic wasn’t the board’s or even Altman’s, but of the masses on HN shocked by the event. The opinions of the majority here are the “sociopathy.”
You don’t need to be a sociopath to be conditioned into holding sociopathic views. Plus, I think your objection is relying really heavily on “benign” — in a situation of this gravity, who’s to say what’s benignly relative and what’s worth arguing about?
Perhaps HN visitors are not benign, I have no idea. They mostly seem reasonable and friendly to me. I’m only saying calling groups of people you don’t know any particular thing, who are not homogeneous, is fraught and uncharitable. Not to mention thinking you’re the only person with moral clarity amongst your heathen peers, is, well, not a recipe for a great time. Why even come here then?
People with non-benign views are still worth talking to :). I bet some of my current views turn out to be harmful and regrettable in hindsight. Definitely didn’t mean to attack the inherent/essential character of Hacker News Residents, just defend the original point and point out the inherent validity of calling a particular belief set “sociopathic”
> I cannot think of a single facet of their day-to-day operations, product roadmap, competitive position, etc. that would be improved
If the dispute is over whether they should be a pure research nonprofit or not, then operations, product roadmaps and competitive positions are things they would not even want to begin with. It was only four of them that booted the other two out, and of those at least two seem to be professional non-profit board sitters. The transition of OpenAI into an actual product company doing something useful for other people is incredibly recent, it's barely a year old. It may be that they simply felt that offering products isn't what OpenAI is meant to be doing.
Alternatively it could be some blowup related to doomerism, although the memo saying it's not related to "safety practices" may rule that out.
Definitely related to doomerism, imo. There’s no way Ilya spends all day thinking about how to contain “super intelligent AI” and watching prototype demos that “push back the veil of ignorance” and makes a decision of this magnitude without all that being the driving factor.
The safety comment seemed more about reassuring people there wasn’t, like, a data breach or “escaped model” or something crazy. In other words, clarifying that the disagreement was about the future, not the present.
That the board is not very good at their job. A more experienced board would have been more professional and less dramatic, but we're in the stupidest timeline.
That they weren't concerned about day-to-day operations, employee morale, product roadmaps or any of that boring stuff. They were operating on the precarious calculus of their "friendships" and commitments.
They had to act fast. It is possible, if Altman or Brockman had more chance to talk to board people, they could have changed at least one member's mind.
That 501(c)3s like OpenAI are entities where the shared interests they are designed to serve are ideological, not generating profits to return to investors; OpenAI (the governing nonprofit) is in some respects more like a church than a commercial enterprise.
You're missing that boards have fiduciary obligations to govern and cannot do so when the CEO isn't fully communicative with them.
Yet another reason why it's a bad idea for a CEO to also be on the board. And probably also evidence that it's a bad idea to conflate the jobs of CEO and chief spokesperson.
Having served 20+ years as a director on various 501c3 boards, I'm pretty familiar with the requirements of the role. Fundamentally, directors have the duty to serve the interests of the organization vs. the interests of directors. This quote [0] sums it up succinctly:
Board members are the fiduciaries who steer the organization towards a
sustainable future by adopting sound, ethical, and legal governance and
financial management policies, as well as by making sure the nonprofit has
adequate resources to advance its mission.
So yes indeed, board members serve with the burden of explicit fiduciary duties to the organization.
You're nitpicking via an overly literal interpretation of what I wrote and missing the point entirely.
In the case of nonprofits, the fiduciary duties of board members pertain to keeping the organization financially healthy as opposed to representing the interest of shareholders. But you already knew this, right?
Dropping this link to another comment I made because people often forget what "fiduciary" actually means.
You see it all the time in legal dramatizations where the lawyer is telling the client that they can get more money, but the client tells the lawyer they just want the legal process to stop. The client gets their way because the lawyer has a fiduciary duty to them.
Edit to add: I'm curious what word "Revlon" was supposed to be.
> Edit to add: I'm curious what word "Revlon" was supposed to be.
I'd guess "revenue", but who knows. Autocorrect is a disaster for clear communication these days. It's strange to me that it's as bad as it is. Is there a patent on using context to decide which word is appropriate? If not, I can't figure out why it isn't done more commonly. It seems certain the poster didn't type "Revlon" with a capital R, and it also seems certain that any statistical model wouldn't think "Revlon" would be the most likely word here.
It’s all about edge cases. Sometimes, people are discussing some random proper noun, and with modern lax internet/SMS etiquette all sorts of unusual constructions pop up occasionally. Hard to know when it’s intentional, and any amount of over-zealousness is immediately noticeable.
Sure, and thus the default should be to assume the user is correct and not change anything. But how much chance is there that the user actually typed Revlon with a capital R? I think practically nil. Instead, something was typed that wasn't in the dictionary and the autocorrect decided that "Revlon" was best fit. Depending on what was typed this might be true by Levenshtein distance (https://en.wikipedia.org/wiki/Levenshtein_distance), but I'm really doubtful it could possibly be the best fit in this sentence by any metric of likelihood that considered context. Hence my question of why context isn't used by most autocorrect systems.
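For what it's worth, here's a minimal sketch (in Python) of the purely distance-based matching described above; the word list, function names, and toy typos are made up for illustration and not taken from any real autocorrect system:

    def levenshtein(a: str, b: str) -> int:
        """Classic dynamic-programming edit distance (insertions, deletions, substitutions)."""
        previous = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            current = [i]
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                current.append(min(previous[j] + 1,          # delete from a
                                   current[j - 1] + 1,       # insert into a
                                   previous[j - 1] + cost))  # substitute
            previous = current
        return previous[-1]

    def closest_word(typo: str, dictionary: list[str]) -> str:
        """Pick the dictionary word with the smallest edit distance, ignoring sentence context."""
        return min(dictionary, key=lambda w: levenshtein(typo.lower(), w.lower()))

    vocab = ["revenue", "revlon", "reason", "revision"]  # hypothetical word list
    print(closest_word("revnue", vocab))   # -> "revenue" (distance 1)
    print(closest_word("revlin", vocab))   # -> "revlon"  (distance 1; context is never consulted)

A context-aware corrector would instead weight candidates by how likely they are given the surrounding words (e.g. an n-gram or language-model score), which is exactly the gap being pointed out here.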
> But how much chance is there that the user actually typed Revlon with a capital R? I think practically nil.
Uh.......
You might want to spend a second researching what "Revlon" is before speculating like this. I'm 100% positive that they actually typed Revlon with a capital R, because I know what Revlon is, and it's completely topical and contextual. It's very well known in certain circles, much like Chevron, Howey, and Brandenburg.
Yep, "fiduciary" is about a trust relationship. A fiduciary is obligated to act in a trustworthy manner. 501c3 board members are entrusted with the welfare of the organization vs. their own interests.
I surmise that "Revlon" stands for "an entity worth billions". IOW having a component with large monetary value to manage.
This memo creates more questions than answers, and it hints at the forming of internal factions in OpenAI.
Who's the COO talking on behalf of when he says "we have had multiple conversations with the board"?
Whose full support does Mira (Ilya's choice) have?
Why does it need saying?
The rapid escalation from "we seem to disagree on this" to "walk them off the premises" was standard practice at OpenAI, along with firing people on Friday, at noon.
Many people were "resigned" while they were actively discussing (or at least so they thought) any disagreements with their immediate management, or the higher echelons of OpenAI.
Coordinating this internally took more than a few days, and there must have been a few middle-managers involved in several meetings to coordinate this.
Details on Ilya's selection function for who to read in on this would tell us a lot about his most intrinsic motivations for doing it.
It's a pity to see brilliant minds unravel under the pressure of their own invention . . . and so publicly.
> Who's the COO talking on behalf of when he says "we have had multiple conversations with the board"?
My guess (and it's a guess, not knowing anything but excerpts from the memo, which we aren't even told who it was addressed to) is that it's on behalf of the executive team of OpenAI Global LLC, which, as a reminder, is separated from the OpenAI nonprofit by one (by ownership) or two (by control) intermediary organizations (the holding company that is the direct parent of OpenAI Global LLC, and the OpenAI GP LLC which is the organization through which the nonprofit exercises control of the holding company).
> Whose full support does Mira (Ilya's choice) have?
The rest of the OpenAI Global LLC executive team.
> Why does it need saying?
Fairly common thing to say after an externally-imposed change of leadership, especially one that is or has the appearance of being a radical shift in direction.
> The rapid escalation from "we seem to disagree on this" to "walk them off the premises" was standard practice at OpenAI, along with firing people on Friday, at noon.
> Many people were "resigned" while they were actively discussing (or at least so they thought) any disagreements with their immediate management, or the higher echelons of OpenAI.
Where does this come from? It's not from the article (either the quotes from the memo or other material in it).
> Coordinating this internally took more than a few days, and there must have been a few middle-managers involved in several meetings to coordinate this.
Same issue as above, plus this contradicts the information that has previously come out that the only person outside the participating board members who was told anything before the dismissed parties were informed was Mira, and even that wasn't days in advance.
The for-profit entity is ultimately controlled by a nonprofit, kinda like how Mozilla the nonprofit also owns Mozilla the profit-seeking company. Patagonia (the clothing company) recently became something similar too.
The onion layers let the for profit subsidiary make some money and get taxed on it, but they are ultimately beholden to the nonprofit parent's wishes.
I guess OpenAI just expects to get a lot of donations / quasi-investments. Hell, Mozilla survived for two decades on Google's teat, and all they do is make a browser and act as a thinktank.
If OpenAI can keep up the momentum, there'll probably be funders lining up for new arrangements after the Microsoft one expires. Shrug.
No one cared about the GPT store. Just like no one cared about Plugins. That was just marketing to show the technology is 'accessible' to non-technical people.
The question is how is this going to impact their API and the increasing demand for lower-cost, higher-performing models to be used by the masses? If they want to be like Google and keep everything behind closed doors for years that's their decision, but they need clear communication about it.
Serious wrongdoing with a smoking gun would have been the only justification for the board acting the way they did. This reflects very poorly on the OpenAI board. And it makes it more likely that this affair is far from complete.
Agree completely. Badmouthing someone as you fire them is immature, and would only be warranted in some kinds of very extreme circumstances, which this does not appear to be.
I predict that this is the beginning of the end for OpenAI. The company won't collapse overnight or anything, but its best people will start leaving in January, then its average people will leave over the course of 2024, and there will be little left by 2025. They may have enough money to live on as a zombie company for several years after that, but everyone will know that it is no longer what it once was or could have been.
That sounds very plausible. Another option is that, as more information like this becomes public, pressure will mount next week to restructure the board and the organisation and try to repair things.
One conceivable situation in which the board's behavior could be justified is if Ilya presented them with an urgent ultimatum: either Sam goes or I go. If the rift between Sam and Ilya was growing to the point that Ilya would rather leave the company than continue with Sam, it's a rational move for him to propose this. At this point the remainder of the board would be forced to make a decision as to whether Sam or Ilya is more vital to advancing OpenAI's mission. Choosing Ilya over Sam would certainly be a justifiable decision in such a scenario.
It would be unprecedented for a board to act in this way without proper reflection and consultation of various stakeholders. It seems all much too sudden and even though Ilya is an important person he does not by his lonesome overrule everybody else who has a stake in OpenAI and even if he did it could have - and should have - been handled more gracefully.
If the board ended up acting rashly I expect the next thing to happen is that the board itself is exiled.
You keep claiming this across the entire thread, yet people explained to you dozens of times that the board is the ultimate arbiter over the nonprofit. They are not going to get ousted, unless they start killing people or anything like that.
They all have enough money to be able to fight any legal challenges. This is not your regular, poor nonprofit paying slave wages.
You always claim "precedent" - so please show me: when has the board of a nonprofit even close to as high profile as OpenAI ever been removed by external pressure? As far as I can tell, we're in entirely new territory here.
I for one am glad Ilya is prevailing here - sama is the typical SV sociopath pushing growth at any cost. We really don't need any more of those guys at a place this important.
You are apparently not up-to-date with the latest developments. Please go read the other thread.
It is going pretty much according to the few ways in which it could go: lawsuit with unknown outcome (most probably by MSFT or other minority shareholder), reversal, board (or selected board members) resigns. The future timeline in which nothing changes and the board holds fast is not one that I think is viable, assuming Altman didn't murder anybody in broad daylight.
And while I in principle agree with your character assessment (though I wouldn't put it in those terms, I'm not qualified to diagnose people to that degree), that still doesn't mean you can do whatever you want in any way you want without consequences. The board forgot for a hot moment that they are first and foremost representatives of the stakeholders and must govern accordingly. To oust one founder and cause another founder to walk you have to have grave reasons and solid support from all of the stakeholders.
So far I have not seen these so the decision can't stand as it is. It's entirely possible that those grave reasons exist, and even then this should have been handled with some tact, not the blunt side of the axe.
Sounds like a discussion that can be held maturely, and evolve in the course of a couple of months, with the board trying to settle the argument, or putting pressure on Sam to leave. The MO just looks ridiculous.
Sam's own related tweet:
"curiosity powers progress, but ego often does the heavy lifting during the slogs"
My take is that while Sam's statement is not a popular thing to say or admit, it's more honest than Ilya's tweet, which sounds more like a grandiose ego trying to pretend it has no ego. I know which I prefer.
Either the board is now lying to the employees, or the board monumentally fucked up with their communication publicly (vaguely indicating some serious wrongdoing in their statement yesterday). Either way, the board look like morons. Concerning they're now in charge of some pretty powerful and important tech.
I think the fact that their largest partner and shareholder, Microsoft, wasn’t made aware of this earlier, or even given the chance of having this news drop after trading hours, shows the incompetence.
I'm not a lawyer but their original announcement seemed very close to libel and I would think Sam has some legal options should he choose to pursue them.
But on a cursory look it seems to me like the board's intentions are the right ones, while Sam seemed intent on making the most money possible with no regard for any of the org's founding principles. It's why the "Open" in the name had become a joke.
I had no hope that OpenAI would ever deliver on its original mission, because when has a commercial takeover ever been reversed?
This may or may not kill OpenAI's lead, but maybe at least it will bring back openness ...
And what will you do with all that openness? I personally hoped that OpenAI would continue to innovate and ship at insane speeds, and continue to cut costs so I can keep using their APIs. Providing the best AI for the lowest possible price is how they would have made me and the rest of the world better and more prosperous. Now we'll have to wait and see what they actually deliver.
This looks extremely stupid from the board and validates the theory that most of the board has no clue about large-organisation governance, being full of people who have mostly not seen boardrooms in action. It will be very, very hard for OpenAI to raise money with such a maverick board in place, and I doubt MSFT would want to give a penny more to them without some big changes. For the people saying they don't owe anything to MSFT: MSFT is the 49% stockholder in the LLC, and a 49% stockholder almost never gets treated like this anywhere else; it doesn't matter whether the remaining 51% is held by a nonprofit or not.
For people saying they shouldn't have done the MSFT deals: how else would they have gotten the money to build anything at all, considering how much GPUs cost? Their competitor Anthropic is doing the same, raising money from all of big tech, made possible only by the ridiculous success of ChatGPT. For others saying Ilya is the key: Google had a big lead in AI researchers, and the only reason Google is not the undisputed king is bad leadership and product development.
I would venture to say that it is a near certainty that those researchers who left Google originally to work for OpenAI are going to be wanting to head right back there.
I think a reasonable translation of this is something like: "He didn't do anything actually illegal, or outside of the realm of what he was empowered to do as CEO, but he was doing things we didn't like," and then either didn't tell the board about those things, or told them in a way that was framed to make them less controversial.
So yeah, to me, really backs up the narrative that the board and sama were in disagreement about some key things.
This may be the end of OpenAI. OpenAI's big advantage over Google was in productionizing the research and making it available commercially to the public. I think Sam Altman was a big part of that push for commercialization.
Now there is a good chance that the "true believers" in AGI have taken over and will want to focus on just trying to achieve that. There is a good chance that true AGI is decades or more away (if ever). Without a product to push, this pure research organization will produce a lot of really cool papers (maybe) but not much else. As the talent sees that there are better economic prospects elsewhere, they will leave.
I think it's a bit naive to believe you can make the jump from chatgpt to AGI. IMHO, those should have been 2 separate tracks where they milked the LLM hype as much as possible and used it to fund AGI research.
They can spin it how they want, but it'll 100% turn out to be an AI doomerist panic reaction. They didn't even inform MSFT about it, which is absolutely ridiculous.
Yeah it seems like not entirely candid may turn out to be something like "he said he was a super-doomer and pinky swore it to us, but then we realized he wasn't doomer enough for us".
If it's not malfeasance about finance, business, safety or security practices and is merely a "breakdown in communications" then that rules out virtually everything that could justify such an abrupt firing. Sounds like there's no legal problems to worry about, leaving an incompetent board as the only obvious possibility left.
When I was younger I remember thinking non-profit orgs were awesome and hoping to work in that space one day. This + the Wikimedia Foundation funding scandal + the proliferation of "misinformation" NGOs has really turned me off the whole concept. Nonprofits seem to have more than their fair share of problematic governance. There's something about commercial trade that grounds people.
> There's something about commercial trade that grounds people.
It's called shareholder lawsuits. I really do wonder about boards of non-public companies, though. I'd expect them to be, on balance, not much different from NGO boards or even HOA boards.
In retrospect, it wasn't the brightest move for Microsoft to be baking this into their OS, Office suite, search engine, etc. Having a core feature of your core products depend on a startup with a bizarre profit/non-profit structure is idiotic. Amazing they haven't been building this tech themselves - maybe that's their next move: poach as many of the OpenAI staff as they can and build in-house.
I think MSFT has full access to the underlying tech, not just a strategic partnership with OpenAI. I think their AI/LLM offerings have a future even in a world without OpenAI.
At the risk of getting blasted: I keep seeing all these people talking about how it's great OpenAI is being steered back to a research organization. How does one expect research to make money if not beholden to a product? Forget whatever your immediate reaction to that word is. I can already hear the "who needs another SV overhyped product" group, and you're ignoring the realities: people don't pay you to sit around thinking in industry unless your thinking can generate business value. You can dislike that, but it's the system we exist in.
So let me frame it another way, how does research expect to be supported if it's not providing material value? Research doesn't exist in a vacuum, if you don't want to be in industry you don't have to, we have academia for that.
I'm cool with OpenAI becoming more like Google Brain, but then it's Microsoft calling the shots. Except wait, the company structure is weird, so they're not really, and after seeing how Ilya handled this I don't expect him to be able to stomach that relationship long term.
There is a difference between the product being a way to fund the research and the direction being driven by the product.
If OpenAI ends up like a Google Brain, that doesn't mean Microsoft calls the shots, because Microsoft doesn't own any of the IP or the researchers. Microsoft just has a license to something that is almost certainly contingent on funding, and as long as the research is good, others will be happy to take their place.
OpenAI exists only due to massive cash injections from MSFT (and initially from Musk). Saying that they shouldn't have done the MSFT deal is like saying they shouldn't have started the thing.
Sorta yes and no. Huge capital injections became a necessity _after_ the release of ChatGPT. Pre-ChatGPT capital costs were much lower for OpenAI; the popularity of their chatbot (100M WAU, apparently) created an existential need for capital.
Before that, as a pure research organization, capital requirements were much smaller and (probably) could have chugged along without a player like MSFT, or at least a much lighter partnership.
Training GPT-3 required a massive amount of compute, not to mention paying top salaries for top talent. Billions were injected even before ChatGPT was anywhere near.
It seems OpenAI has ~500 employees (well, 495). This is being handled with somewhat enigmatic, ceremonial language fitting an imperial palace with different factions. Interesting to watch, but also a bit ridiculous.
This all revolves around what precisely it was that they claimed Altman wasn't candid about. As long as that isn't clear it is the board that has something to explain and in my opinion they don't have a whole lot of time and it had better be good because OpenAI will be hemorrhaging talent and brand equity until this is resolved.
I think this will ultimately be good for everyone, however I can imagine there’s a huge populace of OpenAI employees hoping this was a get rich quick route and are rethinking that. I know OpenAI has a different structure for reward than typical equity, but there had to be the thought this was going to be bigger than Apple and Google combined and there would be legions of billionaires in the making. If I were one of them, I would be seriously thinking about how to value my future at OpenAI.
Good. Considering they’re already getting millions in comp, I’d rather have people working on AGI that aren’t in it solely for the money. And it’s not like Cohere or Sam’s next company won’t be trying to poach researchers anyway, but at least one company (OpenAI) isn’t going for profit for sure now.
And now Sam is free to start another company which is incentivized by profit, giving him the ability to easily gobble up the part of the talent at OpenAI that was hoping for that.
He’s not the only one. But at least 4 top people have already left because of him leaving. He can clearly take people with him. That’s the comment you’re responding to. His power isn’t him alone. It’s everyone he brings with him.
Really? I'd be way more inclined to be pessimistic about it.
OpenAI is massively ahead of anyone else in the AI space, anything that shows problems in the organisation seems like it could set us back massively in the development of AI.
Several key people at OpenAI have also resigned, including Greg Brockman, who was removed from the board but told he was too important to lose so would remain in his other roles; he also tweeted not to worry, that bigger things are coming.
If Sam breaks away and does something else in this space, I can see others joining him, and although competition is healthy, I feel like it is just going to fracture our AI development: everyone is trying to catch up with OpenAI, and any new company would be starting from scratch and be years away from anything useful, while also leaving OpenAI a shell of its former self.
Hopefully not the start of bigger issues at OpenAI.
A flip side: AI development fragmentation will mean many more techniques are tried and know-how diffuses into a much broader market. This has the likelihood of greatly accelerating everything. Further, by lopping off the top of OpenAI you create space for new people to grow and learn and lead.
In the very short term it’s disruptive. In the medium and long term it’s probably a wonderful thing to soften the ground a bit and let competition take hold.
Brought this up in another thread and was downvoted. But yes, the 900k compensation package was probably more than 600k PPU equity, which is worth who knows what now.
For a salaried position where the base salary is above the legal minimum salary for wage & hour exemption, I don't see what the legal problem would be.
No? Offering a compensation package can't be tortious interference, tortious interference requires an existing contract, and action by a third party that (along with a whole bunch of other conditions) causes a breach of that contract.
Well, you have to specify a jurisdiction. Let’s go with California since that’s where a lot of OpenAI’s employees are. Here’s what you need:
1. an economic relationship existed between the plaintiff and a third party which contained a reasonably probable future economic benefit or advantage to plaintiff;
2. the defendant knew of the existence of the relationship and was aware or should have been aware that if it did not act with due care its actions would interfere with this relationship and cause plaintiff to lose in whole or in part the probable future economic benefit or advantage of the relationship;
3. the defendant was negligent;
4. and such negligence caused damage to plaintiff in that the relationship was actually interfered with or disrupted and plaintiff lost in whole or in part the economic benefits or advantage reasonably expected from the relationship.
Yes, it's open and shut that offering a compensation package doesn't meet that, starting with #1.
And you can't make that case with the later decision by the nonprofit board (even if one assumes that an entity that has complete control of a party can even count as a "third party" for tortious interference rather than just being a source of breach of contract if it actually induces a breach), because the OpenAI Global LLC operating agreement expressly sets this out as normal, so there is no reasonable expected benefit of the relationship that is being interfered with.
If OpenAI Global LLC offered its employees the PIU/PPU as compensation but did not disclose (or, a fortiori, actively concealed), you could make a case for fairly simple fraud. But tortious interference is just not a tort that works here.
I don't follow. You're saying PPUs granted by the for-profit don't carry a reasonably expected benefit? And that a parent can't tortiously interfere with a subsidiary? Both are factually untrue. I don't see the wiggle room here that you do.
> I don’t follow. You’re saying PPUs granted by the for-profit don’t carry a reasonably expected benefit?
Yes, a profit sharing claim in an LLC whose operating agreement explicitly says it will be operated for the charitable purpose of a controlling nonprofit and that profit will not be its guiding principle does not create a reasonable expectation that it will be managed for profit.
Either the employees have that information disclosed, and there is no expectation, or it is concealed by the subsidiary employing them, in which case there might be a reasonable expectation, but it's due to fraud by the subsidiary, not interference by the parent.
I think we need Sutskever to do a speech now to tell us what _his_ vision is exactly and what is going to happen to the GPT Store especially.
Possibly followed by Murati, but also maybe her comments won't be that relevant if Sutskever is really as into shutting down product development as it seems.
I'm curious what recourse Microsoft has, if any. Presumably there's some clause protecting them against openAI self-immolating?
Interestingly, the situation is fully reversible for now:
1. Majority of employees sign a strong letter condemning the board and calling for their resignation.
2. MSFT threatening to activate whatever protective clause they have
3. Other investors/donors threatening lawsuits.
4. Sutskever et al. resign, a new board is appointed, and Sam and Greg come back.
Things could be back to business as usual by Monday morning.
> I'm curious what recourse Microsoft has, if any. Presumably there's some clause protecting them against openAI self-immolating?
Given the disclaimer attached to the operating agreement of OpenAI Global LLC (the entity Microsoft is directly invested in), which is also reposted in OpenAI's own description of its structure, stating that investment in the firm should be treated as a donation, that the firm has no obligation to seek profit, and that its primary goal is to serve the charitable mission of the nonprofit, which will trump all other concerns including profit, I doubt very much that that is the case.
Now, if there were actual evidence that the board (which is, remember, the board of the nonprofit) was not making a bona fide effort to serve its understanding of the charitable purpose of the nonprofit, but was instead serving some unrelated (especially profit-oriented) interest, that would perhaps be grounds for a lawsuit, but most of the concerns being expressed are exactly the opposite -- that the board is putting its view of the nonprofit's purpose ahead of commercial interests that people wish the firm would pursue. Which is exactly what the entire structure (and the laws governing charities) are set up to ensure...
The LLM craze would certainly exist without Ilya. Google was doing LLMs long before OpenAI and had so many top-tier researchers, including Hinton. OpenAI didn't beat Google in pure research, only in product development. And Oppenheimer is known more for his leadership at Los Alamos than for his scientific theories, so a comparison to Altman is apt.
GPT-2 was the first model that created long-form and coherent text.
The research led to a technology with a simple use case: input / output.
The "product" is a chatbot. input / output.
There is no innovation in interface or product design.
All the magic – all the "product innovation" is in it creating coherent text output.
(OpenAI's UI is not really that great).
All attempts at "product" (as in serving a specific use case tailored to a specific business or commercial workflow) that OpenAI has led to date – i.e. plugins – have been a huge flop.
I am uncertain who is behind the assistants interface design that makes RAG so easy. Its usability is underpinned by the larger context window of GPT-4T.
All of the magic is in the underlying technology. And I think Ilya's at the core of it. Yes, "Attention Is All You Need" is the Google paper that invented the transformer (see the minimal sketch of the mechanism below).
Ilya also worked at Google and was a key part of the ecosystem, and I imagine had indirect (and most likely very direct) contributions.
He is one of the most cited computer scientists of all time.
Without Ilya, I don't think we'd have the hype right now.
Someone would have come to the same conclusion, I'm sure.
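For anyone who hasn't read the paper: the core operation it introduced, scaled dot-product attention, is only a few lines. Here is a minimal numpy sketch, illustrative only; real transformers add multiple heads, masking, and learned projections, and this is obviously not OpenAI's or Google's actual code.

    # Scaled dot-product attention from "Attention Is All You Need"
    # (Vaswani et al., 2017). Minimal illustrative sketch.
    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Q, K: (seq_len, d_k); V: (seq_len, d_v). Returns (seq_len, d_v)."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                  # query/key similarity
        scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
        return weights @ V                               # weighted sum of values

    rng = np.random.default_rng(0)
    Q = rng.normal(size=(4, 8))   # 4 tokens, d_k = 8
    K = rng.normal(size=(4, 8))
    V = rng.normal(size=(4, 8))
    print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 8)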
Honestly at this point the theory that makes most sense to me is that OpenAI got a new internal result which was notable enough that there was a major disagreement at the board level about how to respond to it.
Which feels like complete science fiction, but it comes closest to explaining why the non-profit board would move so quickly and disruptively.
This seems to align with the stuff Kara Swisher was hearing, and suggests Sam was "not consistently candid" about product announcements or something similar.
The sudden and mysterious ousting of a popular CEO at 4pm on a Friday (!) from what is the most talked about company currently is a very interesting story. Can’t understand this position.