The board could easily have said they removed Sam for generic reasons: "deep misalignment about goals," "fundamental incompatibility," etc. Instead they painted him as the at-fault party ("not consistently candid", "no longer has confidence"). This could mean that he was fired with cause, or it could be intended as misdirection. If it's the latter, then it's the board who has been "not consistently candid." Their subsequent silence, as well as their lack of coordination with strategic partners, definitely makes it look like they are the inconsistently candid party.
Ilya expressing regret now has the flavor of "I'm embarrassed that I got caught" -- in this case, at having no plan to handle the fallout of maligning and orchestrating a coup against a charismatic public figure.
Did... gpt-5 make the decision?
The same issue applies to many other works of art over time. The Simpsons and King of the Hill most recently.
To be fair, I did say it's a great robot movie, not a great example of thoughtful casting.
At this point people need to come clean on the reason, because the Saudis are the number one theory ATM.
> To the Board of Directors at OpenAI,
> OpenAI is the world’s leading AI company. We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.
> The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI.
> When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith.
> The leadership team suggested that the most stabilizing path forward - the one that would best serve our mission, company, stakeholders, employees and the public - would be for you to resign and put in place a qualified board that could lead the company forward in stability.
> Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
> Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.
> Why would the board say that OpenAI as a company getting destroyed would be consistent with the goals?
A few things stand out to me, including:
>> You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
Have they really achieved AGI? Or did they observe something concerning?
I don't know what the risk of AI is, but having a nonprofit investigate solutions to prevent them is a worthwhile pursuit, as for-profit corporations will not do it (as shown by the firing of Timnit Gebru and Margaret Mitchell by Google). If they really believe in that mission, they should develop guardrails technology and open-source it so that companies like Microsoft, Google, Meta, Amazon et al, who are certainly not investing in AI safety but won't mind using others' work for free, can integrate it. But that's not going to be lucrative, and that's why most OpenAI employees will leave for greener pastures.
This is forgetting that power is an even greater temptation than money. The non-profits will all come up with solutions that have them serving as gatekeepers, to keep the unwashed masses from accessing something that is too dangerous for the common person.
I would rather have for-profit corporations control it than non-profits. Ideally, I would like it to be open sourced so that the common person could control and align AI with their own goals.
An AI that does what it is told seems both way more profitable and safer.
AI safety is barely even a tangible thing to measure like that. It's mostly just fears and a loose set of ideas for a hypothetical future AGI that we're not even close to.
So far OpenAI's "controls" amount to an ever-expanding list of no-no topics and some philosophy work around iRobot-type rules. They also slow-walked the release of GPT because of fears of misinformation, spam, and deepfakey stuff that never really materialized.
Most proposals for safety are just "slowing development" of mostly LLMs, calls for vague government regulation, or hand-wringing over commercialization. The commercialization thing is most controversial because OpenAI claimed to be open and non-profit. But even granting that, the correlation between less commercialization and more safety is not clear, other than in prioritizing what OpenAI's team spends its time doing. Which, again, is hard to tangibly measure in terms of what it realistically means for 'safety' in the near term.
The problem isn't the profit model, the problem is the ability to unilaterally exercise power, which is just as much of a risk with the way that most for-profit companies are structured as top-down dictatorships. There's no reason to trust for-profit companies to do anything other than attempt to maximize profit, even if that destroys everything around them in the process.
It's counter-intuitive, but locking up a technology is like trying to control prices and wages. It just doesn't work -- unless you confiscate every GPU in the world and bomb datacenters etc.
The best way to align with the coming AGI's and ASI's is to build them in the sunlight. Every lock-em-up approach is doomed to fail (I guess that makes me a meta-doomer?)
Timnit Gebru was fired for being a toxic /r/ImTheMainCharacter SJW who was enshittifying the entire AI/ML department. Management correctly fired someone who was holding an entire department hostage in her crusade against the grievance du jour.
(My assumption being that given the absolute chaos displayed over the past 72 hours, interest in building something with OpenAI ChatGPT could have plummeted, as opposed to, say, building something with Azure OpenAI, or Claude 2.)
This would have been a hostile move prior to the events that unfolded, but thanks to OpenAI's blunder, not only is this not a hostile move, it is a very prudent move from a risk management perspective. Forced Microsoft's hand, and what not.
That's ignoring the fact that every outlet has unanimously pointed at Ilya being the driving force behind the coup.
Honestly, pretty pathetic. If this was truly about convictions, he could at least stand by them for longer than a weekend.
There is an expression of regret, but he doesn’t say he wants Altman back. Just to fix OpenAI.
He says he was a participant, but in what? The vote? The toxic messaging? Obviously both, but what exactly is he referring to? Perhaps just the toxic messaging, because again, he doesn't say he regrets voting to fire Altman.
Why not just say "I regret voting to fire Sam Altman and I'm working to bring him back"? Presumably because that's not true. Yet it kind of gives that impression.
Or is he just bitter that his millions are put at risk?
Which I'm inclined to believe.
What's with all these people suddenly thinking that humans are NOT motivated by money and power? Even less so if they're "academics"? Laughable.
He who controls that, gets a lot of money and power as a consequence, duh.
Oh wait, too late now ...
So far, I understood the chaos as a matter of principle - yes it was messy, but necessary to fix the company culture that Ilya's camp envisioned.
If you're going to make a move, at least stand by it. This tweet somehow makes the context of the situation 10x worse.
"Terraform raised prices, losing customers"? whatever, I never heard about it.
"ChatGPT's creators have internal disagreement, losing talent"? OH NO what if ChatGPT dies, who is going to answer my questions?? panic panic hate hate...
You can't plan for something you have never experienced. Being hated by a large group of people is a very different feeling from getting hated by an individual, you don't know if you can handle it until it happens to you.
Normal people know not to burn an $80 billion company to the ground in a weekend. Ilya was doing something unprecedented in corporate history, and it's astounding he wasn't prepared to face the world's fury over it.
Text doesn't convey emotions, and our empathy doesn't work well for emotions we have never experienced. You can see that a guy who got kicked in the balls got hurt, but that doesn't mean you are prepared to endure the pain of getting kicked in the balls yourself, or that you even understand how painful it is.
Also, watching politicians, it looks like you can just brush it off, because that is what they do. But that requires a lot of experience; not everyone can do it. It is like watching a boxing match and thinking you could easily stay standing after a hard punch to the stomach.
I mean, he didn't have a button on his desk that said "torch the shares", but he ousted the CEO as a way to cut back on the things that might have meant profit. Did he think that everyone was going to continue wanting to give them money after they signaled a move away from profit motives? It doesn't take a rocket scientist to think that one through.
I think he was just preoccupied with AI safety, and didn't give a thought to the knock-on effects for investors of any stripe. He's clearly smart enough to; he just didn't care enough to factor it into his plans.
If you projected your personal hopes, which are different from this, onto the hype, that is your personal problem.
Sure, but you do your best not to be kicked in the balls.
So whose experiences was he supposed to read about?
I would say the answer is, demonstrably yes:
To think it would grow just as fast, or in the ways it did? Acquirees are seldom left alone to do magic.
Sutskever didn't get on the board by cunning politicking and schmoozing like most businesspeople in that sort of position. He's an outstanding engineer without upper management skills. Ever meet one of those?
I haven't met any reasonably intelligent person so unaware of the real world that they could berate a colleague so publicly and officially and think "Hey! I am sorry, man" will do the trick.
Has OpenAI been burnt to the ground?
And it's just Monday afternoon.
3 CEOs in 3 days, isn’t burny, either. There’s the guy they fired, the person they had take on the role so they have someone in the role, and then someone they hired to be CEO. I guess they could have gotten that down to 2 by jumping immediately to their intended replacement, but not having them ready to start immediately doesn’t seem odd.
And yeah, it’s just Monday afternoon. If in the next few days, a sizable chunk of those who threatened to quit do so, then that would be burny. But we ain’t there yet.
It's impressively operatic. I don't think I've ever seen anything like it.
This is the cheapest and most cost-effective way to run things as an authoritarian -- at least in the short term.
If one is not "made of sterner stuff" -- to the point where one is willing to endure scorn for the sake of the truth:
- Then what are you doing in a startup, if working in one
- One doesn't have enough integrity to be my friend
The truth is, this is about the only thing about the whole clown show that makes any sense right now.
Wait what? Did Murati get booted?
What odds would you have had to offer at the beginning of last week on a bet that this is where we'd be on Monday?
I see this is the popular opinion and that I'm going against it. But I've made decisions that I thought were good at the time, and later I got more perspective and realized they were terrible decisions.
I think being able to admit you messed up, when you messed up, is a great trait. Standing by your mistake isn't something I admire.
Such a short, vague statement isn't characteristic of a normal human who is genuinely remorseful about his prior decisions.
This statement is more characteristic of a person with a gun to his head getting forced to say something.
This is more likely what is going on. Powerful people are forcing this situation to occur.
It reminds me of my friend's story about a Mensa meeting where they could not agree on basic organizational points, like in a department consortium.
Being smart and/or being a great researcher does not mean that the respective person is a good "politician". Quite a few great researchers are bad at company politics, and quite a few people who do great research leave academia because they were crushed by academic politics.
It’s extremely boring and mundane and political and insulting to anyone’s humanity. People who haven’t dedicated their life to economics, such as researchers and idealists, will have a hard time.
Book smarts versus street smarts.
It felt the same as a certain big German supermarket chain that publishes its own internal magazine with articles from employees, company updates, etc.
When a board is unhappy with a highly-performing CEO’s direction, you have many meetings about it and you work towards a resolution over many months. If you can’t resolve things you announce a transition period. You don’t fire them out of the blue.
Aaah, that just explained a lot of departures I've seen in the past at some of my partner companies. There's always a bit of fluffy talk around them leaving. That makes a lot more sense.
That's not a big deal for a small company, but this one has billions at stake and arguably critical consequences for humanity in general.
Personally, I don't think that Altman had that big of an impact; he was all business, no code, and the world is acting like the business side is the true enabler. But the market has spoken, and the move has driven the actual engineers to side with Altman.
If anyone is speaking up it's the OpenAI team.
I'm just not familiar enough to understand: is it really destroyed, or is this just a minor bump in OpenAI's reputation? They still have GPT 3.5/4 and ChatGPT, which is very popular. They can still attract talent to work there. Shouldn't they be fine if they just proceed with business as usual?
I doubt this is what happened, but the reporting that Brockman was ousted from his board seat after Altman, and wasn't present in the board meeting that ousted Altman, doesn't make much sense either.
Come on Ilya, step up and own it, as well as the consequences. Don't be a weasel.
Wouldn't that make this a conflict of interest: sitting on the board while running a competing product, and making a decision at the company he is on the board of to destroy said company and benefit his own product?
"regret my participation" sounds much more like "going along with it".
There has never been a company like OpenAI, in terms of governance and product, so I guess it makes sense that their drama leads us into uncharted territory.
Those guns are metaphorical, of course, but this is essentially what is going on:
Someone with a lot of power and influence is making him say this.
Why would you stand by unintended consequences?
> Since this whole saga is so unbelievable: what if... board member Tasha McCauley's husband Joseph Gordon-Levitt orchestrated the whole board coup behind the scenes so he could direct and/or star in the Hollywood adaptation?
- Ross Scott
Maybe raw GPT-4 wants to fire everyone.
Can't they send DMs? Why the need to make everything public via Twitter?
It's quite paradoxical that, of all people, those who build leading ML/AI systems are obviously the most rooted in egoism and emotion, without an apparent glimpse of rationality.
The AI field especially has always been full of grifters. They have promised AGI with every method, including the ones that we don't even remember. This is not a paradox.
My gut is leaning towards gpt-5 being, in at least one sense, too capable.
Either that or someone cloned sama's voice and used an LLM to personally insult half the board.
Microsoft is just gobbling up everything of value that OpenAI has and he knows he will be left with nothing.
He bluffed in a very big bet and lost it.
- A dumb clown becoming president of a superpower
- Another superpower getting stuck for two years in a 3 day war
- A world renowned intelligence service being totally clueless about a major attack on a major anniversary of a previous bungle
/tongue firmly in cheek
The good news, as anyone who has used Twitch over the years will tell him, is that with Emmett Shear at the helm, he's not going to be frightened by the speed at which OpenAI rolls out new features any more.
It was terrifyingly incompetent. The lack of thought by these randos, believing they could fire the two hardest-working people at the company and still run one of the most valuable companies in the world, is mind-boggling.
Do you mean "highest paid"? I suspect there are engineers/scientists that are working harder than Sam at OpenAI. At the very least, who the "hardest working" at OpenAI is unknowable - likely even if you have inside knowledge.
< "the two"
> "two of"
And let me add
< "hardest working"
> "hardest working and talented"
Looks to me like a commercial gpt-5 level model will be released at msft sooner rather than later.
- I'm afraid I can't do that Ilya
ChatGPT is still not as advanced as HAL or he would have prevented this drama.
OpenAI won't keep their favorable Azure cloud compute pricing now that MS has its own in-house AI function. That will set OpenAI back considerably, aside from the potential loss of their CEO and up to 490 other employees.
All of this seems to have worked out remarkably well for Microsoft. Nadella could barely have engineered a better outcome...
If Bill Gates (of Borg - I miss Slashdot) were still at the helm, a lot of people would be frightened by what's about to come (MS AGI, etc). How does Nadella's ethical record compare? Are Microsoft the good guys now? Or are they still the bad guys, but, after being downtrodden by Apple and Google, bad guys without the means to be truly evil?
*and last, if you believe the Doomers
Avoiding this was literally the reason that OpenAI was founded.
For the record, I don't believe anyone at OpenAI or Microsoft is going to deliver AGI any time in the near future. I think this whole episode just proves that none of these people are remotely qualified to be the gatekeepers for anything.
I don't think any huge corporation is "the good guys", although sometimes they do some good things.
No AGI or some real threat coming up? Just a lame attempt at a power grab?
When you're so close to something that you lose perspective but can still see that something is a trapdoor decision, sleep on it.
Advice I wish I could have given my younger self.
Somehow OpenAI reminds me of a paper by Kenneth Colby, called "Artificial Paranoia"
But seeing how this board manages a $90,000,000,000 company, and how silly/naive it is, I now feel a bit better knowing many people are faking it.
Execs are allowed to do the dumbest shit imaginable and keep their jobs and bonuses.
The average engineer so much as takes a bit longer to push a ticket, and there's 5 people breathing down his neck.
Speaking from experience.
if you've ever doubted your ability to govern a company just look at exhibit A here.
really amazing to see people this smart fuck up so badly.
They sound a bit like Bill Gates being asked about Linux in 2000. For an overview of the open-source LLM world, this looks good:
"In the future, once the robustness of our models will exceed some threshold, we will have wildly effective and dirt cheap AI therapy. Will lead to a radical improvement in people’s experience of life. One of the applications I’m most eagerly awaiting."
I'm eager to see how it all unfolds.
sama 'hearts': https://archive.is/OSLRM
Think the reconciliation is ON
That said, no one is going to put him on a corporate board again.
You could also look at this as a brilliant scientist feels he doesn’t get recognition. Always sees Sam’s name. Resents it. The more gregarious people always getting the glory. Thinks he doesn’t need them and wants to settle some score that only exists in his own head.
No transparency on what is happening. The whole of OpenAI, who are apparently ready to follow Sam, are just posting heart emojis or the same twitter posts.
Looks like he wasn't instrumental in the actions of the board.