Theory: the rumoured employee share purchase being prepared was, or at least was suspected to be, a ploy for liberating top talent from the shackles of their golden handcuffs, so that they could leave and work at a new org with a different structure. If the board had good reason to believe they had been deceived about something along these lines, they would have had no choice but to act as they did.
What did Sam lie to the board about that caused them to basically eject him like he’s radioactive? Why did Greg take his toys away and quit on the spot? Are the other departures “solidarity exits”, or because they’re possibly involved in whatever got Sam ejected??
It might also be the case that the board is not used to things like this, as it doesn't contain any seasoned board members/executives apart from Adam. Ideally the board would be advised by lawyers, but who knows.
I don’t buy this narrative that the board is made of “nobodies”. Sure, they might not be well-known on the SV-board-circuit, but that doesn’t make them unskilled. Nor do I buy that they’re not advised by lawyers; the way I read the board statement is that Altman has lied about _something_ radioactive enough to cause them to dump him basically on the spot, a move that, to me, sounds like their legal team went “this is bad, get rid of it”. The speed of it especially makes me think it’s something they had to get ahead of quickly, which would explain how fast he was let go, MS was informed, and the public statement was made.
Yeah I think most of the speculation is way too mundane. Those factional disputes may exist, but it's clear that something big and serious happened to prompt a very sudden firing with that particular statement.
My theory is that there are some hidden, super important details of the whole convoluted structure, and its relationship and ownership arrangement with MSFT, that have perhaps just come to light. It seems like such a weird structure that I can imagine it not being too hard to obscure some big sticking points from the board.
I just wanted to spend a relaxing evening playing Fallout: New Vegas, and instead the world's leading AI company went all Tessier-Ashpool on my timeline
How do you distinguish between an incompetent non-profit board and a board you disagree with? Outright "incompetence" in these types of things is hard to quantify. So I would deal with that by accepting there are people in the world who disagree with me and then using the normal methods of dealing with that, as difficult and frustrating as that can be.
"I firmly believe in my right, and will do whatever it takes to get my way" is one of the quicker ways to completely corrupt both the general conversation and yourself. Because what if you're wrong? I can no longer address your actual arguments, because you keep them hidden from me. We can no longer compromise because I no longer know what you want.
I have no idea what you're referring to as I don't follow these types of things day-to-day, but it absolutely baffles me that we're having this conversation in the first place and that people are seriously making a "it's okay to lie through your teeth if people are being stupid" argument. It's morally bankrupt, corrupting, and is utterly vile.
Never said it was okay to lie. I am saying a competent board could have handled things differently. Depending on what actually happened, everyone can be in the wrong here.
All your messages in this thread strongly suggest you do. e.g. "What’s he supposed to do?" certainly does not come across as a mark of disapproval and I have no idea how else to interpret this.
If you meant to say something else then you should have said something clearer than a string of vague one-liners that signal approval in any common sense reading.
Are people going to start throwing around the term "AGI" now that "AI" has become diluted? Eventually we are going to have to start using "RAGI" to indicate that we are talking about real artificial general intelligence.
We've diluted the term AI before. Eventually the hype will wear off and we'll call them LLMs, just like what happened to all the previous versions of machine learning or various expert systems.
Vending machines used to be called robots. Then they stopped seeming magical.
To have a "real" AGI is a dream ML researchers have been chasing their whole lives. The top engineers in ML, earning six figures, sometimes claim to have achieved it. People all over the world are expected to benefit when a practical AGI reaches them.
It's become increasingly clear that they're two different concepts so we need two different names.
And nobody involved with currently commercialized projects is going to stop using the term AI, so a new term was needed. AGI seems as good as any other -- do you object?
I see no reason we'll need a third term as you suggest, unless we come to a new gigantic breakthrough that is miles beyond our current conception of AI, but is not yet AGI.
> It's become increasingly clear that they're two different concepts so we need two different names.
Here's a suggestion: stop calling LLMs "AI". Yes, I know; the shareholders will hate it. But then you're not building towards any expectation of intelligent behavior. The fact that we have to qualify the existence of intelligence with a different acronym says it all; people are disappointed with what we have. AI simply isn't enough; we need it to be generalized before we get reliable results!
So... yeah, I do object. Users won't object because they're hungry for a better experience, and developers won't because they need every excuse they can get to charge recurring service revenue. Suspicious onlookers like me and the parent are the only ones who end up questioning the whole thing.
This says the mission is creating AGI, i.e. that's the primary goal/purpose. It doesn't mean it's something they think they've been doing already, just what they've been trying to work towards. There are actually some really good blog posts by Altman diving into this much deeper.
Fair enough, if that is what they believe. Personally I find AGI a bit unrealistic, and I think mentioning it is just for the purpose of creating hype. It feels like something Musk would say to give people a vague futurist belief in their tech, something that won't actually happen this century, if ever.
This sounds more plausible than the other explanations, but it doesn't explain why the board couldn't proceed more slowly, or why it had to accuse Altman of lying.
The most likely explanation for accusing Altman of lying is that, whether or not other political issues were involved in the response, Altman actually was lying.
If there was a real running ideological factional conflict inside OpenAI, that's not at all implausible to have occurred as part of the maneuvering.
I wonder how many people really want to leave right now after this drama, but will stay due to their golden handcuffs and competitive packages. That number, unfortunately, I guess we will never know (probably slow attrition once their shares vest and they can sell their stock somewhere, if possible).
Something of that sort is my top guess, but I think it's not that clear and I'm not sure if this journalist is to be fully believed. In particular here is some strong evidence against it:
* They say in the statement that Sam "was not consistently candid in his communications with the board" which is pretty strong and sounds like lying. If it was just a disagreement in direction it feels like they would have said something about vision here
* This happened super abruptly; it sounds like maybe the head of the board wasn't even told earlier, or maybe was mad enough to wait to announce he was quitting? Idk; regardless, it doesn't seem like anything that was brewing for a while, because if it was, why not wait half an hour for the markets to close?
edit: I guess she addresses some of this and said this in reference to why they said Sam was lying "Not sure. About plans for development day. Unless their statement was just cloddish. They certainly have made it feel sorted. Unless it was that it is a loaded word."
> I'm not sure if this journalist is to be fully believed.
"This journalist" is Kara Swisher, probably the most prominent tech journalist since the dot com boom. She's known for her deep knowledge of the silicon valley tech world, her sharp commentary, and not letting charismatic executives get away with bobbing and weaving around tough questions in her interviews.
I don't really enjoy her writing or interviews because she's so willing to take the conversations to uncomfortable places, but sometimes that's what's required to cut through all the bullshit and dig up the truth.
She's a good journalist, and I'd be quite surprised if she didn't verify this information from more than one quality source before tweeting about it.
There seems to be a lot of hostility here to nonprofits and I have no idea why. It’s okay to have a company, try to do something novel, and not have profit as a motive.
Imagine you sold your soul and you see someone else succeeding (wildly) without also selling theirs. Like crabs in a bucket, some people get mad and don't want to face the possibility they've been lying to themselves.
So, if this is true, the safety folks won despite not having any evidence to support their position.
Am I the only one who remembers the numerous takes describing how GPT-4 was going to automate all jobs and possibly end the world? They have proven themselves completely incapable of evaluating the impact of their technology.
I think few people were saying GPT-4 would do that. They're concerned about further developments of the same basic technology, on a rather uncertain timeline.
Fear mongering works really well because we are cognitively biased toward it.
There’s probably an evolutionary basis. If you mistake a bush for a lion you are fine. If you mistake a lion for a bush you are dead. We are all descendants of the former not the latter.
Unlikely. If I understand the situation right, it's the folks more dedicated to safety who kicked out those more interested in profit. Amodei (Anthropic's CEO) "split from OpenAI after a disagreement over the company’s direction, namely the startup’s increasingly commercial focus." So if anything, those who walk now are less likely to be aligned with Anthropic's mission.
Let me get this straight, OpenAI can't even align their own organization's values, and anyone expects them or other companies to align their AI's values with humanity?
From which evidence, people will conclude that there's no reason to fear that an independent entity might take unpredictable actions for inscrutable reasons after all.
Microsoft cannot own the entire thing; it is owned by the nonprofit, which is governed by its board. There must be a storm going on at MSFT about the due diligence done on the OpenAI board, and whether this kind of possibility was considered before spending billions of dollars.
Not at this point; they've already gotten $13B from Microsoft, and are reportedly making $1.3B in annualized revenue, with increasing growth. OpenAI is not hurting for money anymore.
This is going to be hilarious if Sam, Greg and a bunch of the research team wind up joining Grok. We do appear to be in the most ridiculous timeline after all.
It is unlikely that this would be the case here, but there is an analogy in the world of politics: a lawmaker does not need to own a company to profit from the law.
So much speculation and gossip here, but it's neither constructive nor useful and unlikely to be precise and accurate. AFAIK @sama is/was a good guy, creator of value, and big supporter of YC until proven otherwise.
Notes to self:
A. Go the Google route with a board that can't fire you.
B. Avoid dealing with all of the BS of publicly-traded C-corps with private equity. Instead of a board of directors, have a board of advisors and listen to their feedback.
C. Be transparent and avoid surprises.
D. Know when a different style of leader is necessary for the phase of a venture, and proactively plan the succession/transition.
E. Don't get financially involved with other people without interest alignment, or you could end up being fired by a conspiracy theorist.
Not surprised the reason was this, though definitely surprised we ended up knowing about it this way.
All things said and done, I'm glad it turned out this way. Sam Altman reeked of scummy "tech-bro" vibes, not to mention the whole WorldCoin debacle (no offense to any "tech-bros" who are actually building cool stuff to improve humanity and aren't only in it for the money).
Kara is an access journalist who has been holding Altman up as the "next Jobs" for a while now. She has no credibility as anything other than an opinionated "journalist" and it would not be surprising if her "sources" are none other than Altman himself.
> She has no credibility as anything other than an opinionated "journalist"
Oh come on, Swisher has been a tech journalist for 20 years. She has actual documented credibility. Now whether or not she holds up Altman as the next Jobs or SBF, yeah who fucking knows. But to dismiss her integrity as a journo is going to require some evidence.
On this kind of story, where access is going to give you some insights, I'm happy enough to see her reporting. But yeah, absolutely spot-on about Swisher in general: she has no technical chops to tell when someone is bullshitting her, and she egregiously pulls her punches until you piss her off individually (see her falling out with Musk).
But Altman would be a good source. I don't like Kara Swisher, but Altman seems like a credible source here (if that is her source). He surely knows high up people at OpenAI and may have talked to them.
No he wouldn't. This is the definition of a biased source. I am sure he would provide a totally unbiased view of why his own board found it necessary to suddenly remove him completely from his own company when he is THE public face of a massively growing industry.
This space is so incestuous and protecting of its own at its own expense. Good grief.
Unbiased? That's such a ridiculous thing to type. You think someone intimately involved and leaking information to the press is going to be unbiased? Crazy.
Swisher is reporting what her sources say - not writing an encyclopedia article. Sam would've been a good source - even if a biased one. Looks like it was true by the way, even if Sam was her source.
Still very confused as to how you could think "unbiased" would even be a possible qualification for a source here much less a necessary one. Just curious - do you think news articles are unbiased too?
If this is the actual reason and a bunch of top talent walks away then this might honestly topple Satya. The deal with OpenAI looked like a slam dunk, but may now turn out to be mostly worthless if R&D stalls out.
He took over as CEO of a fairly directionless company in Feb. 2014 -- since then, MSFT has paid out nearly $145 billion in cash as dividends, they have over $100 billion cash in the bank, and the value of their stock has increased by nearly 1,000%. One of the most successful tenures as CEO maybe ever?
OpenAI could shutter its doors come Monday and Satya would be in no danger of losing his position. The man totally reversed course for an increasingly irrelevant Microsoft; he's going to have quite a lot of rope.
This BOD went out and shot their own company. Sam & friends can now cherry-pick whoever they want from OpenAI and recreate most of the value in a few months under their own governance.
Firing Sam is absolutely one of the dumbest things a BOD has ever done in history. He's the founding CEO of an insanely successful company; he is the company. If you wanted him gone for some trivial reason, you needed to get him onboard first: "here's a billion dollars, now spend more time with friends and family for the next 18 months".
idk maybe wait to find out what the official reason was before predicting the rest of the season. Large corporations don't shit the bed on a momentary whim.