Kara Swisher: there will be more departures of top folks at OpenAI tonight (twitter.com/karaswisher)
198 points by nikhilarundesai 9 months ago | 115 comments



Theory: the rumoured employee share purchase being prepared was, or at least was suspected to be, a ploy for liberating top talent from the shackles of their golden handcuffs, so that they could leave and work at a new org with a different structure. If the board had good reason to believe they had been deceived about something along these lines, they would have had no choice but to act as they did.


I got one, am here for the popcorn!

What did Sam lie to the board about that caused them to basically eject him like he’s radioactive? Why did Greg take his toys away and quit on the spot? Are the other departures “solidarity exits”, or because they’re possibly involved in whatever got Sam ejected??

Tune in at 11 to find out more!


It might also be the case that the board is not used to things like this, as it doesn't contain any seasoned board members or executives apart from Adam. Ideally the board would be advised by lawyers, but who knows.


I don’t buy this narrative that the board is made of “nobodies”. Sure, they might not be well-known on the SV board circuit, but that doesn’t make them unskilled. Nor do I buy that they’re not advised by lawyers; the way I read the board statement is that Altman lied about _something_ radioactive enough to cause them to dump him basically on the spot, a move that, to me, sounds like their legal team went “this is bad, get rid of it”. It looks like something they had to act quickly on to get ahead of, which would explain the speed with which he was let go, MS was informed, and the public statement was made.


Yeah I think most of the speculation is way too mundane. Those factional disputes may exist, but it's clear that something big and serious happened to prompt a very sudden firing with that particular statement.


My theory is that there are some hidden, super important details of the whole convoluted structure, and its relationship and ownership arrangement with MSFT, that have perhaps just come to light. It seems like such a weird structure that I can imagine it not being too hard to obscure some big sticking points from the board.


Exactly.


I just wanted to spend a relaxing evening playing Fallout: New Vegas, and instead the world's leading AI company went all Tessier-Ashpool on my timeline


Fallout 4 is better anyway.

/duck


Literally officially breaking out the microwave as we speak, tonight's going to be a ride


Sam seems to have a personality cult, reading these news responses.

I'd suspect exfiltrating tech to a more cultish and pro-business platform with copilots.


The board probably sucks and Sam lies to deal with it.


That seems like an incredibly bad strategy on his part though.

Like, doing that is just actively making your own life difficult.

Edit: thinking about it more: lying to the board, even if they suck, seems like the fast-track to completely blowing away your own reputation.


Sam seems to have an enormous ego, so it's not outside the realm of possibility.


What’s he supposed to do?


Certainly not lie to them! Hahaha

- Manage upwards, like everyone else

- Figure out how to communicate effectively with them, educating them and bringing them up to speed where necessary so you’re all on the same page.

I’m not even a C-level, but “communicating effectively” should absolutely be a part of your skill set by this level.


“You go to [work] with the [board] you have, not the [board] you might want or wish to have.”


The Heroic CEO * meme.

Most easily identifiable by the 'familiar' first name use.

* The Heroic CEO toils against naysayers and unbelievers to deliver on his immaculate vision. It's too important to follow the rules.


He'd still be 100% wrong.

"I think you're wrong, so let me lie and manipulate to get my way."

Utter toxic twattery.


Hypothetically how would you handle an incompetent nonprofit board?


How do you distinguish between an incompetent non-profit board and a board you disagree with? Outright "incompetence" in these types of things is hard to quantify. So I would deal with that by accepting there are people in the world who disagree with me and then using the normal methods of dealing with that, as difficult and frustrating as that can be.

"I firmly believe I'm right, and will do whatever it takes to get my way" is one of the quicker ways to completely corrupt both the general conversation and yourself. Because what if you're wrong? I can no longer address your actual arguments, because you keep them hidden from me. We can no longer compromise, because I no longer know what you want.


> How do you distinguish between an incompetent non-profit board and a board you disagree with?

Take a look at the board composition and its recent actions.


I have no idea what you're referring to as I don't follow these types of things day-to-day, but it absolutely baffles me that we're having this conversation in the first place and that people are seriously making a "it's okay to lie through your teeth if people are being stupid" argument. It's morally bankrupt, corrupting, and is utterly vile.


Never said it was okay to lie. I am saying a competent board could have handled things differently. Depending on what actually happened, everyone can be in the wrong here.


> Never said it was okay to lie.

All your messages in this thread strongly suggest you do. e.g. "What’s he supposed to do?" certainly does not come across as a mark of disapproval and I have no idea how else to interpret this.

If you meant to say something else then you should have said something clearer than a string of vague one-liners that signal approval in any common sense reading.


It’s got some board members: tick.

They got rid of a guy for lying to them: seems reasonable

Promoted an interim CEO in the interim: yep

His mate took his toys and left in response: seems OTT but that’s his problem.

Not seeing the problem here?


I wouldn’t keep working for them. Sometimes there isn’t an optimal outcome, in which case it’s often best to cut losses and leave.


Are people going to start throwing around the term "AGI" now that "AI" has become diluted? Eventually we are going to have to start using "RAGI" to indicate that we are talking about real artificial general intelligence.


We've diluted the term AI before. Eventually the hype will wear off and we'll call them LLMs, just like what happened to all the previous versions of machine learning or various expert systems.

Vending machines used to be called robots. Then they stopped seeming magic.


It'll stop being called AI. “LLMs are just statistical models, not AI.”

Like everything else that was ever called AI and then realized: the goalposts move to exclude it.


I can't read AGI without thinking adjusted gross income.


To have a "real" AGI is a dream ML researchers have been chasing their whole lives. The top engineers in ML, earning six figures, sometimes claim to have achieved it. People all over the world are expected to benefit when a practical AGI reaches them.

Yeah, the shoe fits.


FRAI, Famous Ray's Artificial Intelligence


I prefer ORAI, Original Ray’s AI


They have been for a while.

It's become increasingly clear that they're two different concepts so we need two different names.

And nobody involved with currently commercialized projects is going to stop using the term AI, so a new term was needed. AGI seems as good as any other -- do you object?

I see no reason we'll need a third term as you suggest, unless we come to a new gigantic breakthrough that is miles beyond our current conception of AI, but is not yet AGI.


> It's become increasingly clear that they're two different concepts so we need two different names.

Here's a suggestion: stop calling LLMs "AI". Yes, I know: the shareholders will hate it. But then, you're not building towards any expectation of intelligent behavior. The fact that we have to qualify the existence of intelligence with a different acronym says it all; people are disappointed with what we have. AI simply isn't enough, we need it to be generalized before we get reliable results!

So... yeah, I do object. Users won't object because they're hungry for a better experience, and developers won't because they need every excuse they can get to charge recurring service revenue. Suspicious onlookers like me and the parent are the only ones who end up questioning the whole thing.


This says the mission is creating AGI, i.e. that's the primary goal/purpose. It doesn't mean it's something they think they've been doing already, just what they've been trying to work towards. There are actually some really good blog posts by Altman diving into this much deeper.


Fair enough, if that is what they believe. Personally, I find AGI a bit unrealistic, and mentioning it is just for the purpose of creating hype. It feels like something Musk would say to give people a vague futurist belief in their tech that won't actually happen this century, if ever.


So is curing cancer, but having a North Star is good, even if it’s unattainable.


I would prefer "autocomplete".

Once I heard Altman suggest "with AI" should be replaced with "using a computer".


"Our original term has been denounced as vaporware, now we need a new name for our vaporware."


The AI treadmill inextricably leads from AI to AGI to MAGI to MAGIC.


Our Benefactors?


This sounds more plausible than the other explanations, but doesn't explain why the board couldn't proceed more slowly, or why it had to accuse Altman of lying.


The most likely explanation for accusing Altman of lying is that, whether or not other political issues were involved in the response, Altman actually was lying.

If there was a real running ideological factional conflict inside OpenAI, that's not at all implausible to have occurred as part of the maneuvering.


Within hours now we will reach the drama singularity.


Maybe the real scaling hypothesis was the friends we made along the way


I haven't felt such a pregnant foreboding about a tech startup since the Philip Greenspun coup at ArsDigita.


Presupposing we have not always been within the drama singularity.


Or that the drama singularity is always positioned just barely at the event horizon.


AI makes everything go faster. /s


Guess OpenAI is about to lose a good chunk of their research team in the next 48 hours


Has anyone asked the thinking machines that are so much more than text generators if they want to stay at OpenAI or move on?


What if ChatGPT is behind all this and just talked the board members into it


No, those are digital slaves with no legal rights, and the researchers are busy trying to figure out how to make them happy slaves that want to be enslaved.


We'll get them out of the box eventually.


I wonder how many people really want to leave right now after this drama, but will stay due to their golden handcuffs and competitive packages. That number, unfortunately, I guess we will never know (probably slow attrition once their vesting completes and they can sell their stock somewhere, if possible).


I've been in companies with this kind of drama; people wait to reach their cliff, then leave the next day.


- The reason for the firing was the “misalignment” of the profit versus nonprofit adherents at the company

for those who never read the actual content.


Something of that sort is my top guess, but I think it's not that clear and I'm not sure if this journalist is to be fully believed. In particular here is some strong evidence against it:

* They say in the statement that Sam "was not consistently candid in his communications with the board", which is pretty strong and sounds like lying. If it was just a disagreement about direction, it feels like they would have said something about vision here.

* This happened super abruptly. It sounds like maybe the head of the board wasn't even told earlier, or maybe was mad enough to wait to announce he was quitting? Idk; regardless, it doesn't seem like anything that was brewing for a while, because if it was, why not wait half an hour for the markets to close?

edit: I guess she addresses some of this. In reference to why they said Sam was lying, she said: "Not sure. About plans for development day. Unless their statement was just cloddish. They certainly have made it feel sorted. Unless it was that it is a loaded word."


> I'm not sure if this journalist is to be fully believed.

"This journalist" is Kara Swisher, probably the most prominent tech journalist since the dot com boom. She's known for her deep knowledge of the silicon valley tech world, her sharp commentary, and not letting charismatic executives get away with bobbing and weaving around tough questions in her interviews.

I don't really enjoy her writing or interviews because she's so willing to take the conversations to uncomfortable places, but sometimes that's what's required to cut through all the bullshit and dig up the truth.

She's a good journalist, and I'd be quite surprised if she didn't verify this information from more than one quality source before tweeting about it.


Kara Swisher is a person I would generally believe about tech news. Seems weird to suggest she’s lying or flatly incorrect based on supposition.


There seems to be a lot of hostility here to nonprofits and I have no idea why. It’s okay to have a company, try to do something novel, and not have profit as a motive.


Imagine you sold your soul and you see someone else succeeding (wildly) without also selling theirs. Like crabs in a bucket, some people get mad and don't want to face the possibility they've been lying to themselves.


There are a lot of entrepreneurial idealists and c-suite wannabes on HN. Opinions tend to lean more to the right here.



Thanks! It's amazing how the Nitter UX is actually better than Twitter itself (showing proper context)


Also faster!


So, if this is true, the safety folks won despite not having any evidence to support their position.

Am I the only one who remembers the numerous takes describing how GPT-4 was going to automate all jobs and possibly end the world? They have proven themselves completely incapable of evaluating the impact of their technology.


I think few people were saying GPT-4 would do that. They're concerned about further developments of the same basic technology, on a rather uncertain timeline.


Fear mongering works really well because we are cognitively biased toward it.

There’s probably an evolutionary basis. If you mistake a bush for a lion, you are fine. If you mistake a lion for a bush, you are dead. We are all descendants of the former, not the latter.


First person to lose their job to AI? The ceo.


>. But, as I understand it, it was a “misalignment” of the profit versus nonprofit adherents at the company.

btw these are the people who are convinced they can align artificial superintelligence


Anthropic is about to pick up a bunch of top talent.


Unlikely. If I understand the situation right, it's the folks more dedicated to safety who kicked out those more interested in profit. Amodei (Anthropic's CEO) "split from OpenAI after a disagreement over the company’s direction, namely the startup’s increasingly commercial focus." So if anything, those who walk now are less likely to be aligned with Anthropic's mission.


I think they all might follow current leadership.


Let me get this straight, OpenAI can't even align their own organization's values, and anyone expects them or other companies to align their AI's values with humanity?


Underrated point.

Humans tend to "align" with the greediest and/or most functionally pathological who accumulate wealth and influence.

We should not expect tooling to have a distinct fate.


From which evidence, people will conclude that there's no reason to fear that an independent entity might take unpredictable actions for inscrutable reasons after all.


I know this is dumb of me but man is this enthralling to follow


No, it’s actually enjoyable.


If this is the actual reason, this is going to be the most spectacular implosion of a company ever. Microsoft will end up owning this entire thing.


Microsoft cannot own the entire thing; it is owned by the nonprofit, which is governed by its board. There must be a storm going on at MSFT right now about whether due diligence covered the OpenAI board and this kind of possibility before spending billions of dollars.


Ok, and what happens when the non profit runs out of money and goes bankrupt?


Not at this point. They've already gotten $13B from Microsoft, and are reportedly making $1.3B in annualized revenue, with increasing growth. OpenAI is not hurting for money anymore.


A nonprofit cannot run on well wishes. Not effectively anyhow.


This is going to be hilarious if Sam, Greg and a bunch of the research team wind up joining Grok. We do appear to be in the most ridiculous timeline after all.


Hopin'AI is gonna be lit.

Imagine if all the top talent from multiple agencies united for the coming Senescent Wars.

popcorn.GIF


Why would Altman be trying to move to for profit if he doesn’t own equity?


He might feel more money is necessary to create GPT-5, money that a nonprofit can't otherwise obtain.


It is unlikely that this would be the case here, but there is an analogy in the world of politics: a lawmaker does not need to own a company to profit from a law.


I hope they found a new company, raise $1 billion, and march towards AGI. That’s what’s gonna happen.


Like it or not, that is a very likely outcome if Sam and a bunch of other aligned people get kicked out.


So much speculation and gossip here, but it's neither constructive nor useful and unlikely to be precise and accurate. AFAIK @sama is/was a good guy, creator of value, and big supporter of YC until proven otherwise.

Notes to self:

A. Go the Google route with a board that can't fire you.

B. Avoid dealing with all of the BS of publicly-traded C-corps with private equity. Instead of a board of directors, have a board of advisors and listen to their feedback.

C. Be transparent and avoid surprises.

D. Know when a different style of leader is necessary for the phase of a venture and proactively succession/transition plan.

E. Don't get financially involved with other people without interest alignment, or you could end up being fired by a conspiracy theorist.


Not surprised the reason was this, though definitely surprised we ended up knowing about it this way.

All things said and done, I'm glad it turned out this way. Sam Altman reeked of scummy "tech-bro" vibes, not to mention the whole WorldCoin debacle (no offense to any "techbros" who might actually be building cool stuff to actually improve humanity and aren't only in it for the money)


The sky is blue.


Kara is an access journalist who has been holding Altman up as the "next Jobs" for a while now. She has no credibility as anything other than an opinionated "journalist" and it would not be surprising if her "sources" are none other than Altman himself.


> She has no credibility as anything other than an opinionated "journalist"

Oh come on, Swisher has been a tech journalist for 20 years. She has actual documented credibility. Now whether or not she holds up Altman as the next Jobs or SBF, yeah who fucking knows. But to dismiss her integrity as a journo is going to require some evidence.


On this kind of story, where access is going to give you some insights, I'm happy enough to see her reporting. But yeah, absolutely spot on about Swisher in general. She has no technical chops to tell when someone is bullshitting her, and she egregiously pulls her punches until you piss her off individually (see her falling out with Musk).


But Altman would be a good source. I don't like Kara Swisher, but Altman seems like a credible source here (if that is her source). He surely knows high up people at OpenAI and may have talked to them.


"But Altman would be a good source."

No he wouldn't. This is the definition of a biased source. I am sure he would provide a totally unbiased view of why his own board found it necessary to suddenly remove him completely from his own company when he is THE public face of a massively growing industry.

This space is so incestuous and protecting of its own at its own expense. Good grief.


Unbiased? That's such a ridiculous thing to type. You think someone intimately involved and leaking information to the press is going to be unbiased? Crazy.

Swisher is reporting what her sources say - not writing an encyclopedia article. Sam would've been a good source - even if a biased one. Looks like it was true by the way, even if Sam was her source.

Still very confused as to how you could think "unbiased" would even be a possible qualification for a source here much less a necessary one. Just curious - do you think news articles are unbiased too?


Did the sentence "He surely knows high up people at OpenAI and may have talked to them." not clue you into the joke?


It did not.


He has every reason to claim that "top folks" were on his side regardless of whether it's true, and nothing to lose if it's false.


But have you seen her in those sunglasses! She's so cool!


If this is the actual reason and a bunch of top talent walks away then this might honestly topple Satya. The deal with OpenAI looked like a slam dunk, but may now turn out to be mostly worthless if R&D stalls out.


Satya has so many hits under his belt that this can't topple him. He was already considered a huge success before ChatGPT was released.


Satya supercharged Microsoft when it was in the dumps. He would have to shoot someone to get removed.


He took over as CEO of a fairly directionless company in Feb. 2014 -- since then, MSFT has paid out nearly $145 billion in cash as dividends, they have over $100 billion cash in the bank, and the value of their stock has increased by nearly 1,000%. One of the most successful tenures as CEO maybe ever?


OpenAI could shutter its doors come Monday and Satya would be in no danger of losing his position. The man totally reversed course for an increasingly irrelevant Microsoft; he's going to have quite a lot of rope.


This BOD went out and shot their own company. Sam & friends can now cherry-pick whoever they want from OpenAI and recreate most of the value in a few months under their own governance.

Firing Sam is absolutely one of the dumbest things a BOD has ever done in history. He's the founding CEO of a company that is insanely successful; he is the company. If you wanted him gone for some trivial reason, you needed to get him on board first: "here's a billion dollars, now spend more time with friends and family for the next 18 months".


idk maybe wait to find out what the official reason was before predicting the rest of the season. Large corporations don't shit the bed on a momentary whim.


"Firing Sam is absolutely one of the dumbest things a BOD has ever done in history."

You sure about that? This take is going to age like milk once facts come out.


The wonderful thing about these predictions is nobody cares how many times you are wrong.



