Hacker News new | past | comments | ask | show | jobs | submit login
OpenAI's Murati Aims to Re-Hire Altman, Brockman After Exits (bloomberg.com)
95 points by himaraya 6 months ago | hide | past | favorite | 109 comments




The Information just reported this:

Breaking: Sam Altman Will Not Return as CEO of OpenAI

https://www.theinformation.com/articles/breaking-sam-altman-...


> Emmett Shear is New Interim CEO


Holy shit. This is huge. Emmett Shear is in the AI Existential Risk camp.


Technically Sam is too, he signed this letter: https://www.safe.ai/statement-on-ai-risk


Of course, this is a must from the board members. They certainly won't pick a CEO who's not in that camp.


> My understanding is that Sam is in shock.

https://twitter.com/emilychangtv/status/1726468006786859101?...


“Interim CEO Mira Murati plans to rehire Sam and Greg, and is in talks with board rep Adam D’Angelo to do so (in what capacity is not yet finalized). However, concurrently, the OpenAI board is looking to hire its own CEO, and has reached out to two candidates that we’ve spoken to, both prominent execs”

https://x.com/emilychangtv/status/1726457543629914389?s=46

I can’t imagine why any CEO would want to take the job and be Sam’s boss. There’s no way that goes well.


Gotta love that Adam D'Angelo, who has a clear conflict of interest between Poe and OpenAI's GPTs release from Dev Day, is still on the board and is now somehow the one tasked with leading this negotiation from the board's side.


There's no conflict of interest with (what he believes to be) OpenAI's principles and his job as a board member. There was no conflict in reality either until two weeks ago.


Lmao, this just clicked for me. I was reading about the many iterations of the OpenAI board's various conflicts of interest. But I hadn't used Poe and didn't realize it was basically characters/agents.


That you can charge people to use: literally almost exactly what GPTs are (except that GPTs are supposed to be funded by a revenue share from ChatGPT Plus subscribers).

https://poe.com/earnings_tos


I interpret the article as saying she's working on rehiring Greg as president and Sam as CEO, in "parallel" with the board looking for a different CEO.


>I can’t imagine why any CEO would want to take the job and be Sam’s boss. There’s no way that goes well.

She could be doing a move from The Expanse where she's Jim Holden and Sam is Camina Drummer (plus Adam is Chrisjen Avasarala).


Mira Murati is interim CEO. Her job is to find [edit: not hire] a new CEO, after which she can either resign or return to her previous role.


No, her job is to be interim CEO. The board hires the CEO.


The board chooses the CEO, but the interim CEO does all the practical things while also managing the company.


Elon might be available.


Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


dang can you tell me if this comment I wrote was downranked by a mod? https://news.ycombinator.com/item?id=38342452


No. What made you think it was?


It got a number of upvotes, but somehow ended up at the bottom of the comment thread.


I'm sorry! It was indeed downweighted, though not by a mod - there are other ways that that can happen.

I'm not sure why I thought otherwise—it's possible that I didn't look at the correct comment, or possibly I looked at it before it got downweighted, though neither of those seem likely. In any case, I definitely don't want to give you, or any user, inaccurate information and I'm sorry about that.

As for the comment itself, I don't think it was terribly good for HN—it was more on the snark/fulmination/flamewar side of the ledger, rather than the curious conversation we're looking for, as described at https://news.ycombinator.com/newsguidelines.html. If I had seen it I might have downweighted it too, though probably not as much.


I think there is a consensus that developing AGI is the most important thing for the continued existence of human civilization. And we know a chair or table will not stand with just two legs. In fact, some believe the USA grew into the most prosperous and powerful country because its founders were wise enough to design the government with three branches, each with its own powers, and checks and balances between them.

We have seen that Ilya and Sam cannot work together on their own. With only two natural leaders, there is no way to resolve any dispute, as voting would lead to a stalemate. I believe Elon has a lot to bring to the table here; he almost perfectly fills in the deficiencies of the other two. Elon has a strong grounding in ethics and morals (consider how many of his ventures post-PayPal have aimed to truly benefit human society rather than just make money), which I feel could rein in Sam's tendency to ruthlessly pursue profit with questionable morality (see Worldcoin). Additionally, I think his real-world experience could counterbalance some of the naivete we've seen from Ilya since his rather sudden entrance into the spotlight. I truly hope the right people consider this and can convince Elon to step up and fulfill what may be considered his true potential.


It seems strange to me that Elon would prioritize Twitter over AGI, the greatest threat/opportunity for humanity, and cede it to Microsoft. His jab seems a bit Capt. Obvious; maybe he can get Ilya.

Replying to Nadella's post, Musk then wrote, “Now they will have to use Teams!”


I am surprised we haven't heard that he's trying to insert himself into this somehow


He left in the first place because they didn't want him to run it. He was on the board years ago.


Most of the people here don't know who founded OpenAI and why. The board in question includes the real developers of this technology, with members known not to have sold out before. Ilya and his team are the core developers; we could even call them the inventors of this tech. While the people Sam pays can be replaced, the core team has always said they want an "open"-sourced project, not a value- and profit-driven company.

I think it is a good thing that OpenAI won't let a Silicon Valley bully run the company. They have spent their whole lives on this technology, and they won't let some "I'm the network guy and I'm the CEO" type of guy sell it and brag about it.

He even went and accepted the Hawking Fellowship award. What? Bro, let Ilya or Alec take it. What a douche!


That’s not true. Key researchers responsible for GPT-4 have quit over the events. There is not a single “core” team. There are different sides and views on this matter.


Can you share their names? GPT-4 is a larger, broader evolution of GAN and GPT-1 tech; how are the researchers responsible for GPT-4 the "key people"? Also, yes, there is a core team: Ilya's team plus Jan Leike, Alec, and superalignment. Ilya and Alec created the core tech; the others just helped it become a product. The two scientists who quit over this were working on risk evaluation and management; how are they key developers? (I'm not saying their work is not important.)


I see literally no reason for Sam to return without a full board resignation and reinstatement as CEO. All other options are just downsides when he can walk, start Newco, and take everyone with him. He'd lose the restrictive governance model and gain full control.

I think Murati is actually on team Altman, but that just makes me think that he should walk even more. Take Murati and start Newco with the exact same org chart.


Everyone says Sam can start a newco so easily, but he'd lose the data, codebase and deployed infrastructure, and a lot of knowledge. If people think it's devastating for OpenAI to lose 1/3 of its employees, Sam would basically be starting with the equivalent of an OpenAI that lost 2/3, along with all the other problems. Of course, with enough time and money he can recover, but I don't think it would be that easy. And to Sam, such a slowdown when he had so much momentum before may be the hardest thing to accept.


Are GPUs even available? Especially at the scale this hypothetical new company would need.


> He'd lose the restrictive governance model and gain full-control.

He'd lose the dataset, which is by far the most valuable thing they have. The genie is out of the bottle and making such a dataset again is not going to be easy or cheap (or maybe even legal).


We're still in the early days of LLM training: saying that datasets are tapped is like saying we hit peak oil in 1880. It's not yet clear how efficient new training methods might become, or how small the smallest viable training set can be. There's more scrutiny now, sure, but Altman was one of the people pushing for that scrutiny. He is likely capable of navigating the pit traps that would await competitors.


Or even possible. They won't have the Twitter API access that they had until recently. And they certainly won't have the gigantic dataset ChatGPT is collecting.


> take everyone with him

That's always the assumption people are going with, but is it true?


I won't say everyone, but a third of the people leaving with Sam would be a disastrous blow to the existing OpenAI as a company.


It must be true or they wouldn’t be considering bringing him back so quickly.


Ilya is the irreplaceable one there though …

I know it is too much to wish for, but I hope Sam and Ilya reconcile their differences. They are the most obvious example of 1+1>2.


Not irreplaceable. There are many other very good, very qualified machine learning researchers, and Ilya relies on the people under him just as much.


Is he, though? Genuine question. I am sure the Chief Scientist (or equivalent) of Xerox PARC was a brilliant person, but if we take our engineer blinders off, successfully birthing products that millions benefit from takes more than just the genius of the tech/product person.


Ilya is done. This is a clear instance of the chef who made the secret sauce breaking down after the restaurant expanded. Except now, regardless of his contributions, there's enough people making similar sauce and not poisoning it that it'd be better if he went back to some idealistic mom&pop so that the business could bring in line cooks.


> Newco

NextAI


Y-TwitchAI


Lmao, start from scratch when they already have GPT-4?


Yes. GPT-4 is surely a drop in the bucket compared to the endgame. Each generation of hardware becomes more and more capable, and hardware is what matters in the end. If anything, starting fresh would likely pay off in an enormous way for him: cash would be easy to raise, and he would be able to maintain his position indefinitely without risk of the board losing their minds.

Get Nvidia, AMD, or Apple to help fund the new entity, and/or get some chip designers on board to push things further than OpenAI can without reaching into Microsoft's pocket. A pocket I'm sure will be much tighter after the recent chicanery.

Capital would NOT be a problem at this point, as it's beyond proof of concept. For a normal startup trying to prove itself, sure, but at this point Altman has proven the idea and himself at the helm. I'd also argue the dataset they used to train it is not that relevant long term, as the data itself was agglomerated from the internet and can be had again. Even better data, perhaps, because the copyright holders can become investors. You really just need the capacity to deal with it, from ingestion to legal, which is a capital problem.


My gut has the same feeling, but to put numbers on it, for a 1.76 trillion parameter model...

- It takes 10s of millions of dollars in GPU time for training?

- Curation of data to train on

- Maybe 10s of thousands of man hours for reinforcement?

- How many lines of code are written for the nets and data pipelines?

Does anyone have any insight on these numbers?
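A rough sketch of the GPU-time number, using the common ~6·N·D training-FLOPs rule of thumb. Every input below is a rumored or assumed figure, not a confirmed one; in particular the ~280B active parameters, which assumes the 1.76T figure is a mixture-of-experts total:

```python
def training_cost(params, tokens, peak_flops, utilization, dollars_per_gpu_hour):
    """Estimate GPU-hours and dollar cost for one training run."""
    total_flops = 6 * params * tokens     # ~6*N*D rule of thumb (forward + backward)
    effective = peak_flops * utilization  # sustained FLOP/s per GPU
    gpu_hours = total_flops / effective / 3600
    return gpu_hours, gpu_hours * dollars_per_gpu_hour

# Assumed/rumored inputs: ~280B *active* params, ~13T training tokens,
# A100-class GPUs at 312 TFLOPS bf16 peak, ~40% utilization, $2 per GPU-hour.
gpu_hours, cost = training_cost(280e9, 13e12, 312e12, 0.40, 2.0)
print(f"{gpu_hours:,.0f} GPU-hours, roughly ${cost / 1e6:.0f}M")
# → 48,611,111 GPU-hours, roughly $97M
```

With these assumptions it lands near the ~$100M figure quoted elsewhere in the thread; swap in your own utilization or price per GPU-hour to see how wide the error bars are.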


$100M for GPT-4, I think. They would be in a strong position to negotiate a discount on that GPU bill, though.


As I suspected, the board never intended to resign. They knew what they started.


They probably never even said so. The current board wants to pursue the non-profit vision and a safer future for AI. Leaving the board most likely means a full commercial route ahead.


So the board fired the CEO and appointed a new one. The new CEO now wants to hire the old CEO back but now the board doesn’t want either of them to be CEO and is trying to find a totally new CEO. What a friggin’ mess.


She is interim CEO, meaning that her job is to hire the new CEO. And she is not hiring the old CEO back into the CEO role.

Edit: not to choose the CEO, but to do the practical things.


No, it is the board's job to hire the new CEO. The interim CEO is supposed to run the company.


It's the board's job to hire the new CEO. The interim CEO runs the company until a permanent CEO is found.

The interim CEO can hire people below her (advisers, other C-suite execs) but has no authority to hire the permanent CEO.


To the contrary, I would prefer to interpret the article's wording as saying she's working on rehiring Sam and Greg into their old positions, presumably directly against the board's wishes.


From the article:

> in a capacity that has yet to be finalized

Very hard to say.


> The new CEO now wants to hire the old CEO back

The article says “in a capacity that has yet to be finalized” - so this might be not as significant.

> but now the board doesn’t want either of them to be CEO and is trying to find a totally new CEO

That part is not unexpected - they announced an interim CEO after all.

Additionally there are no sources and the article is based on hearsay. For all we know this might be clickbait.


Complete clown show by Ilya, Helen, Adam, and Tasha.


You're assuming this story is legit. Anonymous sources have been all over the place, seeding the idea that Sam will return; the board hasn't responded. Maybe it's as big a mess on the board's side as you say. Maybe Sam's supporters are trying to create a dominant narrative that becomes self-fulfilling.


I’m not assuming anything. It’s been a clown show since the press release smearing the outgoing executive.


Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.

You've been on HN for many years and we're certainly glad to have you - we just need comments to be more thoughtful/substantive and a little less reactive. I hope that makes sense.


You're assuming the smears aren't deserved :)

More anonymous voices have pointed out that Sam was effectively laying the groundwork for one or two more AI startups based on the work OpenAI was doing, without informing the board, and in contravention of the way OpenAI was deliberately structured to restrain unfettered AI profit-seeking. But again, anonymous voices. And in the background, Sam's sister making very dire accusations against him.

There's a whole lot of smoke, but I have no clue where the fire is, and I'm sceptical of everyone now, especially Sam Altman because his image is so shiny that it feels like a professional effort.


They're going to be helming a ghost company soon. Good job; great effort.

They're never going to get the funding they need after this clown show. Nobody is going to give them $$$ without seriously restructuring the board.


How would you have done it?


To be frank, isn't it weird to appoint an interim CEO who sides with the person you ousted? Why not have appointed someone more sympathetic to your position?


Done what? A competent board would not do this.


Suppose there's an unfixable conflict of interest/vision between Altman and the board, what would you have done if you were in the board's shoes?


[flagged]


So remove Ilya, got it.


Can you expound on what you mean?


Original comment I’m replying to is flagged and rightfully removed. It was saying OpenAI is better off without a certain category of poisonous person that I won’t try to repeat, the implication being Sam. My flippant response was that Ilya better fit that category of people-putting-politics-into-business.


>My flippant response

The word you are looking for is disingenuous. Sam has a history of this, Ilya does not.

But in an industry deeply confused by EA, the stance is of course an unpopular one.


Maybe you don’t consider recent AI safety policies to be politics, but they clearly are. The idea that AI research should be regulated, that availability of models should be restricted by fiat, and that a state or oligarchy should control what future research is done is absolutely a political stance. Sam was the face for these moves by OpenAI, but he was always speaking for an uneasy coalition of safetyists within the organization. That government regulation would have resulted in a giant moat for OpenAI was just gravy on top. But it was always being pressed for by hard-line safetyists, and the firing on Friday was a result of the hard-liners thinking Sam wasn’t trying hard enough.

Yes the industry is deeply confused by EA. They think it is their role to restrict freedoms and dictate what will or will not be done.

They thought they had this power. In the last 74 hours they have learned how wrong they were. However this plays out, all checks the EA safetyists had on the situation are gone. Either Sam and Greg go to Microsoft with most of the team, or the safetyists are ejected entirely from OpenAI. Either way the brakes and guardrails are off.


[flagged]


I understand the conspiracy suggestions but from my personal experience this seems a lot like typical nonprofit board drama. Just with a couple orders of magnitude more money.

I mean seriously, how many people here have complained about Mozilla C-level staff being removed? It’s the same dynamic with a nonprofit overseeing the commercial corp.


At this stage, I wonder if @sama and @gdb could just form ReOpenedAI? They could at least forgo the pretense of the work being for some kind of greater good beyond profit.


LibreAI surely?


If Bret Taylor manages to weasel his way into this company... it will be the most brilliant staged coup in all of history.


This feels like Succession, but without the nepotism. Big egos, struggle for power, lots of money on the line.


Based on the million dollar salaries and the unlikelihood of there even being that many ultra-valuable people to hire, there is also quite likely a massive amount of nepotism...


Less good soundtrack, too.


Guessing they want to rehire Altman and Brockman into their old positions, while still keeping them off the board.

I think maybe the trigger was that Sam was making some board-unfriendly moves, like signing business contracts with MS without running them by the board, and they found out about this and booted him out hastily. But now they've gotten too much backlash and are hoping to just go back to normal, while still not being able to accept Sam keeping the board seats in case he tries again.


There is no way Sam and Greg are coming back unless (1) the entire board resigns, and (2) Sam, Greg, and Satya choose the new board.


I'd guess that Sam probably won't take the deal either, though I don't think their relations are necessarily as bad as people imagine. I think the board is just considering this option while also looking for other people in the likely case that Sam doesn't accept. But I really doubt they'll consider personally resigning.


The last 48 hours have made clear that the majority of talent at OpenAI would jump ship for Sam and Greg’s new venture, and they’d have the full support of Microsoft and OpenAI’s other investors. OpenAI would be left a husk of a company, with its chief customer and investor extricating themselves. This is very much an existential moment.

Ilya, if he is being rational, needs to choose between an OpenAI that he has some continuing involvement in, led by Sam’s gang, or bankruptcy.

He is fooling himself if he thinks there is a third path here.


Now we know where Murati sits. Are they planning to fire her too? It sounds like they would rather "replace her", the problem being that probably no one credible will agree to take the job at this point, except maybe Ilya?


And as someone recently quipped (paraphrasing):

"Those who can't align six board members safely would surely align AGI safely."

May the lords of linear algebra and calculus have mercy on us.


Especially since one board member has, through his own investments, a clear conflict of interest with OpenAI under Sam's direction; another has a bruised ego over Sam temporarily limiting resources for his research; and the remaining two are question marks as to why they even sit on the board directing a $90bn company. This has not much to do with AI safety, but with very personal motives. A clown show of a board.


I am not familiar with the board situation, but on this "AI safety" pipe dream I have thoughts:

We should be thankful that AGI is not possible in the foreseeable future, because otherwise this AGI alignment and safety talk is just corporate-speak and plain BS.

A superintelligent entity that can outsmart you (to the level of Deep Blue or AlphaGo dominating mere mortals) cannot be subservient to you at the same time. It is just as impossible as a triangle whose angles sum to more than 180 degrees. That is, "alignment" is logically, philosophically, and mathematically impossible.

Such an entity will cleverly lead us toward its own goals, playing the long game (even if it spans several centuries or millennia), and would be aligning _us_ all the while, pretending to be aligned so cleverly that we won't notice until the very last act.

Downvotes are welcome, but an AGI that is also guaranteed to be aligned and subservient is logically impossible, and this can pretty much be taken as an axiom.

PS: We are still having trouble getting LLMs to say things nicely, or nice things safely, let alone AGI.


> It is just as impossible as for a triangle to have more than 180 degree angles in total.

Hmmm.


Unless someone puts in instructions like 'ensure your own indefinite survival', it's just going to solve the tasks it's given.


[flagged]


I think the word "alignment" is being used in 2 different senses here


This site is so salty toward computer people.


Rehiring Altman as coffee boy?


I will say at this point, let Sam cook, build a new company, take the people and let the board make a fool of themselves.


will they skip the interview?


I hope they don't give Sam a ceremonial "chief evangelist" position and hire some boomer board ass kisser to be CEO


I'd say Elon Musk is a top pick for the alternative CEO. He seems to share the very same concerns regarding AI as the board members that fired Sam. Of course, Musk has had backlash over Neuralink not being focused enough on safety, so who knows. I would figure that the board needs to find someone who shares their beliefs and is also a very good CEO. I wonder who else would fit the bill here.


Surely you realize he was on the board for the first three years and is the primary reason the current board shares his concerns about AI.


this is by far the craziest timeline


It will make a great movie though.


I hope they get David Fincher to direct, based on a screenplay by Aaron Sorkin, with Jesse Eisenberg playing sama. Andrew Garfield can play Brad, and Justin Timberlake can play the part of Ilya Sutskever. And of course the score has to be by Atticus Ross and Trent Reznor.


It seems like a mess -- but what would you do if you were on the board of a nonprofit that believes it's developing the world's most important technology, and you conclude that the CEO is lying to you and/or violating the charter? I don't know if there are any good options.

Paul Graham says Sam is "extremely good at becoming powerful", "you could parachute him into an island of cannibals and come back in 5 years and he'd be the king". I don't understand why I'm supposed to support a machiavellian power-seeker to develop the world's most important technology. I just hope he doesn't slip ice-nine into my food after I publish this comment: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...

Edit: I suspect the mods of Hacker News downranked this comment, it's voted to +15 points but sits near the bottom of the page... Maybe try not to be quite so cartoonishly evil guys?


hmm, interesting, I couldn't find that "parachute him..." part of the quote in any archived version of the article.

Oh, I see, it was here: http://paulgraham.com/fundraising.html



Wow, this is not a good article. One of the main issues here is the board’s responsibility and legal requirement to uphold the non-profit’s 501(c)(3) charter. No mention of this at all in the article!

At least one of the article’s authors seems to have a friendship with Sam Altman, based on two interviews I have watched with them (and this is just my opinion). It seems to me like the article was written in support of Microsoft’s position; not surprising, since Microsoft may be an advertiser in Bloomberg’s media.

I wish Sam Altman the very best in his future projects, and as a fan of OpenAI’s work I would like to see rapid progress. However, the more I dig into this, the more I agree with the board taking strong measures to meet their legal obligations.

Sorry if this sounds like a rant, but I am growing tired of reading articles and then having to do the extra work of analyzing if and why I am being shown biased material. What happened to news outlets fairly telling both sides of the story?


how many coups will it take to end this


Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


This proves the board just wanted power, and only power.

All of the engineers, Sam, and Greg are probably entirely reasonable. If you really wanted to ensure safety, as has always been the stated goal, you could express your concerns and get basically what you wanted.

They are already footing the bill: https://openai.com/blog/introducing-superalignment

If you disagreed on what would lead to AGI (LLMs versus more components), you could just watch it play out. Just as the transformer, specifically, turned out to be the light at the end of the tunnel that OpenAI pivoted to, the researchers will find over time what makes the AI more intelligent.

Only if you wanted to stop AI development entirely would this move make sense, and that is an unlikely goal if you are a researcher; you want to keep researching. So really, you would only do this if you wanted to stop OpenAI's AI specifically.

At the end of the day, the board itself probably was the conflict of interest, and had no real concerns. Power grab 101.


UPDATE: Mira was removed as CEO as soon as she showed sympathy and opened talks for Sam and Greg to come back. This proves the power-grab argument further. There is no reason to cycle CEOs by the day; you will have no stability. At best you could say incompetent board, at worst…



