“Interim CEO Mira Murati plans to rehire Sam and Greg, and is in talks with board rep Adam D’Angelo to do so (in what capacity is not yet finalized). However, concurrently, the OpenAI board is looking to hire its own CEO, and has reached out to two candidates that we’ve spoken to, both prominent execs”
Gotta love that Adam D'Angelo, who has a clear conflict of interest given that his Poe competes with the GPTs OpenAI released at Dev Day, is still on the board and is now somehow the one tasked with leading this negotiation from the board's side.
There's no conflict of interest between (what he believes to be) OpenAI's principles and his job as a board member. There was no conflict in reality either until two weeks ago.
Lmao, this just clicked for me. I was reading about the many iterations of OpenAI's board's various conflicts of interest. But I hadn't used Poe and didn't realize it was basically characters/agents that you can charge people to use: literally almost exactly what GPTs are (except that GPTs are supposed to pay a revenue share from ChatGPT Plus subscriptions).
I interpret the article as saying she's working on rehiring Greg as president and Sam as CEO, in "parallel" with the board looking for a different CEO.
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.
I'm sorry! It was indeed downweighted, though not by a mod - there are other ways that that can happen.
I'm not sure why I thought otherwise—it's possible that I didn't look at the correct comment, or possibly I looked at it before it got downweighted, though neither of those seems likely. In any case, I definitely don't want to give you, or any user, inaccurate information and I'm sorry about that.
As for the comment itself, I don't think it was terribly good for HN—it was more on the snark/fulmination/flamewar side of the ledger, rather than the curious conversation we're looking for, as described at https://news.ycombinator.com/newsguidelines.html. If I had seen it I might have downweighted it too, though probably not as much.
I think there is a consensus that developing AGI is the most important thing for the continued existence of human civilization. And we know a chair or table will not stand with just two legs. In fact, some believe the USA grew into the most prosperous and powerful country because its founders were wise enough to design the government with three branches, each with its own powers and checks and balances between them.
We have seen that Ilya and Sam cannot work together on their own. With only two natural leaders, there is no way to resolve any dispute, as voting would lead to a stalemate. I believe Elon has a lot to bring to the table here; he nearly perfectly fills in the deficiencies of the other two. Elon has a strong grounding in ethics and morals (consider how many of his post-PayPal ventures have aimed to truly benefit human society rather than just make money), which I feel could rein in Sam's tendency to ruthlessly pursue profit with questionable morality (see WorldCoin). Additionally, I think his real-world experience could counterbalance some of the naivete we've seen from Ilya since his rather sudden entrance into the spotlight.
I truly hope the right people consider this and can convince Elon to step up and fulfill what may be considered his true potential.
Seems strange to me that Elon would prioritize Twitter over AGI, the greatest threat/opportunity facing humanity, and cede it to Microsoft. Seems a bit Capt. Obvious; maybe he can get Ilya.
Replying to Nadella's post, Musk then wrote, "Now they will have to use Teams!"
Most of the people here don't know who founded OpenAI and why they founded it.
The board in question includes the real developers of this technology, and its members are known not to have sold out before. Ilya and his team are the core developers; we could even call them the inventors of this tech. While the people Sam paid can be replaced, the core team has always stated that they want an "open" sourced project, not a valuation- and profit-driven company.
I think it is a good thing that OpenAI won't let a Silicon Valley bully run the company. They have spent their whole lives on this technology, and they won't let some "I'm the network guy and I'm the CEO" type of guy sell it off and brag about it.
He even went and accepted the Hawking Fellowship award. What? Bro, let Ilya or Alec take it. What a douche!
That’s not true. Key researchers responsible for GPT-4 have quit over the events. There is not a single “core” team. There are different sides and views on this matter.
Can you share their names? GPT-4 is a larger and broader evolution of the tech behind GAN and GPT-1. How are the researchers responsible for GPT-4 the "key people"? Also, yes, there is a core team: Ilya's team and Jan Leike, Alec and superalignment. Ilya and Alec created the core tech; the others just helped it become a product.
The two scientists who quit over this were working on risk evaluation and management? How are they key developers? (I'm not saying their work is not important.)
I see literally no reason for Sam to stay without a full board resignation and a return to CEO. All other options are just downsides when he can walk, start Newco, and take everyone with him. He'd lose the restrictive governance model and gain full control.
I think Murati is actually on team Altman, but that just makes me think that he should walk even more. Take Murati and start Newco with the exact same org chart.
Everyone says Sam can start a newco so easily, but he'd lose the data, codebase and deployed infrastructure, and a lot of knowledge. If people think it's devastating for OpenAI to lose 1/3 of its employees, Sam would basically be starting with the equivalent of an OpenAI that lost 2/3, along with all the other problems. Of course, with enough time and money he can recover, but I don't think it would be that easy. And to Sam, such a slowdown when he had so much momentum before may be the hardest thing to accept.
> He'd lose the restrictive governance model and gain full-control.
He'd lose the dataset, which is by far the most valuable thing they have. The genie is out of the bottle and making such a dataset again is not going to be easy or cheap (or maybe even legal).
We're still in the early days of LLM training. Saying that datasets are tapped is like saying we hit peak oil in 1880. This is still early days for the field, and it's not clear how efficient new training methods might become, or how small the smallest viable training set can be. There's more scrutiny now, sure, but Altman was one of the people pushing for that scrutiny. He is likely capable of navigating the pit traps that would be there for competitors.
Or even possible. They won't have twitter api access that they did until recently. And they certainly won't have the gigantic dataset ChatGPT is collecting.
Is he, though? Genuine question - I am sure the Chief Scientist (or equivalent) of Xerox Parc was a brilliant person, but if we take our engineer blinders off, a successful birthing of products that millions benefit from is from more than just the genius of the tech/product person.
Ilya is done. This is a clear instance of the chef who made the secret sauce breaking down after the restaurant expanded. Except now, regardless of his contributions, there's enough people making similar sauce and not poisoning it that it'd be better if he went back to some idealistic mom&pop so that the business could bring in line cooks.
Yes. GPT-4 is surely a drop in the bucket compared to the endgame. Each generation of hardware becomes more and more capable, and hardware is what matters in the end. If anything, starting fresh would likely pay off in an enormous way for him, as cash would be easy to raise, and he would be able to maintain his position indefinitely without risk of the board losing their minds.
Get NVidia, AMD, or Apple to help fund the new entity and/or get some chip designers on board to push things further than OpenAI can without reaching into Microsoft’s pocket. A pocket I’m sure will be much tighter after the recent chicanery.
Capital would NOT be a problem at this point, as it's beyond proof of concept. For a normal startup trying to prove itself, sure, but at this point Altman has proven the idea and himself at the helm. I'd also argue the dataset they used to train it is not that relevant long term, as the data itself was agglomerated from the internet and can be had again. Even better data, perhaps, because the copyright holders can become investors. You really just need the capacity to deal with it, from ingestion to legal, which is a capital problem.
He probably has never even said so.
Current board wants to drive the non-profit version and safer future for AI. Leaving the board means full commercial route ahead, most likely.
So the board fired the CEO and appointed a new one. The new CEO now wants to hire the old CEO back but now the board doesn’t want either of them to be CEO and is trying to find a totally new CEO. What a friggin’ mess.
To the contrary, I would prefer to interpret the article's wording as saying she's working on rehiring Sam and Greg into their old positions, presumably directly against the board's wishes.
You're assuming this story is legit. Anonymous sources have been all over the place, seeding the idea that Sam will return; the board hasn't responded. Maybe it's as big a mess on the board's side as you say. Maybe Sam's supporters are trying to create a dominant narrative that becomes self-fulfilling.
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.
You've been on HN for many years and we're certainly glad to have you - we just need comments to be more thoughtful/substantive and a little less reactive. I hope that makes sense.
More anonymous voices have pointed out that Sam was effectively laying the groundwork for one or two more AI startups based on the work OpenAI was doing, without informing the board, and in contravention of the way OpenAI was deliberately structured to restrain unfettered AI profit-seeking. But again, anonymous voices. And in the background, Sam's sister making very dire accusations against him.
There's a whole lot of smoke, but I have no clue where the fire is, and I'm sceptical of everyone now, especially Sam Altman because his image is so shiny that it feels like a professional effort.
To be frank, isn't it weird to appoint an interim CEO who sides with the person you ousted? Why not have appointed someone more sympathetic to your position?
Original comment I’m replying to is flagged and rightfully removed. It was saying OpenAI is better off without a certain category of poisonous person that I won’t try to repeat, the implication being Sam. My flippant response was that Ilya better fit that category of people-putting-politics-into-business.
Maybe you don’t consider recent AI safety policies to be politics, but they clearly are. The idea that AI research should be regulated, that availability of models should be restricted by fiat, and that a state or oligarchy should control what future research is done is absolutely a political stance. Sam was the face for these moves by OpenAI, but he was always speaking for an uneasy coalition of safetyists within the organization. That government regulation would have resulted in a giant moat for OpenAI was just gravy on top. But it was always being pressed for by hard-line safetyists, and the firing on Friday was a result of the hard-liners thinking Sam wasn’t trying hard enough.
Yes the industry is deeply confused by EA. They think it is their role to restrict freedoms and dictate what will or will not be done.
They thought they had this power. In the last 74 hours they have learned how wrong they were. However this plays out, all checks the EA safetyists had on the situation are gone. Either Sam and Greg go to Microsoft with most of the team, or the safetyists are ejected entirely from OpenAI. Either way the brakes and guardrails are off.
I understand the conspiracy suggestions but from my personal experience this seems a lot like typical nonprofit board drama. Just with a couple orders of magnitude more money.
I mean seriously, how many people here have complained about Mozilla C-level staff being removed? It’s the same dynamic with a nonprofit overseeing the commercial corp.
At this stage, I wonder if @sama and @gdb could just form ReOpenedAI? They could at least forgo the pretense of the work being for some kind of greater good than profit.
Based on the million-dollar salaries and the unlikelihood of there even being that many ultra-valuable people to hire, there is also quite likely a massive amount of nepotism...
Guessing they want to rehire Altman and Brockman into their old positions, while still keeping them off the board.
I think maybe the trigger was that Sam was making some board-unfriendly moves, like signing business contracts with MS without running them by the board, and they found out about this and booted him out hastily. But now they've gotten too much backlash, and are hoping to just go back to normal, but still can't accept Sam keeping a board seat in case he tries again.
I'd guess that Sam probably won't take the deal too, though I don't think their relations are necessarily as bad as people imagine. I think the board is just considering this option, and also looking for other people in the likely case that Sam doesn't accept. But I really doubt they'll consider personally resigning.
The last 48 hours have made clear that the majority of talent at OpenAI would jump ship for Sam and Greg’s new venture, and they’d have the full support of Microsoft and OpenAI’s other investors. OpenAI would be left with a husk of a company, with their chief customer and investor extricating themselves. This is very much an existential moment.
Ilya, if he is being rational, needs to choose between an OpenAI that he has some continuing involvement in, led by Sam’s gang, or bankruptcy.
He is fooling himself if he thinks there is a third path here.
Now we know where Murati sits. Are they planning to fire her too? Sounds like they would rather "replace her", the problem being that probably no one credible will agree to take the job at this point, except maybe Ilya?
Especially since one board member has, through his own investments, a clear conflict of interest with OpenAI under Sam's direction; another has a hurt ego over Sam temporarily limiting resources for his research; and the other two remain question marks as to why they even sit on the board directing a $90bn company. This has not much to do with AI safety, but with very personal motives. A clown show of a board.
I am not familiar with the board situation, but on this "AI safety" pipe dream, I have thoughts:
We should be thankful that AGI is not possible in the foreseeable future. Otherwise, all this AGI alignment and safety talk is just corporate speak and plain BS.
A superintelligent entity that can outsmart you (to the level of Deep Blue or AlphaGo dominating mere mortals) cannot be subservient to you at the same time. It is just as impossible as a triangle having interior angles summing to more than 180 degrees. That is, "alignment" is logically, philosophically, and mathematically impossible.
Such an entity would cleverly lead us toward its own goals, playing the long game (even one spanning centuries or millennia), and would be aligning _us_ all the while, pretending to be aligned so cleverly that we would never notice until the very last act.
Downvotes are welcome, but an AGI that is also guaranteed to be aligned and subservient is logically impossible, and this can pretty much be taken as an axiom.
PS: We are still having trouble getting LLMs to say things nicely, or nice things safely, let alone controlling AGI.
I'd say Elon Musk is a top pick for the alt-CEO. He seems to share the very same concerns regarding AI as the board members who fired Sam. Of course, Musk has had backlash over Neuralink not being focused enough on safety, so who knows. I would figure that the board needs to find someone who shares their beliefs and is also a very good CEO. I wonder who else would fit the bill here.
I hope they get David Fincher to direct, based on a screenplay by Aaron Sorkin, and have Jesse Eisenberg playing sama. Andrew Garfield can play Greg, and Justin Timberlake can play the part of Ilya Sutskever. And of course the score has to be by Atticus Ross and Trent Reznor.
It seems like a mess -- but what would you do if you were on the board of a nonprofit that believes it's developing the world's most important technology, and you conclude that the CEO is lying to you and/or violating the charter? I don't know if there are any good options.
Paul Graham says Sam is "extremely good at becoming powerful": "you could parachute him into an island of cannibals and come back in 5 years and he'd be the king". I don't understand why I'm supposed to support a Machiavellian power-seeker to develop the world's most important technology. I just hope he doesn't slip ice-nine into my food after I publish this comment: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...
Edit: I suspect the mods of Hacker News downranked this comment, it's voted to +15 points but sits near the bottom of the page... Maybe try not to be quite so cartoonishly evil guys?
Wow, this is not a good article. One of the main issues is the board’s responsibility and legal requirement to support the non-profit’s 501 charter. No mention of this at all in the article!
At least one of the article’s authors seems to have a friendship with Sam Altman based on two interviews I have watched with them (and this is just my opinion). It seems to me like the article was written in support of Microsoft’s position, not surprising since Microsoft may be an advertiser in Bloomberg’s media.
I wish Sam Altman the very best in his future projects, and as a fan of OpenAI’s work I would like to see rapid progress. However, the more I dig into this, I agree more with the board taking some strong measures to meet their legal obligations.
Sorry if this sounds like a rant, but I am growing tired of reading articles and then having to do the extra work of analyzing whether and why I am being shown biased material. What happened to news outlets fairly telling both sides of the story?
Proves the board just wanted power, and only power.
All of the engineers, Sam, and Greg are probably entirely reasonable. If you really wanted to ensure safety, as has always been the stated goal, you could express your concerns and get basically what you wanted.
If you disagreed on what would lead to AGI, LLMs versus more components, then you could just let it play out. Just as the transformer turned out to be the light at the end of the tunnel that OpenAI pivoted to, the researchers will find what makes the AI more intelligent over time.
Only if you wanted to stop AI development entirely would you do something like this. But that is an unlikely goal if you are a researcher; you want to keep researching. So really, you would only do this if you wanted to stop OpenAI's AI specifically.
At the end of the day, the board probably had a conflict of interest, and no real concerns. Power grab 101.
UPDATE: Mira removed as CEO as soon as she showed sympathy and opened talks for Sam and Greg to come back. Proves the power-grab argument further. There is no reason to cycle CEOs by the day; you will have no stability. At best you could say incompetent board, at worst…