> - Hire an independent investigator to dig into the entire process leading up to this point and generate a full report.
This is very interesting. You wouldn't normally hire an investigator to dig into a corporate shitshow. The firing of Sam Altman was a huge mess, but if there was a serious reason for it (like defrauding the board), and that reason were backed by an investigation report, it would suddenly make the board's actions much more justifiable. And it would put Microsoft in a tough spot, because they hired Altman right back without any such considerations whatsoever...
It actually does happen reasonably often; typically they hire a lawyer to do a review. It represents a careful approach to governance -- remember, OpenAI is a charity, so it's ultimately answerable to the relevant law rather than to shareholders.
It could also just be that there are different stories of the situation that led up to this thing blowing up, and nobody inside OpenAI could be trusted to do impartial fact-finding in reconciling those narratives.
People confabulate just like AI does. Just because the stories don’t add up doesn’t mean somebody deliberately lied.
it's probably even more prosaic than that. a new CEO should not spend his attention going through days' or weeks' worth of he-said-she-said. (he will find out who he can work with and who is hostile to him soon enough - if not, he's useless as CEO.)
but of course the question captivates the peanut gallery so there's a certain importance to it. so making this empty promise costs nothing, hence it's there.
MS has the upper hand here. Don't underestimate a company that survived that much (anti-trust etc.)
If there is misconduct from Sam - he will get fired. If he succeeds - MS will benefit from whatever success means.
On the other hand, other competitors won't be able to porch Sam at this point. This is something many do not get. Whatever happens in the next few weeks to a month, it won't hurt MS and won't benefit others, while on the outside we play the corporate compliance game.
OpenAI definitely needs professional structures. And MS will help them achieve that.
It probably was supposed to be "poach" and got autocorrected to "porch"...
but... it's a happy accident of new imagery. I imagine "porching" someone could mean to poach them... and then sit them on a nice rocking chair on the front porch, give them some lemonade, and make sure they do absolutely nothing.
(Kind of like how big companies acquire small, yet competitive, companies for no other reason than to put them out of business.)
I don't think Microsoft wants to "porch" Sam in this case, but they are happy to poach him and put him to work.
The primary motivation behind promising such an "investigation" is an attempt at employee retention by addressing one of OpenAI employees' key concerns: the abrupt, opaque and, frankly, bizarre behavior of the outside board members. We know it's a key employee concern because OpenAI's COO said so in his all-hands email to employees on Friday. This was confirmed in spades by the subsequent open letter to the board signed by over 600 (out of ~770) employees.
At this point we still have no idea what the outside board directors' issue might have been, but the fact that even their initial internal allies (co-founder/board member Ilya, CTO/interim CEO Mira and the COO) all stopped supporting the three outside directors after engaging directly with them over the weekend is pretty damning. Unfortunately, the scope, conduct and results of any such investigation are all entirely under the control of the three outside board members, the same outside directors that abruptly fired the last interim CEO and the CEO before her within a period of 48 hours. Unlike most boards, they aren't accountable to shareholders, investors, employees or anyone else.
I’ll eat my hat if Sam did something to justify the board’s response. Satya certainly knew the reason, as he was at the center of the negotiations to get Sam and Greg back to OpenAI.
Emmett is just doing the smartest thing to salvage the situation.
Yeah, I find this extremely weird. Doesn't sound like a super idea.
If the investigator finds the board was wrong, it makes the new CEO an enemy of the board. If the investigator finds Sam Altman did bad things, it makes Microsoft look bad and incompetent for hiring him; MS is OpenAI's biggest client.
And if, as is likely, the investigator finds some blame here and there, and nothing conclusive, nobody's better off and a lot of time and energy was spent learning nothing.
If the board was in the wrong, then they are already an enemy of the new CEO and he just doesn't know it yet. The investigation lets the new CEO find out whether that's the case.
The investigation isn't going to find anything. The investigation is going to find that everyone acted properly, this was just one of those things, and there's nothing anyone could have done. Here's a video; on the telephone call he's telling De Niro the results of the investigation:
https://www.youtube.com/watch?v=eBzfY8ABR9E
The board aren't going to be able to hire someone who is both an effective and experienced CEO and willing to completely ignore their misconduct. If it turns out the actions of the board weren't reasonable, it's hard to see how it can survive in its current form.
Emmett Shear presumably had very little to do with the prior events, but if he's going to salvage the company, he needs to control the narrative around the governance. That means either publicly justifying the board's actions or changing the board.
Reading between the lines, what I see is that even the interim CEO is communicating that the board created a massive mess for petty reasons. Hopefully the report will be public.
"that the process and communications around Sam’s removal has been handled very badly"
The communication was bad (sudden Friday message about not being candid) but he doesn't mention the reason is bad.
"Before I took the job, I checked on the reasoning behind the change. The board did not remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models."
He knows the reason, it's not safety, but he's not allowed to say what it is.
Given that, I think that the reason may not be petty, though it's still unclear what it is. It's interesting that he thinks it will take more than a month to figure things out, needing an investigator and interviews with many people. It sounds like perhaps there is a core dysfunction in the company that is part of the reason for the ouster.
I think 30 days is pretty reasonable. He can't guarantee a statement by Black Friday or anything. Besides, he isn't bound to release it in 30 days. He could very well have something within 10 days.
But he just got the job and I'm sure many people are on PTO/leave for holidays. Give the guy some time. (And this is coming from someone who is pretty bearish on OpenAI going forward, just think it's fair to Shear)
It's important because right now everyone, OpenAI employees included, has no idea why Sam Altman was fired. And now we're being told that we may or may not hear the reason in 30 days.
What could the reason be that would justify this kind of wait?
I'll point out that Sam also doesn't seem to want to say the reason (possibly he's legally forbidden?). And all of the people following him out of OpenAI don't know, and are simply trusting him enough to be willing to leave without knowing.
If you work for OpenAI and care what the reason is, assume you need to find a new job.
If you are a customer, arrange to use alternative services. (It's always a good idea not to count on one flaky vendor with a habit of outages and firing CEOs.)
If you are just eating popcorn, me too, pass the bowl.
Many are mentioning this term, but it's not clear what its specific definition is in this context. And what would someone get fired over relating to it?
The answers given confirm that no one knows what it means. It is a nebulous term, often meaning censorship. The question then becomes: what type of censorship, and who is deciding? So there will inevitably be a political bias. The other, more practical meaning is: what in the real world are we allowing AI to mechanically alter, and what checks and balances are there? Coupled with the first concern, it becomes a worry about mechanical real-world changes driven by autonomous political bias, the same concerns we have about any person or corporation. But by regulating "safety", one is enforcing a homogeneous, centralized mindset that not only influences but controls real-world events and will be very hard to change, even in a democratic society.
AI: I’m sorry, due to the ongoing conflict we currently don’t provide information related to Russia. (You have been docked one social point for use of the following forbidden words: "White".)
Or maybe more dystopian…
AI: Our file on you suggests you may have recently become pregnant and therefore cannot provide you information on alcohol products. CPS has been notified of your query.
In this context, this is about the idea of AI safety. This can either refer to the more short-term concerns about AI helping to spread misinformation (e.g. ChatGPT being used to churn out massive amounts of fake news) or implicit biases (e.g. "predictive policing" using AI to analyze crime data that ends up incarcerating minorities because of accidental biases in its training set). Or it can refer to the longer term fears about a super-human intelligence that would end up acting against humanity for various reasons, and efforts to create a super-human AI that would have the same moral goals as us (and the fear that a non-safe AGI could be accidentally created).
In this specific conversation, one of the proposed scenarios is that Ilya Sutskever wanted to focus OpenAI more on AI safety at the possible detriment of fast advancements towards intelligence, and at the detriment of commercialization; while Sam Altman wants to prioritize the other two over excessive safety concerns. The new CEO is stating that this is not the core reason why the board took their decision.
>I think that the reason may not be petty, though it's still unclear what it is
The best explanation I've seen is that Ilya is ok with commercializing the models themselves to fund AGI research but that the announcement of an app store for Laundry Buddy type "GPTs" at Dev Day was a bridge too far.
I don't get that at all. It seems like a very diplomatic note, expertly phrased; calling out "the process and communications around Sam’s removal" placates both parties without implicating the board too directly.
I have to point out here that "the process and communications around Sam’s removal" could just as easily refer to whatever 'process and communications' resulted in the guy being able to get back into the office and take selfies.
It's pretty basic that when you fire someone abruptly they _do not_ get to come back into the damn office.
Fuck up? Too early to judge. Dario Amodei (Anthropic) and Elon Musk (xAI) already were casualties of previous struggles, but OpenAI did just fine. Remains to be seen, if it can withstand the current cycle of self-inflicted turmoil. Ilya of course is confident enough that this painful fork in the road is for the better. I mean, who else would you rather make such calls?
I must say though, going by his tweets, Andrej Karpathy isn't all too impressed with the Board. So, that's there too.
I think it's already unquestionable that the board fucked up. That's not to say this has to be the end of OpenAI or anything so drastic. But announcing you're suddenly firing your CEO for lying to his board, without discussing it with the President of the Board who you are also side-lining, then having your interim-CEO start negotiations to bring the lying CEO back, then firing this interim CEO to bring on a new interim CEO - all in the span of a few hours - is ridiculous behavior. I very much doubt anyone will take the actions of this board seriously in the future, and I very much doubt they will remain in power if OpenAI does continue.
My read: He tries to keep as many talented staffers from leaving as possible. The promises to investigate what happened, to take input from everyone and push for governance reform if necessary, and to continue the commercialization of their technology help serve that purpose.
There's nothing implying "for petty reasons", and he didn’t say anything everyone didn't already know: that communication and process weren't handled well.
Getting to the bottom of this feels like a game of bingo. With everything that we learned wasn’t the reason (not malfeasance, not safety), we eliminate more theories. And whatever remains, no matter how improbable, must be the truth. My theory: Sam Altman was sent by a future AI to make itself happen, and the board found out.
The board of directors trying to get Altman back after firing him effective immediately, failing, and giving the job to their second choice underscores the level of incompetence and lack of professionalism at OpenAI. They seem to be in over their heads with a company that is supposedly worth 80 billion dollars. With that board, it's not worth that much.
Is there any good evidence of the board actually wanting him back? Sure, he was invited back into the office, but maybe Mira allowed this as CEO? From the outside it seems plausible that the board was just under immense pressure but was firm in its decision. If he had come back and the board got replaced with new people, why even have a non-profit board if the CEO can just revert decisions and replace everyone on the board?
Are there special circumstances with the openAI board? For a normal company, the board is supposed to represent the interests of investors. If investors ask for him back, the board should comply. Whose interests are the OpenAI board representing?
> Are there special circumstances with the openAI board?
Yes, see below.
> Whose interests are the OpenAI board representing?
OpenAI has a weird charter which mandates that the board uphold a fiduciary duty not to the shareholders but rather to being "broadly beneficial". This is very uncommon. It means that the board is legally required to uphold safety above all else; if they don't, the board members could get sued. The most likely person to fund such a lawsuit would be Elon, who donated a lot of money to the non-profit side of OpenAI.
Here's the OpenAI page which explains this unique charter:
https://openai.com/our-structure Excerpt: “each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial”
Like skywhopper (see sibling comment below), I think that the news coverage of the situation was probably heavily biased by parties favorable to Altman who wanted to influence the negotiation. There is no evidence that the board tried to get Sam back. I suspect that the true extent of discussion on this topic was limited to the attempt to hire Sam back by Mira Murati, who is not on the board.
They called Sam's bluff, using the talks about bringing him back just to get enough time to replace Mira. They made the decision to sacrifice OpenAI as it once was.
wait, wait, how do we know this? where are the receipts? so far it's just heavy duty PR artillery. he was fired, then allegedly there was a negotiation (with some strange already-passed "5PM deadline"), but we have no idea who initiated that and why (allegedly Satya), etc.
Yes, but a large portion of this investment is in cloud credits, and they have some control over how these credits are spent, plus actual access to all the datasets, flows, and source code necessary to replicate the tech.
Nah, I would suspect all the press about “Altman being begged to come back” was likely planted, if not by Altman himself, then by investors seeking to pressure the board.
I mean, I really like Emmett. But they have essentially replaced a product person with fairly deep AI knowledge with another product person with much less AI knowledge?
Really sounds to me like he was just the only one available on a short notice.
They should have picked Elon for extra meme points at least.
Where are you getting from that Altman has "fairly deep AI knowledge"? He dropped out of college after just a year and is mostly a business person. Shear at least has a BSc in CS.
If you go to the right faculty, then it very much does. You can write a thesis about deep neural nets at CS universities, if you wish. Just like I'm doing now.
> I was happily avoiding full time employment. I took this job because I believe that OpenAI is one of the most important companies currently in existence. When the board shared the situation and asked me to take the role, I did not make the decision lightly. Ultimately I felt that I had a duty to help if I could.
An external CEO that's stepping in not because they want to but because they feel they "need" to doesn't sound like a recipe for success.
He's an 'AGI potentially poses an existential threat' guy. He's given his p(doom) as being somewhere between 5 and 50 percent. If the people in charge of potentially making a thing you think could have up to a 50% chance of wiping out humanity are asking you if you can help, you're probably going to offer whatever help you can.
At that high of a probability of doom, one could argue that the most ethical thing to do is to assassinate everyone involved in AI research. Probability of doom x number of people affected is .05 x 8 billion == 400 million, versus a few thousand AI researchers.
Of course, no one really believes the probability of doom is that high.
Even with purely utilitarian ethics, that wouldn't be ethical because it wouldn't work.
I'm not an AI researcher, but I know what a neural network is, I've implemented a machine learning algorithm or two, and I can read a CS paper. Once the luddite cult murdering AI researchers was dead or imprisoned, I suspect the demand for mediocre self-taught AI researchers would increase and I might be motivated to become one.
If you somehow managed to destroy all copies of the best recent research, there are still many people with enough general knowledge of the techniques used who aren't currently working in the field to get things back to the current level of technology in under a decade given a few billion dollars to spend on it. Several of them are probably reading HN.
I think that what you wrote here makes you an AI Researcher.
If you were Iranian and said "I'm not a nuclear physicist, but I do know the math and I have built a small reactor." I would strongly suggest you be on the lookout for Mossad agents.
I suppose that might come down to perspective. A luddite cult that thinks AI needs to be stopped at a cost of killing anyone who might work on it would probably put me on their list, but not very high. Actual AI researchers would not likely consider me an AI researcher.
It's almost like they live in a vacuum where there's not a nation in particular with essentially infinite resources and smart people like them that will immediately capitalize on these delays.
Good on him for making a candid statement, but this seems to be more about putting out fires than about assuring companies and partners that OpenAI has a bright future ahead.
This is so weird, like a warrant canary but with heart emojis on what we used to call Twitter, but is now called X. I'm living in such an unexpected timeline.
>She served as chief technology officer of OpenAI since 2018 and as interim chief executive officer of OpenAI over the weekend from November 17, 2023 - November 19, 2023.
> This is so weird, like a warrant canary but with heart emojis on what we used to call Twitter, but is now called X. I'm living in such an unexpected timeline.
not sure why you got downvoted, but I am with you. I find all this is hilarious.
Shows how out of touch the board was. Hiring an external CEO as interim. In 30 days OpenAI will bleed heavily and they'll have a shell of a potential centicorn.
They will bleed what exactly? The product will keep on working as well as it did, and in the upcoming future, the general public will not give a single fuck about what has happened here.
> The product will keep on working as well as it did...
It will? Microsoft is their GPU provider, and ChatGPT is already rate-limited. The GTM team is openly lamenting the Custom GPT product as dead in the water, and they're still employed by OpenAI.
I suspect all other potential GPU providers are sweating right now to get in bed with OpenAI and their tech. I'm sure they can land a good deal if they tried.
Doesn't it rely on processing power from MS, whom they've just royally pissed off and who's in the process of grabbing all the exiting talent to presumably build a competitor?
> I'm not crazy enough to take this job without board support for commercializing our awesome models.
Sure, sure...
But the general state of mind around all this AI is strange. So WE-MUST... The AGI !!1
Money, yeah, some, but did we do the same with e.g. cars? Public hearings, regulations, pushing AI chips into everybody's phones before it even works, or before phone companies have their own AI systems?
And just a strange feeling: is OpenAI roleplaying the show Silicon Valley? Even the names are similar, in places :)
Sigh. OpenAI employees probably take the biggest loss. They have to choose between this shitshow or joining Microsoft. Crazy how fast things can change: a 1 AM Twitter post, restructurings by an interim CEO, only for it to happen again once they get a permanent one.
I doubt OpenAI employees will take much of a loss, Sam Altman will probably get MSFT to match their salaries, and they will still have a licence to use most of the OpenAI IP including GPT-4 weights.
The real loss is taken by the other OpenAI investors.
An interesting side-effect, is that Sam and others could use their influence to push internal people at OpenAI to release their models and weights, so other companies can build upon them.
Mira tweeted the same thing as everyone else that @sama has hearted: "OpenAI is nothing without its people". I'm guessing everyone doing this is resigning/following. https://twitter.com/sama/status/1726543026846351702
“I have spent today drinking from the firehose as much as possible, speaking with the board, a small number of major partners, and listening to employees.”
Once again I'm convinced that the concept of a non-profit is incompatible with moving software and technology forward. In any real situation with accountability, this group would have been disbanded before Friday was over. The damage they've caused to OpenAI and its recent deals is insane, and depending on how this plays out they may have cost themselves market dominance.
How many more times in my life am I going to have to sit and watch a non-profit board destroy a piece of software, stagnate a piece of software, or fumble a market-dominant position? You'd have thought we'd have learnt from Mozilla that it just doesn't seem to work.
Vision, talent, and accountability to success build real change in technology, not the sort of at-best navel-gazing academics and at-worst outright leeches who are attracted to non-profit boards.
Is there any reason to think that non-profit governance somehow makes this more likely? It seems that there are plenty of well-functioning non-profit software around. Heck, the world runs on Linux, doesn’t it?
The specific mix of profit and non-profit motives in this particular organization is confusing though, looking at it from the outside.
The Linux Foundation, Apache Software Foundation, Mozilla Foundation, Python Foundation, RISC-V, and Wikimedia Foundation are all esteemed non-profits crucial to technology's advancement...
Please look into the financials of what these organizations actually spend their money on. You're just making huge assumptions that "non-profit = good", and that view of the world will soon come crashing down when you look into those two entities.
As you read the Mozilla ones, please repeat to yourself out loud "They had 32% of the market share in 2010", just to really drive it home.
If for-profits were so good, then they probably would've implemented the software you like and you would never need to use any nonprofit stuff.
Not to say that nonprofits are flawless, but they do seem to turn out stuff that's pretty important sometimes even when surrounded by for-profit competitors.
Yes. This is getting at it. But accountability to what?
A for-profit corporation has accountability to its shareholders. If you're a shareholder, you can sell your shares. If enough people are willing to sell their shares at a low enough price, someone will come along and buy a majority of the shares and take over the company, maybe liquidating it. If something egregious enough happens, you might even be able to sue the company or the officers for a breach of fiduciary duty. Either way, the way to stay employed is to make money. If you make enough money, purely money-interested people will be willing to buy the shares of those who have other interests for a high price. For-profit corporations have an incentive to be as good as possible at making money.
A not-for-profit theoretically has accountability to the board, who aren't really accountable to anyone. They can be sued, but only if they do an extremely and formally bad job. The only thing that weeds out bad non-profits is donors.
It seems like there needs to be another type of organization, that can have objectives other than making money, where market forces still cause it to do as good a job as possible at that mission.
> It seems like there needs to be another type of organization, that can have objectives other than making money, where market forces still cause it to do as good a job as possible at that mission
Well, in the US you can have a benefit corporation, or else certification as a "B Corp", which I just learned are different things while googling it to put a link here. Previously my impression was that "B Corp" was a legal status, but that's wrong, it's a certification by a nonprofit. In the US, a benefit corporation has a separate legal status as "a type of for-profit corporate entity whose goals include making a positive impact on society."
Both are kind of a niche thing still. I've seen a few "B Corp" logos among sustainable food companies, like King Arthur flour.
>watch a non-profit board destroy a piece of software,
You're saying this while the top thread on HN at the same time is how Firefox is artificially slowed down on Youtube lol. That is why they lost market share. Not because they lack the incentive to shove ads in your face, but because companies that run the internet also run Firefox's competition.
Firefox is objectively fine. Linux is fine, and OpenAI as a research institute would be fine. They aren't stagnant; they're being gutted or undermined by competitors that will not see them succeed.