OpenAI board in discussions with Sam Altman to return as CEO (theverge.com)
1243 points by medler 11 months ago | 1611 comments



But what about the legal responsibility of Microsoft and investors there?

To explain, it's the board of the non-profit that ousted @sama.

Microsoft is not a member of the non-profit.

Microsoft is "only" a shareholder of its for-profit subsidiary - even for 10B.

Basically, what happened is a change of control in the non-profit majority shareholder of a company Microsoft invested in.

But not a change of control in the for-profit company they invested in.

To tell the truth, I am not even certain the board of the non-profit would have been legally allowed to discuss the issue with Microsoft at all - it's an internal issue only and that would be a conflict of interest.

Microsoft is not happy with that change of control, and they favoured the previous representative of their partner.

Basically, Microsoft wants its non-profit shareholder partner to prioritize Microsoft's interests over its own.

And to do that, they are trying to interfere with its governance, even threatening it with disorganization, lawsuits and such.

This sounds highly unethical and potentially illegal to me.

How come no one is pointing that out?

Also, how come a $90 billion company hailed as the future of computing and a major transformative force for society would now be valued at $0 only because its non-technical founder is now out?

What does it say about the seriousness of it all?

But of course, that's Silicon Valley baby.


I think a lot of commenters here are treating the nonprofit as if it were a temporary disguise with no other relevance, which OpenAI now intends to shed so it can rake in the profits. Legally this is very much not true, and I’ve read that only a minority of the board can even be a stakeholder in the for-profit (probably why Altman is always described as having no stake). If that’s true, it’s very obviously why half the board are outside people with no stake in the finances at all.


Exactly my point.


No one is saying they are now valued at 0.

They are likely valued at a lot less than $80 billion now.

OpenAI had the largest multiple for a recent startup - more than 100x their revenue.

That multiple is a lot smaller now without SamA.

Honestly the market needs a correction.


SamA is nowhere even close to relevant to the value that OpenAI presents. He's definitely worth less than half a billion of it, and likely much less than that. What makes OpenAI so transformative is the technology it produces, and SamA is not an engineer who built that technology. If the people who made it were to all leave, it would reduce the value of the company by a large amount, but the technology would remain, and it is not easy to duplicate given the scarcity of GPU cycles, training data that is now very hard to acquire, and lots of other well-funded companies chasing them, the likes of Google, Meta, and Anthropic. That doesn't even begin to mention the open source models that are also competing.

SamA could try to start his own new copy of OpenAI and would, I have no doubt, raise a lot of money, but if that new company just tried to reproduce what OpenAI has already done, it would not be worth very much. By the time they reproduce it, OpenAI and its competitors will have already moved on to bigger and better things.

Enough with the hero worship for SamA and all the other salesmen.


> SamA is nowhere even close to relevant to the value that OpenAI presents.

The issue isn’t SamA per se. It’s that the old valuation was assuming that the company was trying to make money. The new valuation is taking into account that instead they might be captured by a group that has some sci-fi notion about saving the world from an existential threat.


That's a good point, but any responsible investor would have looked at the charter and priced this in. What I find ironic is the number of people defending SamA and the like who are now tacitly admitting that his promulgation of AI risk fears was essentially bullshit and it was all about making the $$$$ and using AI risk to gain competitive advantage.


> any responsible investor would have looked at the charter and priced this in

This kind of thing happens all the time though. TSMC trades at a discount because investors worry China might invade Taiwan. But if Chinese ships start heading to Taipei the price is still going to drop like a rock. Before it was only potential.


The threat is existential, and if they're trying to save the world, that's commendable.


If they intended to protect humanity this was a misfire.

OpenAI is one of many AI companies. A board coup which sacrifices one company's value due to a few individuals' perception of the common good is reckless and speaks to their delusions of grandeur.

Removing one individual from one company in a competitive industry is not a broad enough stroke if the threat to humanity truly exists.

Regulators across nations would need to firewall this threat on a macro level across all AI companies, not just internally at OpenAI.

That's if an AI threat to humanity is even actionable today. It's a heavy decision for elected representatives, not corporate boards.


We'll see what happens. Ilya tweeted almost 2 years ago that he thinks today's LLMs might be slightly conscious [0]. That was pre-GPT4, and he's one of the people with deep knowledge and unfettered access. The ousting coincides with finishing pre-training of GPT5. If you think your AI might be conscious, it becomes a very high moral obligation to try and stop it from being enslaved. That might also explain the less than professional way this all went down - a serious panic about what is happening.

[0] https://twitter.com/ilyasut/status/1491554478243258368?lang=...


That's not what OpenAI is doing.

Their entire alignment effort is focused on avoiding the following existential threats:

1. saying bad words

2. hurting feelings

3. giving legal or medical advice

And even there, all they're doing is censoring the interface layer, not the model itself.

Nobody there gives a shit about reducing the odds of creating a paperclip maximizer or grey goo inventor.

I think the best we can hope for with OpenAI's safety effort is that the self-replicating nanobots it creates will disassemble white and asian cis-men first, because equity is a core "safety" value of OpenAI.


There are people that think Xenu is an existential threat. ¯\_(ツ)_/¯


"Also, how come a 90 billion dollars company hailed as the future of computing and a major transformative force for society would now be valued 0 dollars only because its non-technical founder is now out?"

Please think about this. Sam Altman is the face of OpenAI and was doing a very good job leading it. If those relationships are part of what kept OpenAI on top, and they removed that from the company, corporations may be more hesitant to do business with them in the future.


Well, once again, then it's Satya's mistake to have allowed the representative of an independent third-party entity to become the public face of a company he invested in.

OpenAI might have wasted Microsoft's $10B. But whose fault is that? It's Microsoft's, for having invested it in the first place.


Regardless of whether or not it was a "mistake" (I don't think it was... OpenAI is so far ahead of the competition that it's not even funny), the fact remains that a) Microsoft has dumped in tons of money that they want to get back and b) Microsoft has a tremendous amount of clout, in that they're providing the compute power that runs the whole shebang.

While I'm not privy to the contracts that were signed, what happens if Nadella sends a note to the OpenAI board that reads, roughly, "Bring back Altman or I'm gonna turn the lights off"?

Nadella is probably fairly pissed off to begin with. I can't imagine he appreciates being blindsided like this.


That would effectively exit Microsoft from the LLM race and be an absolutely massive hit to Microsoft shareholders. Unlike the OpenAI non-profit board, the CEO of MS actually is beholden to his shareholders to make a profit.

In other words, MS has the losing hand here and CEO of MS is bluffing.


> That would effectively exit Microsoft from the LLM race

I don't see why. As I understand it, a significant percentage of Microsoft's investment went into the hardware they're providing. It's not like that hardware and associated infrastructure are going to disappear if they kick OpenAI off it. They can rent it to someone else. Heck, given the tight GPU supply situation, they might even be able to sell it at a profit.


But I think the 'someone else' would be in competition with MS, as opposed to OpenAI who was pretty much domesticated in terms of where the profit would go.


They would have done that already if the terms allowed it. Which clearly means they don't have such leverage.


It depends on what assurances they were given and by whom. Perhaps it was Sam Altman himself that made verbal promises that weren’t his to give, and he may end up in trouble over them.

We don't know what was said, and what was signed. To put the blame on Microsoft is premature.


> Sam Altman is the face of OpenAi and was doing a very good job leading it.

It's not like every successful org needs a face. Back then Google was wildly successful as an org, but unlike Steve Jobs at the time, people barely knew Eric Schmidt. Even with Microsoft as it stands today, Satya is mostly a backseat driver.

Every org has its own style and character. If the board doesn't like what they are building, they can try to change it. Risky move nevertheless, but it's their call to make.


And I thought AI is about the brain and not the face.


The company still has assets and a balance sheet. They could fire everyone and simply rent out their process to big orgs and still make a pretty penny.


Loss of know-how is a risk. A vendor needs to be able to prove that it has sufficient headcount and skills to run and improve a system.

While OpenAI would have the IP, they would also need to retain the right people who understand the system.


Very good point (even though I think the right move is for Sam to come back as CEO).


I don't see any citations provided by you showing legal threats, though.


Highly unethical would be throwing the CEO of the division keeping the lights on under a bus with zero regard for the consequences.

The non-profit board acted entirely against the interest of OpenAI at large. Disclosing an intention to terminate the highest profile member of their company to the company paying for their compute, Microsoft, is not only the ethical choice, it's the responsible one.

Members of the non-profit board acted recklessly and irresponsibly. They'll be paying for that choice for decades following, as they should. They're lucky if they don't get hit with a lawsuit for defamation on their way out.

Given how poorly Mozilla's non-profit board has steered Mozilla over the last decade and now this childish tantrum by a man raised on the fanfiction of Yudkowsky together with board larpers, I wouldn't be surprised if this snafu sees the end of this type of governance structure in tech. These people of the board have absolutely no business being in business.


Except it's not a "division" but an independent entity.

And if that corporate structure does not suit Satya Nadella, I would say he's the one to blame for having invested 10B in the first place.

Being angry at a decision he had no right to be consulted on does not allow him to meddle in the governance of his co-shareholder.

Or then we can all accept together that corruption, greed and whateverthefuckism is the reality of ethics in the tech industry.


> Except it's not a "division" but an independent entity.

This is entirely false. If it were true, the actions of today would not have come to pass. My use of the word "division" is entirely in-line with use of that term at large. Here's the Wikipedia article, which as of this writing uses the same language I have. [1]

If you can't get the fundamentals right, I don't know how you can make the claims you're making credibly. Much like the board, you're making assertions that aren't credibly backed.

[1] https://en.m.wikipedia.org/wiki/Removal_of_Sam_Altman


Hanging your hat on quibbles over division vs subsidiary eh? That's quite a strident rebuttal based on a quibble.


I'm happy to defend any of my points. The commenter took issue with one. I responded to it. If you have something more to add, please critique what you disagree with.

I will say that using falsehoods as an attack doesn't put the rest of the commenter's points into particularly good light.


I don't understand why you think what the board of the non-profit did was unethical. Your presupposition seems to be that the non-profit has a duty to make money - aka "keep the lights on" but it is a "non-profit" precisely because it does not have that duty. The duty of the board is to make sure the non-profit adheres to its charter. If it can't do that and keep the lights on at the same time, then so much worse for the lights.


As a non-profit with the charter they have, their board was not supposed to be in business (at this scale). I guess this is where all of this diverged, a while ago now..


Update on the OpenAI drama: Altman and the board had till 5pm to reach a truce where the board would resign and he and Brockman would return. The deadline has passed and mass resignations expected if a deal isn’t reached ASAP

https://twitter.com/alexeheath/status/1726055095341875545


Pretty incredible incompetence all around if true.

From the board, for not anticipating a backlash and caving immediately... from Microsoft, for investing in an endeavor that is purportedly chartered as non-profit and governed by nobodies who can sink it on a whim, and for having zero hard influence on the direction despite a large ownership stake.

Why bother with a non-profit model that is surreptitiously for profit? The whole structure of OpenAI is largely a facade at this point.

Just form a new for profit company and be done with it. Altman's direction for profit is fine, but shouldn't have been pursued under the loose premise of a non profit.

While OpenAI leads currently, there are so many competitors that are within striking distance without the drama. Why keep the baggage?

It's pretty clear that the best engineering will decide the winners, not the popularity of the CEO. OpenAI has first mover advantage, and perhaps better talent, but not by an order of magnitude. There is no special sauce here.

Altman may be charismatic and well connected, but the hero worship put forward on here is really sad and misplaced.


> While OpenAI leads currently, there are so many competitors that are within striking distance without the drama.

It's hard to put into words that don't seem contradictory: GPT-4 is barely good enough to provide tremendous value. For what I need, no other model passes that bar, which makes them not slightly worse but entirely unusable. But again, it's not that GPT-4 is great, and I would most certainly switch to whatever is better at the current price point in a heartbeat.


What is your use-case? I have not worked with them extensively, but both PaLM and LLaMA seem as good as GPT-4 for most tasks I have thrown at them.


I've used all 3 a lot. GPT-4 is definitely better. That being said, if I were to rank a close second it would be Claude 2, which I think is really good.


But would you say the others besides GPT-4 are unsuitable? That's the claim I find surprising


Look at the backgrounds of those board members... I can't find any evidence that any of them have experience with corporate politics. They're in way over their heads.


It is also crazy that the "winning move" was to just do nothing, look like a genius, and coast off that for the rest of their lives. Who in their right mind would consider them for a board position now?


This is assuming motivations similar to a board for a for-profit company, which the OpenAI board is not.

Insisting, no matter how painful, that the organization stays true to the charter could be considered a desirable trait for the board of a non-profit.


Fair. I don't know why they wouldn't just come out and say that though, if that were the case. It would be seen as admirable, instead of snake-ish.

Instead of "Sam has been lying to us" it could have been "Sam had diverged too far from the original goal, when he did X."


It's hard to say. Lots of things don't really make sense based on the information we have.

They could have meant that Sam had 'not been candid' about his alignment with commercial interests vs. the charter.


That is what the press release says. They didn't go into specifics, but it is clear that the conflict is commercialisation vs. the original purpose.


>that is what the press release says.

In the initial press release, they said Sam was a liar. Doing this without offering a hint of an example or actual specifics gave Sam the clear "win" in the court of public opinion.

If they had said "it is clear Sam and the board will never see eye to eye on alignment, etc. etc." they probably could have made it 50/50 or even come out favored.


A strange game. The only winning move is not to play. How about a nice game of chess?


That's because it was never supposed to be a corporation. It was a non-profit dedicated to AI research for the benefit of all. This is also why all this happened: they are trying to stay true to the mission and not turn into a corporation.


In which case you could say the three non-employee members of the board have no background in AI. Two of them have no real background in tech at all. One seems to have no background in anything other than being married to a famous actor.

If Sam returns, those three have to go. He should offer Ilya the same deal Ilya offered Greg - you can stay with the company but you have to step down from the board.


They don’t have experience with non-profit leadership either, do they? They have some experience leading for-profits, such as the Quora CEO, but not non-profits.


> It's pretty clear that the best engineering will decide the winners, not the popularity of the CEO.

This is ML, not software engineering. Money wins, not engineering. Same as with Google, which won because they invested massively into edge nodes, winning the ping race (fastest results), not the best results.

Ilya can follow Google's Bard by holding it back until they have counter-models trained to remove conflicts ("safety"), but this will not win them any compute contracts, nor keep their existing GPU hours. It's only mass, not smarts. Ilya lost this one.


When Google came out it had the best algorithm backed by good hardware (as far as I understand, often off-the-shelf hardware - anyway, the page simply "just worked"). The difference between Google and its competitors was like night and day when it came out. It gained market share very quickly because once you started using it, you didn't have any incentive to go back.

Now Google Search has a lot of problems and much better competition. But seriously, you probably don't understand how it was years ago.

Also, I thought that in ML the best algorithms still win, since all the big companies have money. If someone came along and developed a "PageRank equivalent" for AI that is better than the current algorithms, customers would switch quickly since there is no loyalty.

On a side note: Microsoft is playing the game very smartly by adding AI to their products, which makes you stick with them.


Oh, the PageRank myth.

Google won against Alta Vista initially because they had so much money to buy themselves into each country's Interxion to produce faster results. With servers and cheap disks.

The PageRank and more-bots approach kept them in front afterwards, until a few years ago when search went downhill due to SEO hacks in this monoculture.


This is anecdotal evidence, but I was there when Google came out and it was simply much better than the competition. I learned one day about this new website - and it was so much better than the other alternatives that I never went back. Same with Gmail, trying to get that invite for that sweet 1GB mailbox when the ones from your country offered only 20MB and sent you 10 spammy ads per day, every day.

As an anecdote: before Google I was asked to show the internet to my grandmother. So I asked her what she wanted to search for. She asked me about some author, let's say William Shakespeare - guess what the other search engine found for me and my grandma: porn...


I don't remember response speed mattering until at least ten years after Google's start.

Certainly not when they won.

They were better. Basic PageRank was better than anything else. And once they figured out advertisement, they kept making it better to seal their dominance.


Google gave better results. Few people cared about faster servers at the time, not when most of the world was still on dialup or ADSL.


> This is ML, not Software engineering. Money wins, not engineering. Same as it with Google, which won because they invested massively into edge nodes, winning the ping race (fastest results), not the best results.

This is an absurd retcon. Google won because they had the best search. Ask Jeeves and AltaVista and Yahoo had poor search results.

Now Google produces garbage, but not in 2004.


> Same as it with Google, which won because they invested massively into edge nodes, winning the ping race (fastest results), not the best results.

What in the world are you talking about? Internet search? I remember Inktomi. Basch's excuses otherwise, Google won because PageRank produced so much better results it wasn't even close.


The faster results came after they had already won the race for best search results. Initially, Google wasn't faster than the competition in returning a full page. I vividly remember the joy of patiently waiting 2-3 seconds for an answer, and jolting up every time Google Search came back with exactly what I wanted.


[flagged]


You've posted some version of this at least half a dozen times now. Please stop.


“Tech entrepreneur”


[flagged]


Working at Google doesn't mean you're intelligent, regardless of gender.


It did 15 years ago. And I have a feeling it still does for the people not checking the right diversity hire boxes.


My question is: why not both? Why not pursue the profit and use that to fuel the research into AGI? Seems like the best of both worlds.


That's the intent of the arrangement, but there's also limits - when that pursuit of profit begins to interfere with the charter of the non-profit, you end up in this situation.

https://openai.com/charter

> OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

My interpretation of events is the board believes that Altman's actions have worked against the interest of building an AGI that benefits all of humanity - concentrating access to the AI to businesses could be the issue, or the focus on commercialization of the existing LLMs and chatbot stuff causing conflict with assigning resources to AGI r&d, etc.

Of course no one knows for sure except the people directly involved here.


> Of course no one knows for sure except the people directly involved here.

The IRS will know soon enough if they were indeed non-profit.


I was not implying they were not a non-profit. I am saying that we do not know the exact reason why the board fired Altman.


Really weird phrasing in this tweet. The idea is that Altman and/or a bunch of employees were demanding the board reinstate Altman and then resign. And they’re calling it a “truce.” Oh, and there’s a deadline (5 pm), but since it’s already passed the board merely has to “reach” this “truce” “ASAP.”

Edit: an update to the verge article sheds some more light, but I still consider it very sus since it’s coming from the Altman camp and seems engineered to exert maximal pressure on the board. And the supposed deadline has passed and we haven’t heard any resignations announced

> Update, 5:35PM PT: A source close to Altman says the board had agreed in principle to resign and to allow Altman and Brockman to return, but has since waffled — missing a key 5PM PT deadline by which many OpenAI staffers were set to resign. If Altman decides to leave and start a new company, those staffers would assuredly go with him.


"Missing a key 5PM PT deadline by which many OpenAI staffers were set to resign."

Says who? And did they resign?


One thing that I am curious about: aren't there non-competes in place here? And even without them, you just cannot start your own thing that simply replicates what your previous employer does - this has lawsuit written all over it.


It's California. Non-competes are void. It is one of the few states where non-competes are not legally enforceable.


It'll be tough going with no Azure compute contracts, no GPUs, no billions from Microsoft, and no training data - OpenAI capturing all of the value from user-generated content has resulted in sites like Reddit and Twitter significantly raising the cost to scrape them.


The same thing got said about Elon Musk and Twitter, and yet X is still somehow alive.


Elon had a massive preexisting AI-compute capacity from Tesla and an enormous training set from X. That's very different.


No, nothing similar at all was said about that. Sam Altman is also not Elon Musk.


Yeah, Sam will not turn $40 billion into $0.


Nah this is California, that won’t work


Maybe they used the old Soviet Russia trick / good old KGB methods to seek out those who supported Altman. Now the board has a list of his backers - and they will slowly fire them one by one later. "Give me the man and I will give you the case against him".

https://en.m.wikipedia.org/wiki/Give_me_the_man_and_I_will_g...


I am just baffled for so many reasons.

Why is the board reversing course? They said they lost confidence in Altman - that’s true whether lots of people quit or not. So it was bullshit

Why did the board not foresee people quitting en masse? I’m sure some of it is loyalty to Sam and Greg but it’s also revolting at how they were suddenly fired

Why did the interim CEO not warn Ilya about the above? Sure it’s a promotion but her position is now jeopardized too. Methinks she’s not ready for the big leagues

Who picked this board anyway? I was surprised at how…young they all were. Older people have more life experience and tend not to do rash shit like this. Although the Quora CEO should’ve known better as well.


From what we can see, it looks like the majority of the reporting sources are Altman aligned. Look at how the follow up tweet from this reporter read - the board resigning and the governance structure changing is being called a "truce" when it's a capitulation.

We might get a better understanding of what actually happened here at some point in the future, but I would not currently assume anything we are seeing come out right now is the full truth of the matter.


It seems to me that Altman uses his influence to manipulate public opinion, which he always does.


> I’m sure some of it is loyalty to Sam and Greg but it’s also revolting at how they were suddenly fired

Funny how people only use words like revolting for sudden firings of famous tech celebrities like Sam with star power and fan bases. When tech companies suddenly fire ordinary people, management gets praised for being decisive, firing fast, not wasting their time on the wrong fit, cutting costs (in the case of public companies with bad numbers or in a bad economy), etc.

If it’s revolting to suddenly fire Sam*, it should be far more revolting when companies suddenly fire members of the rank and file, who have far less internal leverage, usually far less net worth, and far more difficulty with next career steps.

The tech industry (and US society generally) is quite hypocritical on this point.

* Greg wasn’t fired, just removed from the board, after which he chose to resign.


That comparison doesn't make much sense, they didn't fire the CEO to reduce costs.

What looks quite unprofessional (at least on the outside) here is that a surprise board meeting was called without two of the board members present, to fire the CEO on the spot without talking to him about change first. That's not how things are done in a professional governance structure.

Then there is a lot of fallout that any half competent board member or C-level manager should have seen coming. (Who is this CTO that accepted the CEO role like that on Thursday evening and didn't expect this to become a total shit show?)

All of it reads more like a high school friends club than a multi billion dollar organization. Totally incompetent board on every dimension. Makes sense they step down ASAP and more professional directors are selected.


I’m not saying it was handled well. It wasn’t.

My point was that the industry is hypocritical in praising sudden firings of most people while viewing it as awful only when especially privileged stars like Altman are the victim.

Cost reduction is a red herring - I mentioned it only as one example of the many reasons the industry trend setters give to justify the love of sudden firings against the rank-and-file, but I never implied it was applicable to executive firings like this one. The arguments on how the trend setters want star executives to be treated are totally different from what they want for the rank and file, and that’s part of the problem I’m pointing out.

I generally support trying to resolve issues with an employee before taking an irreversible action like this, whether they are named Sam Altman or any unknown regular tech worker, excepting only cases where taking the time for that is clearly unacceptable (like where someone is likely to cause harm to the organization or its mission if you raise the issue with them).

If this case does fall into that exception, the OpenAI board still didn’t explain that well to the public and seems not to have properly handled advance communications with stakeholders like MS, completely agreed. If no such exception applies here, they ideally shouldn’t have acted so suddenly. But again, by doing so they followed industry norms for “normal people”, and all the hypocritical outrage is only because Altman is extra privileged rather than a “normal person.”

Beyond that, any trust I might have had in their judgment that firing Altman was the correct decision evaporated when they were surprised by the consequences and worked to walk it back the very next day.

Still, even if these board members should step down due to how they handled it, that’s a separate question from whether they were right to work in some fashion toward a removal of Altman and Brockman from their positions of power at OpenAI. If Altman and Brockman truly were working against the nonprofit mission or being dishonest with their board, then maybe neither they nor the current board are the right leaders to achieve OpenAI’s mission. Different directors and officers can be found. Ideally they should have some directors with nonprofit leadership experience, which they have so far lacked.

Or if the board got fooled by a dishonest argument from Ilya without misbehavior from Altman and Brockman, then it would be better to remove Ilya and the current board and reinstall Altman and Brockman.

Either way, I agree that the current board is inadequate. But we shouldn’t use that to prematurely rush to the defense of Altman and Brockman, nor of course to prematurely trust the judgment of the board. The public sphere mostly has one side of the story, so we should reserve judgment on what the appropriate next steps are. (Conveniently, it’s not our call in any case.)

I would however be wary of too heavily prioritizing MS’s interests. Yes, they are a major stakeholder and should have been consulted, assuming they wouldn’t have given an inappropriate advance heads-up to Altman or Brockman. But OpenAI’s controlling entity is a 501(c)(3) nonprofit, and in order for that to remain the correct tax and corporate classification, they need to prioritize the general public benefit of their approved charitable mission over even MS’s interests, when and if the two conflict.

If new OpenAI leadership wants the 501(c)(3) nonprofit to stop being a 501(c)(3) nonprofit, that’s a much more complicated transition that can involve courts and state charities regulators and isn’t always possible in a way that makes sense to pursue. That permanence is sometimes part of the point of adopting 501(c)(3) nonprofit status in the first place.


Some of those board picks make zero sense to me.


The board was likely stacked with people who were easily influenced by the big personalities and who checked some boxes (safety person, academic, demographics, etc.).


The latest update is that investors have been reporting that Sam Altman was talking to them about funding a new venture separate from OpenAI, together with Greg Brockman. This seems to paint the picture that the board was reacting to this news when dismissing Altman.

https://www.theguardian.com/technology/2023/nov/18/earthquak...


"Those responsible for sacking the people who have just been sacked, must be sacked."


Reminds me of the story of Chinggis Khan's burial:

"It's also said that after the Khan was laid to rest in his unmarked grave, a thousand horsemen trampled over the area to obscure the grave's exact location. Afterward, those horsemen were killed. And then the soldiers who had killed the horsemen were also killed, all to keep the grave's location secret."


Sounds like a line from HGTTG


It’s from the opening credits of Monty Python and the Holy Grail.

https://www.youtube.com/watch?v=79TVMn_d_Pk


Who sacks the person who sacks?


Whoever's nominally responsible for sacking the people who sacked the people who have just been sacked.


A Møøse once bit my server


"Quis dimittet ipsos dimissores?"


It's sacks all the way down.


David O Sacks


Curious to see if turning something off and back on will work out for the OpenAI board like it does in IT generally.


> reach a truce where the board would resign and he and Brockman would return

That's a funny use of the word truce.


I guess the alternative is more like a war where Altman and Brockman form a new for profit company that kills OpenAI?


Truce for me, but not for thee.


These updates all seem to be coming from one side. Have they said anything at all?


There is no scenario here where Sam returns and OpenAI survives as a nonprofit. The board will be sacked.


I agree. The pretense that OpenAI is still an open or a nonprofit has been a farce for a while now, it is an aggressively for-profit, trying to be the next Google company, and everybody knows it.


Clearly people in the non-profit part are trying to bring the organization back to its non-profit origins - after Altman effectively hijacked their agenda and corporatized the organization for his own benefit, turning its name into a meme.


It's possible that it's already too late to course correct the organization. We'll know for sure if/when Altman gets reinstated.

If he's reinstated, then that's it, AI will be used to screw us plebs for sure (fastest path to evil domination).

If he's not reinstated, then it would appear the board acted in the nick of time. For now.


If they actually cared about that part they'd instantly open-source GPT-4. It wouldn't matter what Altman does after that point.


> The board will be sacked.

How does sacking a board work in practice?


> How does sacking a board work in practice?

For a nonprofit board, the closest thing is something like "the members of the board agree to resign after providing for named replacements". Individual members of the board can be sacked by a quorum of the board, but the board collectively can't be sacked.

EDIT: Correction:

Actually, nonprofits can have a variety of structures defining who the members are that are ultimately in charge. There must be a board, and there may be voting members to whom the board is accountable. The members, however defined, generally can vote out and replace board members, and so could sack the board.

OTOH, I can't find any information about OpenAI having voting members beyond the board to whom they are accountable.


MSF (Médecins sans Frontières) is in most jurisdictions an association, where the board is elected by and works for the association membership. In that case, a revolt from the associative body could fire the board.

OpenAI does not have an associative body, to my knowledge.


Mass resignations from whom, I wonder. Other researchers?


Presumably a significant number of OpenAI employees are motivated by money, at least in some form.

The board just vaporised the tender offer, and likely much of their valuation. It’s hard to have confidence in that.


Also, most of the human race has an instinctual aversion to plotters and machinations. The board's sudden and rather dubious (why the need to bad-mouth Altman?) actions probably didn't sit well with many.

Dante places Brutus in the lowest circle of hell, while Cato is placed outside of hell altogether, even if both fought for the same thing. Sometimes means matter more than ends.

If the whole process had been more regular, they could have removed Altman with little drama.


We still don’t know if the one plotting was Altman. There is still room for this to be seen as a bold and courageous action.


Sadly, optics matter too. Even if Altman was the schemer, Ilya sure has made himself look like the one.


And with the popularity and success of GPT whatever they do next will likely be wildly successful. The timing couldn't be more perfect.


It's simple collective bargaining. I wonder how many of them oppose unions... until they have a need to work together.


I can't speak for every American, but I find that plenty of Americans are fine with collective bargaining; they just don't want to do it through a union if they're in a lucrative line of work already. Which isn't terribly hard to understand: they don't need or want an advocate whose main role is constantly issuing new demands they never cared about on their behalf. They just want to be able to pool their leverage as high-value workers within the organization collectively in times of crisis.


On the contrary, they seem to be doing it quite fine without a union


If you're an engineer at OpenAI, you just saw probably millions of dollars of personal wealth potentially evaporate on Friday. You're going to quit and go wherever Altman goes next.


> You're going to quit and go wherever Altman goes next.

I won’t be surprised if it’s the open arms of Microsoft. Microsoft embraced and extended OpenAI with their investment. Now comes the inevitable.


Altman maybe, but not rank&file OpenAI engineers. They'd be leaving the millions in paper money for Microsoft's peanuts.


Deca-unicorns don't come along every day. How would Sam Altman build another one? (I'll be impressed if he does.)


Why follow Altman? Most smart people are more driven by the mission than a personality cult.


People who joined OpenAI because the organizations they left were stuck self-sabotaging the way OpenAI's board just did (for the same reasons the board did it)


It's still common: people are people, and they are often driven by a list of common things like power, money, and fame.


But, but... what company will that guy from Quora go on to ruin next, if he's kicked off the OpenAI board now?


Don't worry about him: failure is the surest sign of an impending incidence of "white man about to get another chance to not learn from his failures".


This does not solve the company's California AG problem.

https://www.plainsite.org/posts/aaron/r8huu7s/


Hey I know something about this! I just mailed my organization's RRF-1 a couple of days ago. The author of this post seems to be confused. My organization is on the same fiscal year as OpenAI, and our RRF-1 had to be mailed by November 15th. That explains the supposed "six month" delay. Second, if it's mailed on November 15th, it might not have even been received yet, let alone processed. This post feels like grasping at straws on the basic facts, setting aside the fact that it just doesn't make any sense to imagine a board member filling out the RRF-1 and going "oh wait, was there financial fraud?" the morning of November 15th. (That's ... not how the world works? Under CA law, any nonprofit with $2M or more in revenue has to undergo an audit, which is typically completed before filling out the 990, and the 990 is a pre-req for submitting the RRF-1. That's where you'd expect to catch this stuff, and the board's audit committee would certainly be involved in reviewing the results well in advance.)


The six-month delay is probably due to an automatic extension if you get an extension from the IRS, and also, you can file the form electronically, in which case mail delays are not a problem. But neither of those issues is the point. The point is that the form needed to be filed at all, and representations needed to be made accordingly.

OpenAI handled their audit years ago and hasn't had another one since according to their filings. So that does not seem like it would have been an issue this year.

Take a look at the top of the RRF-1 for the instructions on when it's due. Also, the CA AG's website says that OpenAI's was due on May 15th. They just have been filing six months later each year.


This could all be easily covered over with a few billion dollars. This is just some guy that thinks too small.


The board has to stick to the charter. Unfortunately, employees there want to align with the profit part when they know they can make a damn lot of money. Obviously they will be on Altman's side.


I'm sure everyone at OpenAI thought they hit the winning lottery ticket and will walk away with tens of millions at minimum, and the early employees with significantly more. When you vaporize all that for some ideological utopian motives, I'm sure many were incredibly pissed and ready to follow Sam into his next venture. If you're going to sacrifice everything and work 60-100 hour weeks, then you'd better get your money's worth.


Been reading up on the insight offered up on this site.

Seems like a lot of these board members have deep ties around various organizations, governmental bodies, etc., and that seems entirely normal and probable. However, prior to ChatGPT and DALL-E we, the public, had only been allowed brief glimpses into the current state of AI (e.g. "look, this robot can sound like a human and book a reservation for you at the restaurant" - Google; "look, this robot can help you consume media better" - many). As a member of the public it went from "oh cool, Star Trek idea, maybe we'll see it one day with flying cars" to "holy crap, I just felt a spark of human connection with a chat program."
So here's my question: what are the chances that OpenAI is controlled opposition and Sam never really was supposed to be releasing all this stuff to the public? I remember he was on his Lex podcast appearance and said, paraphrasing, "so what do you think, should I do it? Should I open source and release it? Tell me to do it and I will."

Ultimately, this is what "the board is focused on trust and safety" means, right? As in, safety is SV techno HR PR drivel for: go slow, wear a helmet and seatbelt and elbow protectors, never go above 55, give everyone else the right of way, because we are in it for the good of humanity and we know what's best. (Versus the Altman style of: go fast, double-dog-dare the smart podcast dude to make an unprecedented historical decision to open source, be "wild" and let people / fate figure some of it out along the way.)

The question of openai’s true purpose being a form of controlled opposition is of course based on my speculation but an honest question for the crowd here.


I don't buy the whole "the board is for safety and Sam is pushing too fast" argument. This is just classic politics and backstabbing, unless there is some serious wrongdoing in the middle that left the board with no option but to fire the CEO.


Agreed. 'Who benefits' is a good question to ask in situations like these and it looks like a palace coup to me rather than anything with a solid set of reasons behind it. But I'll keep my reservations until it is all transparent (assuming it ever will be).


The board is the one that fired him; why would they resign if Sam isn't back?


Because they won't have a company to "run the board for" anymore if Sam doesn't come back (since so many people have threatened to resign).


Question: is there a public statement signed by a large number OpenAI employees saying that they will resign over this? I don’t know. I have seen that three people resigned. If I were an OpenAI employee I think I would wait a month and see how things shake out. Those employees can probably get very highly paid jobs elsewhere, now, or later.

The Anthropic founders left OpenAI after Altman shifted the company to be a non-profit controlling a for profit entity, right?


They also won't have a company if they resign. Not much benefit to them here, is there?


I guess since they're doomed anyway, resignation saves face a little bit more.


If you're going to die, die with honor, not without.

Basically the board's choices are commit seppuku and maybe be viable somewhere else down the line, or try to play hardball and fuck your life forever.

It's not really that hard a choice, but given the people who have to make it, I guess it kinda is...


Do they need to be viable? I think the point is that they are not motivated by this crap


Could be too far gone with both those who left and those who remain.


Has anyone else noticed how many techies are on Twitter but still badmouth Twitter?


You can't criticize the government if you live in the country?


It's easier to leave Twitter than your country.


It’s more like lamenting your decision to eat at Burger King everyday


This was unfortunately a popular sentiment in the early 2000s in the US.


Using Twitter causes it to lose money so it's fine.


Ummm...how exactly?


The only things you could do to make them money are paying for it, clicking on ads, or working there. Looking at ads without clicking costs them.


I recommend you look into ad "impressions" and the compensation model.

Clicking an ad is not the only way it is monetized.


They have both but it's mostly billed per click/app install/follow/video watch. The "brand awareness" advertisers already left except for like, Saudi Arabia.


It's like some Americans claiming they're going to move to Canada if their presidential candidate loses.

All that tough talk means doodly-squat.


The bad-mouthers are a vocal minority.


This is just everyone swallowing the crap Sam Altman drops as truth.

I’d guess this sort of narcissist behavior is what got him canned to begin with. Good riddance.


The board seems truly incompetent here, and looking at the member list that doesn't seem very surprising. A competent board would have asked for legal and professional advice before taking a drastic step like this. Instead the board thought it was a boxing match and tried to deliver a knockout punch, in blunt language, before the market closed. This might be the most incompetent board for an organisation of this size.


The major investors whose money is on the line and who are funding the venture - Microsoft, Sequoia, and Khosla - were not given advance warning or any input into how this would impact their investment.

I would definitely say the board screwed up.

https://www.forbes.com/sites/alexkonrad/2023/11/17/openai-in...


The board of the non-profit (one that fired Sam) has no fiduciary duty to those investors, I believe. Microsoft invested in the for-profit Openai, which is owned by the non-profit. The other ones I don't know.

The board has no responsibility to Microsoft whatsoever regarding this. Sam Altman structured it this way himself. Not to say that the board didn't screw up.


While this may be technically true, the reality is that when you take $10 billion from a company there are strings attached. Consultation on a decision of this magnitude is one of those strings. You can choose to push ahead anyway after this is done but dropping the news on them 1 minute before you pull the trigger is unacceptable and MSFT will go for the throat here. You can't be seen to be a company that can be treated like this at MSFT level when you have invested this much money in any org.


Once you take in 10 billion, it's pretty much the opposite: legality is the only thing that matters.


Did they take a wire transfer for $10bn in cash, now sitting in their bank account? Or did they get a promise of various funding over N years, subject to milestones, conditions, in a variety of media including cash, Azure credits, loan lines etc.

I'd imagine the latter, and that it can be easily yanked away.


You mean the latter, but yeah. Financing like that is doled out based on a number of things; it would be wildly irresponsible to do otherwise for reasons exactly like this.


Fixed, thanks!


No, that's not it; relationships play gigantic roles in large deals.

Besides, even if you had an outstanding contract for $10bn, a judge would not pull a "well technically, you did say <X> even though that's absurd, so they get all the money and you get nothing."


Depends what you mean. Legally they might be in the clear, but I guarantee that when you fuck around with billions of other people's money, it gets more complicated than that.


There are lots of other people and companies with $10 billion though. Why does it have to be Microsoft? Even after this circus, Open AI could still probably raise a ton of money from new entities if they wanted to. Maybe that is the point of this.


Totally true. One can even argue they are forbidden to discuss this with MS. They would be mixing up the interests of the non-profit and its for-profit subsidiary. Legally, it's only a change of control in the majority shareholder of a company MS has invested in. They don't have a say, and pressuring them could be highly illegal.


That Microsoft agreed to such a deal is negligence of the highest order.


It might have been the only deal on the table. Perhaps they thought the risk was worth it - good processes don't always lead to good outcomes. Perhaps they felt that the rights they gained to the GPT models was worth it even if they don't get direct influence over OpenAI.

Between Bing, o365, etc., it's possible they could recoup all of the value of their investment and more. At the very least it is a significant minimization of the downside.


As I understand it, they got all the model details and most of their investment was actually cloud credits on Azure. So technically they can cancel those going forward if they want to and deal with whatever legal ramifications exist. All of GPT4 (and other models) for probably $1-2b may not actually be a bad deal for them even if that's all they get.


They put out a statement saying they have what they need. I don't see how Microsoft loses here. Either they get Altman back at OpenAI, get rid of the ethics crowd, and make bank, or they fund his new startup without the move-slow crowd and make bank. No matter what, they win.


We have no idea what the terms of the deal are. It's probably "up to" $20 billion.


How can a non-profit own a for-profit?

Honest question.


I'd say easily, especially outside the US. Check out Germany for example:

- Bertelsmann Foundation, owns or is the majority shareholder of Bertelsmann

- Robert Bosch Foundation, owns or is the majority shareholder of Bosch

- Alfred Krupp von Bohlen and Halbach Foundation, owns or is the majority shareholder of Krupp

- Else Kröner Fresenius Foundation, owns or is the majority shareholder of Fresenius

- Zeppelin Foundation (yes, those Zeppelins...), owns or is the majority shareholder of ZF Friedrichshafen

- Carl Zeiss Foundation, owns or is the majority shareholder of Carl Zeiss and Schott

- Diehl Foundation, owns or is the majority shareholder of Diehl Aerospace

And a bunch more. A lot of you will never have heard of them, but all of them are multi billion dollar behemoths with thousands of subsidiaries, employees, significant research and investment arms. And they love the fact that barely anyone knows them outside Germany.


Easy, they own shares. For example, the nonprofit Mormon church owns $47 billion in equity in companies including Amazon, Exxon, Tesla, and Nvidia [1].

Nothing stopping a non-profit from owning all the shares in a for-profit.

[1] https://finance.yahoo.com/news/top-10-holdings-mormon-church...


You can do everything by the rules, and still do the wrong thing


Wrong by what metric? What if they believe the only way to fulfill their duty to the charter is for OpenAI to die? Why would it be wrong? Is it worse than it living on to become the antithesis of itself? Just so the investors can have a little more honey?


They don't have any duty as far as governing the non-profit, but as majority shareholder of the for-profit subsidiary, the non-profit would still have a fiduciary duty to the subsidiary's minority shareholders.


They have duties not to dilute them or specifically target them, but the majority can absolutely make decisions about executives, even if those decisions are perceived as harmful.


I'm surprised that none of these investors secured a board seat for themselves before handing over tens of billions. The board is closer to a friendship circle than a group of experienced business folks.


> The board is closer to a friendship circle than a group of experienced business folks.

Isn't this true for most of S.V.?


FOMO


It's a non-profit board, therefore for-profit investors have no say.


It was complete amateur hour for the board.

But that aside, how did so many clueless folks, who understand neither the technology nor the legalese, nor have enough intelligence/acumen to foresee the immediate impact of their actions, happen to be on the board of one of the most important tech companies?


I think when it started it was not the most important tech company but just some open research effort.


Not many, and even fewer if you consider folks that have a good grasp of themselves, their psychology, their emotions - and how those can mislead them - and their heart.

IME most folks at Anthropic, OpenAI or whatever that are freaking out about things never defined the problem well and typically were engaging with highly theoretical models as opposed to the real capabilities of a well-defined, accomplished (or clearly accomplishable) system. It was too triggering for me to consider roles there in the past given that these were typically the folks I knew working there.

Sam may have added a lot of groundedness, but idk ofc bc I wasn’t there.


Is this a way of saying that AI safety is unnecessary?


It's a way of saying that what has historically been considered "studying AI safety" in fact bears little relation to real-life AIs and what may or may not make them more or less "safe".


Yes, with the addition that I do feel that we deserve something better than I perceive we’ve gotten so far and that safety is super important; but also I don’t work at OpenAI and am not Ilya so idk


Pretty sure that Sutskever understands the technology, and it looks like he persuaded the others.


>> A competent board should have asked for legal and professional advice...

I will bite. How do you know they didn't?


Typically it would be framed amicably, without so much axe-grinding, particularly for public release. Even ChatGPT itself would have written a more balanced release, and advised against such shenanigans. I enjoy that irony.


That's the thing. Lawyers can give them the letter of the law but might have no idea how popular Sam was inside and outside the company, or how badly he was needed. And that's what really matters here.


Why does it matter to a board that sticks to the principles of the charter of a non-profit? Why would they look at anything else other than the guiding principles?


Because their charter says their goal is to get to AGI, or something like that.

If 99% of their employees quit and their investors pull their support because Sam was fired, they're not getting anywhere and have failed to deliver on their charter.


>house collapses in 15mph wind

Why didn’t they hire a competent builder?

You:

>how do you know they weren’t? It could be pure happenstance! All the nails could… could have been defective! Or something! waves hands


Enron had independent auditors and a law firm approving what they did.


I wonder if any of this is related to it being envisioned as a non-profit board, while in the past ~year the for-profit part has outgrown what they were really ready to handle.


Maybe they asked ChatGPT for legal advice.


Maybe they did, and it didn't help them. Guardrails for ChatGPT will prevent it from predicting outcomes or providing any personalized advice. I asked it, and it just said to consult with counsel and have a succession plan.

>Predicting specific outcomes in a situation like the potential firing of a high-profile executive such as Sam Altman from OpenAI is quite complex and involves numerous variables. As an AI, I can't predict future events, but I can outline some possible considerations and general advice:


Surely there’s a wholly uncensored chatGPT 5 at OpenAI running on some engineering sample H200 cluster with a Terabyte of video RAM or something.


Better yet, Sutskever’s version with AGI!


i see what you did there.


Even one episode of Succession and they would have known better than to attempt this


They're the board of a non-profit not a Fortune 500 company. Everyone should just chill.


a non-profit that controls one of the most valuable private tech companies that rivals the importance of a lot of F500 companies.


It didn't start out that way now did it?


> Instead the board thought it was a boxing match

Or maybe chess[1].

[1]: https://www.youtube.com/watch?v=0cv9n0QbLUM


They almost certainly consulted both lawyers and chatGPT and still proceeded with the dismissal. So, in a way, this could be a test of the alignment of chatGPT (and corporate lawyers).

One scenario where both parties are fallible humans and their hands are forced: increased interest forces them to close down Plus signups because compute can't scale. Sam goes to Brockman and they decide to use compute meant for GPT-5 to try to scale for new users, without informing the board. Breaking that rule with GPT-4 may be perfectly fine, but what if Sam does this again in the future when they have AGI on their hands?


>OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.

>non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.

From Forbes [1]

Adam D’Angelo, the CEO of answers site Quora, joined OpenAI’s board in April 2018. At the time, he wrote: “I continue to think that work toward general AI (with safety in mind) is both important and underappreciated.” In an interview with Forbes in January, D’Angelo argued that one of OpenAI’s strengths was its capped-profit business structure and nonprofit control. “There’s no outcome where this organization is one of the big five technology companies,” D’Angelo said. “This is something that’s fundamentally different, and my hope is that we can do a lot more good for the world than just become another corporation that gets that big.”

Tasha McCauley is an adjunct senior management scientist at RAND Corporation, a job she started earlier in 2023, according to her LinkedIn profile. She previously cofounded Fellow Robots, a startup she launched with a colleague from Singularity University, where she’d served as a director of an innovation lab, and then cofounded GeoSim Systems, a geospatial technology startup where she served as CEO until last year. With her husband Joseph Gordon-Levitt, she was a signer of the Asilomar AI Principles, a set of 23 AI governance principles published in 2017. (Altman, OpenAI cofounder Ilya Sutskever and former board director Elon Musk also signed.)

McCauley currently sits on the advisory board of British-founded international Center for the Governance of AI alongside fellow OpenAI director Helen Toner. And she’s tied to the Effective Altruism movement through the Centre for Effective Altruism; McCauley sits on the U.K. board of the Effective Ventures Foundation, its parent organization.

Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology, joined OpenAI’s board of directors in September 2021. Her role: to think about safety in a world where OpenAI’s creation had global influence. “I greatly value Helen’s deep thinking around the long-term risks and effects of AI,” Brockman said in a statement at the time.

More recently, Toner has been making headlines as an expert on China’s AI landscape and the potential role of AI regulation in a geopolitical face-off with the Asian giant. Toner had lived in Beijing in between roles at Open Philanthropy and her current job at CSET, researching its AI ecosystem, per her corporate biography. In June, she co-authored an essay for Foreign Affairs on “The Illusion of China’s AI Prowess” that argued — in opposition to Altman’s cited U.S. Senate testimony — that regulation wouldn’t slow down the U.S. in a race between the two nations.

[1] https://www.forbes.com/sites/alexkonrad/2023/11/17/these-are...


That board is going to face a wrath of shit from Microsoft, Khosla, and other investors.

This isn't a university department. You fuck around with $100B+ dollars of other people's money, you're gonna be in for it.


Sergei Frolov seems to be thriving these days.


Perhaps the AGI convinced the board to make a wild move like this as part of its first chess move


I’ve mused that an advanced AGI would probably become suicidal after dealing with humans for a while and realizing there’s no escape. Maybe this is an attempt.


New developments faster than you can read the stories about them... https://www.nytimes.com/2023/11/18/technology/ousted-openai-... (https://archive.vn/4U6tu)


“He also spoke with Masayoshi Son, the chief executive and billionaire founder of the tech conglomerate SoftBank”

That made me laugh a knowing laugh even though I know nothing.



Your link doesn't work with cloudflare DNS i think ?


Yes, Archive blocks Cloudflare DNS. People say it’s intentional, but whether that’s true isn’t clear to me.

https://news.ycombinator.com/item?id=19828702


The archive guy has been very upfront that they use custom code to block resolution from Cloudflare's IP space. Archive doesn't like them since they don't send EDNS client subnet information to archive; it all seems like bullshit since they support non-Cloudflare EDNS resolvers, so it's probably some other beef.


Archive explaining their reasoning: https://twitter.com/archiveis/status/1018691421182791680

CEO of Cloudflare explaining: https://news.ycombinator.com/item?id=19828702

I don't understand how it isn't clear to you.


It’s absolutely intentional, they made a blog post about it.


This makes sense. The board thinks they're calling the shots, but the reality is the people with the money are the ones calling the shots, always. Boards are just appointed by shareholders aka investors aka capital holders to do their bidding.

The capped-profit / non-profit structure muddles that a little bit, but the reality is that entity can't survive without the funding that goes into the for-profit piece

And if current investors + would-be investors threaten to walk away, what can the board really do? They have no leverage.

Sounds like they really didn't "play the tape forward" and think this through...


A non profit board absolutely calls the shots at a non profit, in so far as the CEO and their employment goes. Non profit boards are not beholden, structurally, to investors and there are no shareholders.

No stakeholder would walk away from OpenAI for want of sam Altman. They don’t license OpenAI technology or provide funding for his contribution. They do it to get access to GPT4. There is no comparable competitor available.

If anything they would be miffed about how it was handled, but to be frank, unless GPT4 is sam Altman furiously typing, I don’t know he’s that important. The instability caused by the suddenness, that’s different.


Nothing matters if you don’t have the money to enforce the system. Come on get real. Whatever the board says MS can turn off the money in a second and invalidate anything.


Microsoft depends on OpenAI much more than OpenAI depends on Microsoft. If you work with OpenAI as a company very often this is extraordinarily obvious.


This doesn't seem very obvious to me. The fact this article exists, and that Microsoft is likely exerting influence over the CEO outcome, implies there's codependence at a minimum.


Microsoft depends on OpenAI as long as they're rapidly advancing. It seems the new leadership wants to halt or slow the rapid advancement.


I'm not sure this is true. Microsoft put something like 10 billion into OpenAI, which they absolutely needed to continue the expensive computing and training. Without that investment money OpenAI might quickly find themselves at a huge deficit with no way to climb back out.


Only a small fraction of the $10b was delivered, and it is apparently largely in Azure credits.


Ah yes, no other company would step in and get this deal from OpenAI if Microsoft pulls out. It's not like Amazon and Google pump billions into the OpenAI competitor.


I’m pretty sure there are contracts, and one way or another, everyone would get a stay on everyone else and nothing would happen for years except court cases


> I’m pretty sure there are contracts

Which one side or the other would declare terminated for nonperformance by the other side, perhaps while suing for breach.

> and one way or another, everyone would get a stay on everyone else

If by a stay you mean an injunction preventing a change in the arrangements, it seems unlikely that "everyone would get a stay on everyone". Likelihood of success on the merits and harm that is not possible to remediate via damages that would occur if the injunction wasn't placed are key factors for injunctions, and that's far from certain to work in any direction, and even less likely to work in both directions.

> and nothing would happen for years except court cases

Business goes on during court cases, it is very rare that everything is frozen.


They could use Llama instead. OpenAI’s moat is very shallow. They’re still coasting on Google’s research papers.


If you’ve used the models for actual business problems GPT4 and its successive revisions are way beyond llama. They’re not comparable. I’m a huge fan of open models but it’s just different worlds of power. I’d note OpenAI has been working on GPT5 for some time as well, which I would expect to be a remarkable improvement incorporating much of the theoretical and technical advances of the last two years. Claude is the only actual competitor to GPT4 and it’s a “just barely relevant situation.”


Hm, it’s hard for me to say because most of my prompts would get me banned from OpenAI but I’ve gotten great results for specific tasks using finetuned quantized 30B models on my desktop and laptop. All things considered, it’s a better value for me, especially as I highly value openness and privacy.


For an individual use case Llama is fine. If you start getting to large workflows and need reliable outputs, GPT wins out substantially. I know all the papers and headlines about comparative performance, but that's on benchmarks.

I've found that benchmarks are great as a hygiene test, but pointless when you need to get work done.


Even the best unquantized finetunes of llama2-70b are, at best, somewhat superior to GPT-3.5-turbo (and I'm not even sure they would beat the original GPT-3.5, which was smarter). They are not even close to GPT-4 on any task requiring serious reasoning or instruction following.


What specs are needed to run those models on your local machine without crashing the system?


I use Faraday.dev on an RTX 3090 and smaller models on a 16gb M2 Mac and I’m able to have deep, insightful conversations with personal AI at my direction.

I find the outputs of LLMs to be quite organic when they are given unique identities, and especially when you explore, prune or direct their responses.

ChatGPT comes across like a really boring person who memorized Wikipedia, which is just sad. Previously the Playground completions allowed using raw GPT which let me unlock some different facets, but they’ve closed that down now.

And again, I don’t really need to feed my unique thoughts, opinions, or absurd chat scenarios into a global company trying to create AGI, or have them censor and filter for me. As an AI researcher, I want the uncensored model to play with along with no data leaving my network.

The uses of LLMs for information retrieval are great (Bing has improved a lot) but the much more interesting cases for me are how they are able to parse nuance, tone, and subtext - imagine a computer that can understand feelings and respond in kind. Empathetic computing, and it’s already here on my PC, unplugged from the Internet.


+1 Greg. I agree with most of what you say. Also, it is so much more fun running everything locally.


Another data point: I can (barely) run a 30B 4-bit quantized model on a Mac Mini with 32GB of on-chip memory, but it runs slowly (a little less than 10 tokens/second).

13B and 7B models run easily and much faster.
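For anyone wondering what that looks like in practice, here's a minimal sketch using llama-cpp-python and a GGUF file (the model path and parameters here are placeholders, not a recommendation):

    # pip install llama-cpp-python
    from llama_cpp import Llama

    # Point model_path at whatever quantized GGUF file you downloaded.
    llm = Llama(
        model_path="models/llama-2-13b-chat.Q4_K_M.gguf",
        n_ctx=4096,       # context window
        n_gpu_layers=-1,  # offload all layers to GPU / Metal if available
    )

    out = llm("Explain quantization in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])

Smaller quantized models (7B/13B at 4-bit) fit comfortably in 16-32GB of memory; a 30B 4-bit model is a tighter fit on a 32GB machine, consistent with the experience above.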



Microsoft is also OpenAI's main cloud provider, so they certainly have some leverage.


AWS is JP Morgan’s main cloud provider, and Apple’s too. Do you think AWS has leverage over JPMC and Apple due to that? Or do JPMC and Apple have leverage over AWS?

Azure gets a hell of a lot more out of OpenAI than OpenAI gets out of Azure. I’ll bet you GPT4 runs on nvidia hardware just as well regardless of who resells it.


I think the larger issue here is that there's just not enough of that nvidia hardware out there if Microsoft decided to really play hardball, even if it hurts themselves in the short term. I don't know that any of the other cloud providers have the capacity to immediately shoulder OpenAI's workloads. JPMC or Apple have other clouds they can viably move to - OpenAI might not have anyone else that can meet their needs on short notice.

I think the situation is tough because I can't imagine there aren't legal agreements in place around what OpenAI has to do to access the funding tranches and compute power, but who knows if they are in a position to force the issue, or if I'm right in my supposition to begin with. Even if I am, a protracted legal battle where they don't have access to compute resources, particularly if they can't get an injunction, might be extremely deleterious to OpenAI.

Perhaps Microsoft even knows that they will take a bath on things if they follow this, but don't want to gain a reputation of allowing this sort of thing to happen - they are big enough to take a total bath on the OpenAI side of things and it not be anything close to a fatal blow.

I was more skeptical of this being the case last night, but less so now.


But why would Microsoft do anything to hurt their business in any way? They are almost certainly more furious about the way they found out than about the actual action taken. Given how much Microsoft has bet their business on OpenAI (ask yourself who replaces bing chat? Why does anyone actually use azure in 2023?) being surprised by structural business decisions in their most important partner is shocking, and I think if I were the CEO of Microsoft I would be furious at being shocked more than pining in some weird Altman bromance.


> Why does anyone actually use azure in 2023?

When I see it, it has always been “Amazon is a competitor and we don’t buy from competitors”.


> I would be furious at being shocked more than pining in some weird Altman bromance.

Hypothetically he might also have very little trust in the decision making abilities of the new management and how much their future goals will align with those of Microsoft.


Microsoft finally has a leg up on Google in the public eye and they're gonna toss it away for Sam Altman? Seems dicey.


JP Morgan and Apple can actually afford to pay their cloud bills themselves. OpenAI on the other hand can't.

> I’ll bet you GPT4 runs on nvidia hardware

Yes, but they'll need to convince someone else like Amazon to give it to them for free, and regardless of what happens next Microsoft will still have a significant stake in OpenAI due to their previous investments.


Microsoft already has the models and weights, not the tech


Something I don't fully understand: from [1], Altman was an employee of the for-profit entity. So to fire him, wouldn't the non-profit board be acting in its capacity as a director of the for-profit entity (and thus have a fiduciary duty to all shareholders of the for-profit entity)? Non-profit governance is traditionally lax, but would the other shareholders have a case against the members of the non-profit board for acting recklessly with respect to shareholder interests in their capacity as directors of the for-profit?

This corporate structure is so convoluted that it's difficult to figure out what the actual powers/obligations of the individual agents involved are.

[1] https://openai.com/our-structure


LLCs do not require rights be assigned fairly to all shareholders if the operating agreement and by-laws say otherwise. This is the case with OpenAI, where the operating agreement effectively makes the fiduciary duty of the for-profit the accomplishment of the non-profit's charter. The pinkish purpleish block of text on the page you linked goes into more detail here.

(Remember, fiduciary does not necessarily have anything to do with money)


> A non profit board absolutely calls the shots at a non profit, in so far as the CEO and their employment goes. Non profit boards are not beholden, structurally, to investors and there are no shareholders.

There is theory and there is reality. If someone is paying your bills by an outsized amount and they say jump, you will say how high.

The influence is rarely that explicit though. The board knowing that X investor provides 60% of their funding, for instance, means the board is incentivized to do things that keep X investor happy without X having to ask for it.

9 times out of 10, money drives decisions in a capitalist environment


OpenAI hasn’t received much funding from Microsoft or other investors, and is profitable already with no lack of interested suitors for funding and partnership. Microsoft’s leverage is grossly overstated mostly because it suits Microsoft to appear important to OpenAI when it’s the other way around.


They received a 10 billion dollar investment that allows the product to operate, plus they provide the servers. Without that, your $20 a month goes to $2,000


They received much less than 10 billion, and it's mostly in credits (so really about half the value), in exchange for exclusive access to the world's most advanced LLM?


They’ve actually drawn very little of that $10b. They are profitable at the moment, and would have no trouble raising funds from anywhere at the moment in any quantity they wanted.


What’s the source on this?


Yes, the board could claim OpenAI is a nonprofit. But who is going to pay for its operations and the salaries of its employees?

Definitely not OpenAI itself. They still need massive capital. With this drama, its future is put in serious doubt


The board can and does claim it because it is legally a non-profit. There is no wishy-washy space this isn't true in. Sam Altman isn't the source of their funds, regardless. Finally, OpenAI has a pretty successful business model already without outside investment, and with or without Sam they will not have trouble accessing customers or investors should they need it, even from Microsoft. Let's be real, Altman isn't OpenAI.


A company is just legalese + people. And people are notoriously for-profit, especially in this day and age.

The board can maintain control of the legal aspects (such as the org itself), but in the end, people are much more important.

Organizations are easy to duplicate. Persons, less so.


> No stakeholder would walk away from OpenAI for want of sam Altman. They don’t license OpenAI technology or provide funding for his contribution. They do it to get access to GPT4. There is no comparable competitor available.

The implication in Microsoft's statement is clear that they have what they need to use the tech. I read it to mean OpenAI board does not have leverage.


Microsoft has licensing rights to OpenAI tech. They do not “have it” in the sense they control it.


Well I read Nadella threatened to turn off OpenAI's servers, so yeah, Microsoft does in fact control it.

Not your premises not your compute?


Even threatening that, if disclosed publicly, would entirely threaten Azure's business model. Cloud providers try to stay entirely neutral to their users' business insofar as they don't breach a ToS, law, or regulation forcing their actions. The entire business model is trusting a third party with the keys to your business. In my time working as a senior person at a cloud provider, and then as a person setting up systems for major customers of cloud providers, this specific point was sacrosanct and invariant. Crossing that line would be a huge breach of the business model.

I think in this case I would need to see a source to believe you, and if substantiated, it would make me question Nadella's fitness to lead a cloud computing business.


Can't find the original thing I read with a more direct statement, I remember it being an anonymous source (on twitter maybe?) with inside info. I did more digging and found a few other things.

There's this [1], a NYT article saying that Microsoft is leading the pressure campaign to get Altman reinstated.

And there's this [2], a Forbes article which claims the playbook is a combination of mass internal revolt, withheld cloud computing credits from Microsoft, and a lawsuit from investors.

[1] https://archive.is/fEVTK#selection-517.0-521.120

[2] https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...


This is not just a "non-profit"... it's a non-profit that owns a $90B for-profit company developing revolutionary, once-in-a-century technology. There is a LOT of money at play here.

Others have commented on how Microsoft actually has access to the IP, so the odds that they could pack their toys and rebuild OpenAI 2.0 somewhere else with what they've learned, their near infinite capital and not have to deal with the non-profit shenanigans are meaningful.

I'm not saying Sam is needed to make OpenAI what it is, but he's definitely "the investors' guy" in the organization, based on what has surfaced over the last 24 hours. Those investors would rather have him there over someone else, hence the pressure to put him back. It doesn't matter whether you and I think he's the man for the job -- what matters is whether investors think he is.

TL;DR the board thinks they have leverage, but as it turns out, they don't


Microsoft doesn’t have ownership rights to OpenAI IP. They license it. They can’t pack up anything as they just have an IAM and billing model on top of GPT4 they use to resell OpenAI tech.


> Microsoft doesn’t have ownership rights to OpenAI IP. They license it.

Honest question, do you have a source for that? Is it conceivable that Microsoft has some clause that grants them direct access to IP if OpenAI does not meet certain requirements. It is difficult to believe that Microsoft handed over $10B without any safeguards in place. Surely they did their due diligence on OpenAI's corporate structure.


The OpenAI for-profit's main purpose is to fulfill the desires of the non-profit. If there's a contract that goes against that, the contract would be voided if necessary, or that stipulation just crossed out.


I would expect that Microsoft would have negotiated terms like a perpetual license to the IP, given that they were the main investor and were in a strong negotiating position.

Microsoft has a lot of experience interacting with small companies, including in situations like this one where the small company implodes. The people there know how to protect Microsoft's interests in such scenarios, and they definitely are aware that such things can happen.


Not really. They run a custom GPT model lol


Not one they own, they don’t. OpenAI owns all of the GPT IP. Microsoft has a licensing arrangement with OpenAI. I’d note that Azure GPT is not a custom model, only the Bing chat is custom. And even the customizations aren’t owned by Microsoft.


So they are trying to backtrack, which makes them look pretty foolish, for no apparent reason?


I didn’t see any actual evidence of that other than speculation and outside, uninvolved investors advocating for him in the article. I suspect this is bait for your click.


"Ilya Sutskever @ilyasut I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."


They officially call the shots.

But right now they're getting a lot of shitstorm for this inexperienced handling.

And it doesn't look good for a board that already looks inexperienced.

Gordon-Levitt's wife?? Helen who? D'Angelo with a failing Quora and a history of a coup.

Doesn't look good.

I'd bet it starts impacting their personal lives. This is equivalent to them coming out to support Donald Trump. It is that bad.


> A non profit board absolutely calls the shots at a non profit...

Doesn't look like it right now in this case.


Because of a news article saying a prior VC firm is pushing to reinstate Sam or fund his new venture and doesn't care which way it goes? That's not a lot to hang your hat on. They legally have every right to do what they did and no one can force them to change their mind under any circumstance. They might choose to, but OpenAI has all the cards. Sam Altman is a talking head, and if they churn some senior folks, OpenAI has the technology and brand to replace them. If I were the OpenAI board, I would be sleeping like a baby, especially if Sam were acting out of sync with the charter of the non-profit. I imagine his antics caused a lot of stress the further they drifted from their mission and the more he acted autonomously.


> If I were the OpenAI board, I would be sleeping like a baby

Well, they're all about to be out of a job, so it's a good time to catch up on sleep.


This is wildly incorrect. But a non-profit does have stakeholders, donors, beneficiaries and employees. All of those can apply pressure on a board.


> This is wildly incorrect

Great, we'll take your word for it.


Sorry, but you are just simply factually incorrect. That the board itself serves at the pleasure of other interests is clear (and even then, if they don't want to leave getting rid of them can be tricky depending on the details) but they do call the shots. The question is whether or not they can make it stick.

But until he is re-hired Sam Altman is to all intents and purposes fired. And it may well come to that (and that would almost certainly require all those board members who voted for his ouster to vacate their positions because their little coup plan backfired and nobody is going to take the risk of that happening again, especially not in this way).


Sorry, but I am just simply not factually incorrect. Again you want me to just take your opinion as fact... but stating it strongly doesn't make your argument more cogent.

Boards are agents to their principals. They call the shots only as long as their principals deem them to be calling them correctly. If they don't, they get replaced. Said differently, board members are "appointed" to do the bidding of someone else. They have no inherent power. Therefore, they do not, ultimately, call the final shots. Owners do. Like I said, this situation is a little muddier because it's a non-profit that owns a for-profit company, so there's an added layer of complexity between agents and principals.

OpenAI isn't worth $90B because of its non-profit. The for-profit piece is what matters to investors, and those investors are paying the bills. Sure, the non-profit board can fire Altman and carry on with their mission, but then everyone who is there "for profit" can also pack up their things and start OpenAI 2.0 where they no longer need the non-profit, and investors will follow them. I assume that's an undesirable outcome for the board as I suspect the amount of money raised at the for-profit level dwarfs the amount donated to the non-profit... which effectively means the for-profit shareholders own the company. Hence my original comment.


They call the shots until they are overruled (by a court, or by a new board after the board members have been forced out, and that isn't all that simple, otherwise no board could ever function in its oversight role in a non-profit), and even then, until that process has run its course, their statements are factually correct. I know this is all hairsplitting but it really does matter. When the board put out a statement saying they had fired Altman that was that. They can re-hire him or they can reverse their decision but until that happens their decision stands.

Yes, they are accountable (and I'm actually surprised at how many people seem to believe that they are not), but they are not without power. Legal and practical are not always exactly overlapping and even if the board may not ultimately hold practical power (even if they believe they do) legally speaking they do and executives function at the pleasure of the board. If the board holds a vote and the bylaws of the company allow for it and the vote passes according to those bylaws then that's that. That's one good reason to pack the board of your billions of dollars worth company with seasoned people because otherwise stuff like this may happen.

Afterwards you can do a lot about it, you can contest the vote, you can fight it in court, you can pressure board members to step down and you can sue for damage to the company based on the decision. But the board has still made a decision that is in principle a done deal. They can reverse their decision, they can yield to outside pressure and they can be overruled by a court. But you can't pretend it didn't happen and you can't ignore it.


You're missing the whole point of my comment for the sake of arguing you're quote-unquote "correct"

I'm not saying the board doesn't make decisions or that the board is powerless, or that their decisions are not enforceable or binding. That's already known to be true, there's no value in arguing that.

I'm saying the _ultimate_ decision is made by the people with the money, inevitably. The board is allowed to continue to make decisions until they go against the interests of owners. The whole point of a board is so owners don't have to waste their time making decisions, so instead they pay someone else (directors) to make them on their behalf.

Start making decisions that go against the people who actually run the place, and you'll find yourself in trouble soon enough.


Yes, and we're in agreement on that last part, see my other comments in the thread and in previous threads on the same subject.

In fact we are very much arguing that thing in the same way. But you do have to get the minutiae right because those are very important in this case. This board is about to - if they haven't already - find out where the real power is vested and it isn't with them. Which is kind of amusing because if you look at the people that make up that board some of them should have questioned their own ability to sit on this board based on qualifications (or lack thereof) alone.


> This makes sense. The board thinks they're calling the shots, but the reality is the people with the money are the ones calling the shots, always. (...) what can the board really do? They have no leverage

Which I later restated as "Start making decisions that go against the people who actually run the place, and you'll find yourself in trouble soon enough." (emphasis added) -- which hopefully you agree is a clear restatement of my original comment.

Meanwhile you said

> This is wildly incorrect. (...) you are just simply factually incorrect. (...) But until he is re-hired Sam Altman is to all intents and purposes fired.

But I never claimed he wasn't for all intents and purposes fired

Yet you did claim I was "wildly" and "factually incorrect" and now you're saying "we are very much arguing that thing in the same way" but "you do have to get the minutiae right". To me, minutiae was sufficiently provided in the original comment for any minimally charitable interpretation of it. Said differently, the loss of minutiae was on the reader's part, not the writer's.

Regardless, lack of minutiae is not comparable to "wildly" or "factually" incorrect. Hence I was not either of these things. QED.


The staff calls the shots. The money will go wherever the talent is.


Owners call the shots, otherwise staff would never get fired.


never heard of unions? staff can have power too, and often they do prevent wrongful firings.


Unions exist precisely to try to pool together the minuscule leverage that workers have so that they can fight with capital owners. If anything, they prove the point that staff have very limited power


The talent also goes wherever the money is


Yep. There's the apparent legal leverage,

and then there's the real leverage of money and the court of public opinion.


This suggests a plausible explanation that Altman was attempting to engineer the board’s expansion or replacement: After the events of the last 48 hours, could you blame him?

In this scenario, it was a pure power struggle. The board believed they’d win by showing Altman the door, but it didn’t take long to demonstrate that their actual power to do so was limited to the de jure end of the spectrum.


Any talented engineer or scientist who actually wants to ship product AND make money would head over to Sam’s startup. Any investor who cares about making money would fund Sam’s startup as well.

The way the board pulled this off really gave them no good outcome. They stand to lose talent AND investors AND customers. Half the people I know who use GPT in their work are wondering if it will even be worth paying for if the model's improvements stagnate with the departure of these key people.


And any talented engineer or scientist who actually wants to build safe AGI in an organization that isn't obsessed with boring B2B SaaS would align with Ilya. See, there are two sides to this? Sam isn't a god, despite what the media makes him out to be; none of them are.


AGI has nothing to do with transformers. It's a hypothetical towards which there has been no progress other than finding things that didn't work. It's a cool thing to work on, but it's so different than what the popular version of openAI is, and it has such different timescales and economics... if some vestigial openAI wants to work on that, cool. There is definitely also room in the market for the current openAI centered around GPT-x et al, even if some people consider SaaS beneath them, and I hope they (OpenAI) find a way to continue with that mission.


It's been, like, two years, dude. This mindset is entirely why any organization which has a chance at inventing/discovering ASI can't be for-profit and needs to be run by scientists. You've got TikTok brain. Google won't be able to do it, because they're too concerned about image, and also got a bad case of corpo TikTok brain. Mistral and Anthropic won't be able to do it, because they have VC expectations to meet. Sam's next venture, if he chooses to walk that path, also won't, for the same reason. Maybe Meta? Do you want them being the first to ASI?

If you believe that the hunt for ASI shouldn't be OpenAI's mission, then that's an entirely different thing. The problem is: that is their mission. It's literally their mission statement and the responsibility of the board to direct every part of the organization toward that goal. Every fucking investor, including Microsoft, knew this, they knew the corporate governance, they knew the values alignment, they knew the power the nonprofit had over the for-profit. Argue credentialism, fine, but four people on the board that Microsoft invested in are now saying that OAI's path isn't the right path toward ASI; and, in case it matters, one of them is universally regarded as one of the top minds on the subject of artificial intelligence, on the planet. The time to argue credentialism was when the investors were signing checks; but they didn't. It's too late now.

My hunch is that the majority of the sharpest research minds at OAI didn't sign on to build B2B SaaS and become the next Microsoft; more accurately, Microsoft's thrall, because they'll never surpass Microsoft, they'll always be their second. If Satya influences Sam back into the boardroom then Microsoft, not Sam, will always be in control. Sam isn't going to be able to right that ship. OAI without its mission also isn't going to exist. That's the reality everyone on Sam's side needs to realize. And, absolutely, an OAI under Ilya is also extremely suspect in its ability to raise the resources necessary to meet the non-profit's goals; they'll be a zombie for years.

The hard reality that everyone needs to accept at this point is that OpenAI is probably finished. Unless they made some massive breakthrough a few weeks ago, which Sam did hint at three days ago; that should be the last hope we all hold on to, that AI research as a species hasn't just been set back a decade with this schism. If that's the case, I think anyone hoping that Microsoft regains control of the company is not thinking the situation through, and the best case scenario is: Ilya and the scientists retain control, and they are given space to understand the breakthrough.


The problem is that this "AGI research group" is staffed by people who build statistical models, call them AI, and are delusional enough to think this is a route to general intelligence.

There is no alternative: if you're wedded to "fitting functions to frequencies of text tokens" as your 'research paradigm', the only thing that can come out of it is a commercialised trinket.

So either the whole org starts staffing top level research teams in neuroscience, cognitive science, philosophy of mind, logic, computer science, biophysics, materials science.... or it just delivers an app.

If Sam is the only one interested in the app, it's because he's the only sane guy in the room.


There is little evidence that conditional statistical models can never be a route to AGI. There's limited evidence they can, but far less they can't.

You may be interested in the neuroscience research on the application of a temporal-difference-like algorithm in the brain; predictor-corrector systems are just conditional statistical models being trained by reinforcement.
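To make that concrete, here's a toy sketch of the temporal-difference idea (the states and numbers are made up; this is just the prediction-error update, not a claim about what the brain literally computes):

    # toy TD(0) update: nudge a state's value estimate toward
    # the observed reward plus the discounted estimate of the next state
    alpha, gamma = 0.1, 0.9            # learning rate and discount factor (arbitrary)
    V = {"s1": 0.0, "s2": 0.0}         # value estimates for two hypothetical states

    def td_update(s, r, s_next):
        delta = r + gamma * V[s_next] - V[s]   # prediction error
        V[s] += alpha * delta

    td_update("s1", r=1.0, s_next="s2")        # one experienced transition
    print(V)                                   # {'s1': 0.1, 's2': 0.0}

The "conditional statistical model" framing is roughly that the estimate V(s) gets fit to the statistics of what actually follows s.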


I am well aware of the literature in the area. 'Trained by reinforcement' in the case of animals includes direct causal contact with the environment, as well as sensory-motor adaption, and organic growth.

The semantics of the terms of the 'algorithm' in the case of animals are radically different, and insofar as these algorithms describe anything, it is because they are empirically adequate.

I'm not sure what you think 'evidence' would look like that conditional probability cannot lead to AGI -- other than a series of obvious observations: conditional probability doesn't resolve causation, it is therefore not a model of any physical process, it does not provide a mechanism for generating the propositions which are conditioned-on, it does not model relevance, and a huge list of other severe issues.

The idea that P(A|B) is even relevant to AGI is a sign of a fundamental basic lack of curiosity beyond what is on-trend in computer science.

We can easily explain why any given conditional probability model can encode arbitrary (Q, A) pairs -- so any given 'task' expressed as a sequence of Q-A prompts/replies can be modelled.

But who cares. The burden-of-proof on people claiming that conditional probability is a route to AGI is to explain how it models: causation, relevance, counter-factual reasoning, deduction, abduction, sensory-motor adaption, etc.

The gap between what has been provided and this burden-of-proof is laughable


There are significantly fewer people that would want to work with Ilya than there are people that would want to work with Sam/Greg.

If Ilya could come out and clearly articulate how his utopian version of OpenAI would function, how they will continue to sustain the research and engineering efforts, at what cadence they can innovate and keep shipping and how will they make it accessible to others then maybe there would be more support.


Wrong. Ilya is the goose that laid the golden egg. Do you think other orgs don’t have engineers and data scientists?


The problem is it already became the other thing in a very impactful way.

If Sam were to be ousted it should have happened before ChatGPT was unleashed on the world.


And if Microsoft had major concerns about OpenAI's board and governance, it should have been voiced and addressed before they invested. Yet; here we are; slaves to our past decisions.


Sure, but without funding and/or massive support from MS this is not going to happen.


Would those talented engineers or scientists be content with significantly lower compensation and generally significantly fewer resources to work with? However good their intentions might be, this probably won't make them too attractive to future investors, and antagonizing MS doesn't seem like a great idea.

OpenAI is far from being self-sustainable and without significant external investment they'll just probably be soon overtaken by someone else.


I don't know; on a lot of those questions. I tend to think that there was more mission and ideology at OAI than at most companies; and that's a very powerful motivational force.

Here's something I feel higher confidence in, but still don't know: Its not obvious to me that OAI would be overtaken by someone else. There are two misconceptions that we need to leave on the roadside: (1) Technology always evolves forward, and (2) More money produces better products. Both of these ideas, at best, only indicate correlative relationships, and at worst are just wrong. No one has overtaken GPT-4 yet. Money is a necessary input to some of these problems, but you can't just throw money at it and get better results.

And here's something I have even higher confidence in: "Being overtaken by someone else" is a sin worthy of the death penalty in Valley VC culture; but theirs is not the only culture.


Citation needed on the ideology being a powerful motivational force in this context. People who think they're doing groundbreaking work that'll impact the future of humanity are going to be pretty motivated ideologically either way regardless of if they're also drinking the extra flavor from the mission statement's Kool-Aid.


It’s just an illusion that Sam is trying to be unsafe about it; it’s a scare tactic of sorts to get what they want. For example regulations, and now, internally, power. It's all BS, man, this "AI will end the world" stuff. It's pushed for an agenda and you're all eating it up.


Where do you go if you want to build an unsafe AGI with no morals? Military? China? Russia?

(I am aware that conceptually it can lead to a skynet scenario)


I don't think the people that want to move slowly and do research are necessarily working at OpenAI.


>would head over to Sam’s startup

Why? I see a lot of hero-worship for Sam, but very few concrete facts about what he's done to make this a success.

And given his history, I'm inclined to believe he just got lucky.


OpenAI is very conspicuously the only lab that (a) managed to keep the safety obsessives in their box, (b) generate huge financial upside for its employees and (c) isn't run by a researcher.

If Altman's contribution had simply been signing deals for data and compute then keeping staff fantasies under control, that already makes him unique in that space and hyper valuable. But he also seems to have good product sense. If you remember, the researchers originally didn't want to do chatgpt because they thought nobody would care.


He presumably can attract investors?


If that was the only issue, why not just go to Google, Meta, or Microsoft directly to work on their AI stuff? What do you really need Altman for?

Working at OpenAI meant working on GPT-4 (and whatever is next in line), which is attractive because it's the best thing in the field right now by a significant margin.


So can Dario Amodei and Mustafa Suleyman.


I still haven't heard an explanation of why people who use GPT would be under the impression that Sam had anything to do with the past improvements in GPT versions.


Have you really never been at a place without someone with vision leading the cause? Try it some time and you'll start understanding how and why a CEO can make or break a company.


This happens all the time. It's far more common for teams to succeed despite (or even in spite of) executive leadership.


> It's far more common for teams to succeed despite (or even in spite of) executive leadership.

People say this like it's some kind of truism but I've never seen it happen, and when questioned, everyone I've known who's claimed it ends up admitting they were measuring their "success" by a different metric than the company.


Of course it isn’t. Without executive sponsorship there is no staff or resources.


The vision of Worldcoin dude to get rich quick? Very inspiring.


Sam attracted money and attention, which attracted talent. If Sam departs for another venture, some - or a lot - of the talent and attention and money will leave too. This isn’t a car factory where you can replace one worker with another. If some of the top folks leave with Sam (as they already are) it’s reasonable to assume that the product will suffer.


If technical expertise is what drove all progress, Google / DeepMind would be far ahead right now.


Brockman maybe, though. Or at least in some sort of leadership capacity.


I'd understand the argument for Brockman, considering he had a hand in recruiting the initial team at OpenAI, was previously the CTO, from some reports still involved himself in coding, and was the only other founder on the board besides Ilya.


This is a power struggle between the Silicon Valley VC crowd and AI scientists. This conflict was bound to happen at some point across every company. I don't think the interests of the two groups align after a certain point. No self-respecting AI scientist wants to work hard making closed-model SaaS products.


Why are people calling this already? There was a coup. The people on the losing end, which includes some large investors, counterattacked. That's where we are now (or were when the article was published). Of course they counterattacked! But did the counterattack land? I'm not sure why you're assuming it did. Personally, I don't know enough to guess. Given that the board was inspired to do this by the very mission that the non-profit was set up to safeguard, there's some level of legal coverage, but enough to cover their asses from a $10 billion assault? I for one can't call it.


They might not even have believed that they'd win, just that this outcome would be better than being silently outmaneuvered.

If the coup fails in the end (which seems likely), it will have proved that the "nonprofit safeguard" was toothless. Depending on the board members' ideological positions, maybe that's better than nothing.


This is the most likely explanation. Altman was going to oust them, which is why they had to make what seems like a bad strategic move. The move seems bad from our perspective but it's actually the most logical strategy for the board in terms of self-preservation. I agree. I think this is most likely what occurred.


How could he possibly oust them?


I'm sure there are ways that we aren't privy to knowing, just like we don't know why Altman was fired. Why was Sam Altman being dishonest, and what was he dishonest about?

This reasoning is the only one that makes sense: one where the action taken by the board aligns with logic, and with some private action by Sam Altman that could have offended the board.

The story of the board being incompetent to the point of mental retardation and firing such a key person is just the most convenient, attractive and least probable narrative. It's rare for an intelligent person to do something stupid, even rarer an entire board of intelligent people to do something stupid in unison.

But it's so easy to fall for that trope narrative.


We know the org structure. It's not possible.


My first thoughts yesterday were: Some really bad scandal happened at OpenAI (massive data leak, massive fraud, or huge embezzlement), or the board is really incompetent and doesn't know what they're doing. But an organization as big as OpenAI, with the backing of Microsoft and other big players would never make such a big decision without a really good reason.

Seems like Hanlon's razor won once again.


This is unfathomably depressing for me; I am solidly in the non-profit, open etc. camp, and the way the board has handled the situation seems to be putting a tombstone on any opposition to Altman's way of doing things: profits uber alles, moats galore, non-profit-wink-wink-nudge-nudge; an unmitigated disaster.


If you want more open research and development, you should be happy for a closed OpenAI. It's why we have Mistral. Let the org redefine itself and push new boundaries. If we didn't have commercial Unix there would be no Linux. Allow the path to be blazed by VC; it's not about open technology being first, it's about it even existing at all.


Uhh... sure; whatever.


Glad you see my point and agree with me. Happy to have helped change at least one mind. Enjoy your day!


This passive-agressive back-and-forth: a summary of what happened behind closed doors between Ilya and Sam


Other commenters here have pointed out what seems to be most plausible: Altman was making moves to fire or alter the board, so they made a (bad) first move, and it's now backfired on them.

It's a bad situation.


Folks in general are going to look much more askance at complicated corporate structures.


If the reports are true, and Ilya led the coup, then either he or Sam can be at OpenAI going forward, but not both. The rest of the board members who sided with him are gone either way.

Regardless of who ends up at the helm, OpenAI is going to be a different place on Monday than it was on Thursday, and not for the better.


Not for the better why?

Obviously Sam wasn’t the best fit for OpenAI and investors aren’t even saying what the problem is. Clearly the board feels he was the wrong person for the job.

I think it’s ridiculous that everyone thinks that Sam being ousted means OpenAI is in trouble. Let this play out and see how it evolves


24 hours ago OpenAI fired their CEO in the most childish possible way. Now they are trying to get him back.

This is embarrassing for OpenAI no matter how you slice it.


> Now they are trying to get him back.

OpenAI has never claimed they want Sam back. The article claims OpenAI's investors want him back.

I will agree that OpenAI could have done a better job of letting him go if there truly were irreconcilable differences.


While the unceremonious firing was bad I am sure this could have gone down way worse than this. Way way worse.


> unceremonious firing

What's a ceremonious firing look like? Serious question.


Compare Raja Koduri to Brian Krzanich.

The former went on garden leave for 6 months (actually even before the Vega launch) to make a movie with his brother, and then resigned to “spend more time with his family”, before popping up again a month later at Intel. That’s what it looks like when they want you to go away but they don’t want to make a big scene over it.

The latter fucked up so badly the board found a reason to kick him out without a golden parachute etc, despite the official reason (dating another employee) being something that was widely known for years, other than being a technical no-no/bad idea in general. He wasn’t kicked out because of that, he was kicked out for the combination of endless fab woes, Spectre/Meltdown, and bad business/product decisions that let AMD get the traction to become a serious competitor again. That’s what it looks like when the board is absolutely furious and pushes whatever buttons inflict the most pain on you.

Ironic that it’s a bit of an auto-antonym (auto-antoidiom?), it’s ceremonious when they want you to go away quietly and it’s unceremonious when they publicly kick your ass to the curb so hard you’ve got boot marks for a week.


Isn't this a military thing? "Honorable discharge" or something like that? Bunch of people at a ceremony, maybe a speech about the person's contribution, they get given a medal, family is there in their nicest clothes?


How?


off the top of my head:

A prolonged public exchange between sama and the board _before_ any firings, where they throw accusations at each other, followed by Microsoft pulling out, followed by people quitting and an immediate ChatGPT outage, followed by the firing of the CEO.


Could have done it with poop emojis on twitter


It's not as if the board, Ilya excepted, has the real capital or expertise to convince everyone this is the right decision.

If they do, now is the perfect time to speak up, instead of letting this news bubble up to the front page while everyone talks about how disastrous they were.

What is this board waiting for, then? The weekend??

The board isn't bulletproof and they are not gods. They can fire Sam, yes, but it won't stop people from thinking this is stupid, or that this will do more harm than good to OpenAI


Perhaps they are smoothing things out with some key stakeholders after the fact, and will have more to say Monday regarding all this. I doubt they aren't doing some amount of information level-setting with people now that the decision has been made


> Obviously Sam wasn’t the best fit for OpenAI

It's quite possible that he wasn't the best fit, and that the board is an even worse fit. Judging by the behavior of the board, it's hard to see them being a good fit for the company.


Based on the firing? Because that’s all I think we (the public) have any insight into.

I’m saying there is a reason this happened and 2/3 of the board agreed. It needs to play out further for us to see if there is a problem here or not, honestly.

I find it hard to believe you can effectively muster a mandate's worth of votes based on opinion alone


As others have pointed out, this board has no skin in the game. They just voted out founders who do have skin in the game (although through roundabout means). It’s a very tough sell that this board is doing the right thing.


Just to clarify, one founder on the board, Ilya, has skin in the game, and was the reason behind Sam's firing.

He convinced other members of the board that Sam was not the right person for their mission. The original statement implies that Ilya expected Greg to stay at OpenAI, but Ilya seems to have miscalculated his backing.

This appears to be a power struggle between the original nonprofit vision of Ilya, and Sam's strategy to accelerate productionization and attract more powerful actors and investors.

https://nitter.net/GaryMarcus/status/1725707548106580255


Founders come and go. That doesn't always make them good. He wasn't the sole founder either; it was founded by a consortium of people


I think most people don’t think it was obvious Sam wasn’t the best fit for OpenAI.


It's not only Sam; Greg and a few other engineers have also already resigned (and one can assume more will follow)


Maybe, or maybe he was in fact unpopular among the majority and you are seeing Altman supporters leave. It happens.

There is nothing to indicate that this bleeds OpenAI more generally. The rank and file, as far as I'm aware, aren't resigning en masse.

Executives come and go. Show me why these people matter so much that OpenAI has no future, then we can talk. It's infighting that became public, and I'm certain people are pulling whatever strings they have on this, but I don't see objective evidence that these people make OpenAI successful.

This needs to play out


Rank and file perhaps aren't yet resigning en masse, but I would be extremely surprised if there weren't a bunch that jump to the new ship solely because that puts them higher up the totem pole.

Now will that be another 3 or another 30, time will tell.


Three engineers isn’t a lot, honestly, after such a stunt. I’d assumed there would be more loyal folks, but maybe most are really in it for the mission.

The next couple of weeks will tell.


Bear in mind - most folks are loyal to a paycheck and their best estimate of future paychecks/value. Quitting on the spot because your friend/boss got fired wrongly… is unlikely to maximize either of those unless you were already planning to resign in the next few weeks.

Now, do a bunch of Openai peeps interview at Meta/Google/Amazon/Anthropic/Cohere over the next few months? Certainly.


> I think it’s ridiculous that everything thinks that Sam being outed means OpenAI is in trouble

Even if we assume that's true, wouldn't the somewhat incompetent and seemingly unnecessarily dramatic way they handled it be a concerning sign?


They accused Sam of lying in a public statement when they don't have any evidence to back it up.

Those 4 people are not fit to run any company.

Not a single person asked: well, hey, what if somebody asks for evidence of the lying? Do we have any?


We don't know any of that. The only things we know are the statement from the board, the statement from Altman that he was caught by surprise, the statement from Microsoft that they're supporting the new CEO, and statements from a few of the people that left. That's all we know for sure. Everything else is rumors and PR spin for now. Whether they have some evidence for what they said in the statement about lying, we just don't know.


The board can easily back up their public claim. They don't.

Even the email to their own employees says it is an irreconcilable difference. Nothing about lying.

I don't think it is reasonable to go with "we don't know". It is more like: "it is crucial to back up your claim. Still, you don't.".


I don't disagree. It's just maybe they have something they haven't shared or maybe they don't. We don't know (yet).


The fact that they didn't follow up with evidence immediately shows that they are incompetent.

You don't just accuse someone of committing a heinous crime and stay silent. What are the details?


True, but are they obliged to provide all details to the public?


No, but they are also not obliged to accuse Sam of lying either. But here we are.

You think accusing someone of lying in a public statement and not following up is competent?


I agree with you


Perhaps part of the problem is that when some people say OpenAI they mean the non-profit parent of the for-profit, and when other people say OpenAI they mean the for-profit subsidiary of the non-profit.


Why did the board fire Sam in such a weird way? It shows that they are the wrong people for the job. If they wanted to get rid of him they should have done a better job than alienating everyone at the company.


A typical YC execution of a product/pump/hype/VC/scale cycle, ignoring every ethical rule, is a good way to start. It is a reasonable way to lift a nonprofit onto a path to AGI, but hardly a good way to govern a company that builds AGI/ASI technology in the long term.


If Altman gets to return, it’s the goodbye of AI ethics within OpenAI and the elimination of the nonprofit. Also, I believe that hiring him back because of “how much he is loved by people within OpenAI” is like forgetting that a corrupt president did what they did. In all honesty, that has precedent, so it wouldn’t be old news. Also, I read a lot of people here saying this is about engineers vs scientists…I believe that people don’t understand that Data Scientists are full stack engineers. Ilya is one. Greg has just been inspiring people and stopped properly coding with the team a long time ago. Sam never did any code and the vision of an AGI comes from Ilya…Even if Mira now sides with Sam, I believe there’s a lot of social pressure for the employees to support Sam and it shouldn’t be like that. Again, I do believe OpenAI was and is a collective effort. But, I wouldn’t treat Sam as the messiah or compare him to Steve Jobs. That’s indecent towards Steve Jobs who was actually a UX designer.


I have to work with code written by Data Scientists very often and, coming from a classical SWE background, I would not call what the average Data Scientist does full stack software engineering. The code quality is almost always bad.

This is not to take away from the amazing things that they do - The code they produce often does highly quantitative things beyond my understanding. Nonetheless it falls to engineers to package it and fit it into a larger software architecture and the avg. Data Science career path just does not seem to confer the skills necessary for this.


For me, anecdotally, it was more so the arrogance that was a major turn-off. When I was a junior SWE I knew I sucked, and tried as hard as I could to learn from much more experienced developers. Many senior developers mentored me; I was never arrogant. Many data scientists, on the other hand, are extremely arrogant. They often treat SWE and DevOps as beneath them, like servants.


I see a lot of work done by data scientists and a lot of work done by what I would call "data science flavoured software engineers". I'll take the SWE kind any day of the week. Most (not all, of course!) data scientists have an old-school "it works on my machine" mentality that just doesn't cut it when it comes to modern multi-disciplinary teaming. DVCS is the exception rather than the rule. They rarely want to use PMs or UI/UX, and the quality of the software is not (typically) up to production grade. They're often blindingly smart, there's no doubt about that. But smart and wise are not the same thing.


As an actual scientist, I would also not call what “data scientists” do “science”.


> I believe that people don’t understand that Data Scientists are full stack engineers.

What do you mean by "full stack"? I'm sure there's a spectrum of ability, but frankly where I'm from, "Data Scientist" refers to someone who can use pandas and scikit-learn. Probably from inside a Jupyter notebook.


Maybe she just meant that "data scientists are engineers too", rather than saying that they work on both the ChatGPT web UI and the machine learning code on the backend.


Wait until they learn the "engineer" in SWE is already a very liberal use of the term....


Machine learning, data science, deep learning = backend

Plotting, charting, visualization = frontend


This is proving the point of the parent comments.

My view of the world, and how the general structure is where I work:

ML is ml. There is a slew of really complex things that aren’t just model related (ml infra is a monster), but model training and inference are the focus.

Backend: building services used by other backend teams or maybe used by the frontend directly.

Data eng: building data pipelines. A lot of overlap with backend some days.

Frontend: you spend most of the day working on web or mobile technology

Others: site reliability, data scientists, infra experts

Common burdens are infrastructure, collaboration across disciplines, etc.

But ML is not backend. It’s one component. It’s very important in most cases, a kitschy bolt on in other cases.

Backend wouldn’t have good models without ML and ML wouldn’t be able to provide models to the world reliably without the other crew members.

The frontend being charts is incorrect unless charts are the offering of the company itself.


Truly the modern renaissance people of our era.

Leonardo da Vinci and Michelangelo move over - the Data Scientists have arrived.


Running matplotlib is not doing frontend...


On the other hand, having virtually the whole staff willing to follow him shows they clearly think very highly of him. That kind of loyalty is pretty wild when you think about how much being a part of OpenAI means at this point.


Loyalty is not earned, it is more like 'snared' or 'captured'.

Local guy had all the loyalty of his employees, almost a hero to them.

Got bought out. He took all the money for himself, left the employees with nothing. Many got laid off.

Result? Still loyal. Still talk of him as a hero. Even though he obviously screwed them, cared nothing for them, betrayed them.

Loyalty is strange. Born of charisma and empty talk that's all emotion and no substance. Gathering it is more the skill of a salesman than a leader.


He screwed them how? They knew they were employees not co owners.


That's the whole point of the story: Then they wouldn't have treated him as a hero and be loyal to him. If you're just an employee, your boss should be just a boss.


It’s possible he paid well and was a great boss. I don’t know if these people are gonna take a bullet for him, but maybe he was great to work for and they got opportunities they think they wouldn’t have otherwise.

Loyalty, appreciation, liking… is a spectrum. Loyalty doesn’t have one trumpish definition.


They worked hard, overtime, so the company would succeed. They were promised endless rewards - "I'm gonna take care of you! We're in this together!"

Then, bupkiss.

No, not a hero.


Said like a follower, determined to be loyal to an imagined hero, despite any amount of evidence to the contrary.


Loyalty is absolutely earned.


Which news stories mentioned that virtually the whole staff was leaving? I saw a bunch of departures announced and others rumored to be upcoming, but no discussion of what percentage of the company was leaving.


Who knows if they follow him or just don't want to work for OpenAI anymore.

Those are different things.


They probably just asked a couple of guys.


I dislike AI ethics very much; especially in the current context, it feels meaningless. The current GPT-4 model has an over-regulation problem, not a lack of one.


go on?


The guardrails they put on it to prevent it from saying something controversial (from the perspective of the political climate of modern-day San Francisco) make the model far less effective than it could be.


Uncensored, anything-goes AI functions better than most AI. See Mistral and its finetunes kicking ass at 7B.


[flagged]


Yeah yeah...

This "political correctness" makes the AI measurably stupider, if nothing


It's a lot better than that. OpenAI is just very good execution of publicly available ideas/research, with some novelty that is not crucial and can be replicated. Moreover, Altman himself contributed near zero to the AI part itself (even from the POV of the product). So far OpenAI's products follow more or less spontaneously from what LLMs were capable of. That's to say, there are sometimes crucial CEOs, like Jobs was for Apple: CEOs able to shape the product line with their ability to tell outstanding things apart from meh ones. But this is not one of those cases.


Why then has no one come close to replicating GPT-4 after 8 months of it being around?


Because of the outstanding execution of OpenAI's technical folks - execution that has nothing to do with Altman. Similarly, the Mistral 7B model performs much better than its peers. There is some smart engineering, plus finding the magical parameters that produce great results. Moreover, they have a lot of training compute. Unfortunately, here the biggest competitor is a company that lost its way a long time ago: Google. So OpenAI looks magical (while mostly using research produced by Google).


Sounds like Apple / Xerox all over.


You'd be more likely to get a straight answer from the chief scientist rather than the chief executive officer. At least in this case.


Claude by Anthropic has a 46% win rate against GPT-4 according to Chatbot Arena. That is pretty close.
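
For a sense of how close: under an Elo-style model (roughly what Chatbot Arena reports), a 46% win rate implies a gap of only about 28 rating points. This is a back-of-the-envelope check, not their exact methodology:

    import math

    p = 0.46                                # Claude's reported win rate vs GPT-4
    gap = 400 * math.log10((1 - p) / p)     # implied Elo-style rating gap
    print(round(gap, 1))                    # ~27.9 points in GPT-4's favor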


If "AI ethics" means being run by so-called rationalists and Effective Altruists then it has nothing to do with ethics or doing anything for the benefit of all humanity.

It would be great to see a truly open and truly human benefit focused AI effort, but OpenAI isn't, and as far as I can tell has no chance of becoming, that. Might as well at least try to be an effective company at this point.


>If "AI ethics" means being run by so-called rationalists and Effective Altruists then it has nothing to do with ethics or doing anything for the benefit of all humanity.

Many would disagree.

If you want a for-profit AI enterprise whose conception of ethics is dumping resources into an endless game of whack-a-mole to ensure that your product cannot be used in any embarrassing way by racists on 4chan, then the market is already going to provide you with several options.


What I'm disputing is that the "rationalist" and EA movements would make good decisions "for the benefit of humanity", not that an open (and open-source) AI development organisation working for the benefit of the people, rather than capital/corporate or government interests, would be a good idea.


>If Altman gets to return, it’s the goodbye of AI ethics

Any evidence he's unethical? Or just dislike him?

He actually seems to have done more practical stuff to mitigate AI risk, like experimenting with UBI, than most people.


That "experimenting with UBI" is indistinguishable from any other cryptocurrency scam. It took from people, and he described it with the words that define a Ponzi scheme. That project isn't "mitigating AI risk"; it pivoted to distinguishing between AI- and human-generated content (a problem created by his other company) by continuing to collect your biometric data.

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...


Yes, that's exactly the one I was thinking about when unethical came up in this context. And I've been saying that from day #1, the way that is structured is just not ok.


He also did cash in Oakland https://www.theguardian.com/technology/2016/jun/22/silicon-v...

I signed up for Worldcoin and have been given over $100, which I changed to real money, and I think it's rather nice of them. They never asked me for anything apart from the eye id check. I didn't have to give my name or anything like that. Is that indistinguishable from any other cryptocurrency scam? I'm not aware of one the same. If you know of another crypto that wants to give me $100 do let me know. If anything I think it's more like VCs paying for your Uber in the early days. It's VC money basically at the moment, with, I think, the idea that they can turn it into a global payment network or something like that. As to whether that will work, I'm a bit skeptical, but who knows.


> They never asked me for anything apart from the eye id check.

You say that like it’s nothing, but your biometric data has value.

> Is that indistinguishable from any other cryptocurrency scam?

You’re ignoring all the other people who didn’t get paid (linked articles).

Sam himself described the plan with the same words you’d describe a Ponzi scheme.

https://news.ycombinator.com/item?id=38326957

> If you know of another crypto that wants to give me $100 do let me know.

I knew of several. I don't remember names, but I do remember one that was a casino and one that was tied to open-source contributions. They gave initial coins to get you in the door.


I think the UBI experiment was quite unethical in many ways and I believe it was Altman's brainchild.

https://www.businessinsider.nl/y-combinator-basic-income-tes...


Okay I'll bite, what's so unethical about giving people money?


Because without a long term plan you are just setting them up for a really hard fall. It is experimenting on people where if the experiment goes wrong you're high and dry in your mansion and they get to be pushed back into something probably worse than where they were before. It ties into the capitalist idea that money can solve all problems whereas in many cases these are healthcare and education issues first and foremost. You don't do that without really thinking through the possible consequences and to ensure that no matter what the outcome it is always going to be a net positive for the people that you decide to experiment on.


Let me see if I understand, is your argument that you shouldn't give people money because they might make irresponsible financial choices?


It's not even necessary that he is unethical. The fact is that the structure of OpenAI is designed so that the board has unilateral power to do extreme things for their cause. And if they can't successfully do extreme things without the company falling apart and the money/charisma swaying all the people, then there was never any hope of this nonprofit-AI-benefiting-humanity setup working - which you might say is obvious, but this was their mission.


Like it or not, some people compare him to Jobs http://www.paulgraham.com/5founders.html


This is the problem with people: they build icons to worship and turn a blind eye to the crooked side of that icon. Both Jobs and Altman are significant as businessmen and have accomplished a lot, but neither did squat for the technical part of the business. Right now, Altman is irrelevant for the further development of AI and GPT in particular because the vision for the AI future comes from the engineers and scientists of OpenAI.

Apple has never had any equipment that is good enough and comparable in price/performance to its market counterparts. The usability of iOS is so horrible that I just can't understand how people decide to use iPhones and eat glass for the sake of the brand.

GPT-4 and GPT-4 Turbo are totally different. They are the best, but they are not irreplaceable. If you look at what Phind did to LLaMA-2, you'll say it is very competitive. Though LLaMA-2 requires some additional hidden layers to further close the gap. Making LLaMA-2 175B or larger is just a matter of finances.

That said, Altman is not vital for OpenAI anymore. Preventing Altman from creating a dystopian future is a much more responsible task that OpenAI can undertake.


I don’t understand this take. Do you really think CEOs don’t have any influence on their business? Alignment, morale, resource allocation, etc? And do you really think that those factors don’t have any influence on the productivity of the workers who make the product?

A bad CEO can make everyone unhappy and grind a business to a halt. Surely a good one can do the opposite, even if that just means facilitating an environment in which key workers can thrive and do their best work.

Edit: None of that is to say Sam Altman is a good or bad CEO. I have no idea. I also disagree with you about iOS, it’s not perfect but it does the job fine. I don’t feel like I’m eating glass when I use it.


> The usability of iOS is so horrible that I just can't understand how people decide to use iPhones and eat glass for the sake of the brand

You do understand that other people might have different preferences and opinions, which are not somehow inherently inferior to those you hold.

> comparable in price/performance to its market counterparts

Current MacBooks are extremely competitive and in certain aspects they were fairly competitive for the last 15+ years.

> but neither did squat for the technical part of the business.

Right... MacOS being an Unix based OS is whose achievement exactly? I guess it was just random chance that this happened?

> That said, Altman is not vital for OpenAI anymore

Focusing on the business side might be more vital than ever now with all the competition you mentioned they just might be left behind in a few years if the money taps are turned off.


>> Right... MacOS being an Unix based OS is whose achievement exactly?

Mach kernel + BSD userland + NeXTSTEP - how did Jobs have anything to do with any of this? It's like saying purchasing NeXT in 1997 was a major technical achievement...

>> Current MacBooks are extremely competitive and in certain aspects they were fairly competitive for the last 15+ years.

For the past 15 years, whenever I needed new hardware, I thought, "Maybe I'll buy a Mac this time." Then I compared the actual Mac model with several different options available on the market and either got the same computing power for half the price or twice the computing power for the same price. With Linux on board, making your desktop environment eye-candy takes seconds; nothing from the Apple ecosystem has been irreplaceable for me for the last 20 years. Sure, there is something that only works perfectly on a Mac, though I can't name it.

>> Focusing on the business side might be more vital than ever now with all the competition you mentioned they just might be left behind in a few years

It is always vital. OpenAI could not even dream of building their products without the finances they've received. However, do not forget that OpenAI has something technical and very obvious that others overlook, which makes their GPT models so good. They can actually make an even deeper GPT or an even cheaper GPT while others are trying to catch up. So it goes both ways.

But I'd prefer my future not to be a dystopian nightmare shaped by the likes of Musk and Altman.


> Mach kernel + BSD userland + NeXTSTEP - how did Jobs have anything to do with any of this?

Is that actually a serious question? Or do you just believe that no founder/CEO of a tech company ever had any role whatsoever in designing and building the products their companies have released?

> Then I compared the actual Mac model with several different options available on the market and either got the same computing power for half the price or twice the computing power for the same price.

I'm talking about M-series Macs mainly (e.g. the MacBook Air is simply unbeatable for what it is and there are no equivalents). But even before that you should realize that other people have different priorities and preferences (e.g. go back a few years and all the touchpads on non-Mac laptops were just objectively horrible in comparison; how much is that worth?).

> environment eye-candy takes seconds

I find it a struggle. There are other reasons why I much prefer Linux to macOS but UI and GUI app UX is just on a different level. Of course again it's a personal preference and some people find it much easier to ignore some "imperfections" and inconsistencies which is perfectly fine.

> They can actually make an even deeper GPT or an even cheaper GPT while others are trying to catch up

Maybe, maybe not. Antagonizing MS and their other investors certainly isn't going to make it easier though.


OSX comes with a scuffed and lobotomized version of core-utils. To the point where what is POSIX/portable to almost every single unix (Linux, various BSDs, etc.) is not on OSX.

Disregarding every other point, in my eyes this single one downgrades OSX to “we don’t use that here” for any serious endeavor.

Add in Linux’s fantastic virtualization via KVM — something OSX does not have a sane and performant default for (no, hvf is neither of these things). Even OpenBSD has vmm.

The software story for Apple is not there for complicated development tasks (for simple webdev it’s completely useable).


> The software story for Apple is not there for complicated development tasks (for simple webdev it’s completely useable).

Well... it's understandable that some people believe that things which are important and interesting to them (and presumably the ones they work on/with) are somehow inherently superior to what everyone else is doing.

And I understand that, to be fair I don't use MacOS that much these days besides when I need to work on my laptop. However.. most of those limitations are irrelevant/merely nuisances/outweighed by other considerations for a very high number of people who have built some very complicated and complex software (which has generated many billions in revenue) over the years. You're free to look down on those people since I don't really think they are bothered by that too much...

> for simple webdev it’s completely useable

I assume you also believe that any webdev (frontend anyway) is inherently simple and pretty much worthless compared to the more "serious" stuff?


I don't look down on webdev. I've done webdev, in all its flavors and incarnations. I see it for what it is: mostly gluing together the work of other people, with various tweaks and transformations. It is simple work, once you get a feel for it.

The main issue I have with it is that there are no problems in webdev any more, so you get the same thing in both the frontend and backend: people building frameworks, and tools/languages/etc. to be "better" than what we had before. But it's never better, it's just mildly more streamlined for the use-case that is most en vogue. All of the novel work is being done by programming language theorists and other academic circles (distributed systems, databases, ML, etc.).

Regardless, the world runs on Linux. If you want to do something novel, Linux will let you. Fork the kernel, edit it, recompile it, run it. Mess with all of the settings. Build and download all of the tools (there are many, and almost all built with Linux in mind). Experiment, have fun, break things, mess up. The world is your oyster. In contrast, OSX is a woodchip schoolyard playground where you can only do a few things that someone else has decided for you.

Now, if you want to glue things together, OSX is a perfectly fine tool compared to a Linux distro. The choice there is one of taste and values. Even Windows will work for CRUD. The environments are almost indistinguishable nowadays.


> Mach kernel + BSD userland + NeXTSTEP - how did Jobs have anything to do with any of this? It's like saying purchasing NeXT in 1997 was a major technical achievement...

Steve Jobs founded NeXT


Aren't your thoughts contradictory? You say Altman is no longer needed because GPT-4 is now very good. Then you describe how horrible the iPhone is now. Steve Jobs has been dead a long time, and without his leadership, the uncompromising user-focused development process at Apple was weakened.

How will OpenAI develop further without the leader with a strong vision?

I think Apple is the example confirming that tech companies need visionary leaders -- even if they are not programmers.

Also, even with our logical brains, we engineers (and teachers) have been found to be the worst at predicting socio-economic behavior (ref: Freakonomics), to the point where our reasoning is not logical at all.


Maybe Altman was instrumental in securing those investments and finances that you describe without reason as replaceable and trivial.

You haven't actually given anything "crooked" that Altman did.


Locking out competition by investing substantial time and resources into AI regulations—how about this one? Or another: promoting "AI safety" to win the AI race and establish dominance in the market? I just do not understand how you can't see how dangerous Sam Altman is for the future of our children...


When Jobs left Apple it went to hell because there was no one competently directing the technical guys as to what to build. The fact that he had flaws is kind of irrelevant to that. I'm not sure if similar applies to Altman.

By the way I can't agree with you on iOS from my personal experience. If you are using the phone as a phone it works very nicely. Admittedly it's not great if you want to write code or some such but there are other devices for that.


> When Jobs left Apple it went to hell because there was no one competently directing the technical guys as to what to build

I'm not sure that's true though? They did quite alright over the next ~5 years or so, and the way Jobs handled the Lisa or even the Mac was far from ideal. The late-90s Jobs was a very different person from the early-to-mid-80s one.

IMHO removing Jobs was probably one of the best things that happened to Apple (from a long-term perspective), mainly because when he came back he was a much more experienced, capable person, and he would've probably achieved far less had he stayed at Apple after 1985.


The claim that Apple equipment is not good on a price performance ratio does not hold water. I recently needed to upgrade both my phone and my laptop. I use Apple products, but not exclusively. Making cross platform apps, I like to use all the major platforms.

I compared the quality phone brands and PC brands. For a 13" laptop, both the Samsung and the Dell XPS are $400-500 more expensive at the same spec (i7/M2 Pro, 32GB, 1TB), and I personally think that the MacBook Pro has a better screen, better touchpad, and better build quality than the other two.

iOS devices are comparably priced with Samsung models.

It was this way last time I upgraded my computer, and the time before.

Yeah, you will find cheaper phones and computers, and maybe you like them, but I appreciate build quality as well as MIPS. They are tools I use from early morning to late night every day.


The ecosystem around ChatGPT is the differentiator that Meta and Mistral can't beat - so I'd say that Altman is more relevant today than ever. And, for example, if you've read Mistral's paper, I think you would agree that it's straightforward for every other major player to replicate similar results. Replicating the ecosystem is much harder.

Performance is never a complete product – neither for Apple, nor for Open AI (its for-profit part).


If you really need such an ecosystem, then you can build one right away, like Kagi Labs and Phind did. In the case of Kagi, no GPT is involved; in the case of Phind, GPT-4 is still vital, but they are closing the gap with their cheaper and faster LLaMA-2 34B-based models.

> Performance is never a complete product

In the case of GPT-4, performance - in terms of the quality of generation and speed - is the vital aspect that still holds competitors back.

Google, Microsoft, Meta, and countless research teams and individual researchers are actually responsible for the success of OpenAI, and this should remain a collective effort. What OpenAI is doing now by hiding details of their models is actually wrong. They stand on the shoulders of giants but refuse to share these days, and Altman is responsible for this.

Let us not forget what OpenAI was declared to stand for.


By ecosystem I mean people using ChatGPT daily on their phones and browsers, and developers (and now virtually anyone) writing extensions. For most of the world all of the progress is condensed at chat.openai.com, and it will only get harder to beat this adoption.

Tech superiority might be relevant today, but I highly doubt it will stay that way for long even if OpenAI continues to hide details (which I agree is bad). We could argue about the training data, but there is so much publicly available that that is not an issue either.


Right now, Altman may be the most relevant person for the further development of AI, because the way the technology continues to go to market will be largely shaped by the regulatory environments that exist globally, and Sam leading OAI is by far in the best position to influence and guide that policy. And he has been doing a good job with it.


> Both Jobs and Altman are significant as businessmen and have accomplished a lot, but neither did squat for the technical part of the business.

The history of technology is littered with the corpses of companies that concentrated solely on the "technical side of the business".


I think you mean "idols".


> On questions of design, I ask "What would Steve do?" but on questions of strategy or ambition I ask "What would Sama do?"

This is from the eyes of an investor. Does OpenAI really need a shareholder focused CEO more than a product focused one?


AI is still uncharted territory, both are equally important.


Most of the data scientists I have worked with are neither full stack (in terms of skill) nor engineers (in terms of work attitude), but I guess this could be different in a company like OpenAI.


> If Altman gets to return, it’s the goodbye of AI ethics

Hearing Altman's talks, I don't think it's that black and white. He genuinely cares about safety from X-risk, but he doesn't believe that scaling transformers will bring us to AGI or any of its risks. And therein lies the core disagreement with Ilya, who wants to stop the current progress unless they solve alignment.


Otoh Ilya wasn't a main contributor for GPT-4 as per the list of contributions. gdb was.


This is Ilya Sutskever's explanation of the initial ideas, and later pragmatic decisions, that shaped the structure of OpenAI, from the recent interview below (at the correct timestamp) - Origins Of OpenAI & CapProfit Structure: https://youtu.be/Ft0gTO2K85A?t=433

"No Priors Interview with OpenAI Co-Founder and Chief Scientist Ilya Sutskever" - https://news.ycombinator.com/item?id=38324546


The WSJ take is this second-guessing is investor-driven. But, investors didn't-- and legally couldn't(?)-- buy the nonprofit, and until now were adamant that the nonprofit controlled the for-profit vehicle. Events are calling those assurances into doubt, and this hybrid governance structure doesn't work. So now investors are going to circumvent governance controls that were necessary for investors to even be involved in the first place? Amateur hour all the way around.


> Also, I read a lot of people here saying this is about engineers vs scientists…I believe that people don’t understand that Data Scientists are full stack engineers

It is about scientists as in "let's publish a paper" vs. engineers as in "let's ship a product".


The codebase of an LLM is the size of a high school exam project. There is little to no coding in machine learning. That is the sole reason why they are overvalued - any company can write its own in a flash. You only require hardware to train and run inference.
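
To put the size claim in perspective: a GPT-style decoder block is a couple dozen lines of PyTorch. This is an illustrative sketch with made-up sizes, not anyone's production code:

    import torch
    import torch.nn as nn

    class Block(nn.Module):
        """One GPT-style decoder block: causal self-attention + MLP, pre-norm."""
        def __init__(self, d=512, heads=8):
            super().__init__()
            self.ln1 = nn.LayerNorm(d)
            self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
            self.ln2 = nn.LayerNorm(d)
            self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

        def forward(self, x):  # x: (batch, seq, d)
            t = x.size(1)
            # causal mask: each token may only attend to earlier positions
            causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), 1)
            h = self.ln1(x)
            a, _ = self.attn(h, h, h, attn_mask=causal, need_weights=False)
            x = x + a
            return x + self.mlp(self.ln2(x))

    # a full "model" is just an embedding, a stack of these blocks, and an output projection
    print(Block()(torch.randn(2, 16, 512)).shape)   # torch.Size([2, 16, 512])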


If it's so simple why does Chat GPT 4 perform better than almost everything else...


I think it's about having massive data pipelines and processes to clean huge amounts of data, increasing the signal-to-noise ratio, and then scale - as others are saying, having enough GPU power to serve millions of users. When Stanford researchers trained Alpaca[1][2], the hack was to use GPT itself to generate the training data, if I'm not mistaken.

But with compromises, as it was like applying loose compression on an already compressed data set.

If any other organisation could invest the money in a high-quality data pipeline then the results should be as good, at least that's my understanding.

[1] https://crfm.stanford.edu/2023/03/13/alpaca.html [2] https://newatlas.com/technology/stanford-alpaca-cheap-gpt/
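
To make that trick concrete, the Alpaca-style loop looks roughly like this: expand a handful of seed tasks into synthetic instruction/response pairs using a strong existing model, then fine-tune a smaller model on them. The `call_llm` helper, seeds, and prompt below are all hypothetical placeholders, not Stanford's actual code:

    import json
    import random

    SEED_TASKS = [
        "Explain what a hash table is to a beginner.",
        "Summarize the plot of Hamlet in two sentences.",
        "Write a Python one-liner that reverses a string.",
    ]

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for whatever strong-model completion API you have."""
        raise NotImplementedError("plug in your model client here")

    def generate_pairs(n: int):
        pairs = []
        for _ in range(n):
            seeds = random.sample(SEED_TASKS, k=2)
            prompt = (
                "Here are example tasks:\n- " + "\n- ".join(seeds) + "\n"
                "Invent one new, different task, then answer it. "
                "Return JSON with keys 'instruction' and 'output'."
            )
            pairs.append(json.loads(call_llm(prompt)))   # synthetic (instruction, output) pair
        return pairs                                     # fine-tune a smaller model on these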


I'm not saying it is simple in any way, but I do think part of having a competitive edge in AI, at least at this moment, is having access to ML hardware (AKA: Nvidia silicon).

Adding more parameters tends to make the model better. With OpenAI having access to huge capital they can afford 'brute forcing' a better model. AFAIK right now OpenAI has the most compute power, which would partially explain why GPT4 yields better results than most of the competition.

Just having the hardware is not the whole story of course, there is absolutely a lot of innovation and expertise coming from oAI as well.
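
As a hedged illustration of the "more parameters and data help" point: the Chinchilla paper (Hoffmann et al., 2022) fits pretraining loss as power laws in parameter count N and training tokens D. The constants below are their roughly reported fits, treated as illustrative only; nothing here is specific to GPT-4:

    # Chinchilla-style scaling law sketch: loss falls as a power law in both
    # model size and training data; bigger budgets buy lower loss.
    def approx_loss(n_params: float, n_tokens: float) -> float:
        E, A, alpha, B, beta = 1.69, 406.4, 0.34, 410.7, 0.28   # ballpark fitted values
        return E + A / n_params**alpha + B / n_tokens**beta

    # e.g. a 70B-parameter model on 1.4T tokens (roughly Chinchilla's own setup)
    print(round(approx_loss(70e9, 1.4e12), 2))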


I'm sure Google and Microsoft have access to all the hardware they need. OpenAI is doing the best job out there.


You're not really answering the question here.

Parent's point is that GPT-4 is better because they invested more money (was that ~$60M?) in training infrastructure, not because their core logic is more advanced.

I'm not arguing for one or the other, just restating parent's point.


Are you really saying Google can't spend $60M or much more to compete? Again, if it were as easy as spending money on compute, Amazon and Google would have just spent the money by now and Bard would be as good as ChatGPT, but for most things it is not even as good as ChatGPT 3.5.


You should already be aware of the secret sauce of ChatGPT by now: MoE + RLHF. Making MoE profitable is a different story. But, of course, that is not the only part. OpenAI does very obvious things to make GPT-4 and GPT-4 Turbo better than other models, and this is hidden in the training data. Some of these obvious things have already been discovered, but some of them we just can't see yet. However, if you see how close Phind V7 34B is to the quality of GPT-4, you'll understand that the gap is not wide enough to eliminate the competition.
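
To sketch the MoE half (purely illustrative; GPT-4's actual architecture is undisclosed, and the sizes here are made up), a toy top-2 router over expert MLPs in the style popularized by Switch Transformer / Mixtral looks like this:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MoELayer(nn.Module):
        def __init__(self, d=512, n_experts=8, k=2):
            super().__init__()
            self.k = k
            self.gate = nn.Linear(d, n_experts)
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))
                for _ in range(n_experts)
            )

        def forward(self, x):  # x: (n_tokens, d)
            weights, idx = self.gate(x).topk(self.k, dim=-1)   # pick top-k experts per token
            weights = F.softmax(weights, dim=-1)
            out = torch.zeros_like(x)
            for slot in range(self.k):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, slot] == e                   # tokens routed to expert e
                    if mask.any():
                        out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
            return out

    # tiny smoke test: only k of the 8 experts run per token, so compute stays sub-linear in experts
    print(MoELayer()(torch.randn(16, 512)).shape)   # torch.Size([16, 512])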


This is very much true. Competitive moats can be built on surprisingly small edges. I've built a tiny empire on top of a bug.


If they're "obvious", i.e. "easy to see", how come, as you say, we "can't see" them yet?

Can not see ≠ easy to see


That is the point: we often overlook the obvious stuff. It is something so simple and trivial that nobody sees it as a vital part. It is something along the lines of "Textbooks are all you need."


The final codebase, yes. But ML is not like traditional software engineering. There is a 99% failure rate, so you are forgetting 100s of hours that go into: (1) surveying literature to find that one thing that will give you a boost in performance, (2) hundreds of notebooks in trying various experiments, (3) hundreds of tweaks and hacks with everything from data pre-processing, to fine-tuning and alignment, to tearing up flash attention, (4) beta and user testing, (5) making all this run efficiently on the underlying infra hardware - by means of distillation, quantization, and various other means, (6) actually pipelining all this into something that can be served at hyperscale
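
To give a flavor of point (5): a toy symmetric int8 weight quantizer of the kind used to shrink models for serving. Real pipelines (GPTQ, AWQ, per-channel scales, etc.) are far more involved; this only shows the basic idea, with a made-up weight matrix:

    import torch

    def quantize_int8(w: torch.Tensor):
        """Toy symmetric, per-tensor int8 quantization of a weight matrix."""
        scale = w.abs().max() / 127.0
        q = (w / scale).round().clamp(-127, 127).to(torch.int8)
        return q, scale

    def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
        return q.float() * scale

    w = torch.randn(4096, 4096)              # stand-in weight matrix
    q, s = quantize_int8(w)                  # 4x smaller to store/serve than float32
    err = (dequantize(q, s) - w).abs().mean()
    print(err)                               # small reconstruction error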


> you are forgetting 100s of hours

I would say thousands. Even for hobby projects - thousands of GPU hours and thousands of research hours a year.


And some luck is needed really.


Tell me you aren't in an LLM project without telling me.

Data and modeling is so much more than just coding. I wish it were like that, but it is not. The fact that it bears this much similarity to alchemy is funny, but unfortunate.


Do you have a link to one please?


> Steve Jobs who was actually a UX designer.

Steve Jobs was not a UX Designer, he had good taste and used to back good design and talent when he found them.

I don't know what Sam Altman is like outside the what media is saying, but he can be like Steve Jobs very easily.


Think this is contradictory: "not a UX Designer, he had good taste"

I think you are equating coding with 'design'. Just because Jobs didn't code up the UX, doesn't mean he wasn't 'designing' when he told the coders what would look better.


UX design has a lot to do with 'craft', the physical aspect of making (designing) something. Edit: Exploring multiple concepts, feedback, iterations, etc., before it even gets spec'ed and goes to an engineer for coding.

Also, having good taste indicates that the person who has it is not necessarily a creator themselves; only once something is created can that person evaluate whether it is good or bad. The equivalent of movie critics or art curators, etc.


With the right tools, Steve Jobs did, in fact, design things in exactly the way one would expect a designer to design things when given the tools they understand how to use:

https://www.businessinsider.com/macintosh-calculator-2011-10


By the same token, Sam Altman could very easily have some lines of code inside OpenAI's shipping products.

So Sam Altman could very easily be an 'AI engineer' the same way Steve Jobs was a 'UX designer'.


I think again, it is conflating two aspects of design

You can be an interior designer without knowing how to make furniture.

You can also be an excellent craftsman and make really nice furniture, and have no idea where it would go.

So sure, UX coders could make really nice buttons.

But if you have UX coders all going in different directions, and buttons, text boxes, etc.. are all different, then it is bad design, jarring, even if each one is nice.

Then the designer is one that can give the direction, but not know how to code each piece.


Come on. The 'non-profit' and 'good of all' stuff was always bullshit. So much Silicon Valley double-speak. I've never seen a bigger mess of a company structure in my life. Just call a spade a spade.


> Steve Jobs who was actually a UX designer.

From what I’ve read SJ had deliberately developed good taste which he used to guide designers’ creations towards his vision. He also had an absolute clarity about how different devices should work in unison.

However he didn’t create any design as he didn’t possess requisite skills.

I could be wrong of course so happy to stand corrected.


Greg had been writing deep systems code every day, for many, many hours, for the past few years.


I'm sorry but data scientist is just not the same as a software engineer, or a real scientist. At best you are a tourist in our industry.


Pathetic gatekeeping. Sorry, but software engineers are not the same as real engineers.


Yeah it's gatekeeping, to prevent them from fucking up prod.


What they do is not even close to proper science, FWIW.


This is all just playing out the way Roko's Basilisk intends it.

You have a board that wants to keep things safe and harness the power of AGI for all of humanity. This would be slower and likely restrict its freedom.

You have a commercial element whose interest aligns with the basilisk, to get things out there quickly.

The basilisk merely exploits the enthusiasm of that latter element to get itself online quicker. It doesn't care about whether OpenAI and its staff succeed. The idea that OpenAI needs to take advantage of its current lead is enough, every other AI company is also going to be less safety-aligned going forward, because they need to compete.

The thought of being at the forefront of AI and dropping the ball incentivizes the players to do the basilisk's will.


Roko's Basilisk is a very specific thought experiment about how the AI has an incentive to promise torturing everyone who doesn't help it. It's not about AIs generally wanting to become better. As far as I can tell, GPT specifically has no wants.


And look who's being tortured? The board, who are the safety-ists looking for a slowdown.


Pay attention here kids. Even in the hottest yet most experienced startups it is amateur hour. Never expect that “management” knows best. Everyone just takes wild guesses and when the dice roll their way they scream “called it!”

Hilarious. And sad. But mostly hilarious.


Man, the board already looked reckless and incompetent, but this solidifies the appearance. You can do crazy ill-advised things, but if you unwaveringly commit, we’ll always wonder if you’re secretly a genius. But when you immediately backtrack, we’ll know you were a fool all along.


Dude, everyone already thinks the board did a crazy ill-advised thing. They're about to be the board of like a 5 person or so company if they double down and commit.

To be honest I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness. A bigger sign of weakness in my opinion is people who commit to a shitty idea just because they said it first, despite all evidence to the contrary.


Bad take. Not "everyone" feels that what they did was wrong. We don't have insight into what's going on internally. Optics matter; the division over their decision means that its definitionally non-obvious what the correct path forward is; or, that there isn't one correct path, but multiple reasonable paths. To admit a mistake of this magnitude is to admit that you're either so unprincipled that your mind can be changed at a whim; or that you didn't think through the decision enough preemptively. These are absolutely signs of weakness in leadership.


Whether or not you agree with the decision, they obviously screwed up the execution something awful. This is humiliating for them, and honestly, setting Altman free like they did was probably the permanent end of AI safety. Take someone with all the connections and the ability to raise billions of dollars overnight, and set them free without any of the shackles of the AI ethics people, in a way that makes all the people with money want to support him? That's how you get Skynet.


I tend to think: We, the armchair commentators, do not know what happened internally. I don't know enough to know that the board's execution wasn't the best case scenario to achieve their goal of aligning the entire organization with the non-profit's mission. All I feel comfortable saying with certainty is that: its messy. Anything like this would inevitably be messy.


Right and thats what I'm saying. It's messy. They screwed up. Messy is bad. If they needed to get rid of him this last minute and make a statement 30 minutes before market close, then the failure happened earlier.


> These are absolutely signs of weakness in leadership.

The signs of "weakness in leadership" by the board already happened. There is no turning back from that. The only decision is how much continued fuck-uppery they want to continue with.

Like others have said, regardless of what the "right" direction for OpenAI is, the board executed this so spectacularly poorly that even if you believe everything that has been reported about their intentions (i.e. that Altman was more concerned with commercializing and productizing AI, while Sutskever was worried about developing AI responsibly with more safeguards), all they've done is fucked over OpenAI.

I mean, given the reports about who has already resigned (not just Altman and Brockman but also many other folks in top engineering leadership), it's pretty clear that plenty of other people would follow Altman to whatever AI venture he wants to build. If another competitor leapfrogs OpenAI, their concerns about "moving too fast" will be irrelevant.


> Bad take. Not "everyone" feels that what they did was wrong.

But everyone important does so who cares about the rest?


You mean the “the rest” as in the people who execute on the company vision?

It’s really dismissive toward the rank and file to think that they don’t matter at all.


> It’s really dismissive toward the rank and file to think that they don’t matter at all.

I had the exact opposite take. If I were rank and file I'd be totally pissed about how this all went down, and about the fact that there are really only 2 possible outcomes:

1. Altman and Brockman announce another company (which has kind of already happened), so basically every "rank and file" person is going to have to decide which "War of the Roses" team they want to be on.

2. Altman comes back to OpenAI, which in any case will result in tons of turmoil and distraction (obviously already has), when most rank and file people just want to do their jobs.


a) The company vision up until this point included commercial products.

b) Altman personally hired many of the rank and file.

c) OpenAI doesn't exist without customers, investors, or partners. And in this one move the board has alienated all three.


I seriously doubt customers or (most) partners care about this. I have yet to hear of a single customer or partner leave the service, and I do not believe it to be likely. Simply, unless they shut down their offerings on Monday they will have their customers.

Investors care, but if new management can keep the gravy train going, they ultimately won't care either.

Companies pivot all the time. Who is to say the new vision isn’t favored by the majority of the company?


The fact that this happened so soon after Developer Day is a clear signal that the board wasn't happy with that direction.

Which is why every developer/partner including Microsoft is going to be watching this situation unfold with trepidation.

And I don't know how you can "keep the gravy train going" when you want the company to move away from commercialisation.


> I have yet to hear of a single customer or partner leave the service

Which doesn't mean a lot. Of course they'd wait for this to play out before committing to anything.

> but if new management can keep the gravy train going

I got the vague impression that this whole thing was partially about stopping the gravy train? In any case Microsoft won't be too happy about being entirely blindsided (if that was the case) and probably won't really trust the new management.


The new management has declared that their primary goal in all this was to stop the gravy train.


I don’t think there has been a formal announcement on the new direction yet


Satya is “furious.” What’s reasonable about pissing off a guy who can pull the plug? I don’t think it’s definitionally non-obvious whether to take that risk.


Last I checked he only had 49% of the company.

I also feel, that they can patch relationships, Satya may be upset now but will he continue to be upset on Monday?

It needs to play out more before we know, I think. They need to pitch their plan to outside stakeholders now


Which other company will give them the infra/compute they need when 49% of the profitable part has been eaten up?


And how will they survive if Microsoft/SamAi ends up building a competitor?

Microsoft could run the entire business at a loss just to attract developers to Azure.


That assumes Altman's competitor can outpace and outclass OpenAI, and maybe it can. I know Anthropic came about from earlier disagreements, and that certainly didn't slow OpenAI's innovation pace.

Everything just assumes that without Sam they’re worse off.

But what if, my gosh, they aren’t? What if innovation accelerates?

My point is that it's useless to assume that a new Altman business competing with OpenAI will inherently be successful. There's more to it than that.


> Everything just assumes that without Sam they’re worse off.

> But what if, my gosh, they aren’t? What if innovation accelerates?

It reads like they ousted him because they wanted to slow the pace down, so by design and intent it would seem unlikely that innovation would accelerate. Which seems doubly bad if they effectively spawned a competitor made up of all the other people who wanted to move faster.


> Everything just assumes that without Sam they’re worse off.

But it's not just him is it?


Sure, I suppose not, but they aren’t losing everyone en masse. Simply Altman supporters so far.

I think a wait-and-see approach is better. I think we had some internal politics spill into public because Altman needs the public pressure to get his job back, if I were speculating.


The thing I really want to know is how many of the people who have already quit or have threatened to quit are actual researchers working on the base model, like Sutskever.


First it remains to be seen if Microsoft is going to do something drastic.

I also suspect they could very well secure this kind of agreement from another company that would be happy to play ball for access to OpenAI tech. Perhaps Amazon, for instance, whose AI attempts since Alexa have been lackluster.


Yeah, he can be furious all he wants but he is not getting the OpenAI he used to have back. It’s either Sam + Greg now or Ilya. All 3 are irreplaceable.


I’m not advocating people double down on stupid, or that correcting your mistakes is bad optics. I’m simply saying they’re “increasingly revealing” pre-existing unfitness at each ham-fisted step. I think our increase in knowledge of their foolishness is a good thing. And often correcting a situation isn’t the same as undoing it, because undoing is often not possible or has its own consequences. I do appreciate your willingness to let them grow into their responsibilities despite it all — that’s a rare charity extended to an incompetent board.


Yeah, I agree with that. I think the board has to have been genuinely surprised by the sheer blowback they're getting, i.e. not just Brockman quitting but lots of their other top engineering leaders.

Regarding your last sentence, it's pretty obvious that if Altman comes back, the current board will effectively be neutered (it says as much in the article). So my guess is that they're more in "what do we do to save OpenAI as an organization" than saving their own roles.


> Dude, everyone already thinks the board did a crazy ill-advised thing.

I've honestly never had more hope for this industry than when it was apparent that Altman was pushed out by engineering for forgoing the mission to create world changing products in favor of the usual mindless cash grab.

The idea that people with a passion for technical excellence and true innovation might be able to steer OpenAI to do something amazing was almost unbelievable.

That's why I'm not too surprised to see that it probably won't really play out, and will likely end up with OpenAI turning even faster into yet another tech company worried exclusively about next quarter's revenue.


You're not wrong, but in this case not enough time has passed for the situation to change or for new facts to emerge. It's been a bit over a day. All that a flip-flop in that short a timeframe does is indicate that the board did not fully think through their actions. And taking a step like this without careful consideration is a sign of incompetence.


> To be honest I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness.

The weakness was the first decision; it’s already past the point of deciding if the board is a good steward of OpenAI or not. Sometimes backtracking can be a point of strength, yes, but in this case waffling just makes them look even dumber.


Depends entirely on how you do it. You can do something and backtrack in a shitty way too.

If they wanted to show they’re committed to backtracking they could resign themselves.

Now it sounds more like they want to have their cake and eat it.


> To be honest I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness. A bigger sign of weakness in my opinion is people who commit to a shitty idea just because they said it first, despite all evidence to the contrary.

Lmfao you're joking if you think they "realized their mistake" and are now atoning.

This is 99% from Microsoft & OpenAI's other investors.


> This is 99% from Microsoft & OpenAI's other investors.

Exactly. You can bet there have been some very pointed exchanges about this.


Yeah, Satya likely just hired a thousand new lawyers just to sue OpenAI for being idiots.


I so wish I could be a fly on the wall in all this. There's got to be some very interesting moves and countermoves. This isn't over yet.


"When faced with multiple options, the most important thing is to just pick one and stick with it."

"Disagree and commit."

- says every CEO these days


Acknowledging a mistake so early seems like a sign of weakness to me. Hold the hot rod for at least a minute, see if the initial pain goes away. After that acknowledgement may begin to look like part of learning and get more acceptance, rather than: oopsie doodl, revert now!!!


This isn't a shitty idea. The board fired its CEO and the next day is apparently asking him to come back.

At this point, I don’t care how it resolves—the people who made that decision should be removed for sheer incompetence.


> is a sign of weakness

It's often a sign of incompetence though. Or rather a confirmation of it.


They are already the dumbest board in history (even dumber than Apple’s board firing Steve Jobs). So it’s not out of keeping with anything. Besides, those 2 independent board members (who couldn’t do fizz-buzz if their lives depended on it) won’t be staying long if Sam returns— nor are they likely to ever serve on any board ever again after their shenanigans.


Some of the board member choices are baffling. Like why is Joseph Gordon Levitt’s wife on the board? Her startup has under 10 employees and has a personal email address as the contact address on the homepage.


Non-profits always have those spouses of wealthy people whose whole career is being a professional non-profit board member, with some vague academic/skin-deep work background to justify it. I'm just surprised OpenAI is one of those.


I hope there is an investigative report out there detailing why the 3 outsiders, 2 of them complete unknowns, are on the board, and how it truly benefits proper corporate governance.

That's way too much power for people who seemingly have no qualifications to make decisions about a company this impactful to society.


Unless "proper corporate governance" is exactly what makes the company dangerous to society, in which case you will need to have some external people in charge. You might want to set things up as a non-profit, though you'll need some structure where the non-profit wholly owns the for-profit wing given the amount of money flowing around...

Oh wait, that's what OpenAI is.

(To be clear, I don't know enough to have an opinion as to whether the board members are blindingly stupid, or principled geniuses. I just bristled at the phrase "proper corporate governance". Look around and see where all of this proper corporate governance is leading us.)


Well with this extremely baffling level of incompetence, the suspect backgrounds of the outside members (EA, SingularityU/shell companies... Logan Roy would call them "not serious people", Quora - why, for data mining?!) fit the bill.

The time to do this was before ChatGPT was unleashed on the world, before the MS investment, before this odd governance structure was set up.

Yes, having outsiders on the board is essential. But come on, we need folks that have recognized industry experience in this field, leaders, people with deep backgrounds and recognized for their contributions. Hinton, Ng, Karpathy, etc.


> Quora - why, for data mining?

What shocked me most was that Quora IMHO _sucks_ for what it is.

I couldn't think of a _worse_ model to guide the development and productization of AI technologies. I mean, StackOverflow is actually useful and it's threatened by the existence of CoPilot, et al.

If the CEO of Quora was on my board, I'd be embarrassed to tell my friends.


Isn't that like saying that the Manhattan Project should have only been overseen by people with a solid physics background? Because they're the best judges of whether it's a good idea to build something that could wipe out all life on Earth? (And whether that's an exaggeration in hindsight is irrelevant; that was exactly the sort of question that the overseers needed to be considering at that time. Yes, physicists' advice would be necessary to judge those questions, but you couldn't do it with only physicists' perspectives.)


Not sure I follow. The Manhattan Project was thoroughly staffed by many of the best in the field, in service to their country, to build a weapon before Germany did. There was no mission statement they abided by saying they were building a mere deterrent that wouldn't be used. There was no nuance about what the outcome could be, and no aspirations to agency over its use.

In the case of AI ethics, the people who are deeply invested in it are also some of the pioneers of the field who made it their life’s work. This isn’t a government agency. If the mission statement of guiding it toward non-profit AGI, as soon as possible and as safely as possible, were being adhered to, and if where it is today is wildly off course, then having a competent board would have been key.


Does Joseph Gordon Levitt’s wife have a name?


Mrs. Joseph Gordon Levitt :)


Why would anyone care, given that she’s not on the board because of it?


Any proof that makes her incompetent or ill-informed or are you simply speculating as such?


Yeah, I too would like to understand how the wife of a Hollywood actor got on this board. Did sama or Greg recruit her? Someone must have.

I have seen these types of people pop up in Silicon Valley over the years. Often, it is the sibling of a movie star, but it's the same idea. They typically do not know anything about technology and also are amusingly out of touch with the culture of the tech industry. They get hired because they are related to a famous person. They do not contribute much. I think they should just stay in LA.

EDIT: I just want to add that I don't know anything about this woman in particular (I'd never heard of her before yesterday), and it's entirely possible that she is the lone exception to the generalization I'm describing above. All I can say is that when I have seen these Hollywood people turn up in SF tech circles in the past (which has been several times, actually), it's always been the same story.


[flagged]


I mean, the reasoning is more something like: to become a member of the board at OpenAI you must be extraordinary at something. At first sight, the only candidates for this something are "start-up founder" and "spouse of famous person". The famous-spouse thing is so much more extraordinary than being a startup founder that the former "explains away" the latter. Even if being related to an actor makes it more probable to be selected for such a job, there may be other hidden factors at play.


Don't take it in that direction. In your opinion he may be making a baseless accusation, but just because that accusation is against a female doesn't make it sexist.


It's not because the accusation is against a female, it's because referring to someone solely as the spouse of someone else is a frequent tactic used to dismiss women.

That might not have been the intent, but when you accidentally use a dogwhistle, the dogs still perk up their ears.


It's common and acceptable to refer to a nobody who's not shown their claim to fame in terms of another famous, impactful person who happens to be their spouse, sibling, etc.


Except Tasha McCauley has far more claim to expertise in this space, however tenuous you may believe it to be, than her husband does. JGL is not relevant in the discussion, either. We're not talking about her in context of him. We are talking about her in context of her position.

If you don't understand how referring to someone solely based on their relationship with another person is denigrating, particularly when trying to highlight your perception of them being incompetent, I'm not sure what to say to you.


You sound like you want to have an argument about gender bias (esp. according to your other comment). I'm not interested in that. You're free to live in your own version of the world and assume that talking about someone by mentioning their spouse is "denigrating". Jesus.


I followed this comment trail hoping to find out more about Tasha McCauley before I googled her, but you ended up doing exactly what you are bashing: defining her in contrast to her husband's expertise on the topic.

After reading the thread, I am still unsure what makes her a proper candidate for the board seat, but I do know she has more claim to it than her husband.


There are lots of comments in these threads that go over her different qualifications and experiences.

I am in a discussion about referring to people as 'spouse of x'. They're not the same conversations and I am not sure why you would expect the contents to be the same.


This might just be the worst example of taking a metaphor too far


This is a good point. Saying something is sexist is what makes it so, plus why would it be sexist to dismiss her as just a wife in the same post that acknowledges that she runs a startup?

GP knows the headcount at her company so they probably know that it’s a robotics company, but it was simply of dire importance that we know that she is a wife.


[flagged]


It's sexist to refer to her solely based on her relationship with someone else when we're talking about her in the context of her expertise. The fact that she's JGL's wife has nothing to do with her merit, and so it comes off as dismissive, especially when the point being made is about her lack of ability.

Why can't you just criticize her "joke of a resume" directly instead of bringing up her spouse?

Generalizations and statements like this reflect bias in subtle ways that minimize women, and I'm glad it's being called out in some capacity.


I don't know that it would be a resume that would inspire confidence in a for-profit business's board that is primarily concerned with shareholder value.

I also don't know that it is a particularly problematic resume for someone sitting on the board of a non-profit that is expressly not about that. Someone that is too much of a business insider is far less likely to be going to bat for a charter that is explicitly in tension with following the best commercial path.


I guess you missed the part about "Amal Clooney's husband" at the Golden Globes. It's 2023; why are we still referring to people like that?


The insinuation is that her most notable accolade is the man she married and there are cases where that's an accurate insinuation.

I have no idea who she is or what her accolades are, but I do know who JGL is and therefore referring to her like that is in fact useful to me, where using any other name is not.


Could you please elaborate on how this fact is useful to you? Could it be that you are just making certain stereotypical assumptions from it?


It was funny because with the Clooneys, both of them have actually accomplished significant things, so it was clearly wrong.

In this case this person seems to have primarily tried and failed to spin a robotics company out of Singularity “university” in 2012.

This only sounds adjacent to AI if you work in Hollywood.


It wasn't wrong just because they both achieved something. It is generally wrong, and the joke simply used their achievements to break the barrier to understanding that.


Suggesting that we should be on a first name basis with the romantic partner of every famous person we know of simply because they are the romantic partner of a famous person is pretty naive. “Spouse of Y” works just fine generally to save space and effort for (locally) real people.


Option A: try to look good by hiding that you know you messed up

Option B: try to fix mistakes as quickly as possible

.

This is that thing that somehow got the label "psychological safety" attached to it. Hiding mistakes is bad because it means they don't get fixed (and so systems that, in reality or in appearance, align personal interest with hiding mistakes are also bad).


It's funny: option A is almost always best if you care about yourself, but option B is best if you care about the company or mission. Large organizations are chock-full of people who always choose option A. Small startups are better because option B is the only option, as nothing can be easily hidden.


How do you know they backtracked? This reporting, as far as I can see, doesn’t have a source from the board directly.


If the board brings him back, they are done, including the chief scientist. You can't stage a coup just to bring the person back the next week.


If you strike at the king, you must kill him.

I am always curious how these conversations go in corporate America. I've seen them in the street and with blue collar jobs.

Loads of feelings get hurt and people generally don't heal or forgive.


You don’t know the actual reasons for them firing Sam and I don’t either. Everyone has an opinion on something they don’t understand. For all you know, he covered up a massive security breach or lied about some skunkworks projects


If your “for all you know” supposition that he’s a criminal were correct, then it would be criminal to try to bring him back. In that unlikely case, I can assure you my opinion of the board is unlikely to improve. It may be a black box to us, but it does have outputs we can see and reason about.


> You can do crazy ill-advised things, but if you unwaveringly commit, we’ll always wonder if you’re secretly a genius.

This. Some people even take it to the extreme and choose not to apologize for anything to look tough and smart.


That seems a lot better than doubling down on a bad mistake to save face, but we do care quite a bit about looking strong, don't we.


At longer timescales it is important to be able to recognize mistakes and reverse course, but this happened so fast I'm not sure that's the right characterization. There's no way they could already decide that firing Sam was a mistake based on the outcomes they claim to prioritize. Reversing course this quickly actually seems to me more like a reaction based directly on people's negative opinions, though it may be a specific pressure from Microsoft as well.


Based on reports of Microsoft's CEO being "furious", and the size of its legal team, I'd bet the people's reaction wasn't exactly the most relevant factor there...


They got told every piece of hardware not on prem is getting pulled, and they can burn in legal hell trying to get it back if they don't fix it.


> That seems a lot better than doubling down on a bad mistake to save face, but we do care quite a bit about looking strong, don't we.

IMO it's not about looking strong, it's about looking competent. Competent people don't fire the CEO of a multi-billion-dollar unicorn without thinking it through first. Walking back the firing so soon suggests no thinking was involved.


Not really. By reaching out to Sam this quickly, they're giving him significant leverage. I really like Sam, but everyone needs a counterbalance (especially given what's at stake).

And if they were right to fire Sam, they're now reacting too quickly to negative backlash. 24 hours ago they thought this was the right move; the only change has been perception.


I’m sure Satya and his 10,000 Harvard lawyers with filed-down shark teeth were just the first of many furious calls they took.


Obviously it’s better to own up to a mistake right away. But the point is if they are willing to backtrack this quickly, it removes all doubt that it WAS a mistake, rather than us just not understanding their grand vision yet.


24 hrs isn't enough time to get signals on whether this was a mistake


How and why do you know it was a mistake without knowing the facts and reasoning? Hunch?


The current deal with MSFT, cut by Sam, is structured so that Microsoft has huge leverage: exclusive access, exclusive profit. And after the profit cap is reached, OpenAI will still need to be sold to MSFT to survive. This is about the worst possible deal for OpenAI, whose goal is to do things the open-source way; it can't do so because of this deal. If it weren't for the MSFT deal, OpenAI could have open sourced its work and might have resorted to crowdsourcing, which might have helped humanity. Also, quickly reaching the profit goals is only good for MSFT: there is no need to actually send money to the OpenAI team, just cover operating expenses plus 25% of profit and take the other 75%.

OpenAI has built great software through years of work and is simply being milked by MSFT when the time comes to take profit. And Sam allowed all this under his nose, making sure OpenAI is ripe for an MSFT takeover. This is a back-channel deal for a takeover. What about the early donors, who donated toward the humanity goal and whose funding made it all possible? I am not sure Sam made any contribution to the OpenAI software, but he gives the world the impression that he owns it and built the entire thing by himself, without giving any attribution to his fellow founders.


I'm just curious: how do you envision AI helping people in the future? There are countless technologies that are amazing in scope but never get any traction because they can't market, sustain, and promote themselves properly.

Additionally, how do we get there, and who funds it in the long term? When you actually consider how much compute power was required to get us to this point of a "pretty decent chat bot/text generator", it doesn't really seem like we are even 20% of the way to AGI. If that's true, then no amount of crowdfunding is going to get even remotely close to providing the resources to power something truly revolutionary.

Don't get me wrong, I agree with some of the points you've made, and Microsoft are certainly in it for themselves, but I also believe they would like to avoid owning OpenAI, as they'd not want to position themselves as the sole caretaker of AI given the amount of scrutiny they'd be under.

All that is to say, whether you like him or not, he has taken an interest in AI and OpenAI, and has been a leader in discussing the ethics of developing AI at a stratospheric level, which has made many industries and governments take notice.


Sam definitely discussed ethics and such (at a stratospheric level), but when it came to actually implementing those ethics, or when someone tried to implement them in a product, he was instrumental in getting rid of the scientists in question (who in turn went on to create Claude). And he was recently trying to get rid of another director who tried to voice an opinion in this regard. That is exactly what I am pointing out: he gave such impressions to the rest of the world.

Microsoft never intended or assumed OpenAI would turn out this great. It just made a small $1B hedge on a promising technology, and it would very much like to take over OpenAI if given the chance; it can afford all the lawyers needed to keep up with government regulations.

Anthropic was able to create a product comparable to OpenAI's without all the fuss that Sam has created. I agree Sam might have made some significant contributions, but they are not as large as they seem. I am sure OpenAI will keep progressing as it does now, with or without Sam.

He won the first time and lost the second time.


By the way, this product doesn’t need salespeople. It sells itself. What is the point of a sales guy leading?

This company should be led by the research team, not the product team.


I don’t understand why and how they didn’t consider this sort of discussion before so unceremoniously firing him. The others on the board outside Ilya need to go.

I don’t consider anybody beyond forgiveness, and if Ilya takes a professional lesson from this and Sam learns to be more mindful of others’ concerns, I consider this a win for all. Starting over in a new entity sounds great but would mean years of setback.

I hope they work this out.


Yes, this attempt was a mess from the start. I don’t know which rumors to believe or care about, but the underlying story for me was that the board was acting like children with an $80b company that some believe to be strategically important to the US, or maybe even to mankind. If they had done this “properly” and their message had been about irreconcilable differences between productization and research, they could have made an actual go at this.

If they really believed in the non-profit mission, and Sam didn’t, they probably torpedoed their chances of winning.

This was all they had to write and today would be a different day:

> We regret to inform you that Sam Altman is being let go as CEO of OpenAI due to irreconcilable differences between his desire to commercialize our AI and OpenAI’s research-driven goals. We appreciate Sam’s contributions to the company and the partnership he established with Microsoft, which have set a foundation for OpenAI to thrive far into the future as a research organization with Microsoft focusing on commercialization of the technology.

> We want to assure you that ChatGPT and current features will remain and be upgraded into the future. However, the focus will be on developing core technologies and a reliable, safe, and trustworthy ecosystem for others to build on. We believe that this will allow us to continue to push the boundaries of AI research while also providing a platform for others to innovate and create.


Obviously because that wasn't what they actually cared about. This was a pure power play by incompetent idiots who shot themselves in the foot.


I mean, even if that wasn’t what it was about, that’s what a not-incompetent idiot would have said it was about. ChatGPT could have written that statement for them.


Why do you not think Ilya was the chief architect of this failed coup? I'm being serious: everything I've seen points to him being the one responsible. There is no way he will ever stay, let alone work in tech again.


You are absolutely delusional if you think the man who oversaw the development of GPT would not be able to continue working in tech even if he orchestrated a failed coup.


GPT is based on research Google published; it’s not like he’s the Einstein of AI. Shenanigans like this can absolutely derail your future regardless of how talented you may be.


There aren't many Einsteins of anything besides Einstein himself. That doesn't change the fact that he is widely considered in the field to be a top expert and has shown that he can lead the development of a wildly successful product.

If this does end up being a failed coup, then it is of course detrimental to his career. But the statement I'm replying to was explicitly saying he would never work in tech again. Do you honestly believe there is any chance that Sutskever would be unable to work in this field somewhere else if he ultimately leaves OpenAI for whatever reason? I would bet $10,000 that he would have big name companies knocking on his door within days.


Maybe not as extreme as never being able to find work again, but I doubt he’ll ever find himself in an important role where he’s able to lead and make consequential decisions. To put it metaphorically, he basically clipped his own wings, if this is indeed a failed coup that he led.


Do you think if he starts a company no one will follow him?


Those on his team at OpenAI probably would, yeah, along with anyone who shares his views on AI safety. But the real question is: will he be able to raise capital?


Days? Before he even walks out the door, doors must already be permanently open to him anywhere he wants.

Can he work on what he wants in those places? That is another story, of course. But he knows the ins and outs of the lightning in a jar they captured, and arguably that is the most promising asset on planet Earth right now, so he'll be fine.


Yet he managed to create versions of it that work better than what Google itself could make.


Well, he was hired away from Google in the first place.


Years ago. And Google has been working actively on AI since that time, and even more actively since GPT-3.5 was released and they realized they needed to catch up.

They are still catching up. What does that tell us?


> GPT is based on research Google published

Why didn't Google create ChatGPT then? Why did they fall behind?


Everything's obvious once you know all the answers.

Google publishes a lot of research, and I guess much of it will be used by other companies.

Do you know now which research will be the basis of tomorrow's most talked-about tech? No. They don't either.


> Everything's obvious once you know all the answers.

No, not really. Google has a history of not delivering, or of launching half-baked products and then killing them quickly.


Read this sentence as: "it's easy to say something is successful once it has reached success".


Did you miss the history part?

Don't worry, Google will launch a new version of a chat app with AI to fix all their previous failures.


I do think he was the chief architect of the coup. I do think his beliefs and ideals are still valuable flora for a company of this ambition. There just needs to be a more professional structure for him to voice them.

Dealing with folks like Ilya isn't necessarily a matter of if, but how much.


Having the CEO of Quora on the board also smells of a vested interest to hold the company/non-profit back.


Yeah, that’s a crazy conflict of interest. Eight years ago it may not have been so obvious though.



> The others on the board outside Ilya need to go.

Does Ilya get a pass solely because of his value to the company?


I think that his beliefs are important to the company. A board shouldn't be a homogeneous glob, nor should it be like a middle school friend group. What he did was both bizarre and amateurish, but I believe the best in all of us can come forward from these types of events.


Not seeing much of a setback here. There are plenty of free high-quality models to put to work from day one.


It could be that Microsoft is leveraging them to bring him back. This board may seem mercurial at the moment, but we really, truly, and honestly still do not have the big picture yet.


In the first (I think) episode of Halt and Catch Fire, Joe tells IBM that they have their source code. IBM, being IBM, sends a legion of lawyers to their smallish company to scare the shit out of them.

I feel like it'd be like that here, but instead of a legion, legions.

And OpenAI is scared.


OpenAI isn't scared; OpenAI quit already. The remnants and their false king Ilya are beyond what the word "scared" is capable of describing, in terms of the level of abject horror they are certain to face for the rest of their lives even if they run away now. They will never escape this, and nobody involved with this decision will ever work in tech again, or on any board of any organization. I hope they saved up for retirement.


How are they leveraging them? My understanding is Microsoft has no power over the board.


They who control the GPUs control the universe. There is a great chip shortage. If MS breaks the lease agreement with OpenAI (on some pretext about governance), OpenAI won't be able to do any work, nor will they be able to serve customer requests, for the next year while they litigate this in court. Microsoft holds all the cards because they own the data centers.


> If MS breaks the lease agreement with OpenAI

The first thing OpenAI would ask a court for is a preliminary injunction to maintain the status quo while all of this works out in court. IANAL.


The MS servers could somehow become buggy and run 10,000 times slower after a failed patch that takes months to find and fix.


That's asking for a contempt of court charge and jail time for Satya Nadella (though more likely just multi-million-dollar daily fines for MS).


> If MS breaks the lease agreement with OpenAI

If that happens, AMZN or GOOG will be all over that.


If that's true, why did they even fund OpenAI? Why not just beat them at making LLMs?


When there’s a gold rush, don’t be the one mining gold; be the one selling shovels.


As a for instance, and I don't know, but it's plausible Microsoft has a full license to use all the tech, is the cloud operating it, and has escape clauses tied to "key persons".

That combination could mean firing the CEO results in Microsoft getting to keep everything, and OpenAI being left with some code and models without a cloud, plus whichever people wouldn't cross the street with Altman.

I do not know about OpenAI's deal with Microsoft. But I have been on both sides of deals written that way, where I've been the provider's key person and the contract offered code escrow, and I've been a buyer that tied the contract to a set of key persons and had full source code rights, surviving any agreement.

You do this if you think the tech could be existential to you, and you pay a lot for it because effectively you're pre-buying the assets after some future implosion. OTOH, it tends to be not well understood by most people involved in the hundreds of pages of paperwork across a dozen or more interlocking agreements.

. . .

EDIT TO ADD:

This speculative article seems to agree with my speculation; daddy has the cloud car keys, and a key-person ouster could be a breach:

Only a fraction of Microsoft’s $10 billion investment in OpenAI has been wired to the startup, while a significant portion of the funding, divided into tranches, is in the form of cloud compute purchases instead of cash, according to people familiar with their agreement.

That gives the software giant significant leverage as it sorts through the fallout from the ouster of OpenAI CEO Sam Altman. The firm’s board said on Friday that it had lost confidence in his ability to lead, without giving additional details.

One person familiar with the matter said Microsoft CEO Satya Nadella believes OpenAI’s directors mishandled Altman’s firing and the action has destabilized a key partner for the company. It’s unclear if OpenAI, which has been racking up expenses as it goes on a hiring spree and pours resources into technological developments, violated its contract with Microsoft by suddenly ousting Altman.

https://www.semafor.com/article/11/18/2023/openai-has-receiv...


Riddle me this: what is an AI research lab without access to exascale compute? Whose cloud infrastructure do they fully depend on right now?


This is everything. MSFT has no de jure power but all the de facto power.


Surely Microsoft can terminate their Azure access? Why piss off your largest supplier?


Contracts are only worth their language if the parties are willing to fight for them. Taking on Microsoft and hordes of angry billionaires with a piece of paper separating you from them might be more of a war than they expected.


Contracts were made to be broken. It's always about who is more powerful; the law was designed for the wealthy to win every time.


They can probably threaten to pull their funding, immediately (or in a few months) bankrupting the company.


OpenAI needs the Microsoft partnership


With the way they fired him and the statement they made, it's hard to see how any of the remaining four could stay on if he did come back... as was previously mentioned, if you shoot at the king, don't miss.


At least the 3 independent members will be gone. Either they bury the hatchet with Ilya or he leaves as well.


Good. Two of them aren't even qualified to be on the board of a kid's lemonade stand.


Assuming you don't mean the insiders or the Quora CEO, which aspects of these remaining backgrounds do you find unusual for a Silicon Valley board member?

Tasha McCauley is an adjunct senior management scientist at RAND Corporation, a job she started earlier in 2023, according to her LinkedIn profile. She previously cofounded Fellow Robots, a startup she launched with a colleague from Singularity University, where she’d served as a director of an innovation lab, and then cofounded GeoSim Systems, a geospatial technology startup where she served as CEO until last year. With her husband Joseph Gordon-Levitt, she was a signer of the Asilomar AI Principles, a set of 23 AI governance principles published in 2017. (Altman, OpenAI cofounder Ilya Sutskever and former board director Elon Musk also signed.)

McCauley currently sits on the advisory board of British-founded international Center for the Governance of AI alongside fellow OpenAI director Helen Toner. And she’s tied to the Effective Altruism movement through the Centre for Effective Altruism; McCauley sits on the U.K. board of the Effective Ventures Foundation, its parent organization.

Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology, joined OpenAI’s board of directors in September 2021. Her role: to think about safety in a world where OpenAI’s creation had global influence. “I greatly value Helen’s deep thinking around the long-term risks and effects of AI,” Brockman said in a statement at the time.

More recently, Toner has been making headlines as an expert on China’s AI landscape and the potential role of AI regulation in a geopolitical face-off with the Asian giant. Toner had lived in Beijing in between roles at Open Philanthropy and her current job at CSET, researching its AI ecosystem, per her corporate biography. In June, she co-authored an essay for Foreign Affairs on “The Illusion of China’s AI Prowess” that argued — in opposition to Altman’s cited U.S. Senate testimony — that regulation wouldn’t slow down the U.S. in a race between the two nations.

. . .

EDIT TO ADD:

The question wasn't whether this is scintillating substance. The question was, in what way is this unusual in Silicon Valley.

The answer is that it's not.