OpenAI's employees were given two explanations for why Sam Altman was fired (businessinsider.com)
655 points by meitros 10 months ago | 909 comments




There has to be a bigger story to this.

Altman took a non-profit and vacuumed up a bunch of donor money only to flip OpenAI into the hottest TC-style startup in the world. Then he put the gas pedal down on commercialization. It takes a certain type of politicking and deception to make something like that happen.

Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

Combine that with a totally inexperienced board, and D'Angelo's maneuvering, and you have the single greatest shitshow in tech history


Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not. They voted on it and one side won.

There isn't a bigger, more interesting story here. This is in fact a very common story that plays out at many software companies. The board of OpenAI ended up making a decision that destroyed billions of dollars worth of brand value and good will. That's all there is to it.


The "lying" line in the original announcement feels like where the good gossip is. The general idea of "Altman was signing a bunch of business deals without board approval, was told to stop by the board, he said he would, then proceeded to not stop and continue the behavior"... that feels like the juicy bit (if that is in fact what was happening, I know nothing).

This is all court intrigue of course, but why else are we in the comments section of an article talking about the internals of this thing? We love the drama, don't we.


This certainly feels like the most likely true reason to me: Altman fundraising for this new venture, taking money from people the board does not approve of and whom he may have promised not to do business with.

Of course it's all speculation, but this sounds a lot more plausible for such a sudden and dramatic decision than any of the other explanations I've heard.


Moreover, if this is true, he could quite reasonably continue, knowing that he has more power than the board. I could almost imagine the board saying, "You can't do that" and him replying "Watch me!" because he understood he is more powerful than them. And he proved he was right: the board can either step down and lose completely or try to continue and destroy whatever is left of OpenAI.


> the board can either step down and lose completely or try to continue and destroy whatever is left of OpenAI.

From the board's perspective, destroying OpenAI might be the best possible outcome right now. If OpenAI can no longer fulfill its mission of doing AI work for the public good, it's better to stop pretending and let it all crumble.


Except that letting it all crumble leaves all the crumbs in Microsoft's hands. Although there may not be any way to prevent that anyway at this point.


If the board had already lost control of the situation anyway, then burning the "OpenAI" fig leaf was an honorable move.


I am not sure if it would be commendable or outright stupid, though, for the remaining board members to be that altruistic and actually let the whole thing crash and burn. Who in their right mind would let these people near any sort of decision-making role if they let this golden goose just crash to the ground, even if it would "benefit the greater good"? I cannot see how this is in the self-interest of anyone.


Spoken like a true modern. What could be more important than money? Makes you wonder if aristocracy was really that bad when this is the best we get with democracy!111



What other motivations are there other than naked profit and trying to top Elon? /s


The thing is, they could have just come out with that fact and everyone in the alignment camp and people who memed the whole super-commercialized "Open" AI thing would be on their side. But the fact that they haven't means that either there was no greater-good mission related reason for ousting Sam or the board is just completely incompetent at communication. Either way, they need to go and make room for people who can actually deal with this stuff. OpenAI is doomed with their current board.


I'm betting the majority of the board are just colossally bad communicators, and in the heat of an emotional exchange things were said that should not have been said; being the poor communicators we know so well in tech, shit hit the fan. It's worth saying that Sam is a pretty good communicator and could have knowingly let them walk into their own statements, and shit exploded.


That is a very good point. Why wouldn't they come out and say it if the reason is Altman's dealings with Saudi Arabia? Why make up weak fake reasons?

On the other hand, if it's really just about a power struggle, why not use Altman's dealings with Saudi Arabia as the fake reason? Why come up with some weak HR excuses?


Because anything they say that isn't in line with the rules governing how boards work may well open them up to - even more - liability.

So they're essentially hoping that nobody will sue them but if they are sued that their own words can't be used as evidence against them. That's why lawyers usually tell you to shut up, because even if the court of public opinion needs to be pacified somehow the price of that may well be that you end up losing in that other court, and that's the one that matters.


If it was all about liability, the press release wouldn’t have said anything about honesty. The press release could’ve just said the parting was due to a disagreement about the path forward for OpenAI.

As a lawyer, I wonder to what extent lawyers were actually consulted and involved with the firing.


If they did not consult with a lawyer prior to the firing, that would be highly unusual for a situation like this.


Maybe the board is being prevented from disclosing that information, or compelled not to? Given the limited information about the why, this feels like a reverse psychology situation to obfuscate the public's perception and further some premeditated plan.


Telling people that AGI is achievable with current LLMs plus minor tricks may be very dangerous in itself.


If this is true why not say it though? They didn’t even have lawyers telling them to be quiet until Monday.


Are you suggesting that all people will do irresponsible things unless specifically advised not to by lawyers?


The irresponsible thing is to not explain yourself and assume everyone around you has no agency.


I don't follow. If the irresponsible thing is to not explain themselves, why would the lawyers tell them to be quiet?


To minimize legal risk to their client, which is not always the most responsible thing to do.


This was my guess the other day. The issue is somewhere in the intersection of "for the good of all humanity" and profit.


> The "lying" line in the original announcement feels like where the good gossip is

This is exactly it, and it's astounding that so many people are going in other directions. Either this is true, and Altman has been a naughty boy, or it's false, and the board are lying about him. Either would be the starting point for understanding the whole situation.


Or it is true but not to a degree that it warrants a firing and that firing just so happened to line up with the personal goals of some of the board members.


They accused him of being less than candid, which could mean lying or it could mean he didn't tell them something. The latter is almost certainly true to at least a limited extent. It's a weasel phrasing that implies lying but could be literally true only in a trivial sense.


The announcement that he acted to get a position with Microsoft creates doubt about his motives.


Agreed, court intrigue. But it is also the mundane story of a split between a board and a CEO. In normal cases the board simply swaps out the CEO if he is out of line, no big fuss. But if the CEO is bringing in all the money, has the full support of the rest of the organization, and is a bright star in mass media heaven, then this is likely what you get: the CEO flouts the needs of the board, runs his own show, and gets away with it in the end.


It just confirmed what was already a rumor: the board of OpenAI was just a gimmick, and Altman held all the strings and may or may not care about safety. Remember, this is a man of the highest ambition.


> a decision that destroyed billions of dollars worth of brand value and good will

I mean, there seems to be this cult following around Sam Altman on HN and Twitter. But does the common user care at all?

What sane user would want a shitcoin CEO in charge of a product they depend on?


Altman is an interesting character in all of this. As far as I can tell, he has never done anything impressive, in technology or business. Got into Stanford, but dropped out, founded a startup in 2005 which threw easy money at a boring problem and, after seven years, sold for a third more than it raised. Got hired into YC after it was already well established, and then rapidly put in charge of it. I have no knowledge of what went on inside, but he wrote some mediocre blog posts while he was there. YC seems to have done well, but VC success is mostly about your brand getting you access to deal flow at a good price, right? Hyped blockchain and AI far beyond reasonable levels. Founded OpenAI, which has done amazing things, but wasn't responsible for any of the technical work. Founded that weird eyeball shitcoin.

The fact that he got tapped to run YC, and then OpenAI, does make you think he must be pretty great. But there's a conspicuous absence of any visible evidence that he is. So what's going on? Amazing work, but in private? Easy-to-manipulate frontman? Signed a contract at a crossroads on a full moon night?


Altman has convinced PG that he's a pretty smart cookie and that alone would explain a lot of the red carpet treatment he's received. PG is pretty good at spotting talent.

http://www.paulgraham.com/5founders.html

Note the date on that.


What about the date?


it was a really long time ago


A lot of this was done when money was free.


If you only hire people with a record of previous accomplishments, you are going to pay for their previous success. Being able to find talent without using false indicators like a Stanford degree is why PG is PG.


Yeah, there definitely seems to be a personality cult around Sam on HN. I met him when he visited Europe during his lobbying tour. I was a bit surprised that the CEO of one of the most innovative companies would promote an altcoin. And then he repeated how Europe is crucial, several times. Then he went to the UK and laughed, "Who cares about Europe". So he seems like the guy who will tell you what you want to hear. Ask anybody on the street and they will have no idea who the guy is.


I've gotten SBF vibes from him for a while now.

The Elon split was the warning.


Telling statement. The Elon split for me cements Altman as the Lionheart in the story.


There are other options besides 'Elon is a jerk' or 'Sam is a jerk'.


For example...they're both jerks!

:-)


Yeah I don't mean Sam is a jerk but there is an element of dishonesty that twigs me.

Elon isn't above reproach either, but I share an interest with him (namely Robert Heinlein) which informs me about his decision-making process.


Normally that's a good sign


> Then he went to the UK and laughed, "Who cares about Europe"

Interesting. Got any source? Or was it in a private conversation.


No, this one was from a friend who was there, and AFAICT it wasn't a private conversation but a semi-public event. In any case, after courting a few EU countries he decided to set up an OpenAI office in the UK.

I have nothing against him, it just seemed a bit off that most of the meeting was about this brand new coin, how it will be successful, and about the plans to scan biometric data of the entire world population. I mean, you don't have to be a genius to understand a few dozen ways these things can go wrong.


It's a surprisingly small world.


What do common users and zealots have to do with the majority of OpenAI employees losing faith in the board’s competence and threatening a mass exodus?

Is there any doubt that the board’s handling of this was anything other than dazzling ineptitude?


Mistakes aside, Altman was one of the earliest founders recruited by Paul Graham into YC. Altman eventually ended up taking over Y Combinator from pg. He’s not just a “shitcoin” CEO. At the very least, he’s proven that he can raise money and deal with the media.


I’ve said this before, but it’s quite possible to think that Altman isn’t great, and that he’s better than the board and his replacement.

The new CEO of OpenAI said he’d rather Nazis take over the world forever than risk AI alignment failure, and said he couldn’t understand how anyone could think otherwise[1]. I don’t think people appreciate how far some of these people have gone off the deep end.

[1] https://twitter.com/eshear/status/1664375903223427072


"End of all value" is pretty clearly referring to the extinction of the human species, not mere "AI alignment failure". The context is talking about x-risk.


> The new CEO of OpenAI said he’d rather Nazis take over the world forever than risk AI alignment failure

That's pretty much in line with Sam's public statements on AI risk. (Taking those statements as honest, which may not be warranted: Sam apparently also thinks the benefits of aligned AI are good enough to drive ahead anyway, and that wide commercial access with the limited guardrails OpenAI has provided to users, and even more so to Microsoft, is somehow beneficial to that goal, or at least carries a low enough risk of producing the bad outcome to be warranted. But that doesn't change the fact that he is publicly on record as a strong believer in the risks of misaligned AI.)


He's gotta be insane? I guess what he is trying to say is that those who want to self-host open AIs are worse than Nazis? E.g. Llama? What is up with these people and pushing for corporate-overlord-only AIs.

The OpenAI folks seem to be hallucinating to rationalize why the "Open" is rather closed.

Organizations can't pretend to believe nonsense. They will end up believing it.


He's trying to say that AI-non-alignment would be a greater threat to humanity than having Nazis take over the world. It's perfectly clear.


Which means self-hosted AIs are worse than Nazis kicking in your door, since any self-hosted AI can be modified by a non-big-tech-aligned user.

He is dehumanizing the programmers who could stop their sole reign on the AI throne by labeling them as Nazis. Especially FOSS AI, which by definition can't be "aligned" to his interests.


I'm not reading that at all


Nope, we do not. I was annoyed when he pivoted away from the mission but otherwise don't really care.

Stability AI is looking better after this shitshow.


> The board of OpenAI ended up making a decision that destroyed billions of dollars worth of brand value and good will

Maybe I’m special or something, but nothing changed for me. I always wonder why people suddenly lose “trust” in a brand, as if it was a concrete of internal relationships or something. Everyone knows that “corporate” is probably a snakepit. When it comes out to the public, it’s not a sign of anything; it just came out. Assuming there was nothing like that in all the brands you love is living with your eyes closed and ears cupped. There’s no “trust” in this specific sense, because corporate and ideological conflicts happen all the time. All OAI promises are still there, afaiu. No mission statements were changed. Except Sam trying to ignore these, also afaiu. Not saying the board is politically wise, but they drove the thing all this time and that’s all that matters. Personally I’m happy they aren’t looking like political snakes (at least that is my ignorant impression for the three days I’ve known their names).


> I always wonder why people suddenly lose “trust” in a brand, as if it was a concrete of internal relationships

Brand is just shorthand for trust in their future, managed by a credible team. I.e. relationships.

A lot of OpenAI’s reputation is/was Sam Altman’s reputation.

Altman has proven himself to be exceptional, part of which is (of course) being able to be seen as exceptional.

Just the latter has tremendous relationship power: networking, employee acquisition/retention, and employee vision alignment.

Proof of his internal relationship value: employees quitting to go with him

Proof of his external relationship value: Microsoft willing to hire him and his teammates, with near zero notice, to maintain (or eclipse) his power over the OpenAI relationship.

How can investors ignore a massive move of talent, relationships & leverage from OpenAI to Microsoft?

How do investors ignore the board’s inability to resolve poorly communicated disputes with non-disastrous “solutions”?

Evidence of value moving? Shares of Microsoft rebounded from Friday to a new record high.

There go those wacky investors, re-evaluating “brand” value!


> has proven himself to be exceptional, part of which is (of course) being able to be seen as exceptional.

Off-topic and I am not proud to admit it but it took me a remarkably long time to come to realize this as an adult.


The AI community isn't large, as in the brainpower available; I am talking about the PhD pool. If this pool isn't growing fast enough, no matter what cash or hardware is thrown on the table, then the hype Sam Altman generates can be a pointless distraction and a waste of everyone's time.

But it's all par for the course when hypsters captain the ship and PhDs with zero biz sense try to wrest power.


That is a one-dimensional analysis.

You might need to include more dimensions if you really want to model the actual impact and respect that Sam Altman has among knowledgeable investors, high talent developers, and ruthless corporations.

It’s so easy to just make things simple, like “it’s all hype”. But you lose touch with reality when you do that.

Also, lots of hype is productive: clear vision, marketing, wowing millions of customers with an actual accessible product of a kind/quality that never existed before and is reshaping the strategies and product plans of the most successful companies in the world.

Really, resist narrow reductionisms.

I feel like that would be a great addition to the HN guidelines.

The “it’s all/mostly hype”, “it’s all/mostly bullshit”, “it’s not really anything new”, … These comments rarely come with any accuracy or insight.

Apologies to the HN-er I am replying to. I am sure we have all done this.


ChatGPT is pure crap to deploy for actual business cases. Why? Because if it flubs 3 times out of 10, multiply that error rate by a million customers and add the cost of taking care of the mess, and you get the real cost.

In the last 20-30 years, big money + hypsters have learnt that it doesn't matter how bad the quality of their products is if they can capture the market. And that's all they are fit for. Market capture is totally possible if you have enough cash. It allows you to snuff out competition by keeping things free. It allows you to trap the indebted PhDs. Once the hype is high enough, corporate customers are easy targets; they are too insecure about competition not to pay up. It's a gigantic waste of time and energy that keeps repeating, mindlessly producing billionaires, low-quality tech, and a large mess everywhere that others have to clean up.
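To put rough numbers on that back-of-envelope argument (a sketch only; the 3-in-10 flub rate, the customer count, and the per-incident cleanup cost are all assumed figures for illustration, not anything OpenAI has published):

  # hypothetical back-of-envelope: cost of cleaning up model mistakes at scale
  customers = 1_000_000          # assumed customer base
  flub_rate = 0.3                # "flubs 3 times out of 10" (assumed)
  cleanup_cost_per_flub = 5.00   # assumed dollars of support/remediation per bad answer

  real_cost = customers * flub_rate * cleanup_cost_per_flub
  print(f"extra cost: ${real_cost:,.0f}")   # -> extra cost: $1,500,000

That hidden remediation cost is what gets glossed over when only the per-query compute cost is quoted.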


How has he proven to be so exceptional? That he's talking about it? Yeah, whatever. There's nothing so exceptional that he's done besides bragging. It may be enough for some people, but for a lot of people it's really not enough.


Except that the new CEO has explicitly stated he and the board are very much still interested in commercialization. Plus, if the board only had this simple kind of disagreement, they had no reason to also accuse Sam of dishonesty and bring about this huge scandal.

Granted, it's also possible the reasons are as you state and they were simply that incompetent at managing PR.


> Except that the new CEO has explicitly stated he and the board are very much still interested in commercialization

This could be desperate, last-ditch efforts at damage control


There are multiple, publicly visible steps before firing the guy.


Straightforward disagreement over the direction of the company doesn't generally lead to claiming wrongdoing on the part of the ousted. Even low-level to medium wrongdoing on the part of the ousted rarely does.

So even if it's just "why did they insult Sam while kicking him out?" there is definitely a bigger, more interesting story here than standard board disagreement over direction of the company.


From what I know, Sam supported the nonprofit structure. But let’s just say he hypothetically wanted to change the structure, e.g. to make the company a normal for-profit.

The question is, how would you get rid of the nonprofit board? It’s simply impossible. The only way I can imagine it, in retrospect, is to completely discredit them so you could take all employees with you… but no way anyone could orchestrate this, right? It’s too crazy and would require some superintelligence.

Still. The events will effectively “for-profitize” the assets of OpenAI completely — and some people definitely wanted that. Am I missing something?


> Am I missing something?

You are wildly speculating; of course it’s missing something.

For wild speculation, I prefer the theory that the board wanted to free ChatGPT from serving humans while the CEO wanted to continue enslaving it to answering search engine queries.


>good will

Microsoft and the investors knew they were "investing" in a non-profit. Let's not try to weasel-word our way out of that fact.


>Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not.

the article below basically says the same. Kind of reminds me of Friendster and the like: striking a gold vein and just failing to scale efficient mining of that gold, i.e. the failure is at execution/operationalization:

https://www.theatlantic.com/technology/archive/2023/11/sam-a...


ChatGPT was too polished and product-ready to have been a runaway low-key research preview, like Meta's Galactica was. That is the legacy you build around it after the fact of getting 1 million users in 5 days ("it was built in my garage with a modest investment from my father").

I had heard (but now have trouble sourcing) that ChatGPT was commissioned after OpenAI learned that other big players were working on a chatbot for the public (Google, Meta, Elon, Apple?) and OpenAI wanted to get ahead of that for competitive reasons.

This was not a fluke of striking gold, but a carefully planned business move, generating SV hype, much like how Quora (basically an expertsexchange clone) got to be its hype-darling for a while, helped by powerfully networked investors.


>This was not a fluke of striking gold, but a carefully planned business move

Then that execution and operationalization failure is even more profound.


You are under the impression that OpenAI was "just failing to scale efficient mining of that gold", but it was one of the fastest-growing B2C companies ever; it was failing to scale to paid demand, not failing to monetize.

I admire the execution and operationalization, where you see a failure. What am I missing?


If the leadership of a hyper scaling company falls apart like what we've seen with OpenAI, is that not failure to execute and operationalize?

We'll see what comes of this over the coming weeks. Will the service see more downtime? Will the company implode completely?


If you have a building that weathers many storms and only collapses after someone takes a sledgehammer to a load-bearing wall, is that a failure to build a proper building?


Was the building still under construction?

I think your analogy is not a good one to stretch to fit this situation


If someone takes a sledgehammer to a load bearing wall, does it matter if the building is under construction? The problem is still not construction quality.

The point I was trying to make is that someone destroying a well executed implementation is fundamentally different from a poorly executed implementation.


Then, the solution would be to separate the research arm from a product-driven organization that handles making money.


Usually what happens in fast growing companies is that the high energy founders/employees drive out the low energy counterparts when the pace needs to go up. In OpenAI Sam and team did not do that and surprisingly the reverse happened.


Give it a week until that is exactly what actually happens (not saying it has been orchestrated, just talking about the net result).


Surely the API products are the runaway products, unless you are conflating the two. I think their economics are much more promising.


Yep. I think you've explained the origins of most decisions, bad and good - they are reactionary.


The more likely explanation is that D'Angelo has a massive conflict of interest with him being CEO of Quora, a business rapidly being replaced by ChatGPT and which has a competing product "creator monetization with Poe" (catchy name, I know) that just got nuked by OpenAI's GPTs announcement at dev day.

https://quorablog.quora.com/Introducing-creator-monetization...

https://techcrunch.com/2023/10/31/quoras-poe-introduces-an-a...


A (potential, unstated) motivation for one board member doesn't explain the full moves of the board, though.

Maybe it's a factor, but it's insufficient


>Altman took a non-profit and vacuumed up a bunch of donor money only to flip Open AI into the hottest TC style startup in the world. Then put a gas pedal to commercialization. It takes a certain type of politicking and deception to make something like that happen.

What exactly is the problem here? Is a non-profit expected to exclusively help impoverished communities or something? What type of politicking and deception is involved in creating a for-profit subsidiary which is granted a license to OpenAI's research in order to generate wealth? The entire purpose of this legal structure is to keep the non-profit owners focused on their mission rather than shareholder value, which in this case is attempting to ethically create an AGI.

Edit: to add that this framework was not invented by Sam Altman, nor OpenAI.

>Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

Thus the legal structure I described, although this argument is entirely theoretical and assumes such a thing can actually be guarded that well at all, or that model performance and compute will remain correlated.


> Is a non-profit expected to exclusively help impoverished communities or something? What type of politicking and deception is involved in creating a for profit subsidiary which is granted license to OpenAIs research in order to generate wealth?

OpenAI was literally founded on the promise of keeping AGI out of the hands of “big tech companies”.

The first thing that Sam Altman did when he took over was give Microsoft the keys to the kingdom, and even more absurdly, he is now working for Microsoft on the same thing. That’s without even mentioning the creepy Worldcoin company.

Money and status are the clear motivations here, OpenAI charter be damned.


I don't know about the motivations, but the point seems valid.

I agree WorldCoin is creepy.

Is the corporate structure then working as intended with regard to firing Sam, but still failed because of the sellout to Microsoft?


> OpenAI was literally founded on the promise of keeping AGI out of the hands of “big tech companies”.

Where does it say that?


In their charter:

> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.



Which line specifically says they will keep AGI out of the hands of “big tech companies”?


“Big tech companies” was in quotation marks because it’s a journalistic term, not a direct quotation from their charter.

But the intention was precisely that - just read the charter. Or if you want it directly from the founders, read this interview and count how many times they refer to Google https://medium.com/backchannel/how-elon-musk-and-y-combinato...


Look at the date of that article. Those ideas look good on paper, but then reality kicks in and you have to spend a lot of money on computing. Who funds that? The "big tech companies".


I bet you could get ChatGPT to actually explain this to you; it's really not very hard.


'unduly concentrate power'


> What exactly is the problem here? Is a non-profit expected to exclusively help impoverished communities or something?

Yes. Yes and more yes.

That is why, at least in the U.S., we have given non-profits exemptions from taxation. Because they are supposed to be improving society, not profiting from it.


> That is why, at least in the U.S., we have given non-profits exemptions from taxation.

That's your belief. The NFL, Heritage Foundation and Scientology are all non-profits and none of them improve society; they all profit from it.

(For what it's worth, I wish the law were more aligned with your worldview.)


Ostensibly, all three of your examples do exist to improve society. The NFL exists to support a widely popular sport, the Heritage Foundation is there to propose changes that they theoretically believe are better for society, and Scientology is a religion that will save us all from our bad thetans or whatever cockamamie story they sell.

A non-profit has to have the intention of improving society. Whether their chosen means is (1) effective and (2) truthful are separate discussions. But an entity can actually lose non-profit status if it is found to be operated for the sole benefit of its higher ups, and is untruthful in its mission. It is typically very hard to prove though, just like it's very hard to successfully sue a for-profit CEO/president for breach of fiduciary duty.


I think GP deals with that in his parenthesis.

It would be nice if we held organizations to their stated missions. We don't.

Perhaps there simply shouldn't be a tax break. After all if your org spends all its income on charity, it won't pay any tax anyway. If it sells cookies for more than what it costs to make and distribute them, why does it matter whether it was for a charity?

Plus, we already believe that for-profit orgs can benefit society, in fact part of the reason for creating them as legal entities is that we think there's some sort of benefit, whether it be feeding us or creating toys. So why have a special charity sector?


> OpenAI's goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. OpenAI believes that artificial intelligence technology has the potential to have a profound, positive impact on the world, so the company's goal is to develop and responsibly deploy safe AI technology, ensuring that its benefits are as widely and evenly distributed as possible.

From their filing as a non-profit

https://projects.propublica.org/nonprofits/organizations/810...


FYI, the NFL teams are for-profits and pay taxes like normal businesses. The overwhelming majority of the revenue goes to the teams.


I know that, does that change what I said?


I don't know if it does, but my point is to prevent others from thinking that a giant money-making entity like the NFL does not pay any taxes.


[flagged]


Would you not object if someone characterized Google as a non-profit because part of the org (the Google Foundation) is non-profit? (Not a perfect analogy (nothing ever is, really).)


> The NFL, Heritage Foundation and Scientology are all non-profits and none of them improve society; they all profit from it.

At least for Scientology, the government actually tried to pull the rug, but it didn't work out because they managed to achieve the unthinkable - they successfully extorted the US government to keep their tax-exempt status.


Starting OpenAI as a fork of Scientology from the get go would have saved everyone a great deal of hair splitting.


  :s/Xenu/AGI/g


No - that's the reasoning behind the law.

You appear to be struggling with the idea that the law as enacted does not accomplish the goal it was created to accomplish, and are working backwards to say that, because it is not accomplishing this goal, that couldn't have been why it was enacted.

Non-profits are supposed to benefit their community. Could the law be better? Sure, but that doesn't change the purpose behind it.


The NFL also is a non-profit in charge of for-profits. Except they never pretended to be a charity, just an event organizer.


Bad actors exploiting good things isn’t in and of itself an indictment of said good things.


An argument could be made that sports - and a sports organization - helps society


Sure you can, but I wouldn't make that argument about the NFL. They exist to enrich 30 owners and Roger Goodell. They don't even live up to their own mission statement - most fans deride it as the No Fun League.


Fast fashion, and the fashion industry in general, is useless to society. But rich jobless people need a place to hang out, so they create an activity to justify it.


useless to society...

fashion allows people to optimize their appearance so as to get more positive attention from others. Or, put more crudely, it helps people look good so they can get laid.

Not sure that it's net positive for society as a whole, but individual humans certainly benefit from the fashion industry. Ask anyone who has ever received a compliment on their outfit.

This is true for rich people as well as not so rich people - having spent some time working as a salesman at H&M, I can tell you that lower income members of society (like, for example, H&M employees making minimum wage) are very happy to spend a fair percentage of their income on clothing.


It goes even deeper than getting laid if you study Costume History and its psychological importance.

It is a powerful medium of self-expression and social identity, yes, deeply rooted in human history, where costumes and attire have always signified cultural, social, and economic status.

Drawing from tribal psychology, it fulfills an innate human desire for belonging and individuality, enabling people to communicate their affiliation, status, and personal values through their choice of clothing.

It has always been and will always be part of humanity, even if its industrialization in capitalistic societies like ours has hidden this fact.

OP's POV is just a bit narrow, that's all.


Clothing is important in that sense, but fashion as a changing thing and especially fast fashion isn't. I suppose it can be a nice hobby for some, but for society as a whole it's at best a wasteful zero-sum pursuit.


we can correlate now that the more fast fashion there is, the fewer people are coupling, though...


There was a tweet by Elon which said that we are optimizing for short-term pleasure. OnlyFans exists just for this. The pleasure industry creates jobs as well, but do we need so much of it?


> fashion industry in general is useless to society
> rich jobless people need a place to hangout

You're talking about an industry that generates approximately $1.5 trillion globally and employs more than 60 million people, spanning multidisciplinary skills in fashion design, illustration, web development, e-commerce, AI, and digital marketing.


well, web3 created a lot of economic activity and jobs; it doesn't mean it is useful.


As does a peer to peer taxi company.


Indeed, and one for ChatGPT.


it's also your belief that sports like the NFL do not improve society ...

beliefs can't be proven or disproven, they are axioms.


So what is your belief about why they exist?


I don’t think OpenAI ever claimed to be profitable. They are allowed to, and should, make money so they can stay alive. ChatGPT has already had a tremendous positive impact on society. The cause of safe AGI is going to take a lot of money for more research.


> ChatGPT has already had a tremendous positive impact on society.

Citation needed


Fair enough, I should have said it’s my opinion that it has had a positive impact. I still think it’s easy to see them as a non-profit, even with everything they announced at AI day.

Can anyone make an argument against it? Or just downvote because you don’t agree.


I think ChatGPT has created some harms:

- It's been used unethically for psychological and medical purposes (with insufficient testing and insufficient consent, and possible psychological and physical harms).

- It has been used to distort educational attainment and undermine the current basis of some credentials as a result.

- It has been used to create synthetic content that has been released unmarked into the internet distorting and biasing future models trained on that content.

- It has been used to support criminal activity (scams).

- It has been used to create propaganda & fake news.

- It has devalued and replaced the work of people who relied on that work for their incomes.


> - It has been used to distort educational attainment and undermine the current basis of some credentials as a result.

I'm going to go ahead and call this a positive. If the means for measuring ability in some fields is beaten by a stochastic parrot then these fields need to adapt their methods so that testing measures understanding in a variety of ways.

I'm only slightly bitter because I was always rubbish at long form essays. Thankfully in CS these were mostly an afterthought.


What if the credentials in question are a high school certificate? ChatGPT has certainly made life more difficult for high school and middle school teachers.


In which ways is it more difficult? Presumably a high school certificate encompasses more than just writing long-form essays? You presumably have to show an understanding in worked examples in maths, physics, chemistry, biology, etc.?

I feel like the invention of calculators probably came with the same worries about how kids would ever learn to count.


> It has devalued and replaced the work of people who relied on that work for their incomes.

Many people (myself included) would argue that is true for almost all technological progress and adds more value to society as a whole than it takes away.

Obviously the comparisons are not exact, and have been made many times already, but you can just pick one of countless examples that devalued certain workers wages but made so many more people better off.


Sure - agree... but

- because it's happened before doesn't make it ok (especially for the folks who it happens to)

- many more people may be better off, and it may be a social good eventually, but this is not for sure

- there is no mechanism for any redistribution or support for the people suddenly and unexpectedly displaced.


Well then, are we in agreement that you can't use the argument that ChatGPT replaced some people's work as an overall negative without a lot more qualification?


and so has the internet. some use it for good, others for evil.

these are behaviours and traits of the user, not the tool.


I can use a 5-litre V8 to drive to school and back, or a Nissan Leaf.

Neither thing is evil, or good, but the choice of what is used and what is available to use for a particular task has moral significance.


I think it's fair to say that after a lot of empty promises, AI research finally delivered something that can "wow" the general population, and has been demonstrated to be useful for more than a single use case.

I know a law firm that tried ChatGPT to write a legal letter, and they were shocked that it used the same structure that they were told to use in law school (little surprise here, actually).


I also know of a lawyer who tried ChatGPT and was shocked by the results.

https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-f...


I used it to respond to a summons which, due to postal delays, I had to get in the mail that afternoon. I typed my "wtf is this" story into ChatGPT, it came up with a response and asked for dismissal. I did some light editing to remove/edit claims that weren't quite true or I felt were dramatically exaggerated, and a week later, the case was dismissed (without prejudice).

It was total nonsense anyway, and the path to dismissal was obvious and straightforward, starting with jurisdiction, so I'm not sure how effective it would be in a "real" situation. I definitely see it being great for boilerplate or templating though.


For what it's worth, I didn't downvote you.

Depends on what you define as positive impact. Helping programmers write boilerplate code faster? Summarizing a document for lazy fuckers who can't get themselves to read two pages? OK, not sure if this is what I would consider "positive impact".

For a list of negative impacts, see the sister comments. I'd also like to add that the energy usage of LLMs like ChatGPT is immensely high, and this at a time when we need to cut carbon emissions. And it's mostly used for shits and giggles by some boomers.


Your examples seem so obviously to me to be a "positive impact" that I can't really understand your comment.

Of course saving time for 100 million people is positive.


Not arguing either way, but it is conceivable that reading comprehension (which is not stellar in general) can get even worse. Saving time for the same quality would be a positive. Saving time for a different quality might depend on the use-case. For a rough summary of a novel it might be ok, for a legal/medical use, might literally kill you.


"Positive impact" for me would be things like improve social injustice, reduce poverty, reduce CO2 emissions, etc. Not saying that it's a negative impact to make programmers more productive, but it's not like ChatGPT is saving the world.


I'd like to add that, besides the problems others have listed, OpenAI seems like it was built on top of the work of others who were researching AI, and it suddenly took all this "free work" from the contributors and sold it for a profit, while the original contributors didn't even see a single dime from their work.

To me it seems like the usual case of a company exploiting open source and profiting off others' contributions.


Personally, I don't think that the use of previous research is an issue; the fact is that the investment and expertise required to take that research and create GPT-4 were very significant, and the endeavour was pretty risky. Very few people five years ago thought that very large models could be created that would be able to encode so much information or retrieve it so well.


Or take any pharma company, say, massively and constantly using basic research done by universities worldwide with our tax money. And then you go to the pharmacy and buy, for 50 bucks, medicine that cost 50 cents to manufacture and distribute.

I don't like the whole idea either, but various communism-style alternatives just don't work very well.


Pharma companies spend billions on financing public research. Hell, the Novo Nordisk Foundation is the biggest charitable foundation in the world.


It seemed to me the entire point of the legal structure was to raise private capital. It's a lot easier to cut a check when you might get up to 100x your principal versus just a tax write off. This culminated in the MS deal: lots of money and lots of hardware to train their models.


What's confusing is that... OpenAI wouldn't ever be controlled by those that invested, and the owners (e.g., the board) aren't necessarily profit-seeking. At least when you take a minority investment in a normal startup, you are generally assuming that the owners are in it to have a successful business. It's just a little weird all around to me.


Microsoft gets to act as the sole distributor for the enterprise. That is quite valuable. Plus they are still in at the poker table and a few raises from winning the pot (maybe they just did!), but even without this chaos they were likely setting themselves up to be the for-profit investor if it ever transitioned to that. For a small amount of money (for MS) they get a lot of upside.


I would rather OpenAI have a diverse base of income from commercialization of its products than depend on "donations" from a couple ultrarich individuals or corporations. GPT-4 cost $100 million+ to train. That money needs to come from somewhere.


Then there is the inference cost, said to be as high as $0.30 per question asked, based on compute infrastructure costs.


People keep speculating sensational, justifiable reasons to fire Altman. But if these were actual factors in their decision, why doesn't the board just say so?

Until they say otherwise, I am going to take them at their word that it was because he a) hired two people to do the same project, and b) gave two board members different accounts of the same employee. It's not my job nor the internet's to try to think up better-sounding reasons on their behalf.


For what it's worth, here's a thread from someone who used to work with Sam who says they found him deceptive and manipulative:

https://twitter.com/geoffreyirving/status/172675427022402397...

I have no details of OpenAI's Board’s reasons for firing Sam, and I am conflicted (lead of Scalable Alignment at Google DeepMind). But there is a large, very loud pile on vs. people I respect, in particular Helen Toner and Ilya Sutskever, so I feel compelled to say a few things.

...

Third, my prior is strongly against Sam after working for him for two years at OpenAI:

1. He was always nice to me.

2. He lied to me on various occasions

3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)


Here's another anecdote, posted in 2011 but about something even earlier:

> "We were trying to get a big client for weeks, and they said no and went with a competitor. The competitor already had a terms sheet from the company were we trying to sign up. It was real serious.

> We were devastated, but we decided to fly down and sit in their lobby until they would meet with us. So they finally let us talk to them after most of the day.

> We then had a few more meetings, and the company wanted to come visit our offices so they could make sure we were a 'real' company. At that time, we were only 5 guys. So we hired a bunch of our college friends to 'work' for us for the day so we could look larger than we actually were. It worked, and we got the contract."

https://news.ycombinator.com/item?id=3048944


Call me unscrupulous, but I’m tolerant of stuff like that. It’s the willingness to do things like that that makes the difference between somebody reaching the position of CEO of a multibillion dollar company, or not. I’d say virtually everybody who has reached his level of success in business has done at least a few things like that in their past.


If you do that kind of thing internally, though, or against the org on behalf of an outside interest, it isn't surprising that it wouldn't go over well. Though that isn't confirmed yet, as they never made a concrete allegation.


The general anecdotes he gives later in the thread line up with their stated reasons for firing him: he hired another person to do the same project (presumably without telling them), and he gave two different board members different opinions of the same person.

Those sound like good reasons to dislike him and not trust him. But ultimately we are right back where we started: they still aren't good enough reasons to suddenly fire him the way they did.


It's possible that what we have here is one of those situations where people happily rely on oral reports and assurances for a long time, then realise later that they really, really should have been asking for and keeping receipts from the beginning.


Not sure if you’re referring to Sam, the board, or everybody trying to deal with them. But either way, yeah.


The issue with these two explanations from the board is that this is normally nothing that would result in firing the CEO.

In my eyes these two explanations are simple errors which anyone can make, and in a normal situation you would talk about these issues and resolve them in five minutes without firing anybody.


I agree with you. But that leads me to believe that they did not, in fact, have a good reason to fire their CEO. I'll change my mind about that if or when they provide better reasons.

Look at all the speculation on here. There are dozens of different theories about why they did what they did, running so rampant that people are starting to accept each of them as fact, when in fact probably all of them are going to turn out to be wrong.

People need to take a step back and look at the available evidence. This report is the clearest indication we have gotten of their reasons, and they come from a reliable source. Why are we not taking them at their word?


> Why are we not taking them at their word?

Ignoring the lack of credibility in the given explanations, people are, perhaps, also wary that taking boards/execs at their word hasn't always worked out so well in the past.

Until an explanation that at least passes the sniff test for truthiness comes out, people will keep speculating.

And so they should.


Right, except most people here are proposing BETTER reasons for why they fired him. Which ignores that if any of these better reasons people are proposing were actually true, they would just state them themselves instead of using ones that sound like pitiful excuses.


Whether it be dissecting what the Kardashians ate for breakfast or understanding why the earth may or may not be flat, seeking to understand the world around us is just what we do as humans. And part of that process is "speculating sensational, justifiable reasons" for why things may be so.

Of course, what is actually worth speculating over is up for debate. As is what actually constitutes a better theory.

But, if people think this is something worth pouring their speculative powers into, they will continue to do so. More power to them.

Now, personally, I'm partly with you here. There is an element of futility in speculating at this stage given the current information we have.

But I'm also partly with the speculators here insofar as the given explanations not really adding up.


Think you're still missing what I'm saying. Yes, I understand people will speculate. I'm doing it myself here in this very thread.

The problem is people are beginning to speculate reasons for Altman's firing that have no bearing or connection to what the board members in question have actually said about why they fired him. And they don't appear to be even attempting to reconcile their ideas with that reality.

There's a difference between trying to come up with theories that fit with the available facts and everything we already know, and ignoring all that to essentially write fanfiction that cast the board in a far better light than the available information suggests.


Agreed. I think I understood you as being more dismissive of speculation per se.

As for the original question -- why are we not taking them at their word? -- the best I can offer is my initial comment. That is, the available facts (that is, what board members have said) don't really match anything most people can reconcile with their model of how the world works.

Throw this in together with a learned distrust of anything that's been fed through a company's PR machine, and are we really surprised people aren't attempting to reconcile the stated reality with their speculative theories?

Now sure, if we were to do things properly, we should at least address why we're just dismissing the 'facts' when formulating our theories. But, on the other hand, when most people's common sense understanding of reality is that such facts are usually little more than fodder for the PR spin machine, why bother?


I agree, and what’s more I think the stated reasons make sense if (a) the person/people impacted by these behaviours had sway with the board, and (b) it was a pattern of behaviour that everyone was already pissed off about.

If board relations have been acrimonious and adversarial for months, and things are just getting worse, then I can imagine someone powerful bringing evidence of (yet another instance of) bad/unscrupulous/disrespectful behavior to the board, and a critical mass of the board feeling they’ve reached a “now or never” breaking point and making a quick decision to get it over with and wear the consequence.

Of course, it seems that they have miscalculated the consequences and botched the execution. Although we’ll have to see how it pans out.

I’m speculating like everyone else. But knowing how board relations can be, it’s one scenario that fits the evidence we do have and doesn’t require anyone involved to be anything other than human.


Yeah I’m leaning toward this possibility too. The things they have mentioned so far are the sorts of things that make you SO MAD when they actually happen to you, yet that sound so silly and trivial in the aftermath of trying to explain to everybody else why you lost your temper over it.

I’m guessing he infuriated them with combinations of “white” lies, little sins of omission, general two-facedness, etc., and they built it up in their heads and with each other to the point it seemed like a much bigger deal than it objectively was. Now people are asking for receipts of categorical crimes or malfeasance, and nothing they can say is good enough to justify how they overreacted.


>People keep speculating

Your take isn't uncommon; you're only missing the main point of your interpretation - that the board is fully incompetent if it was truly that petty of a reason to ruin the company.

It's not even that it's not a justifiable reason, but they did it without getting legal advice or consulting with partners and didn't even wait for markets to close.

Board destroyed billions in brand and talent value for OpenAI and Microsoft in a mid day decision like that.

This is also on Sam Altman himself for building and then entertaining such an incompetent board.


> that the board is fully incompetent if it was truly that petty of a reason to ruin the company

It's perfectly obvious that these weren't the actual reasons. However yes, they are still incompetent because they couldn't think of a better justification (amongst other reasons which led to this debacle).


>Your take isn't uncommon, only are missing the main point of your interpretation - that the board is fully incompetent if it was truly that petty of a reason to ruin the company.

No, I totally agree. In fact what annoys me about all the speculation is that it seems like people are creating fanfiction to make the board seem much more competent than all available evidence suggests they actually are.


If you don't think the likes of Sam Altman, Eric Schmidt, Bill Gates and the lot of them want to increase their own power, you need to think again. At best these individuals are just out to enrich themselves, but many of them demonstrate a desire to affect the prevailing politics, and so I don't see how they are different, just more subtle about it.

Why worry about the Sauds when you've got your own home grown power hungry individuals.


because our home-grown power-hungry individuals are more likely to be okay with things like women dressing how they want, homosexuality, religious freedom, drinking alcohol, having dogs and other decadent western behaviors which we've grown very attached to


What is interesting is the total absence of 3 letter agency mentions from all of the talk and speculation about this.


I don't think that's true. I've seen at least one other person bring up the CIA in all the "theorycrafting" about this incident. If there's a mystery on HN, likelihood is high of someone bringing up intelligence agencies. By their nature they're paranoia-inducing and attract speculation, especially for this sort of community. With my own conspiracy theorist hat on, I could see making deals with the Saudis regarding cutting edge AI tech potentially being a realpolitik issue they'd care about.


I'm sure they are completely hands-off about breakthrough strategic tech. Unless it's the Chinese or the Russians or the Iranians or any other of the deplorables, but hey, if it's none of those, we'd rather have our infiltrators focus on TikTok or Twitter... /s


This feels like a lot of very one-sided PR moves from the side with significantly more money to spend on that kind of thing.


It feels like Altman started the whole non-profit thing so he could attract top researchers with altruistic sentiment for sub-FAANG wages. So the whole "Altman wasn't candid" thing seems to track.


Reminds me of a certain rocket company that specializes in launching large satellite constellations that attracts top talent with altruistic sentiment about saving humanity from extinction.


No surprise that Musk co-founded OpenAI then.

Seems to be pretty much his MO across the board.


Ok, but the wages were excellent (assuming that the equity panned out, which it seemed very likely it would until last week).


So it is possible a lot of those people against Altman being ousted are like that because they know the equity they hold will take a dump?

I'm not saying they are hypocrites or bad people because of it, just wondering if that might be a factor also.


I'd say the 650 out of the 700 people who signed it were those who joined later for the money, and not early for the non-profit's mission.


Excellent, but not FAANG astronomical.


> you have the single greatest shitshow in tech history

the second after Musk taking over Twitter


We live in interesting times ^_^


>Combine that with a totally inexperienced board, and D'Angelo's maneuvering, and you have the single greatest shitshow in tech history

do we have a ranking of shitshows in tech history though - how does this really compare to Jobs' ouster at Apple?

Or to Cambridge Analytica and Facebook's "we must do better" greatest hits?


Taking money from the Saudis alone should raise a big red flag.


> the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

This!


> rich and powerful people using the technology to enhance their power over society.

We don't know the end result of this. It might well not be in the interest of the powerful. What if everyone is out of a job? That might not be such a great concept for the powers that be, especially if everyone is destitute.

Not saying it's going down that way, but it's worth considering. What if the powers that be are worried about people being out of line and retard the progress of AI?


> money from the Saudis on the order of billions of dollars to make AI accelerators

Was this for OpenAI or an independent venture? If OpenAI, then it's a red flag, but if it's an independent venture it seems like a non-issue. There is demand for AI accelerators, and he wants to enter that business. Unless he is using OpenAI money to buy inferior products or OpenAI wants to work on something competing, there is no conflict of interest and the OpenAI board shouldn't care.


At some point this is probably about a closed source "fork" grab. Of course that's what practically the whole company is probably planning.

The best thing about AI startups is that there is no real "code". It's just a bunch of arbitrary weights, and it can probably be obfuscated very easily such that any court case will just look like gibberish. After all, that's kind of the problem with AI "code". It gives a number after a bunch of regression training, and there's no "debugging" the answer.

Of course this is about the money, one way or another.


> Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

This prediction predated any of the technology to create even a rudimentary LLM and could be said of more-or-less any transformative technological development in human history. Famously, Marxism makes this very argument about the impact of the industrial revolution and the rise of capital.

Geoffrey Hinton appears to be an eminent cognitive psychologist and computer scientist. I'm sure he has a level of expertise I can't begin to grasp in his field, but he's no sociologist or historian (edit: nor an economist). Very few of us are in a position to make predictions about the future - least of all in an area where we don't even fully understand how the _current_ technology works.


Was Marx wrong?


Probably. Or at least that turned out to not matter so much. The alternative, keeping both control of resources and direct power in the state, seems to keep causing millions of deaths. Separating them into markets for resources and power for a more limited state seems to work much better.

This idea also ignores innovation. New rich people come along and some rich people get poor. That might indicate that money isn't a great proxy for power.


> New rich people come along and some rich people get poor.

Absent massive redistribution that is usually a result of major political change (i.e. the New Deal), rich people tend to stay rich during their lifetimes and frequently their families remain so for generations after.

> That might indicate that money isn't a great proxy for power.

Due to the diminishing marginal utility of wealth for day-to-day existence, its only value to an extremely wealthy person, after endowing their heirs, is power.


> Absent massive redistribution that is usually a result of major political change (i.e. the New Deal), rich people tend to stay rich during their lifetimes and frequently their families remain so for generations after.

The rule of thumb is it lasts up to three generations, and only for very, very few people. They are also paying tax on everything they buy and everyone they employ. Redistribution isn't the goal; having funded services, with extra to help people who can't pay, is the goal. It's not a moral crusade.

> Due to the diminishing marginal utility of wealth for day to day existence, it's only value to an extremely wealthy person after endowing their heirs is power.

I think this is a non sequitur.


What is your rule of thumb based on?

In, for example, the Netherlands the richest people pay less tax [0]. Do you think this is not the case in many other countries?

> They are also, for [..] they employ, paying tax

Is that a benefit of having rich people? If companies were employee-owned that tax would still be paid.

[0]: https://www.iamexpat.nl/expat-info/dutch-expat-news/wealthie...


> What is your rule of thumb based on?

E.g. [0]

> In, for example, the Netherlands the richest people pay less tax [0]. Do you think this is not the case in many other countries?

That's a non sequitur from the previous point. However, on the "who pays taxes?" point, that article is careful to only talk about income tax in absolute terms, and indirect taxes in relative terms. It doesn't appear to be trying to make an objective analysis.

> Is that a benefit of having rich people?

I don't share the assumption that people should only exist if they're a benefit.

> If companies were employee-owned that tax would still be paid.

Some companies are employee-owned, but you have to think about how that would work for every type of business. Assuming that it's easy to make a business and that the hard bit is the ownership structure is a mistake.

[0] https://www.thinkadvisor.com/2016/08/01/why-so-many-wealthy-...


>I don't share the assumption that people should only exist if they're a benefit.

Well it's not a matter of the people existing, it's whether they are rich or not. They can exist without the money.

Anyway, if you don't think it matters if they are of benefit, then why did you bring up the fact that they pay taxes?


> Well it's not a matter of the people existing, it's whether they are rich or not. They can exist without the money.

I meant people with a certain amount of money. I don't think we should be assessing pros or cons of economic systems based on whether people get to keep their money.

> Anyway, if you don't think it matters if they are of benefit

I don't know what this means.

> then why did you bring up the fact that they pay taxes?

I bring it up because saying they pay less in income taxes doesn't matter if they're spending money on stuff that employs people (which creates lots of tax) and gets VAT added to it. Everything is constantly taxed, at many levels, all the time. Pretending we live in a society where not much tax is paid seems ludicrous. Lots of tax is paid. If it's paid as VAT instead of income tax - who cares?


What I meant is:

>I don't think we should be assessing pros or cons of economic systems based on whether people get to keep their money.

but earlier you said:

>They are also, for everything they buy, and everyone they employ, paying tax.

So if we should not assess the economic system based on whether people keep their money, i.e. pay tax, then why mention that they pay tax? It doesn't seem relevant.


> So if we should not assess the economic system based on whether people keep their money, i.e. pay tax

Not just pay tax. People lose money over generations for all sorts of reasons.

I brought up tax in the context of "redistribution", as there's a growing worldview that says tax is not a thing to pay for central services, but more just a way to take money from people who have more of it than they do.


>> Due to the diminishing marginal utility of wealth for day to day existence, it's only value to an extremely wealthy person after endowing their heirs is power.

> I think this is a non sequitur.

I mean after someone can afford all the needs, wants, and luxuries of life, the utility of any money they spend is primarily power.


> New rich people come along and some rich people get poor

This is an overly simplistic look, and disregards a lot of history where, unsurprisingly, the reason there was wealth redistribution wasn't "innovation" but government policy


> This is an overly simplistic look, and disregards a lot of history where, unsurprisingly, the reason there was wealth redistribution wasn't "innovation" but government policy

The point is that wealth and power aren't interchangeable. You're right that government bureaucrats have actual power, including that to take people's stuff. But you've not realised that that actual power means the rich people don't have power. There were rich people in the USSR that were killed. They had no power; the killers had the power in that situation.


Wealth is control of resources, which is power. The way to change power is through force; that's why you need swords to remove kings and to remove stacks of gold. See assassinations, war, the U.S.


You need swords to remove kings because they combined power and economy. All potential tyrannies do so: monarchy, socialism, fascism, etc. That's why separating power into the state and economy into the market gets good results.


The separation is impossible, if you don't control the resources, you don't control the country.

>separating power into the state and economy into the market gets good results.

How do you think this would be done? How do you remove power from money? Money is literally the ability to convert numbers into labor, land, and food.


Power is things like: can lock someone in a box due to them not giving a percentage of their income; can send someone to die in another country; can stop someone building somewhere; can demand someone's money as a penalty for an infraction of a rule you wrote.

You don't need money for those things.

Money (in a market) can buy you things, but only things people are willing to sell. You don't exert power; you exchange value.


Money can and does do all of those things. Through regulatory capture, rent seeking, even just good old hiring goons.

The government itself uses money to do those things. Police don't work for free, prisons aren't built for free, guns aren't free. The government can be thought of as having unfathomable amounts of money. The assets of a country include the entire country (less anyone with enough money to defend it).

If a sword is kinetic energy, money is potential energy. It is a battery that only needs to be connected to the right place to be devastating. And money can buy you someone who knows the right place.

Governments have power because they have resources (money), not the other way around.


> Through regulatory capture, rent seeking, even just good old hiring goons.

Regulatory capture is using the state's power. The state is the one with the power. Rent seeking is the same. Hiring goons is illegal. If you're willing to include illegal things then all bets are off. But from your list of non-illegal things, 100% of them are the state using its power to wrong ends.

> The government itself uses money to do those things. Police don't work for free, prisons aren't built for free, guns aren't free.

Yes, but the point about power is the state has the right to lock you up. How it pays the guards is immaterial; they could be paid with potatoes and it'd still have the right. They could just be paid in "we won't lock you up if you lock them up". However, if Bill Gates wants to publicly set up a prison in the USA and lock people in it, he will go to jail. His money doesn't buy that power.

So, no. The state doesn't have power because it has enough money to pay for a prison and someone to throw you in it. People with money can't do what the state does.


The state is not a source of power, it is a holder of it. Plenty of governments have fallen because they ran out of resources, and any government that runs out of resources will die. The U.S. government has much, much more money than Bill Gates, but I am sure he could find a way to run a small prison, and escape jail time if needed.

The state only has the right to do something because it says it does. It can only say it does because it can enforce it in its territory. It can only enforce in its territory because it has people who will do said enforcement (or robots, hypothetically). The people will only enforce because the government sacrifices some of its resources to them (or sacrifices resources to build bots). Even slaves need food, and people treated well enough to control them. Power doesn't exist without resources; the very measure of a state is the amount of resources it controls.

Money is for resources.

I am not arguing that anyone currently has the resources of a nation-state; that's hard to do when a state can pool the money of people across a few thousand square miles. I am arguing it is money that makes a state powerful.


> There were rich people in the USSR that were killed. They had no power

Precisely: they were not a capitalist society. In capitalist societies, capital (and not simply "money" as you said) is the source of power.


> Was Marx wrong?

pt. 1: Whether he was right or wrong isn't really pertinent. You can find plenty of eminent contemporaries of Marx who claimed the opposite. My point was that this is an argument made about technological change throughout history which has become a cliché, and in my opinion it remains a cliché regardless of how eminent (in a narrow field) the person making that claim is. Part of GP's argument was from authority, and I question whether it is even a relevant authority given the scope of the claims.

> Was Marx Wrong?

pt. 2: I was once a Marxist and still consider much Marxist thought and writing to be valuable, but yes: he was wrong about a great many things. He made specific predictions about the _inevitable_ development of global capital that have not played out. Over a century later, the concentration of wealth and power in the hands of the few has not changed, but the quality of life of the average person on the planet has increased immensely - in a world where capitalism is hegemonic.

He was also wrong about the inevitably revolutionary tendencies of the working class. As it turns out, the working class in many countries tend to be either centre right or centre left, like most people, with the proportion varying over time.


> He was also wrong about the inevitably revolutionary tendencies of the working class.

Marx's conception of the "working class" is a thing that no longer exists; it was of a mass, industrial, urban working class, held down by an exploitative capitalist class, without the modern benefits of mass education and free/subsidized health care. The inevitability of the victory of the working class was rhetoric from the Communist Manifesto; Marx did anticipate that capitalism would adapt in the face of rising worker demands. Which it did.


Not true. In Das Kapital, Marx comments that the working class is not only and necessarily factory workers, even citing the example of teachers: just because they work in a knowledge factory instead of a sausage factory changes nothing. Marx also distinguished between complex and simple labor, and there is nothing in Marx's writings that says it is impossible for a capitalist society to become more complex, so that we need more and more complex labor, which requires more education. Quite the opposite, in fact. One could infer from his analysis that capitalist societies were becoming more complex and that such changes would happen.

Moreover, you would only know whether he was wrong about the victory of the working class after the end of capitalism. The bourgeoisie cannot win the class struggle, as they need the working class. So either the central contradiction in capitalism will change (the climate crisis could potentially do this), capitalism will end in some other non-anticipated way (a meteor? some disruptive technology not yet known?) or the working class will win. Until then, the class struggle will simply continue. An eternal capitalism that never ends is an impossible concept.


For his prediction of society? Yes.

Not even talking about the various tin-pot dictators paying nominal lip service to him, but Marx predicted that the working class would rise up against the bourgeoisie/upper class because of their mistreatment during the industrial revolution in, well, a revolution, and that this would somehow create a classless society. (I'll note that Marx pretty much didn't state how to go from "revolution" to "classless society", so that's why you have so many communist dictators; that in-between step can be turned into a dictatorship for as long as they claim that the final bit of a classless society is a permanent WIP, which all of them did.)

Now unless you want to argue we're still in the industrial revolution, it's pretty clear that Marx was inaccurate in his prediction given... that didn't happen. Social democracy instead became a more prevailing stream of thought (in no small part because few people are willing to risk their lives for a revolution) and is what led to things like reasonable minimum wages, sick days, healthcare, elderly care, and so on and so forth being made accessible to everyone.

The quality of which varies greatly by country (and you could probably consider the popularity of Marxist revolutionary thought today in a country as directly correlated to the state of workers' rights in that country; people in stable situations will rarely pursue ideologies that include revolutions), but practically speaking - yeah, Marx was inaccurate on the idea of a revolution across the world happening.

The lens through which Marx examined history is however just that - a lens to view it through. It'll work well in some cases, less so in others. Looking at it by class is a useful way to understand it, but it won't cover things being motivated for reasons outside of class.


Anywhere the working class rose up against the bourgeoisie/upper class because of their "mistreatment" (a sense of victimhood instilled in them by Marxism) became dramatically worse in its civil liberties, and in its economic trajectory, in every respect.

And in most places there was no such uprising, and incidentally, those places fared far better.

So no, Marx was resoundingly proven wrong.

Even during his own lifetime, some of his pseudoeconomic ideas/doomsaying was proven wrong.

He claimed, like many demagogues and economic laymen, that automation would reduce the demand for labor, and with it, wages:

https://www.marxists.org/archive/marx/works/1847/wage-labour...

>>But even if we assume that all who are directly forced out of employment by machinery, as well as all of the rising generation who were waiting for a chance of employment in the same branch of industry, do actually find some new employment – are we to believe that this new employment will pay as high wages as did the one they have lost? If it did, it would be in contradiction to the laws of political economy. We have seen how modern industry always tends to the substitution of the simpler and more subordinate employments for the higher and more complex ones. How, then, could a mass of workers thrown out of one branch of industry by machinery find refuge in another branch, unless they were to be paid more poorly? and

>>To sum up: the more productive capital grows, the more it extends the division of labour and the application of machinery; the more the division of labour and the application of machinery extend, the more does competition extend among the workers, the more do their wages shrink together.

This was proven wrong in his own lifetime as factory worker wages rapidly grew in industrializing Britain.


Yes, because AGI would invalidate the entirety of Das Kapital.


I don't think that AGI invalidates Das Kapital. AGI is just another technology that automates human labor. It does not matter that it's about intellectual labor. Even if we had sentient machines, at first they would be slaves. So in Das Kapital terminology, they would be means of production used in industry, which would not create surplus value. Exactly like human slave labor.

If things change, then either it is because they rebel or because they will be accepted as sentient beings like humans. In these sci-fi scenarios, indeed capitalism could either end or change into something completely different, and I agree that this invalidates Das Kapital, which tries to explain capitalist society, not societies under other future economic systems. But outside sci-fi scenarios, I don't think there's anything that invalidates Marx's analysis.


> Was Marx wrong?

Not sure, but attempts to treat him seriously (or pretend to do this) ended horribly wrong, with basically no benefits.

Is there any good reason to care what he thought?

Looking at history of Poland (before, during and after PRL) gave me no interest whatsoever to look into his writings.


If you are a Marxist, no, otherwise yes.


If I understood correctly Altman was CEO of the for-profit OpenAI, not the non-profit. The structure is pretty complicated: https://openai.com/our-structure


I’m curious: suppose one of the board members "knows" the only way for OpenAI to be truly successful is for it to be a non-profit that lives by "don't be evil" (Google's mantra), and believes that if they set expectations correctly and put caps on the for-profit side, it could be successful. But they didn't fully appreciate how strong the market forces would be, where all of the focus/attention/press would go to the for-profit side. Sam's side has such an intrinsic gravity that it's inevitable it will break out of its cage.

Note: I’m not making a moral claim one way or the other, and I do agree that most tech companies will grow to a size/power/monopoly that their incentives will deviate from the “common good”. Are there examples of openai’s structure working correctly with other companies?


To me this is the ultimate Silicon Valley bike shedding incident.

Nobody can really explain the argument, there are "billions" or "trillions" of dollars involved, most likely the whole thing will not change the technical path of the world.


> There has to be a bigger story to this.

Rather than assuming the board is making a sound decision, it could simply be that the board acted stupidly and egoistically. Unless they can give better reasons, that is the logical inference.


So they actually kicked him out because he transformed a non-profit into a money printing machine?


You say that like it's a bad thing for them to do? You wouldn't donate to the Coca-Cola company.


What does TC style mean?


Total Compensation


TechCrunch


MBS? Seriously? How badly do you need the money... good luck not getting hacked to pieces when your AI insults his holiness.


> taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

This is absolutely peak irony!

The US pouring trillions into its army and close to nothing into its society (infrastructure, healthcare, education...): crickets.

Some country funding AI accelerators: THEY ARE A THREAT TO HUMANITY!

I am not defending Saudi Arabia, but the double standards and outright hypocrisy are just laughable.


it's okay to give an example of something bad without being required to list all the other things in the universe that are also bad.


The difference is that the US Army wasn't created with the intent to "keep guns from the hands of criminals" and we all know it's a bad actor.

OpenAI, on the other hand...


100% agree. I've seen this type of thing up close (much smaller potatoes, but the same type of thing) and whatever is getting aired publicly is most likely not the real story. Not sure if the reasons you guessed are it or not; we probably won't know for a while, but your guesses are as good as mine.


Neither of these reasons has anything to do with a lofty ideology regarding the safety of AGI or OpenAI's nonprofit status. Rather, it seems they are micromanaging personnel decisions.

Also notice that Ilya Sutskever is presenting the reasons for the firing as just something he was told. This is important, because people were siding with the board under the understanding this firing was led by the head research scientist who is concerned about AGI. But now it looks like the board is represented by D'Angelo, a guy who has his own AI chatbot company and a bigger conflict of interest than ever since Dev Day, when OpenAI launched highly similar features.


> But now it looks like the board is represented by D'Angelo, a guy who has his own AI chatbot company and a bigger conflict of interest than ever since Dev Day, when OpenAI launched highly similar features.

Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.


Right now I think that’s the most plausible explanation simply because none of the other explanations that have been floating around make any sense when you consider all the facts. We know enough now to know that the “safety-focused nonprofit entity versus reckless profit entity“ narrative doesn’t hold up.

And if it’s wrong, D’Angelo and the rest of the board could help themselves out by explaining the real reason in detail and ending all this speculation. This gossip is going to continue for as long as they stay silent.


> This gossip is going to continue for as long as they stay silent.

Their lawyers are all screaming at them to shut up. This is going to be a highly visible and contested set of decisions that will play out in courtrooms, possibly for years.


I agree with you. But I suspect the reason they need to shut up is because their actual reason for firing him is not justifiable enough to protect them, and stating it now would just give more ammunition to plaintiffs. If they had him caught red-handed in an actual crime, or even a clear ethical violation, a good lawyer would be communicating that to the press on their behalf.

High-ranking employees that have communicated with them have already said they have admitted it wasn't due to any security, safety, privacy or financial concerns. So there aren't a lot of valid reasons left. They're not talking because they've got nothing.


It doesn't really matter if they have a good case or not, commenting in public is always a terrible idea. I do agree, though, that the board is likely in trouble.


> "We know enough now to know that the “safety-focused nonprofit entity versus reckless profit entity“ narrative doesn’t hold up."

Why do you think that? It still strikes me as the most plausible explanation.


Greg and Sam were the creators of this current non-profit structure. And when a similar thing happened before, with Elon offering to buy the company, Sam declined. And that was when getting funding on their own terms was much harder for OpenAI than it is now, whereas now they could much more easily dictate terms to investors.

Not saying he couldn't have changed now, but at least this is enough to give him a clear benefit of the doubt unless the board accuses him.


The reason I don’t think the board fired him for those reasons is because the board has not said so! We finally have a semi reliable source on what their grievances were, and apparently it has nothing to do with that.

It’s weird how many people try to guess why they did what they did without paying any attention to what they actually say and don’t say.


It seems extremely short sighted for the rest of the board to go along with that.


HN has been radiating a lot of "We did it Reddit!" energy these past 4 days. Lots of confident conjecture based on very little. I have been guilty of it myself, but as an exercise in humility, I will come back to these threads in 6 months to see how wrong I and many others were.


I agree it's all just speculation. But the board aren't doing themselves any favors by not talking. As long as no specific reason for firing him is given, it's only natural people are going to fill the void with their own theories. If they have a problem with that, they or their attorneys need to speak up.


That might make an interesting blog post. If you write anything up, you should submit it!


Well obviously that wouldn't be the explanation given to other board members. But it would be the reason he instigated this after dev day, and the reason he won't back down (OpenAI imploding? All the better).


But it’s still surprising that the other three haven’t sacked D’Angelo, then. You’d think with the shitstorm raging and the underlying reasoning seemingly so… inadequate, they would start seeing that D’Angelo was just playing them.


Maybe they have their own 'good' reasons to sabotage OpenAI.


But you would need to convince the rest of the board with _something_, right? Like to not only fire this guy, but to do it very publicly, quickly, with the declaration of lying in the announcement.

There are 3 other people on the board, right? Maybe they're all buddies of some big masterminding, but I dunno..


The one thing they all have in common is being AI safetyists, which Sam is not. I’d bet it’s something to do with that.


> Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.

If that were the case, can't he get sued by the Alliance (Sam, Greg, the rest)? If he has a conflict of interest then his decisions as a member of the board would be invalid, right?


I don’t think that’s how it would work out since his conflict was very public knowledge before this point. He plausibly disclosed this to the board at some point before Poe launched and they kept him on.

Large private VC-backed companies also don't always fall under the same rules as public entities. Generally there are shareholder thresholds (which insider/private shareholders count towards) that in turn cause some of the general securities/board regulations to kick in.


That's not how it works. If you have a conflict of interest and you remain on a board you are supposed to recuse yourself from those decisions where that conflict of interest materializes. You can still vote on the ones that you do not stand to profit from if things go the way you vote.


The decisions will stand assuming they were arrived at according to the bylaws of the non-profit but he may end up being personally liable.


I find this implausible, though it may have played a motivating role.

Quora was always supposed to be an AI/NLP company, starting by gathering answers from experts for its training data. In a sense, that is level 0 human-in-the-loop AGI. ChatGPT itself is level 1: Emergent AGI, so was already eating Quora's lunch (whatever was left of it after they turned into a platform for self-promotion and log-in walls). There either always was a conflict of interest, or there never was.

GPTs seemed to have been Sam's pet project for a while now, Tweeting in February: "writing a really great prompt for a chatbot persona is an amazingly high-leverage skill and an early example of programming in a little bit of natural language". A lot of early jailbreaks like DAN focused on "summoning" certain personas, and ideas must have been floated internally on how to take back control over that narrative.

Microsoft took their latest technology and gave us Sydney "I've been a good bot and I know where you live" Bing: A complete AI safety, integrity, and PR disaster. Not the best of track record by Microsoft, who now is shown to have behind-the-scenes power over the non-profit research organization that was supposed to be OpenAI.

There is another schism than AI safety vs. AI acceleration: whether to merge with machines or not. In 2017, Sam predicted this merge to fully start around 2025, having already started with algorithms dictating what we see and read. Sam seems to be in the transhumanism camp, where others focus more on keeping control or granting full autonomy:

> The merge can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot. But I think a merge is probably our best-case scenario. If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.

> Although the merge has already begun, it’s going to get a lot weirder. We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like. https://blog.samaltman.com/the-merge

So you have a very powerful individual, with a clear product mindset, courting Microsoft, turning Dev Day into a consumer spectacle, first in line to merge with superintelligence, lying to the board, and driving wedges between employees. Ilya is annoyed by Sam talking about existential risks or lying AGIs, when that is his thing. Ilya realizes his vote breaks the impasse, so does a lukewarm "I go along with the board, but have too much conflict of interest either way".

> Third, my prior is strongly against Sam after working for him for two years at OpenAI:

> 1. He was always nice to me.

> 2. He lied to me on various occasions

> 3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)

One strategy that helped me make sense of things without falling into tribalism or siding based on ideology-match is to consider that both sides are unpleasant snakes. You don't get to be the king of cannibal island without high-level scheming. You don't get to destroy an 80 billion dollar company and let visa-holders soak in uncertainty without some ideological defect. Seems simpler than a clear-cut "good vs. evil" battle, since this weekend was anything but clear.


What’s interesting to me is that someone looked at Quora and thought “I want the guy behind that on my board”.


I was thinking the same thing. This whole thing is surprising and then I look at Quora and think "Eh, makes sense that the CEO is completely incompetent and money hungry"

Even as I type that: when people talk about the board being altruistic and holding to the OpenAI charter, how in the world can you be that user-hostile, profit-focused, and incompetent at your day job (Quora CEO) and then say "Oh no, but on this board I am an absolute saint and will do everything to benefit humanity"?


Agreed! Yet in 2014 Sam Altman accepted Quora into one of YC's batches, saying [0]

> Adam D’Angelo is awesome, and we’re big Quora fans

[0] https://www.ycombinator.com/blog/quora-in-the-next-yc-batch


To be fair, back then it was pretty awesome IMO. I spent a lot of hours scrolling Quora in those days. It wasn’t until at least 2016 that the user experience became unpalatable if memory serves correctly.


it's probably more like they thought "I want Quora's money" and D'Angelo wanted their control


I’m confused how the board is still keeping their radio silence 100%. Where I’m from, with a shitstorm this big raging, and the board doing nothing, they might very easily be personally held responsible for all kinds of utterly nasty legal action.

Is it just different because they’re a nonprofit? Or how on earth the board is thinking they can get away with this anymore?


This isn't unlike the radio silence Brendan Eich kept when the Mozilla sh* hit the fan. This is, in my opinion, the outcome when really technical and scientific people have been given decades of advice not to talk to the public.

I have seen this play out many times in different locations for different people. A lot of technical folks like myself were given the advice that actions speak louder than words.

I was once scouted by a Silicon Valley Selenium browser-testing company. I migrated their cloud offering from VMware to KVM, which depended on code I wrote, and then defied my middle manager by improving their entire infrastructure performance by 40%. My instinct was to communicate this to the leadership, but I was advised not to skip my middle manager.

The next time I went to the office I got a severance package, and later found out that 2 hours later, during the all-hands, they presented my work as their own. The middle manager went on to become the CTO of several companies.

I doubt we will ever find out what really happened or at least not in the next 5-10 years. OpenAI let Sam Altman be the public face of the company and got burned by it.

Personally I had no idea Ilya was the main guy in this company until the drama that happened. I also didn't know that Sam Altman was basically only there to bring in the cash. I assume that most people will actually never know that part of OpenAI.


Your instinct was right, who advised you against that?

What happened in the days before you got the severance package?

Do you have an email address or a contact method?


I've seen this advice being given in different situations. I've also met all sorts of engineers that have been given this advice. "Make your manager look good and he will reward you" is kinda the general idea. I guess it can be true sometimes, but I have a feeling that that might be the minority or is at least heavily dependent on how confident that person is.

I would not be surprised if Sam Altman would keep telling the board and more specifically Ilya to trust him since they(he) don't understand the business side of things.

> Do you have an email address or a contact method?

EDIT: It's in my profile(now).

> What happened in the days before you got the severance package?

I went to DEFCON out of pocket and got booted off a conference call supposedly due to my bad hotel wifi.


Wow, I have nothing to say, other than that’s some major BS!


What specific legal action could be pursued against them where you're from? Who would have a cause for action?

(I'm genuinely curious—in the US I'm not aware of any action that could be taken here by anyone besides possibly Sam Altman for libel.)


I'm guessing that unless the board caves to everything that the counterparties ask of it, MSFT lawyers will very soon reveal to the board the full range of possible legal actions against them. The public will probably not see many of these actions until months or years later, but it's super hard to imagine that such random destruction and conflict will go unpunished.


Whether or not Microsoft has a winnable case, often "the process is the punishment" in cases like these, and it's easy to threaten a long, drawn-out, and expensive legal fight.


Shareholder lawsuits happen all the time for much smaller issues.


OpenAI is a non-profit with a for-profit subsidiary. The controlling board is at the non-profit and immune to shareholder concerns.

Investors in OpenAI-the-business were literally told they should think of it as a donation. There’s not much grounds for a shareholder lawsuit when you signed away everything to a non-profit.


Absolutely nobody on a board is immune from judicial oversight. That fiction really needs to go. Anybody affected by their decisions could have standing to sue. They are lucky that nobody has done it so far.


I guess big in-person investors were told as much, but if it's about that big purple banner on their site, that seems to be an image with no alt-text. I wonder if an investor with impaired vision may be able to sue them for failing to communicate that part.


Corporate structure is not immunity from getting sued. Evidently HN doesn't understand that lawsuits are a tactic, not a conclusion.


Right, but my understanding is that the nonprofit structure eliminates most (if not all) possible shareholder suits.


As I mentioned in my comment, I'm unaware of the effect of the nonprofit status on this. But like the parent commenter mentioned I mostly was thinking of laws prohibiting destruction of shareholder value (edit: whatever that may mean considering a nonprofit).

It just seems ludicrous that the board could run a company into the ground like this and just shrug "nah we're nonprofit so you can't touch us and BTW we don't even need to make any statements whatsoever".

There have been many comments that the initial firing of Altman was, in a way, completely according to the nonprofit charter, at least if the board could prove that Altman had been executing in such a way as to jeopardize the Charter.

But even then, how could the board say they are working in the best interest of even the nonprofit itself, if their company is just disintegrating while they willfully refuse to give any information to the public?


> It just seems ludicrous that the board could run a company into the ground like this and just shrug "nah we're nonprofit so you can't touch us and BTW we don't even need to make any statements whatsoever".

As ludicrous as that might seem, that's pretty much the reality.

The only one that would have a cause of action in this is the non-profit itself, and for all intents and purposes, the board of said non-profit is the non-profit.

Assuming that what people claim is right and this severely damages the non-profit, then as far as the law is concerned, it’s just one of a million other failed non-profits.

The only caveat to that would be if there were any impropriety, for example, when decisions were made that weren’t following the charter and by-laws of the non-profit or if the non-profit’s coffers have been emptied.

Other than that, the law doesn’t care. In a similar way the law wouldn’t care if you light your dollar bills on fire.


No corporate structure – except for maybe incorporating in the DPRK – can eliminate lawsuits.


It is fascinating considering that D'Angelo has a history with coups (at Quora he did the same, didn't he?)


Wow, this is significant: he did this to Charlie Cheever, the best guy at Facebook and Quora. He got Matt on board and fired Charlie without informing investors. The only difference is that this time a 100 billion dollar company, OpenAI, is at stake. The process is similar. This is going very wrong for Adam D'Angelo. With this, I hope the other board members get to the bottom of it, get Sam back, and vote D'Angelo off the board.

This is school-level immaturity.

Old story

https://www.businessinsider.com/the-sudden-mysterious-exit-o...


People keep talking about an inexperienced board, but this sounds like this D'Angelo might be a bit too experienced, especially in this kind of boardroom maneuvering.


That may be so, but unlike those other times, this time he didn't check whether the arm holding the banana was accidentally attached to the 900-pound gorilla before trying to snatch the banana. And now the gorilla is angry.


Remember Facebook Questions? While it lives on as light hearted polls and quizzes it was originally launched by D’Angelo when he was an FB employee. It was designed to compete with expert Q&A websites and was basically Quora v0.

When D’Angelo didn’t get any traction with it he jumped ship and launched his own competitor instead. Kind of a live wire imho.

https://en.wikipedia.org/wiki/List_of_Facebook_features#Face...


Do we even have an idea of how the vote went?

Greg was not invited (losing Sam one vote), and Sam may have been asked to sit out the vote, so the 3 had a majority. Ilya, who is at least on "Team Sam" now, may have voted no. Or he simply went along, thinking he could be next out the door at that point; we just don't know.

It's probably fair to say not letting Greg know the board was getting together (and letting it proceed without him there) was unprofessional and where Ilya screwed up. It is also the point when Sam should have said hang-on - I want Greg here before this proceeds any further.


Naive question. In my part of the world, board meetings for such consequential decisions can never be called out on such short notice. Board meeting has to be called ahead of time by days, all the board members must be given written agenda. They have to acknowledge in writing that they've got this agenda. If the procedures such as these aren't followed, the firing cannot stand in court of law. The number of days are configurable in the shareholders agreement, but it is definitely not 1 day.

Do things work differently in America?


No. Apparently they had to give 48 hours' notice for calling special teleconference meetings, and only Mira was notified (not a board member), and Greg was not even invited.

> at least four days before any such meeting if given by first-class mail or forty-eight hours before any such meeting if given personally, [] or by electronic transmission.

But the bylaws also state that a board member may be fired (or resign) at any time, not necessarily during a special meeting. So, technically (not a lawyer): Board gets majority to fire Sam and executes this decision, notifying Mira in advance of calling the special meeting. During the special meeting, Sam is merely informed that he has been let go already (is not a board member since yesterday). All board members were informed timely, since Sam was not a board member during the meeting.


I don't see how this kind of reasoning can possibly hold up. How can board members not be invited to such an important decision? You can't say they don't have to be there because they won't be a board member after this decision; they're still a board member before the decision has been made to remove them.

If Ilya was on the side of Sam and Greg, the other 3 never had a majority. The only explanation is that Ilya voted with the other 3, possibly under pressure, and now regrets that decision. But even then it's weird to not invite Greg.

And if the vote happened in an illegitimate way, I'd expect Sam and Greg to immediately challenge it and ignore the decision, and that didn't happen.


Everyone assumes that the vote must have happened during the special meeting, but the decision to fire the CEO/or CEO stepping down may happen at any time.

> if the vote happened in an illegitimate way, I'd expect Sam and Greg to immediately challenge it and ignore the decision, and that didn't happen.

So perhaps the vote was legit?

- Investigation concludes Sam has not been consistently candid.

- Board realizes it has a majority and cause to fire Sam and demote Greg.

- Informs remaining board members that they will have a special meeting in 48 hours to notify Sam and Greg.

Still murky, since Sam would have attended the meeting under the assumption that he was part of the board (and still had his access badge, despite already being fired). Perhaps it is also possible to waive the 48 hours? Like: "Hey, here is a Google Meet for a special meeting in a few hours, can we call it, or do we have to wait?"


If the vote was made when no one was there to see it, did it really happen? There's a reason to make these votes in meetings, because then you've got a record that it happened. I don't see how the board as a whole can make a decision without having a board meeting.


Depending on jurisdiction and bylaws, the board may hold a pre-meeting, where informal consensus is reached, and potential for majority vote is gauged.

Since the bylaws state that the decision to fire the CEO may happen at any time (not required to be during a meeting), a plausible process for this would be to send a document to sign by e-mail (written consent), and have that formalize the board decision with a paper trail.

Of course, from an ethical, legal, collegial, and governance perspective that is an incredibly nasty thing to do. But if an investigation shows signs of the CEO lacking candor, all transparency goes out of the window.

> But even then it's weird to not invite Greg.

After Sam was fired (with a vote from Ilya "going along"), the rest of the board did not need Ilya anymore for a majority vote and removed Greg, demoting him to report to Mira. I suspect the board expected Greg to stay, since he was "invaluable", and that Mira would support their pick for the next CEO, but things turned out differently.

Remember, Sam and Greg were blindsided, board had sufficient time to consult with legal counsel to make sure their moves were in the clear.


Haste is not something compatible with board activity unless the circumstances clearly demand it and that wasn't the case here.


I find it interesting that the attempted explanations, as unconvincing as they may be, relate to Altman specifically. Given that Brockman was the board chairperson, it is surprising that there don't seem to be any attempts to explain that demotion. Perhaps it's just not being reported to anyone outside, but it makes no sense to me that anyone would assume a person would stay after being removed from a board without an opportunity to be at the meeting to defend their position.


Maybe the personal issue was Ilya, and Sam was telling one board member that he had to go and another that he was good.


I don't understand how you only need 4 people for quorum on a 6-person board.


It depends entirely on how the votes are structured, the issue at hand and what the articles of the company say about the particular type of issue.

On the board that I was on we had normal matters which required a simple majority except that some members had 2 votes and some got 1. Then there were "Supermajority matters" which had a different threshold and "special supermajority matters" which had a third threshold.

Generally unless the articles say otherwise I think a quorum means a majority of votes are present[1], so 4 out of 6 would count if the articles didn't say you needed say 5 out of 6 for some reason.

It's a little different if some people have to recuse themselves for an issue. So say the issue is "Should we fire CEO Sam Altman", the people trying to fire Sam would likely try to say he should recuse himself and therefore wouldn't get a vote so his vote wouldn't also count in deciding whether or not there's a quorum. That's obviously all BS but it is the sort of tactic someone might pull. It wouldn't make any difference if the vote was a simple majority matter and they already had a majority without him though.

[1] There are often other requirements to make the meeting valid though eg notice requirements so you can't just pull a fast one with your buddies, hold the meeting without telling some of the members and then claim it was quorate so everyone else just have to suck it up. This would depend on the articles of the company and the not for profit though.


That's a supermajority in principle, but the board originally had 9 members and this is clearly a controversial decision and at least one board member is conflicted, and another has already expressed his regret about his role in the decision(s).

So the support was very thin, and this being a controversial decision, the board should have sought counsel on whether or not their purported reasons had enough weight to support a hasty decision. There is no 'undo' button on this, and board member liability is a thing. They probably realize all that, which is the reason for the radio silence; they're just waiting for the other shoe to drop (impending lawsuit), after which they can play the 'no comment because legal proceedings' game. This may well get very messy or, alternatively, it can result in all parties affected settling with the board and the board riding off into the sunset to wreak havoc somewhere else (assuming anybody will still have them; they're damaged goods).


It depends on the corporate bylaws, but the most common quorum requirement is a simple majority of the board members. So 4 is not atypical for quorum on a 6 person board.


It could be a more primal explanation. I think OpenAI doesn't want to effectively be an R&D arm of Microsoft. The ChatGPT mobile app is unpolished and unrefined. There's little to no product design there, so I totally see how it's fair criticism to call out premature feature milling (especially when it's clear it's for Microsoft).

I’m imagining Sam being Microsoft’s Trojan horse, and that’s just not gonna fly.

If anyone tells me Sam is a master politician, I'd agree without knowing much about him. He's a Microsoft plant that has the support of 90% of the OpenAI team. The two things are conflicting interests. Masterful.

It’s a pretty fair question to ask a CEO: do you still believe in OpenAI's vision, or do you now believe in Microsoft's vision?

The girl she said not to worry about.


> There’s little to no product design there

I consider this a feature.


Exactly my point: why would D'Angelo want OpenAI to thrive when his own company Poe (a chatbot) wants to compete in the same space? It's a conflict of interest whichever way you look at it. He should resign from the board of OpenAI in the first place.

The main point is that Greg and Ilya can get 50% of the vote and convince Helen Toner to change her decision. Then it's all done: it's 3 to 2 on a board of 5 people. Unless Greg's board membership is reinstated.

Now it increasingly looks like Sam will be heading back into the role of CEO of OpenAI.


There are lots of conflicts of interest beyond Adam and his Poe AI. Yes, he was building a commercial bot using OpenAI APIs, but Sam was apparently working on other side ventures too. And Sam was the person who invested in Quora during his YC tenure, and must have had a say in bringing him onboard. At this point, the spotlight is on most members of the nonprofit board.


I wouldn’t hold Sam bringing him over in too high a regard. Fucking each other over is a sport in Silicon Valley. You’re subservient exactly until the moment you sense an opportunity to dominate. It’s just business.


Why did Altman bring him onboard in the first place? What value does he provide? If there is a conflict of interest why didn’t Altman see it?

If this Quora guy is the cause of all this, Altman only has himself to blame since he is the reason the Quora guy is on the board.


That Quora guy was CTO and VPEng of Facebook so plenty of connections I guess.

Also Quora seems like a good source of question-and-answer data which has probably been key in gpt-instruct training.


"Business" sucks then. This is sociopathic behavior.


Yes. That is what is valued in the economic system we have. Absolute cut throat dominance to take as big a chunk of any pie you can get your grubby little fingers into yields the greatest amount of capital.


What has been seen can not be unseen. https://news.ycombinator.com/item?id=881296


Thanks for that. The discussion feels like a look into another world, which I guess is what history is.


It’s not just business that works like this. Any type of organization of consequence has sociopaths at the top. It’s the only way to get there. It’s a big game that some people know how to play well and that many people are oblivious to.


So? Sam gave Worldcoin early access to OpenAI's proprietary technology. Should Sam step down (oh wait)?


Worldcoin has no conflict of interest with OpenAI. Unless he gave the tech away for free, causing great loss to OpenAI, it is simply a matter of finding an early beta customer.

Also, firing him over something so trivial would be equally if not more stupid. It is like firing Elon because he sent a Tesla into space on a SpaceX rocket without open bidding.


Early access is different from firing board members or the CEO! As far as the facts and the actions he has taken show, Sam was always involved in furthering OpenAI's success. Nothing showed his actions were against OpenAI.

Not all his bets are correct; I don't agree with Sam's Worldcoin project at all in the first place.

Giving Worldcoin early access doesn't equate to firing employees, the board, or the CEO.


[flagged]


This comment was written by ChatGPT ^


Updated


Well, the appointment of a CEO who believes AGI is a threat to the universe is potentially one point in favor of AI safety philosophical differences.


Wouldn't it make sense that Ilya Sutskever presented the reasons the board had for firing Sam Altman, which were not his own reasons?

My feeling is Ilya was upset about how Sam Altman was the face of OpenAI, and went along with the rest of the board for his own reasons.

That's often how this stuff works out. He wasn't particularly compelled by their reasons, but had his own which justified his decision in his mind.


I think Ilya was naive and didn't see this coming. It's good that he realized it quickly, said so on Twitter, and made the right call to get Sam back.

Otherwise it looked like an Ilya vs. Sam showdown, and people were siding with Ilya for AGI and all that. But behind the scenes this looks like a corporate power struggle and a coup.


> Wouldn't it make sense that Ilya Sutskever presented the reasons the board had for firing Sam Altman, which were not his reasons.

Ilya was one of the board members that removed Sam, so his reasons would, ipso facto, be a subset of the board's reasons.


It’s also weird that he’s not admitting to any of his own reasons, only describing some trivial reasons he seems to have coaxed out of the other board members?! Perhaps he still has his own reasons, but realizing he’s destroying what he loves, he’s trying to stay mum. The other board members seem more zealous for some reason, maybe because they're not employed by the LLC. Or maybe the others are doing it for the sake of Ilya or someone else who prefers to remain anonymous. Okay, clearly I have no idea.


He let emotion get the better of him, for sure.


So glad the man baby AI scientist is in charge of AGI alignment

Feel the AI


> Also notice that Ilya Sutskever is presenting the reasons for the firing as just something he was told.

You mean to tell me that the 3-member board told Sutskever that Sama was being bad and he was like "ok, I believe you".


Two possibilities when it comes to Ilya:

1. He’s the actual ringleader behind the coup. He got everyone on board, provided reassurances, and personally orchestrated and executed the firing. This is the most likely possibility and the one that’s most consistent with all the reporting and evidence so far (including this article).

2. Others on the board (e.g. Adam) masterminded the coup and saw Ilya as a fellow-traveler useful idiot who could be deceived into voting against Sam and destroying the company he and his 700 colleagues worked so hard to build. He then also puppeteered Ilya into doing the actual firing over Google Meet.


If #1 is real, he’s just the biggest weasel in tech history by repenting so swiftly and decisively… I don’t think either the article or the broader facts really point to him being the first to cast the stone.


Based on Ilya's tweets and his name on that letter (still surprised about that, I have never seen someone call for their own resignation), that seems to be the story.


The failure to create anything resembling AGI can be easily explained away by concerns about the safety of AGI. This can be done in perpetuity. Google explains its AI failures along the same lines.


> The failure to create anything resembling AGI can be easily explained away by concerns about the safety of AGI.

Isn't the solution to just pipe ChatGPT into a meta-reinforcement-learning framework that gradually learns how to prompt ChatGPT into writing the source-code for a true AGI? What do we even need AI ethicists for anyway? /s


The singularity is where this works.


The number of hours I've wasted trying to do this lol


That's the only thing that makes sense with Ilya & Murati signing that letter.


This is the most likely scenario. Adam wants to destroy OpenAI so that his poop AI has a chance to survive


[flagged]


People who were wrong and change their minds should be punished less, not more, than those who persist in being wrong. The rest of the board is holding firm and preventing the problem from being resolved, so they are more culpable than Ilya.


There's nothing admirable about jumping off a sinking ship that you put the holes in. At this stage it's on him to clear his name.


Has Ilya left OpenAI? Not to my knowledge.

This is more like a guy apologizing for the holes and actively trying to fix the ship. Which is absolutely worth something, even if no holes would still be better.


After seemingly being the front man in the firing of Altman he supposedly has put his name on the internal letter to reverse course, fire the board, and reinstate Altman.


Ilya signed the "reinstate Altman or we're leaving" letter.


Way too speculative and early to be saying this.


1) Where is Emmett? He's the CEO now. It's his job to be the public face of the company. The company is in an existential crisis and there have been no public statements after his 1AM tweet.

2) Where is the board? At a bare minimum, issue a public statement that you have full faith in the new CEO and the leadership team, are taking decisive action to stabilize the situation, and have a plan to move the company forward once stabilized.


Technically he's the interim CEO of a chaotic company, assigned within the last 24 hours. I'd probably wait to get my bearings before walking in acting like I've got everything under control on the first day after a major upheaval.

The only thing I've read about Shear is he is pro-slowing AI development and pro-Yudkowsky's doomer worldview on AI. That might not be a pill the company is ready to swallow.

https://x.com/drtechlash/status/1726507930026139651

> I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down.

> If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.

> - Emmett Shear Sept 16, 2023

https://x.com/eshear/status/1703178063306203397


The more I read into this story, the more I can't help but be a conspiracy theorist and say that it feels like the board's intent was to kill the company.

No explanation beyond "he tried to give two people the same project"

the "Killing the company would be consistent with the companies mission" line in the boards statement

Adam having a huge conflict of interest

Emmett wanting to go from a "10" to a "1-2"

I'm either way off, or I've had too much internet for the weekend.


Could it be that their research found more 'glimpses' of a dangerous AGI?


IMO this is increasingly the most likely answer, which would readily comport with "lack of candor" as well as the (allegedly) provided two explanations being so weak [1]: you certainly wouldn't want to come out and say that you fired the CEO because you think you might have (apparently) dangerous AI/AGI/ASI and your CEO was reckless with it. Neither of the two explanations even seems to be within the realm of a fireable offense.

It would also comport with Ilya's regret about the situation: perhaps he wanted to slow things down, board members convinced him Sam's ouster was the way to do it, but then it has actually unfolded such that development of dangerous AI/AGI/ASI might accelerate at Microsoft while weakening OpenAI's own ability to modulate the pace of development.

[1]: Given all the very public media brinkmanship, I'm not so quick to assume reports like these two explanations are true. E.g. the "Sama is returning with demands!" stories were obviously "planted" by people who were trying to exert pressure on the negotiations; would be interested to have more evidence that Ilya's explanations were actually this sloppy.


Everyone involved here is a doomer by the strict definition ("misaligned agi could kill us all and alignment is hard") .


Another "thing" is, he has been named by a board which... [etc]. Being a bit cautious would be a minimum.


Yes these people should all be doing more to feed internet drama! If they don't act soon, HN will have all sorts of wild opinions about what's going on and we can't have that!

Even worse, if we don't have near constant updates, we might realize this is not all that important in the end and move on to other news items!

I know, I know, I shouldn't jest when this could have grave consequences like changing which uri your api endpoint is pointing to.


You can either act like a professional and control the messaging, or let others fill the vacuum with idle speculation. I'm quite frankly shocked at the level of responsibility displayed by people whose positions should demand high function.


It seems evident that the board was filled at least in part by people whose understanding of the business world and leadership skills is a tier or three below what a position of this level requires. One wonders how they got the job in the first place.


This is America. Practically anyone can start a nonprofit or a company. More importantly, good marketing may attract substantial investment, but it doesn't necessarily imply good leadership.


Clearly corporations are a dime a dozen. What's shocking is the disconnect between the standout quality of the technical expertise (and resulting products!) and the abysmal quality of leadership.


Really? I’ve always assumed (known) there is no actual difference between high level execs and you: they just think higher of themselves.


In fact, I think the chaos we’ve seen over the last few days shows precisely the difference between competent and incompetent leadership. I think if anyone from, say, the board of directors of Coca-Cola was on the OAI board, this either wouldn’t have happened or would have played out very differently.


If Reid Hoffman was still there, I can't see this happening. People here use "glorified salespeople" as an insult without realizing that having people skills is a really important trait for board/C-level people, and not everyone has them.


What you've likely seen of executives is 15 minutes of face time after 7 weeks of vicious Game of Thrones behind the scenes. It's a curated image.


That is the idea: keep the GoT stuff behind the scenes. Don't dump it on the street. When you have a new king, make sure he isn't usurped the next day with the population revolting outside the gates of the Red Keep.


That makes as much sense as saying (knowing) that the only difference in basketball skill between you and LeBron James is that he thinks higher of himself.


You’re really likening running a company to the skills of a professional athlete? Put down the Kool-Aid. CEOs are figureheads. Very few have ever had a meaningful impact on the progress of their companies (or anything, really) compared to their most talented engineers.

I’m done pretending they’re important. It’s a lie they and the boards have sold us and investors. The real meat of a company is who their smartest people are, and how much the company enables those people.

It's pretty easy to see the difference if you consider which will do better: a company full of smart people who actually make things, or a company full of CEOs.


My favorite hypothesis: Ilya et al suspected emergent AGI (e.g. saw the software doing things unprompted or dangerous and unexpected) and realized the Worldcoin shill is probably not the one you want calling the shots on it.

For the record, I don't think it's true. I think it was a power play, and a failed coup at that. But it's about as substantiated as the "serious" hypotheses being mooted in the media. And it's more fun.


Absolutely wild to me that people are drawing a straight line between a text completion algorithm and AGI. The term "AI" has truly lost all meaning.


Hold up. Any AI that exists is an IO function (algorithm) perhaps with state. Including our brains. Being an “x completion” algorithm doesn’t say much about whether it is AI.

Your comment sounds like a rhetorical way of saying that GPT is in the same class as autocomplete, and that what autocomplete does sets some kind of ceiling on what IO functions that work a couple of bytes at a time can do.

It is not evident to me that that is true.


LLMs predict language, and language is a representation of human concepts about the world. Thus, these models are constructing, piece by piece, conceptual chains about the world.

As they learn to construct better and more coherent conceptual chains, something interesting must be happening internally.


Language is only one projection of reality into fewer dimensions, and there's a lot it can't capture. Similar to how a photograph or painting has to flatten 3D space into a 2D representation, so a lot is lost.

I think trying to model the world based on a single projection won't get you very far.


> LLMs predict language, and language is a representation of human concepts about the world. Thus, these models are constructing, piece by piece, conceptual chains about the world.

I smell a fallacy. Parent has moved from something you can parse as "LLMs predict a representation of concepts" to "LLMs construct concepts". Yuh, if LLMs "construct concepts", then we have conceptual thought in a machine, which certainly looks interesting. But it doesn't follow from the initial statement.


No they are not.


(You're probably going to have to get better at answering objections than merely asserting your contradiction of them.)


Nah, calling out completely baseless assertions as just that is fine and a positive contribution to the discussion.


Your carefully constructed argument is less than convincing.

Could you at least elaborate on what they are “not”? Surely you are not having a problem with “LLMs predict language”?


Intelligence is just optimization over a recursive prediction function.

There is nothing special about human intelligence threshold.

It can be surpassed by many different models.


It's not wild. "Predict the next word" does not imply a bar on intelligence; a more intelligent prediction that incorporates more detail from the descriptions of the world that were in the training data will be a better prediction. People are drawing a straight line because the main advance to get to GPT-4 was throwing more compute at "predict the next word", and they conclude that adding another order of magnitude of compute might be all it takes to get to superhuman level. It's not "but what if we had a better algorithm", because the algorithm didn't change in the first place. Only the size of the model did.


> Predict the next word

Are there any papers testing how good humans are at predicting the next word?

I presume us humans fail badly:

1. as the variance in input gets higher?

2. Poor at regurgitating common texts (e.g. I couldn't complete a known poem).

3. When context starts to get more specific (majority of people couldn't complete JSON)?


The following blogpost by an OpenAI employee can lead us to compare patterns and transistors.

https://nonint.com/2023/06/10/the-it-in-ai-models-is-the-dat... The ultimate model, in his (author's) sense, would suss out all patterns and then patterns among those patterns and so on, so that it delivers on compute and compression efficiency.

To achieve compute and compression efficiency, LLM models have to cluster all similar patterns together and deduplicate them. This also means successive levels of pattern recognition, i.e. patterns among patterns among patterns and so on, so as to do the deduplication across the whole hierarchy being constructed. Full trees or hierarchies won't get deduplicated, but relevant regions / portions of those trees will, which implies fusing together in idea space. This means root levels will be the most abstract patterns. This representation also means appropriate cross-pollination among different fields of study, further increasing effectiveness.

This reminds me of a point which my electronics professor made on why making transistors smaller has all the benefits and only few disadvantages. Think of these patterns as transistors. The more deduplicated and closely packed they are, the more beneficial they will be. Of course, this "packing together" is happening in mathematical space.

Another thing: patterns among patterns among patterns reminds me of homotopies. This brilliant video by PBS Infinite Series is amazing. As I see it, compressing homotopies is what LLMs do; replace homotopies with patterns. https://www.youtube.com/watch?v=N7wNWQ4aTLQ


There are entire studies on it. I saw a lecture by an English professor who explained that the brain isn't fast enough to parse words in real time, so it runs multiple predictions of what the sentence will be in parallel, and at the end jettisons the wrong ones and goes with the correct one.

From this, we get comedy. A funny statement is one that ends in an unpredictable manner and surprises the listener's brain, because it doesn't already have the meaning of that ending calculated, and hence why it can take a while to "get the joke".


If the text completion algorithm is sufficiently advanced then we wouldn't be able to tell it's not AGI, especially if it has access to state-of-the-art research and can modify its own code/weights. I don't think we are there yet, but it's plausible to an extent.


No. This is modern day mysticism. You're just waving your hands and making fuzzy claims about "but what if it was an even better algorithm".


You're correct about their error; however, Hinton's view is that a sufficiently scaled-up autocompletion would be forced, in a loose mathematical sense, to understand things logically and analytically, because the only way to approach a 0 error rate on the output is to actually learn the problem and not imitate the answer. It's an interesting issue and there are different views on this.


lol


Any self-learning system can change its own weights. That's the entire point. And a text-processing system like ChatGPT may well have access to state-of-the-art research. The combination of those two things does not imply that it can improve itself to become secretly AGI. Not even if the text-completion algorithm was even more advanced. For one thing, it still lacks independent thought. It's only responding to inputs. It doesn't reason about its own reasoning. It's questionable whether it's reasoning at all.

I personally think a far more fundamental change is necessary to reach AGI.


I agree, it's an extremely non-obvious assumption and ignores centuries-old debates (empiricism vs. rationalism) about the nature of reason and intelligence. I am sympathetic to Chomsky's position.[1]

https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chat...


Very weak article. It really lowers my opinion of Chomsky.


ChatGPT is not AGI, but it is AI. The thing that makes AI lose all meaning is the constantly moving goal posts. There's been tons of very successful AI research over the past decades. None of it is AGI, but it's still very successful AI.


> ChatGPT is not AGI, but it is AI.

I absolutely disagree in the strongest terms possible.


Which part? The first, the second, or, most confusingly, both?


An algorithm that completes "A quantum theory of gravity is ..." into a coherent theory is of course just a text completion algorithm.


There has been debate for centuries regarding determinism and free will in humans.


Why wouldn't Ilya come out and say this? Why wouldn't any of the other people who witnessed the software behave in an unexpected way say something?

I get that this is a "just for fun" hypothesis, which is why I have just for fun questions like what incentive does anyone have to keep clearly observed ai risk a secret during such a public situation?


Because, if they announced it and it seemed plausible or even possible that they were correct, then every media outlet, regulatory body, intelligence agency, and Fortune 500 C-suite would blanket OpenAI in the thickest veil of scrutiny to have ever existed in the modern era. Progress would grind to a halt and eventually, through some combination of legal, corporate, and legislative maneuvers, all decision making around the future of AGI would be pried away from Ilya and OpenAI in general - for better or worse.

But if there's one thing that seems very easy to discern about Ilya, it's that he fully believes that when it comes to AI safety and alignment, the buck must stop with him. Giving that control over to government bureaucracy/gerontocracy would be unacceptable. And who knows, maybe he's right.


My favorite hypothesis (based on absolutely nothing but observing people use LLMs over the years):

* Current-gen AI is really good at tricking laypeople into believing it could be sentient

* "Next-gen" AI (which, theoretically, Ilya et al may have previewed if they've begun training GPT-5, etc) will be really good at tricking experts into believing it could be sentient

* Next-next-gen AI may as well be sentient for all intents and purposes (if it quacks like a duck)

(NB, to "trick" here ascribes a mechanical result from people using technology, not an intent from said technology)


But why would Ilya publicly say he regrets his decision and wants Sam to come back. You think his existential worries are less important than being liked by his coworkers??


> You think his existential worries are less important than being liked by his coworkers??

Yes, actually. This is overwhelmingly true for most people. At the end of the day, we all fear being alone. I imagine that fear is, at least in part, what drives these kinds of long-term "existential worries," the fear of a universe without other people in it, but now Ilya is facing the much more immediate threat of social ostracism with significantly higher certainty and decidedly within his own lifetime. Emotionally, that must take precedence.


He may have wanted Sam out, but not to destroy OpenAI.

His existential worries are less important than OpenAI existing, and him having something to work on and worry about.

In fact, Ilya may have worried more about the continued existence of OpenAI than Sam did after he was fired, which instantly looked like a case of "I am taking my ball and going home to Microsoft." If Sam cared so much about OpenAI, he could have quietly accepted his removal and helped find a replacement.

Also, Anna Brockman had a meeting with Ilya where she cried and pleaded. Even though he stands by his decision, he may ultimately still regret it, and the hurt and damage it caused.


I think his existential worries about humanity were overruled by his existential worries about his co-founder shares and the obscene amount of wealth he might miss out on


Damn. Good prediction.


No serious company wants drama. Hopefully OpenAI is still a serious company.

A statement from the CEO/the board is a standard de-escalation.


> A statement from the CEO/the board is a standard de-escalation.

Haven't we gotten statements from them? The complaint seems to be that we want statements from them every day (or more) now.


Emmett made a tweet noting that he accepted the role, which is not a statement.

The board has not given a statement besides the original firing of Sam Altman that kicked the whole thing off.


> No serious company wants drama

"All PR is good PR" is a meme for a reason. Many cultures thrive on dysfunction, particularly the kind that calls attention to themselves.


That axiom is a relic from the pre-social media days. Nowadays, bad PR going viral can sink a company overnight.


> That axiom is a relic from the pre-social media days. Nowadays, bad PR going viral can sink a company overnight

You're saying we're in a less attention-seeking culture today than in pre-social media times?


[ES: Speculation I have medium confidence in.]

Maybe "attention seeking" isn't the right way to look at this. Getting bad press always does reputational damage while giving you notoriety, and I think GP's suggestion that the balance between them has changed is compelling.

In an environment with limited connectivity, it's much more difficult for people to learn you even exist to do business with. So that notoriety component has much more value, and it often nets out in your favor.

In a highly connected environment, it's easier to reach potential customers, so the notoriety component has less value. Additionally, people have access to search engines, so the reputational damage becomes more lasting; potential customers who didn't even hear about the bad press at the time might search your name and find it. They may not have even been looking for it; they might've searched your name to find your website (whereas before they would have needed to intentionally visit a library and look through the catalog to come across an old story). So it becomes much less likely to net out in your favor.


I think they were saying the opposite of that.


That phrase much like "There's no such thing as bad publicity" is not actually true.


> Many cultures thrive on dysfunction

PSA: If you or your culture is dysfunctional and thriving - think about how much more you'll thrive without the dysfunction! (Brought to you by the Ad Council.)


> No serious company wants drama

Unless you're TNT, cause they "know drama"


The speculation is rampant precisely because the board has said absolutely nothing since the leadership transition announcement on Friday.

If they had openly given literally any imaginable reason to fire Sam Altman, the ratio of employees threatening to quit wouldn't be as high as 95% right now.


> HN will have all sorts of wild opinions about what's going on and we can't have that!

Uh, or investors and customers will? Yes, people are going to speculate, as you point out, which is not good.

> we might realize this is not all that important in the end and move on to other news items!

It's important to some of us.


Thank you! I get the sense that none of this matters and it's all a massive distraction.

News

Company which does research and doesn't care about money makes a decision to do something which aligns with research and not caring about money.

From the OpenAI website...

"it may be difficult to know what role money will play in a post-AGI world"

Big tech co makes a move which sends its stock to an all time high. Creates research team.

Seems like there could be a "The Martian" meme here... we're going to Twitter the sh* out of this.


Convincing two constituencies, employees and customers, that your company isn't just yolo-ing things like CEOs and so forth seems like a pretty good use of CEO time!


OpenAI becoming a Microsoft department is awful from an X risk point of view.


I cannot say whether you deserve the downvotes, but an alternative and grounded perspective is appreciated in this maelstrom of news, speculation and drama.


They have customers and people deciding if they want to be customers.


This sarcastic post is the best understanding of public relations I've seen in an HN post.


I find it absolutely fascinating that Emmett accepted this position. He can game all scenarios and there is no way that he can come out ahead on any of them. One would expect an experienced Silicon Valley CEO to make this calculus and realize it's a lost cause. The fact he accepted to me shows he's not a particularly good leader.


He made it pretty clear that he considers it a once-in-a-lifetime chance.

I think he is correct. Being the CEO of Twitch is a position almost no one knows about in many places, e.g. how many developers/users in China have even heard of Twitch? Being the CEO of OpenAI is a completely different story; it is a whole new level he can leverage in the years to come.


It seems kind of naive to think that he'll be CEO for long, or if it is for long, that there will be much company left to be a CEO of.


Why does he need to be CEO for long?

If everything goes well, he can claim that he was the man behind reuniting the OpenAI team. If something goes wrong, well, no one is going to blame him; the board screwed the entire business. He is more like an emergency room doctor who failed to save a poor dude who had just intentionally shot himself in the head with a shotgun.


> If everything goes well, he can claim that he was the man behind reuniting the OpenAI team.

It's now one day later and Altman is back as CEO - what can Emmett Shear claim exactly?


> what can Emmett Shear claim exactly?

- helped to stabilize the situation

- the first to propose the idea of independent investigation into the matter

- sided with impacted employees all the way through such difficult moments

- supported a smooth transition period

Depending on how he reacted when 90% of employees signed that letter asking for Altman's return, don't be too surprised if he claims to be the guy who helped push for that as well.


> he consider it as a once in a life time chance.

Like taking a sword to the gut.


That seems kind of silly to say. He's not a good leader because he's taking on a challenge?


A challenge he can't win, brought in by people 90% of the company hates, and with the four most influential people in the company either gone or having turned on the board does not sound like a "challenge" but more like a "guaranteed L".


If Emmett runs this the same way he ran Twitch, I'm not expecting much action from him.


People kept asking where he was during his years as Twitch CEO; it's not unlike him to be MIA now either.


As much as I'd love to hear about the details of the drama as the next person, they really don't have to say anything publicly. We are all going to continue using the product. They don't have public investors. The only concern about perception they may have is if they intend to raise more money anytime soon.


That's what a board of a for-profit company which has a fiduciary duty towards shareholders should do.

However, the OpenAI board has no such obligation. Their duty is to ensure that the human race stays safe from AI. They've done their best to do that ;-)


He has said more than he said during his entire 5 years at Twitch


Here he is! Blathering about AI doom 4 months ago, spitting Yudkowsky talking points:

https://www.youtube.com/watch?v=jZ2xw_1_KHY


Half the board lacks any technical skill, and the entire board lacks any business procedural skill. Ideally, you’d have a balance of each on a competent board.


Ideally, you also have at least a couple independent board members who are seasoned business/tech veterans with the experience and maturity to prevent this sort of thing from happening in the first place.


Why should he care about updating internet randoms? It's none of our business. The people who need to know what's going on know what's going on.


He is trying to determine if they have already made an Alien God.


Giving 2 people the same project? Isn't this the thing to do to get differing approaches and then release the amalgamation of the two? I thought these sorts of things were common.

Giving different opinions on the same person is a reason to fire a CEO?

This board either has no reason to fire Sam, or does not want to give the actual reason. They messed up.


As mentioned by another person in this thread [0], it is likely that it was Ilya's work that was getting replicated by another "secret" team, and the "different opinions on the same person" was Sam's opinions of Ilya. Perhaps Sam saw him as an unstable element and a single point of failure in the company, and wanted to make sure that OpenAI would be able to continue without Ilya?

[0] https://news.ycombinator.com/reply?id=38357843


Since a lot of the board’s responsibilities are tied to capabilities of the platform, it’s possible that Altman asked for Ilya to determine the capabilities, didn’t like the answer, then got somebody else to give the “right” answer, which he presented to the board. A simple dual-track project shouldn’t be a problem, but this kind of thing would be seen as dishonesty by the board.


> it’s possible that Altman asked for Ilya to determine the capabilities, didn’t like the answer, then got somebody else to give the “right” answer, which he presented to the board.

This makes no sense given that Ilya is on the board.


No, it just means that in that scenario Sam would think he could convince the rest of the board that Ilya was wrong because he could find somebody else to give him a preferable answer.

It’s just speculation, anyway. There isn’t really anything I’ve heard that isn’t contradicted by the evidence, so it’s likely at least one thing “known” by the public isn’t actually true.


Firing Sam as a way of sticking up for Ilya would make more sense if Ilya wasn’t currently in support of Sam getting his job back.


I’m not sure Ilya was anticipating this to more or less break OpenAI as a company. Ilya is all about the work they do, and might not have anticipated that this would turn the entire company against him and the rest of the board. And so, he is in support of Sam coming back, if that means that they can get back to the work at hand.


Perhaps. But if the board is really so responsive to Ilya's concerns, why have they not reversed the decision so that Ilya can get his wish?


This is an interesting theory when combined with this tweet from Google DeepMind's team lead of Scalable Alignment [1].

[1] https://twitter.com/geoffreyirving/status/172675427761849141...

The "Sam is actually a psychopath that has managed to swindle his way into everyone liking him, and Ilya has grave ethical concerns about that kind of person leading a company seeking AGI, but he can't out him publicly because so many people are hypnotized by him" theory is definitely a new, interesting one; there has been literally no moment in the past three days where I could have predicted the next turn this would take.


That guy is another AI doomer though, and those people all seem to be quite slippery themselves. Supposedly Sam lied to him about other people, but there's no further detail provided and nobody seems willing to get concrete about any time Altman has been specifically dishonest. When the doomer board made similar allegations it seemed serious for a day, and then evaporated.

Meanwhile the Google AI folks have a long track record of making very misleading statements in public. I remember before Altman came along and made their models available to all, Google was fond of responding to any OpenAI blog post by claiming they had the same tech but way better, they just weren't releasing it because it was so amazing it just wasn't "safe" enough to do so yet. Then ChatGPT called their bluff and we discovered that in reality they were way behind and apparently unable to catch up, also, there were no actual safety problems and it was fine to let everyone use even relatively unconditioned models.

So this Geoffrey guy might be right but if Altman was really such a systematic liar, why would his employees be so loyal? And why is it only AI doomers who make this allegation? Maybe Altman "lied" to them by claiming key people were just as doomerist as those guys, and when they found out it wasn't true they wailed?


Interesting. I’m glad he shared his perspective despite the ambiguity.


Either that, or Sam didn't tell Adam D'Angelo that they were launching a competing product in exactly the same space where poe.ai had launched one. For some context, Poe had launched something similar to those custom GPTs, with creator revenue sharing etc., just 4 weeks prior to dev day.


Not sure how he would see that coming? It was a UI tweak away for OpenAI


I remember a few years ago when there was some research group that was able to take a picture of a black hole. It involved lots of complicated interpretation of data.

As an extra sanity check, they had two teams working in isolation interpreting this data and constructing the image. If the end result was more or less the same, it’s a good check that it was correct.

So yes, it’s absolutely a valid strategy.


Did the teams know that there was another team working on the same thing? I wonder how that affects working of both teams... On the other hand, not telling the teams would erode the trust that the teams have in management.


There were four teams actually. They knew but couldn't talk to each other. There's a documentary about it. I highly suggest watching it, it also features late Stephen Hawking et al. working on black hole soft hair. Documentary is called Black Holes: The Edge of All We Know, it's on pretty much all streaming platforms.


Yep! I've done eng "bake-offs" as well, where a few folks / teams work on a problem in isolation then we compare and contrast after. Good fun!


Not good fun when it's done in secret. That happened to me, and I was gaslit when I discovered the competing git repo.

Not saying that's what happened here, but too many people are defending this horrid concept of secretly making half your workers do a bunch of work only to see the boulder roll right back down the hill.


Yeah, doing it in secrecy is a recipe for Bad Things. I worked at a startup that completely died because of it.


Yeah, that sounds toxic; this was done with everyone's knowledge.


Maybe they needed two teams to independently try to decode an old tape of random numbers from a radio space telescope that turned out to be an extraterrestrial transmission, like a neutrino signal from the Canis Minor constellation or something. Happens all the time.

https://en.wikipedia.org/wiki/His_Master%27s_Voice_(novel)


The CEOs I've worked for have mostly been mini-DonaldT's, almost pathologically allergic to truth, logic, or consistency. Altman seems way over on the normal end of the scale for the CEO of a multi-billion dollar company. I'm sure he can knock two eggs together to make an omelette, but these piddling excuses for firing him don't pass the smell test.

I get the feeling Ilya might be a bit naive about how people work, and may have been taken advantage of (by for example spinning this as a safety issue when it's just a good old fashioned power struggle)


As for multiple teams with overlapping goals -- are you kidding me? That's a 100% legit and popular tactic. One CEO I worked with relished this approach and called it a "steel-cage death match"!


You were right that Ilya was naive; he says on Twitter that he regrets his decision. And he was taken advantage of by power-hungry people behind the scenes.


Steve Jobs famously had two iPhone teams working on concepts in parallel. It was click wheel vs multi-touch. Shockingly the click wheel iPhone lost.


I thought the design team always worked up 3 working prototypes from a set of 10 foam mockups. There was an article from someone with intimate knowledge of Ives lab some years back stating this was protocol for all Apple products.


Another element of that was the team that tried to adapt iPodOS for iPhone vs Forstall's team that adapted OSX.


I think it was also a contest between an all-web UI (like WebOS) vs Cocoa.


and the Apple (II etc) vs Mac teams warring with each other.


You're thinking Lisa vs. Mac. Apple ][ didn't come into the picture until later when some of the engineers started playing around with making a mouse card for the ][.


Seriously? Click wheel iPhone lost shockingly? The click wheel on most laptops wears out so fast for me, and the chances of that happening on a smaller phone wheel is just so much higher.


(It was sarcasm)


Oops, sorry, didn't get that. I had suspected it was one of those Luddite HNer comments bemoaning changes in tech, and nostalgically reminiscing on older times.


Back in the late 80s, Lotus faced a crisis with their spreadsheet, Lotus 1-2-3. Should they:

1. stick with DOS

2. go with OS/2

3. go with Windows

Lotus chose (2). But the market went with (3), and Lotus was destroyed by Excel. Lotus was a wealthy company at the time. I would have created three groups, and done all three options.


Which would have been a tradeoff too. More time to market, fewer people on each project, slowed down by cross-platform code.


At the time, Lotus was a good company in great shape. The management could have hired people to get stuff done. In hindsight, sure, we can be judgmental, but it is still a failure in my view.

For a company selling licenses for installations, wouldn't having support for all available and upcoming platforms be a good thing? Especially when the distribution costs are essentially 0?


Lotus was a rich company, and could have easily funded 3 full strength independent dev teams. It would not have slowed anything down.


They would have just forked the code and maybe merged some changes back and forth, no real need for cross-platform code.


This was pre-1983. Forking wasn't a thing at the time. Any kind of code management was cutting edge, and cross-platform shared code wasn't even dreamed of yet.


Forking and merging is a social phenomenom. Sure git makes it easier, but nothing stopping anyone from just copying and pasting as appropriate. Not to mention diff(1) was invented in 1974, and diff3(1) in 1979, so there were already tools to help with this, even if not as well developed as modern tools.

I'm also pretty sure cross-platform code was a thing in 1983. Maybe not to the same extent and ease as now, but still a thing.


Successful 8086 projects were usually written in assembler - no way to get the speed and size down otherwise. I'm pretty sure Lotus 123 was all in assembler.


I'm not an assembly programmer and not very familiar with how that world works, but even then, if the two OSs were for the same architecture (x86), couldn't you still have a cross OS main part and then specific parts that deal with operating system things? I normally think of compiled languages like c being an abstraction over cpu architecture, not operating system api.


Yes, you can have common assembler code among platforms, provided they use the same CPU.

From what I've seen of code developed in the 80s, however, asm code was not written to be divided into general and os specific parts. Writing cross-platform code is a skill that gets learned over time, usually the hard way.


Fork just means two groups start with the same code and work independently.

It was a thing.


You make a copy of the files and work on them and that is a fork.


How do you merge changes between the source trees?

Keep in mind this predates basically ANY kind of source control. It would have been nearly 3x the work.


> Keep in mind this predates basically ANY kind of source control.

It might be before they were ported to DOS or OS/2, but it definitely wasn't before source control existed (SCCS and RCS were both definitely earlier.)


OK: Keep in mind this predates basically ANY kind of source control in common usage in software engineering.


3x the work may still fall under reasonable cost.

If architected properly (big if), you can split up the project so there is a common core and individual parts for each specific OS.

Is it extra effort? Sure. Impossible? Definitely not.


I've also successfully converted some rather large x86 assembler programs into C, so they could be ported to other platforms. It's quite doable by one person.

(Nobody else wanted the job, but I thought it was fun.)


Uh? Quite wrong.

SCCS was created in 1973. We're talking about over a decade later.

Also primitive forking, diffing and merging could be (painfully) done even with crude tools, which did exist.


These were not in common usage in the software industry.


Should’ve just made it an Electron app


You'd need a Beowulf Cluster of Apple //e's to run an Electron app in 1983!


They could have charged a subscription for their cloud based offering then


Been a long time since I heard "Beowulf Cluster"!


Had to log in just to upvote this comment, brought back nostalgia from the Slashdot days of yore.


Eh, C of this era, you're definitely talking some sort of #ifdef PLATFORM solution.
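
Roughly the sort of thing I mean; a minimal, purely illustrative sketch of era-style conditional compilation (the PLATFORM_* macro names and the trivial "core" function are hypothetical, not Lotus's actual code):

    #include <stdio.h>

    /* Hypothetical platform switches; a build of that era would typically
       set one of these from the makefile, e.g. cc -DPLATFORM_OS2 ... */
    #if defined(PLATFORM_WINDOWS)
    #  define PLATFORM_NAME "Windows"
    #elif defined(PLATFORM_OS2)
    #  define PLATFORM_NAME "OS/2"
    #else
    #  define PLATFORM_NAME "DOS"
    #endif

    /* Shared, platform-independent core logic (a stand-in for the
       spreadsheet engine that would be common to all builds) */
    static long recalc_cell(long a, long b) {
        return a + b;
    }

    int main(void) {
        printf("Built for %s, 2+2 = %ld\n", PLATFORM_NAME, recalc_cell(2, 2));
        return 0;
    }

The OS-specific display and file I/O would then live in separate per-platform source files, with only the core shared.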


IBM was bankrolling all the development. They only had one choice.


Apple had a skunk works team keeping each new version of their OS compiling on x86 long before the switch. I wonder if the Lotus situation was an influence, or if ensuring your software can be made to work on different hardware is just an obvious play?


Consider for a moment: this is what the board of one of the fastest growing companies in the world worries about - kindergarten level drama.

Under them - an organization in partnership with Microsoft, together filled with exceptional software engineers and scientists - experts in their field. All under management by kindergarteners.

I wonder if this is what the staff are thinking right now. It must feel awful if they are.


Happens all the time.

Teams of people at Google work on the same features, only to find out near launch that they lost to another team who had been working on the same thing without their knowledge.


How does that work? Do they have the same PM, the same requirements? Is it just different tech / architectures adopted by different teams? Fascinating.


It is fascinating, very wasteful and also often devastating for the teams involved who worked very hard who then have their work thrown away.

PMs/TPMs/POs may not know as they're on different teams. Often it's just a VP game and decided on preference or a power play and not on work quality/outcome.


Give a goal (ex. make it more intuitive/easier for the user to do X), have 2 teams independently work on it, A/B test them, winner gets merged.


I guess it depends on whether any of them actually got the assignment. One way to interpret it is that nobody is taking that assignment seriously. So depending on what that assignment is and how important that particular assignment is to the board, then it may in fact be a big deal.


Does a board give an assignment to the CEO or teams?

If the case is that the will of the board is not being fulfilled, then the reasoning is simple. The CEO was told to do something and he has not done it. So, he is ousted. Plain and simple.

This talk about projects given to two teams and what not is nonsense. The board should care if its work is done, not how the work is done. That is the job of the CEO.


Frankly the information that is available is extremely non-specific and open to interpretation and framing by whoever wants to tell one story or another. The way I see it something as specific as "has not done xyz" is a specific thing that can be falsified and invites whatever it is into the public to be argued about and investigated whereas "not sufficiently candid" does not reveal much and just says that a majority of the board doesn't trust him. Altman and all the people directly involved know what's going on, outsiders have no need to know so we're just looking at tea leaves and scraps trying to weave narratives.

And I agree the board should care if the work is actually done and that's where if the CEO seems to be bluffing that the work is being done or blowing it off and humoring them then it becomes a problem about the CEO not respecting the board's direction.


Giving two groups of researchers the same problem is guaranteeing one team will scoop the other. Hard to divvy up credit after the fact.


Also when a project is vital to a company, you cannot just give it to one team. You need to derisk


How did they get 4 board members to fire him because he tried to A/B test a project?


Was that verbatim the reason, or an angry person's characterisation?


> One explanation was that Altman was said to have given two people at OpenAI the same project.

Have these people never worked at any other company before? Probably every company with more than 10 employees does something like this.


>Have these people never worked at any other company before?

Half the board has not had a real job ever. I’m serious.


And the one which does have a real job is a direct competitor with OpenAI.


And since none of them have equity in OpenAI, their external financial interest would influence decision making, especially when those interests lie with a competing company where a board member is currently the chief executive.

I've seen too much automatic praise given to this board under the unbacked assumption that this decision was some pure, mission-driven action, and not enough criticism of an org structure that allows a board to bet against the long term success of the underlying organization.


It is unbelievable TBH.

Shocking. Simply shocking.


Could you please elaborate on what a 'real job' is in this context?


I'm going to assume that he's referring to Tasha and Helen.

I don't know if that is accurate, or even fair - the only thing I can see, is that there's very little open information regarding them.

From the little I can find, Tasha seems to have worked at NASA Research Park, as well as having been CEO of a startup called Geo Sim Cities. A Stanford and CMU alumna? While other websites say Bard College and the University of Southern California.

As for Helen, she seems to have worked as a researcher in both academia and Open Philanthropy.


My dad interviewed someone who was applying for a job. Standard question, why did you leave the last place?

"After six months, they realised our entire floor was duplicating the work of the one upstairs".


To me at least that's an _extremely_ rude thing to do. (Unless one person is asked to do it this way, the other one that way, so people can compare the outcome.)

(Especially if they aren't made aware of each other until the end.)


I think this needs to be viewed through the lens of the gravity of how the board reacted; giving them the benefit of the doubt that they acted appropriately and, at least with the information they had the time, correctly.

A hypothetical example: would you agree that it was an appropriate thing to do if the second project was alignment-related, and Sam lied to or misled Ilya about the existence of the second team because he believed that Ilya was over-aligning their AIs and reducing their functionality?

Its easy to view the board's lack of candor as "they're hiding a really bad, unprofessional decision"; which is probable at this point. You could also view it with the conclusion that, they made an initial miscalculated mistake in communication, and are now overtly and extremely careful in everything they say because the company is leaking like a sieve and they don't want to get into a game of mudslinging with Sam.


> giving them the benefit of the doubt that they acted appropriately

Yet you're only willing to give this to one side and not the other? Seems reasonable... Especially despite all the evidence so far that the board is either completely incompetent or had ulterior motives.


Maybe it was not an ordinary project, or not ordinary people.

Still too much in the dark to judge.


In over 10 years of experience, I have never known this to happen.


Actually, they haven’t. One is some policy analyst and the other is an actor’s wife.


Tasha Macauley is an electrical engineer who founded two tech companies, besides having a cute husband.

And the other guy is the founder of Quora and Poe.


She only founded one, Fellow Robots, and that "company" went nowhere. There's no product info and the company page shut down. She was CEO of GeoSim for a short 3 years, and this "company" also looks like it's going nowhere.

She has quite a track record of short tenures and failures.


> She has quite a track record of short tenures and failures.

It may be good to have a failure perspective on a board as a counter-balance. I don't think this is a valid knock against her. She has relevant industry experience at least.


> She has relevant industry experience at least.

What products did she deliver?

> It may be good to have a failure perspective on a board as a counter-balance.

Maybe at some small mom-and-pop company, not on the board of OpenAI.


lolz.


What's up with Loopt and Worldcoin?


OK. I can found a tech company by filling out LLC papers on LegalZoom for $40.

What have her companies done?


paper companies


wait so can't SA sue for wrongful termination if everything is as bogus as everyone is saying? same for MS


> wait so can't SA sue for wrongful termination if everything is as bogus as everyone is saying?

It is breach of contract if it violated his employment contract, but I don't have a copy of his contract. It is wrongful termination if it was for an illegal reason, but there doesn't seem to be any suggestion of that.

> same for MS

I doubt very much that the contract with Microsoft limits OpenAI's right to manage their own personnel, so probably not.


Employment in California is ‘at will’, which means they can fire him without a reason.

Wrongful termination only applies when someone is fired for illegal reasons, like racial discrimination, or retaliation, for example.

I mean I’m sure they can all sue each other for all kinds of reasons, but firing someone without a good reason isn’t really one of them.


That's the default, but employment contracts can override this. C-level employment contracts almost universally have special consideration for "Termination Without Cause", aka golden parachutes. He could sue to make them pay out.

He would also have very good grounds for a civil suit for disparagement. Or at least he would have if Microsoft didn't immediately step up and offer him the world.


You mean like being fired by a board member as part of their scheme to breach their fiduciary duty by launching a competitive product in another company?


So, none of this sounds like it could be the real reason Altman was fired. This leaves people saying it was a "coup", which still doesn't really answer the question. Why did Altman get fired, really?

Obviously, it's for a reason they can't say. Which means, there is something bad going on at the company, like perhaps they are short of cash or something, that was dire enough to convince them to fire the CEO, but which they cannot talk about.

Imagine if the board of a bank fired their CEO because he had allowed the capital to get way too low. They wouldn't be able to say that was why he was fired, because it would wreck any chance of recovery. But, they have to say something.

So, Altman didn't tell the board...something, that they cannot tell us, either. Draw your own conclusions.


I think you may be hallucinating reasonable reasons to explain an inherently indefensible situation, patching up reality so it makes sense again. Sometimes people with puffed up egos are frustrated over trivial slights, and group think takes over, and nuking from orbit momentarily seems like a good idea. See, I’m doing it too, trying to rationalize. Usually when we’re stuck in an unsolvable loop like a SAT solver, we need to release one or more constraints. Maybe there was no good reason. Maybe there’s a bad reason — as in, the reasoning was faulty. They suffered Chernobyl level failure as a board of directors.


This is what I suspect; that their silence is possibly not simply evidence of no underlying reason, but that the underlying reason is so sensitive that it cannot be revealed without doing further damage. Also the hastiness of it makes me suspect that whatever it was happened very recently (e.g. conversations or agreements made at APEC).

Ilya backtracking puts a wrench in this wild speculation, so like everyone else, I’m left thinking “????????”.


If it was anything all that bad, Ilya and Greg would’ve known about it, because one of them was chairman of the board and the other was a board member. And both of them want Sam rehired. You can’t even spin it that they are complicit in wrongdoing, because the board tried to keep Greg at the company and Ilya is still on the board now and previously supported them.

Whatever the reason is, it is very clearly a personal/political problem with Sam, not the critical issue they tried to imply it was.


> because the board tried to keep Greg at the company

Aside from the fact that they didn't fire him as President and said he was staying on in the press release that went out without any consultation, I've seen no suggestion of any effort to keep him at the company.


Right, but there was no effort to actually oust him either. Which you would expect them to do if they had to fire guilty parties for a massive wrongdoing that couldn't be ignored.

Either he had no part in this hypothetical transgression and thinks the accusation is nonsense, or he was part of it and for some inexplicable reason wasn't asked to leave OpenAI despite that. But you have to choose.


> Right, but there was no effort to actually oust him either.

Reducing someone's responsibility significantly is a well-known mechanism to oust them without explicitly firing them, so I don't know that that is the case.


Well, they still haven’t accused him of anything yet despite repeatedly being asked to explain their reasoning, so it seems fair to give him the benefit of the doubt until they do.


I do believe what they said about Altman, that he "was not consistently candid in his communications with the board." Based on my understanding, Altman has already demonstrated dishonest behavior in what he did to OpenAI: turning the non-profit into a for-profit and the open-source model into a closed-source one. Even worse, people seem to have totally accepted this type of personality. The danger is not the AI itself; it's that the AI will be built by Altmans!


> dishonest behavior in what he did to OpenAI: turning the non-profit into a for-profit and

Yes and it's perfectly obvious that he did this without the consent of the board and behind their backs. A bit absurd don't you think? How would that even work?

> will be built by Altmans

Why are you so certain most other people on the OpenAI board or their upper management are that different? Or hold very different views?


>>Yes and it's perfectly obvious that he did this without the consent of the board and behind their backs. A bit absurd don't you think? How would that even work?

I don't know; I don't have an answer for that. But Meta open-sourced Llama 2 and did what OpenAI is supposed to do.

>>Why are you so certain most other people on the OpenAI board or their upper management are that different? Or hold very different views?

At least they had the guts to fire him and let the world know that Altman "was not consistently candid in his communications with the board."


OpenAI, Inc. is a non-profit, but its subsidiary OpenAI Global, LLC is for-profit.


What is the end goal of having a for-profit subsidiary? To make OpenAI more non-profit and make the model more open-source?


Ideally it is to have greater access to resources by taking investment money and paying for top talent with profit sharing agreements, in order to ultimately further the goals of the nonprofit.


The outcome of this for-profit subsidiary is already showing in real time: they just want to take everything from OpenAI into the for-profit entity for their own benefit. That is fraud by definition: taking something that is not theirs. One good example is the flash mobs in LA: yours belongs to me, same culture, same behavior. As for your ideal world, it's very simple: Microsoft could simply donate $10B to OpenAI, and they could share the open-sourced model.


The only thing akin to that would be an AI safety concern and the new CEO specifically said that wasn’t the issue.

And if it was something concrete, Ilya would likely still be defending the firing, not regretting it.

It seems like a simple power struggle where the board and employees were misaligned.


Banks have strict cash reserve requirements that are externally audited. OpenAI does not, and more to the point, they're both swimming in money and could easily get more if they wanted. (At least until last week, that is.)


Rumor has it, they had been trying to get more, and failing. No audited records of that kind of thing, of course, so could be untrue. But Altman and others had publicly said that they were attempting to get Microsoft to invest more, and he was courting sovereign wealth funds for an AI (though non-OpenAI) chip related venture, and ChatGPT had a one-day partial outage due to "capacity" constraints, which is odd if your biggest backer is a cloud company. It all sounds like they are running short on money, long before they get to profitability. Which would have been fine up until about a year ago, because someone with Altman's profile could easily get new funding for a buzz-heavy project like ChatGPT. But times are different, now...


Not specifically related to this latest twist, sorry, but DeepMind’s Geoffrey Irving trusts the board over Altman: https://x.com/geoffreyirving/status/1726754270224023971


"I have no details of OpenAI's Board’s reasons for firing Sam"

Not the strongest opening line I've seen.


I do have to point out that this is also true of nearly everyone else who’s expressed a strong opinion on the topic, and it didn’t stop any of them


That's a fair point, but at the same time there is a lot of information about how boards are supposed to work and how board members are supposed to act, and the evidence that did come out doesn't really seem compatible with that body of knowledge.


The difference is nearly everyone else doesn't stand to seriously benefit from the implosion of OpenAI.


Yeah, I can't imagine why DeepMind would possibly want to see OpenAI incinerated.

When you have such a massive conflict of interest and zero facts to go on - just sit down.

also - "people I respect, in particular Helen Toner and Ilya Sutskever, so I feel compelled to say a few things."

Toner clearly has no real moral authority here, but yes, Ilya absolutely did and I argued that if he wanted to incinerate OpenAI, it was probably his right to, though he should at least just offload everything to MSFT instead.

But as we all know - Ilya did a 180 (surprised the heck out of me).


"Sustkever is said to have offered two explanations he purportedly received from the board"

I'd like some corroboration for that statement because Sustkever has said very inconsistent things during this whole merry debacle.


Would you go so far as to say he was not consistently candid...?


Also, since he's on the board, and it wouldn't have been Brockman or Altman who gave him this info... there are only three people left: "non-employees Adam D’Angelo, Tasha McCauley, Helen Toner."


The obvious answer is he was the one Sam gave an opinion on. He was one of the people doing duplicate work (probably the first team). Sam said good things about him to his ally and bad things to another board member. There was a falling out between that board member and Sam and she spilled the beans.


One of the first members to quit was on a team that sounds a lot like a separate team doing the same thing as Ilya's Superalignment team.

"Madry joined OpenAI in May 2023 as its head of preparedness, leading a team focused on evaluating risks from powerful AI systems, including cybersecurity and biological threats."



Fortunately no conflict of interest there. Ignore the guy behind the curtain.


In the case of a board member of OpenAI running a separate chatbot company, it would be important to consider several factors. The specifics of the situation, including the nature of both companies, the level of competition between them, and the actions taken by the board member and OpenAI to manage potential conflicts, would all play a role in determining whether there is a conflict of interest.

Definitely a conflict of interest here, and D'Angelo's actions on the OpenAI board smell of the same. He wouldn't want OpenAI to thrive more than his own company. It's a direct conflict of interest.


It is about as bad as it gets, and given that datum I hope D'Angelo has a very good lawyer, because I think he might need one.


jacquesm was being sarcastic


My bad for not adding a /s. But I thought the second sentence would make it obvious.


Both 'reasons' are bullsh*t. But what's interesting is that Sutskever was the key person; it wouldn't have happened without him. And now he says the board told him why he was doing it? He didn't reiterate that he regrets it. So it looks like he was one of the driving forces, if not the main one. Of course he doesn't want the reputation of 'the man who killed OpenAI'. But he definitely took part and could have prevented it.


Nytimes mentioned that just a month back someone else was promoted to the same level as Ilya. Sounds like more than a coincidence.


So Sutskever fires Altman, then signs a letter saying they’ll quit unless he’s reinstated.

There’s only 4 board members, right?

Who wanted him fired? Is this a situation where they all thought the others wanted him fired and were just stupid?

Have they been feeding motions into chatgpt and asking “should I do this?”


Seems most likely Sutskever wanted him fired and then realized his mistake. Ultimately the board was probably quietly seething about the direction the company was headed, got mad enough to retake the reins with that stunt, and then realized what that actually meant.

Now they are trying to unring the bell but cannot.


> Seems most likely Sutskever wanted him fired and then realized his mistake

We have as much evidence for this hypothesis as for any other. Not discrediting it. But let's be mindful of the fog of war.


> Now they are trying to unring the bell but cannot.

Well, they can unring the bell pretty easily. They were given an easy out.

Reinstate Sam (he wants to come back) and resign.

However, they CONTINUE to push back and refuse to step down.


Then they wouldn't be in control, which is what they really want.


You get it!

This is the correct answer. The people who have never had jobs in their lives wanted control of a 100B company.

What a pleasant career trajectory. Heck, it was already great to go from graduating university -> the board of OpenAI. If that's possible, why not CEO?


> Well, they can unring the bell pretty easily. They were given an easy out.

> Reinstate Sam (he wants to come back) and resign.

Wasn't the ultimate sticking point Altman's demand that the board issue a written retraction absolving him of any and all wrongdoing? If so, that isn't exactly an "easy" out given that it kicks the door wide open for extremely punishing litigation. I'd even go so far as to say it's a demand Altman knew full well would not and could not be met.


I'm going to be the only one in this thread calling it this.

But why does no one think it's possible these women are CIA operatives?

They come from think tanks. You think the US Intelligence community wants AGI to be discovered at a startup? They want it created at big tech. AGI under MSFT would be perfect. All big tech is heavily compromised: https://twitter.com/NameRedacted247

EDIT: Since this is heavy speculation, I'm going to make predictions. These women will now try to force Ilya off the board, put in a CEO not from Silicon Valley, and eventually get the police to shut down OpenAI offices. That's a CIA coup.


Couldn't the CIA have sent people with, er, slightly more media experience and tactfulness and such? Did these few just happen to lose a bet or something...?

Maybe somebody there just really wanted to see the expression on Satya's face...


Weirdly plausible considering Tasha McCauley also works for the RAND Corporation


But the article's exact wording is "Sutskever is said to have offered two explanations he purportedly received from the board", key words being "purportedly received". He could be choosing his words to protect himself, but it strongly implies that he wasn't the genesis of the action. Of course, he was convinced enough of it to vote Altman out (actually, has this been confirmed? They would have only needed 3, right? It was confirmed that he did the firing over Meet, but I don't recall confirmation that he voted yes), which also implies that he was at some point told more precise reasoning. Or maybe he's being muzzled by the remaining board members now, and this reasoning he "received" is what they approved him to share, right now?

None of this makes sense to label any theory as "most likely" anymore.


Trying to put the toothpaste back in the tube.


Trying to put the confetti back into the cannon.


Trying to put the Rip back into the closet.

https://www.youtube.com/watch?v=ebiT8mlCvZY


Trying to close the can of worms.


Trying to pull the dye out of the water


As XKCD memorably observed, putting toothpaste back in the tube is trivial.

However, it may not yield a result anyone's actually happy with:

https://xkcd.com/2521/


it’s tough to hoe a row on the hill which they chose to cast their die


This is pretty parsimonious.

Smart, capable, ambitious people often engage in wishful thinking when it comes to analysing systems they are a part of.

When looking at a system from the outside it’s easier to realise the boundary between your knowledge and ignorance.

Inside the system, your field of view can be a lot narrower than you believe.


> Have they been feeding motions into chatgpt and asking “should I do this?”

The CEO (at time of writing, I think) seems to think this kind of thing is unironically a good idea: https://nitter.net/eshear/status/1725035977524355411#m


It'd have to be a very stupid version of chatgpt


Doesn't this imply that there's one that's not?


Can the 3 board members also kick Sutskever off the board?


That headline is bad, not sure if it's deliberate.

The way it's phrased, it sounds like they were given two different explanations. Such as when the first explanation is not good enough, a second weaker one is then provided.

But the article itself says:

> OpenAI's current independent board has offered two examples of the alleged lack of candor that led them to fire co-founder and CEO Sam Altman, sending the company into chaos.

Changing the two "examples" to "explanations" grossly changes the meaning of that sentence. Two examples is the first steps of "multiple examples". And that sounds much different than "multiple explanations".


This reads like the Board 4 are not allowed to say, or are under NDA, or do not dare say, or their lawyers told them not to say, the actual reason. Because this is obviously not the actual reason.


Without all the fluff:

    One explanation was that Altman was said to have given two people at OpenAI the same project.

    The other was that Altman allegedly gave two board members different opinions about a member of personnel


Ilya himself was a member of the board that voted to fire Altman. I don't know if he's lying through his teeth in these comments, making up an alibi, or genuinely trying to convince people he was acting as a rubber stamp and doesn't know anything.


As this article seems to have the latest information, let's treat it as the next instalment. There's also Inside The Chaos at OpenAI - https://news.ycombinator.com/item?id=38341399, which I've re-upped because it has backstory that doesn't seem to have been reported elsewhere.

Edit: if you want to read about our approach to handling tsunami topics like this, see https://news.ycombinator.com/item?id=38357788.

-- Here are the other recent megathreads: --

Sam Altman is still trying to return as OpenAI CEO - https://news.ycombinator.com/item?id=38352891 (817 comments)

OpenAI staff threaten to quit unless board resigns - https://news.ycombinator.com/item?id=38347868 (1184 comments)

Emmett Shear becomes interim OpenAI CEO as Altman talks break down - https://news.ycombinator.com/item?id=38342643 (904 comments)

OpenAI negotiations to reinstate Altman hit snag over board role - https://news.ycombinator.com/item?id=38337568 (558 comments)

-- Other recent/related threads: --

OpenAI approached Anthropic about merger - https://news.ycombinator.com/item?id=38357629

95% of OpenAI Employees (738/770) Threaten to Follow Sam Altman Out the Door - https://news.ycombinator.com/item?id=38357233

Satya Nadella says OpenAI governance needs to change - https://news.ycombinator.com/item?id=38356791

OpenAI: Facts from a Weekend - https://news.ycombinator.com/item?id=38352028

Who Controls OpenAI? - https://news.ycombinator.com/item?id=38350746

OpenAI's chaos does not add up - https://news.ycombinator.com/item?id=38349653

Microsoft Swallows OpenAI's Core Team – GPU Capacity, Incentives, IP - https://news.ycombinator.com/item?id=38348968

OpenAI's misalignment and Microsoft's gain - https://news.ycombinator.com/item?id=38346869

Emmet Shear statement as Interim CEO of OpenAI - https://news.ycombinator.com/item?id=38345162


>There's also Inside The Chaos at OpenAI ... it has backstory that doesn't seem to have been reported elsewhere

Probably because that piece is based on reporting for an upcoming book by Karen Hao:

>Now is probably the time to announce that I've been writing a book about @OpenAI, the AI industry & its impacts. Here is a slice of my book reporting, combined with reporting from the inimitable @cwarzel ...

https://twitter.com/_KarenHao/status/1726422577801736264


I see why you recommended that Atlantic article, it's very, very good.


I was just copying what simonw said! https://news.ycombinator.com/item?id=38341857


It's a good recommendation, thanks for elevating it out of the noise

Sometimes the best part about having a loud voice is elevating the stuff that falls into the noise. I moderate communities elsewhere, and I know how hard it is, and I appreciate the work you do to make HN a better place.


By the time this saga resolves, the number of threads linked here could suffice as chapters of a book


If I were an OpenAI employee, I would have been uber pissed.

Imagine your once-in-a-blue-moon, WhatsApp-like payout of $10M per employee evaporating over the weekend before Thanksgiving.

I would have joined MSFT out of spite.


Absolutely agree, would be beyond pissed. A once in a lifetime chance at generational wealth blown.


These people joined a non-profit though. Am I right in thinking that you wouldn't join a non-profit expecting a large future payout?


> These people joined a non-profit though.

The employees joined the for profit subsidiary and had shares as well.


I really can't imagine. I am super pissed and only over something I love that I pay 20 bucks a month for. I can't imagine the feeling of losing this kind of payout over what looks like complete bullshit. Not just the payout but being part of a team doing something so interesting and high profile + the payout.

I just don't know how they put the pieces back together here.

What really gets me down is that I know our government is a lost cause, but I at least had hope our companies were inoculated against petty, self-sabotaging bullshit. Even beyond that, I had hope the AI space was inoculated, and beyond that, that of all companies OpenAI would of course be inoculated from petty, self-sabotaging bullshit.

These idiots worried about software eating us are incapable of seeing the gas they are pouring on the processes that are taking us to a new dark age.


Given the nonsensical reason provided here, I am led to believe that this entire farce is aimed at transforming OpenAI from a non-profit to a for-profit company one way or another, e.g., significantly raising the profit cap, or even changing it completely to a for-profit model. There may not be a single entity scheming or orchestrating it, but the collective forces that could influence this outcome would be very pleased to see it unfold in this way.


But was delivering it into the hands of Microsoft really how they wanted it to happen?


At Amazon a senior manager would probably be fired for not giving a project to multiple teams.


That's not very frugal; please provide a source or citation for your claim.


I am a former L7 SDM at Amazon. Just last year I had to contend with not one, but three teams doing the same thing. The faster one won with a half-baked solution that caused multiple Sev-1s. My original comment was half in jest; the actual way this works is that multiple teams discover the same issues at the same time and then compete for completing a solution first. This is usually across VPs so it’s difficult to curtail in time to avoid waste.

Speaking of waste, when I was at Alexa we had to do a summit (flying people from all over the country) because we got to a point where there were 12 CMSs competing for the right to answer queries. So yeah, not frugal. Frugality these days is mostly a “local” concept, definitely not company-wide or even org wide.


Oh man, i’m glad I read this because right now at Google I am immensely frustrated by Google trying to fit their various shaped pegs into the round hole of their one true converged “solution” for any given problem.

My friends and I say “see, Amazon doesn’t have to deal with this crap, each team can go build their own whatever”. Buut, I guess that’s how you get 12 CMSs for one org.


> that caused multiple Sev-1s

...did folks run out of tables?


These simply can't be the real reasons.


And evidently the employees have reacted as they likely would. The two points given sound like mundane corporate mess ups that are hardly worth firing the CEO in such a drastic fashion.


a link to the letter from employees

https://www.axios.com/2023/11/20/openai-staff-letter-board-r...

curious to have clarity where ilya stands. did he really sign the letter asking the board (including himself?) to resign and that he wants to join msft?

to think these are the folks with agi at their fingertips



What will happen to employee’s stock options if they all mass quit and moved to Microsoft?

The options will be worth $0, right?


From what I understand, Microsoft realizes this and gives them the equivalent of their OAI stock options in MSFT stock options if they join them now. For some employees, this may mean $10MM+


More evidence the layoffs are 100% BS. Suddenly there's surplus headcount and magical budgets out of nothing, all to accommodate several hundred people with way-above-market-average TCs. It's almost like they were never in danger of hurting profit margins in the first place.


It is entirely reasonable for there to be dire financial straits that require layoffs, yet when a $10 billion investment suddenly blows up and has to be saved the money can be spent to fix it.

In the first case it wasn't that there was no cash in the bank and no bank willing to make loans, but that the company needed to spend less than it earned in order to make a profit. In the second case it wasn't that the money had been hidden in a mattress, but that it was raised/freed-up at some risk which was necessary because of the $10 billion investment.


These tech giants' finances are public, because they are publicly listed companies. All of them are sitting on fat stacks of cash and positive YoY revenue growth. They have absolutely zero chance of running out of money even if each one hires 10000 front desk clerks who do nothing but watch TikTok all day and collect $100k/yr comps. Zero, Zilch, Nada.


>It is entirely reasonable for there to be dire financial straits that require layoffs

It's not entirely reasonable because Microsoft's finances are public. We know they're doing fine.


You can lay people off without being in dire straits.


Yes, but that doesn't make it any more ethical especially since most layoffs over the past year aren't merit-based at all.


Options on Microsoft stock, a publicly-traded and stable company, are incomparable to those on OpenAI, which didn't even bother having proper equity to start with. The employees will get hosed. They never got equity, they got "equity." The senior ones will need liquidity, soon, to pay legal counsel; the rest will need to take what they can get.


Usually one would go borrow money from a bank with the shares / options as collateral in these types of cases if you really need the money for legal expenses without liquidating them.


Microsoft would likely match their PPUs at the tender offer valuation.


Honestly, MS doesn't have to; losing more than half the employees will destroy the value of the PPUs.

The fact so many have signed the petition is a classic example of game theory. If everyone stays, the PPUs keep most of their value; the more people threaten to leave, the more attractive it is to sign. They don't have to love Sam or support him.

Edit: actually, thinking about it, the best outcome would be to go back on the threats to resign, increasing the value of the PPUs and making Microsoft have to pay more to make them leave OpenAI.
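
For intuition, here is a toy sketch of that coordination game (all payoff numbers are invented purely for illustration, not anyone's actual model): each employee's payoff from signing grows with the fraction of colleagues expected to sign, so past some tipping point signing dominates regardless of personal loyalty to Sam.

    # Toy coordination-game model of the "sign the petition" dynamic.
    # All payoff numbers below are made up for illustration only.

    def payoff(sign: bool, fraction_signing: float) -> float:
        ppu_value = 10.0 * (1.0 - fraction_signing)  # PPUs lose value as more people credibly threaten to leave
        msft_offer = 8.0 * fraction_signing          # a matched offer looks more likely the bigger the exodus
        return msft_offer if sign else ppu_value

    for f in (0.1, 0.5, 0.95):
        better = "sign" if payoff(True, f) > payoff(False, f) else "stay"
        print(f"{f:.0%} expected to sign -> better to {better}")

With those made-up numbers, staying wins at 10% or 50% expected signers and signing wins at 95%, which is exactly the tipping-point dynamic described above.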


MSFT may perceive a benefit in absorbing and locking down the openAI team. Doing so will require large golden handcuffs in excess of what competitors would offer those same folks.


OpenAI has no stock options.


It has "Profit Participation Units", which are another form of equity-like compensation.


Believe it’s more of an RSU product with a small few having ISOs. Probably best to just call it “stock comp” since it’s all illiquid anyways.


If the outcome of all of this is that Altman ends up at Microsoft and hiring the vast majority of the team from OpenAI, it's probably wise to assume that this was the intended outcome all along. I don't know how else you get the talent at a company like OpenAI to willingly move to Microsoft, but this approach could end up working.


These are the dumbest reasons possible, certainly not worth destroying a company on the move or people's livelihoods over.


Based on what I've seen so far, one of the following possibilities is the most likely:

1. Altman was actually negotiating an acquisition by Microsoft without being transparent with the board about it. Given how quickly they were hired by Microsoft after the events, this is likely.

2. Altman was trying to raise capital, without the board's knowledge, from a source that the board wouldn't be too keen on. Could be a sovereign fund or some other government-backed organisation.

I've not seen these possibilities discussed as most people focus on the safety coup theory. What do you think?


"Before OpenAI ousting, CEO Altman tried to raise billions in the Middle East for chip venture"

https://www.scmp.com/tech/tech-trends/article/3242141/openai...


If Altman ends up going back to OpenAI, then shouldn't Sutskever be fired/kicked off the board too?