OpenAI's chaos does not add up (proseful.com)
285 points by Satam on Nov 20, 2023 | 209 comments



It doesn't add up because there are multiple actors with different motives.

- At least two members of the board probably were genuinely more concerned about AI alignment

- Ilya may have been partially motivated by ego, Altman being the public face

- Microsoft had leverage: a license to the models and code, plus providing the compute. It expected profit incentives would keep people aligned.

- Sam needs compute hardware to stay in the space. Outside Amazon, Google and MS, who actually has the hardware, even if you have the money?

There's a bunch of people pursuing their own goals, and probably misunderstanding other people's motivations. People like Sam and Satya expect people to be self-interested and not tear everything down.

Sometimes people want a change in direction, but they don't have the power, and they end up decapitating the organization instead.


Hallelujah, spot on. Whenever I see major clusters like this, and tons of folks online are so quick to go to outlandish "4D chess" conspiracy theories, I'm always reminded of that meme "People who believe in conspiracy theories have never been product managers."

We like to think that big execs and leaders are some super smart geniuses when we're not disparaging them (and they may be, but only in specific areas), but they're still just people, and they are afflicted with jealousy and envy and spite and lack of forethought just like the rest of us, and they make mistakes accordingly. They particularly make mistakes when trying to organize a big group of people.


Another thing is that everyone involved is a lot younger than the stereotypical corporate operators, who are often >50 years old.

After living for a few more decades you get a lot better at power plays.


I have two brothers, one of whom around 2019 fell down a couple of YouTube rabbit holes and came out repeating things about flat earth, QAnon, and just about every other comorbid conspiracy theory circulating at that time. He honestly believes that there is some global cabal involving tens of thousands of people to hide the fact that the earth is flat. He especially makes the fundamental error of assuming that all of the "evidence" he has collected is ultimately attributable to a single source, and that there exist world-wide organizations that are eminently capable and efficient in keeping these things going and largely under wraps. My brother was in the military, and I have seen the inside of many large organizations, and this is just not anywhere in our experience. At best the world somehow runs along in a kind of controlled chaos; it is one enormous coordination problem, and there are too many competing interests at work in any one area.


The world according to a gas molecule and the world according to a statistician.

This needs to be amplified (with the defensive caveat that I don't subscribe to any "YouTube" theories). That said, a member of a large organization, such as the US military, will have many stories to tell, and the phrase "FUBAR", IIRC, came out of the US Army. In fact, even in smaller commercial enterprises, one sometimes wonders how anything gets done at all!

But things get done. The military can plan, provision, conduct, and win wars. Systems tools and state-spaces are the instruments of top-level control mechanisms. So, yes, things can be chaotic when viewed from a narrow perspective, but from another, top-level perspective there is order.


I think part of the issue is that, unfortunately, some conspiracy theories have proven to be true. The military puts its thumb on the scale of Hollywood's war depictions by carefully deliberating over who gets to use its equipment, which has the added effect of limiting the resources of any military narrative the US military doesn't approve of. There really is a conservative legal group whose entire purpose is to funnel conservative justices into the courts, including politically aligned clerking for supreme courts and providing Republican appointers with lists of politically aligned candidates. A federal program really did exist to try to figure out whether people could be brainwashed with acid, and then records of the program mysteriously went missing when the public tried to investigate it. Epstein's book exists, with all the implications thereof. Microsoft's early "embrace, extend, extinguish" is a conspiracy theory about Microsoft's business dealings that turned out to be true.

It's really hard to switch back to the "but realistically, conspiracies are uncommon" mindset when exposed to the fact that decades-long conspiracies actually do occur and produce results.

[Caveat: I don't actually believe in shit like flat earth, and am skeptical of any conspiracy theory whose evidence is some shit like a 45-minute YouTube video whose own sources are, idk, a Blogspot post. But I do want to acknowledge that conspiracies have been reported on by relatively trustworthy news sources, and that I myself find it really difficult to tell true from fake conspiracy theories when multiple news reports contradict each other.]


I would however say: a lot of the real and effective "conspiracies"/propaganda aren't so much tens of thousands of people all acting in secret coordination. It's usually a much smaller group of people pushing a somewhat larger group in a certain direction, which then starts to move larger groups around.

As you said, you don't send people into the street to yell at other people about how the Soviets are evil and we need to arm up. You rather plant ideas in the heads of a few influential people in Hollywood, then give them cool toys and props for the movie, and let it roll downhill from there.

This is similar at work - and we're not large, with ~150 techies in the company. But it has grown impossible to directly steer all the different teams. Instead, you have to see that the right trailblazing teams are going in the right directions, and possibly make sure that service requests moving in the right direction are handled quickly, while service requests moving in the wrong direction... well, either roughly stay within the SLA or get blocked.


Well, let's not conflate conspiracies with conspiracy theories. True, people do conspire to break the law, but conspiracy theories are marked by inferred, presumed evidence.


[flagged]


My experience is that this is a gross oversimplification. Once you know a person on an individual level, you find that people always have their own stew of beliefs.


I have no idea if you are genuinely arguing in favor of or against something here, or just playing devil's advocate.


Perfect reply. Every statement and its contrary can be made plausible with the right arguments.


> People like Sam and Satya expect people to be self-interested and not tear everything down.

Bingo. I think there is also a factor of this being a non-profit board, where the board is not beholden to shareholders in the same way. For example, you presumably have no standing to sue an NP BOD for breach of fiduciary duty vs. a corporate BOD. I don't know the intricacies of the laws surrounding this, but I expect that the non-Microsoft investors in OpenAI who are seeing their investments dissolve in real time are probably asking their lawyers that right now.

Adam D'Angelo seems like someone acting in his own self-interest and that of his company, Quora. Bringing in Emmett Shear at the 11th hour seems like a move right out of his playbook. What exactly does Adam stand to gain from a weakened or destroyed OpenAI?


Adam stands to gain the most from a weakened OpenAI product. Not from OpenAI outright stopping commercialization, but from stopping it just enough to still have the API but not products like the GPT store and agents.

It's not about Quora, which is a shit show either way; it's about his AI chat company, Poe. Poe started customizable AI agents and also started its own store for AI agents - they were the first to do this. And Poe uses the OpenAI API for this; they are essentially a wrapper company.

When OpenAI announced custom agents and a store on DevDay, it was a fundamental threat to Poe, and a very strong conflict of interest for Adam. You also hear from everyone that things started going sour after DevDay. I'm pretty sure Adam played his part in the shit show, with his own motives.


For whatever reason, Sam knowingly gave Adam a pass. He forced out Reid Hoffman over Inflection AI.

https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...


> a very strong conflict of interest for Adam

Makes it sound like they fired the wrong board member. But it feels to this complete outsider like that board was totally dysfunctional, even before the departure of Altman and Brockman.

I'm an LLM sceptic; I don't think these models are a route to intelligence, artificial or otherwise. I think that means I'm at odds with the four board members that did the deed. But as far as I can see, there wasn't one member of that board, Altman and Brockman included, that was fit to run a whelk-stall, let alone a big company.


It feels like a weird divorce to me, where one side of the couple wants kids and the other side wants to party and go to Tahiti every couple months.

Before this, they were able to make things work - until someone said the quiet things out loud, and now they’re clearly going their separate ways while going ‘wtf just happened’.


> At least two members of the board probably were genuinely more concerned about AI alignment

Is there any evidence that Altman's firing was even related to AI alignment? All I've seen is people speculating about it.


Isn't almost everything right now speculation, uncertainty and conjecture? We have little insight, and even the participating parties might not have a full understanding of the situation, because motives and choices may be hidden.

Microsoft just saw chaos and made a very smart move before someone else could.


I find this whole thing hilarious. Clearly the CEO was going ham doing AI things and becoming the face of this wave of AI, and events occurred that made the board react to save the ideology of the entity -- which is the way the entity was structured. Also, 49% is not ownership; it's close, but not exactly. The firing was also meant to try to dampen the ego and influence of the CEO. Now there is a new CEO, great.

But yeah, morale is going to be pretty much in flux for a while. This has clearly disrupted OpenAI's ability to function and therefore to contribute more to either research and/or productization. Sam and Greg going to M$FT only matters if their influence and leadership can bring the right people together. OpenAI had the right people to create GPT-3+, but now it seems like they won't. M$FT probably won't either.

Now each entity is actually weakened for a number of months, I would presume. M$FT will try to push forward whatever productization of the current AI products it has access to. OpenAI will struggle to figure out if it's a pure research entity now, or what.


Hm, well, then, let's just wait out more details...


- Ilya may have been partially motivated by ego, Altman being the public face, he was left out of Time 100 AI leaders in favor of Greg, etc.

But Ilya Sutskever is on the Time 100 Most Influential People in AI 2023 list, assuming that's the one you're referring to.




As new datapoints emerge (like Ilya's expression of regret), it's certainly starting to look like most folks got it wrong and Ilya was not the architect of this chaos but rather Adam. Let's hope that Ilya's genuine moral and ethical concerns were not taken advantage of over the last few months to pit him against Sam just because Adam didn't want to see both his ventures (Quora, Poe) crash and burn.


What Poe is doing seems odd for such a company. I am doing the same with baarilliant.ai in my spare time.


> Ilya may have been partially motivated by ego, Altman being the public face, he was left out of Time 100 AI leaders in favor of Greg, etc.

Do you mean [1]? Sam, Greg and Ilya are all in there, the first two under Leaders, and Ilya under Thinkers.

[1] https://time.com/collection/time100-ai/


Fair enough, I removed it, though I believe leaders > thinkers on the status scale.


Nah, it's just a function of "role". The leaders section is principally business people whereas the thinkers section is principally academics and practitioners.


> - Ilya may have been partially motivated by ego

Interesting comment, given this: https://twitter.com/ilyasut/status/1707752576077176907


> Outside Amazon, Google and MS, who actually has the hardware, even if you have the money?

nVidia, and others as well. But they'd rather use it for something more worthwhile than a dumb hallucinating parrot.


oh, you mean, _selling_ the hardware to Amazon, Google, MS ...


nVidia is selling shovels, while MS is trying to dig gold.


You don't need cloud for the compute.


There’s something massive brewing here that involves Microsoft. And it stinks.

One thing is for certain: Google is clearly a major loser here. They completely blew the lead in AI.


I would say Google is one of the few companies that directly benefit from OpenAI+Microsoft hitting a heavy patch of turbulence.


How on Earth is Google a loser? OpenAI is splitting down the middle, if not imploding entirely. It's not like Microsoft will be able to just get Sam and Greg new Macbooks and have all the OpenAI engineering suddenly be internal there. This is going to cause a gigantic amount of operational pain.

Google will be quietly smiling and catching up.


It's a short term vs long term thing:

Short term, it is going to slow Microsoft down, because it will take a while to get software development back on course.

Long term, it will speed things up, because OpenAI is concerned about safety issues and will thus be more cautious about software releases, while Microsoft has no such inhibitions.


OpenAI is basically in the bag for MSFT. While we enjoyed reading and speculating about the drama this weekend, Microsoft managed to eat the whole thing without anyone complaining. Meanwhile, Google still has nothing. Heck, even Facebook/Meta is ahead of the game.


Correction: Ilya was on the Time 100 AI list, along with folks like Hinton.


If what David Grusch is saying about UAP is true (and please don't let this thread turn into a debate about that), this feels like it's falling into a similar pattern. A potential world-changing capability is being developed, the type of capability where whoever controls it, and masters it, can gain control of EVERYTHING.

So I think there's an internal divide; both sides have good intentions. They want the best for the people they think they're protecting. In the case of UAP, it was to protect US national security. In the case of OpenAI, I think it's to attempt a slow transition of the global economy. But on the opposite side, that has to be balanced against the potential widespread benefits of opening it up.

In the case of AI, we could open it up, but what if the Chinese, or a malevolent capitalist entity, uses the technology to go full speed ahead and accumulates the entire economy under their sole control?

This weekend, there was an event by a group called the SOL Foundation for UAP. They revealed something called "Slow Disclosure" [1], a process to avert what they call "Catastrophic Disclosure". I see AI falling into similar power dynamics. Keep it all to yourself because you trust yourself, and let it out only after you've fully mastered it... or take the risk of letting the world start mastering it with you, but start to realize the world-changing benefits immediately.

Both sides have good points; nobody is wrong. People just have different risk tolerances. OpenAI looks a lot like our government today: internally divided, a few people with the power to choose which path we take in history. That's a lot of power; you'd better believe things are going to look weird from the outside.

[1] https://twitter.com/tinyklaus/status/1726061116730609692


IMO the only actually new thing we've seen with UAPs/UFOs is that people are making the claims under oath now. His whole public testimony was one giant tease with no concrete info and a lot of 2nd+ hand innuendo. The whole 'slow disclosure' idea reeks to me of the same punting on producing evidence that happened in 2020 with the Kraken and the entirety of the Q nonsense. The people pushing it didn't have evidence and assumed they could either BS their way past not having it, or that someone would come forward out of nowhere with it, and the timelines always moved out to accommodate the lack of anything coming out. I'll believe it when I see some actual evidence.


I guess we'll find out in 2024, assuming this legislation, tacked on to the NDAA, passes:

https://www.democrats.senate.gov/imo/media/doc/uap_amendment...


so who has (owns, can build) LLM-sized datacenters?

doesn't facebook (i.e. Meta) have their own infrastructure???

my "high level" view of this: the letter M (microsoft xOr meta) fighting for the ONE spot in the literal "alphabet" (google)


You’re saying that Meta and Microsoft want to be acquired by Alphabet/Google? That doesn’t make a lot of sense.


no, I did say "literal alphabet" i.e. not some bigger corporation...

but I play with words, so take it as you are able

i'm saying something closer to: if microsoft merges with facebook, then they really are both trying to get bought by google's parent entity


> i'm saying something closer to: if microsoft merges with facebook, then they really are both trying to get bought by google's parent entity

So you said “literal alphabet”, because you weren’t talking about Alphabet, you were instead referring to Google’s parent entity, which is Alphabet? I’m a big fan of wordplay, but I prefer to use it to reveal truths, not obscure them.

Either way I don’t think that what you’re talking about is going to happen.


I'm neither obscuring nor revealing a truth because these events are ongoing.

and I'm being quick and loose with the use of words. facebook cannot merge with microsoft because facebook is a subsidiary of meta corporation.

but on the level of analysis i'm trying to think in, google's antitrust case against the government (which is done) and microsoft's recent purchase of activision blizzard (which was almost undone but got through in the end) both have a hand in this.

who will win more out of this? microsoft? or google/alphabet??

facebook does seem like a smaller player; microsoft OWNS gaming now (especially after blizzard-activision).

but Meta is very interested in gaming technology. so maybe they can also "poach" some talent now that openAI is all but publicly bankrupt??

how long ago did Meta try to do their own crypto, "Libra"? the fact of the matter is LLMs are chinese tech. else tiktok would have been successfully stopped. why did all this happen after a chinese visit due to a trade union in south asia?

finally, I know of 3 giant corporations that have repeated the restructuring which, from my viewpoint, was pioneered by google/alphabet. one is facebook/meta, and the other is a chinese company (alibaba?). I don't even know this, which is ok because I am ranting on the internet in a buried discussion in a public forum


I have a strange feeling that all of this is about selling OpenAI to Microsoft. I mean, is it that unlikely? Everything is pointing in that direction, and maybe there was a loophole that allowed this to happen in a way that doesn't make it seem like Microsoft was the one doing the push.

We have to be honest with ourselves and realize that these are billion/trillion dollar companies we're talking about here, with some of the "smartest" people at the helm. I totally see how an acquisition could be slipped through by means of saying that these people were inexperienced.

Disclaimer: I'm totally talking out of my butt here, as we all are.


I feel there's merit to that idea. Basically, in the current regulatory environment in the US under Lina Khan, an outright acquisition by Microsoft would have been met with significant resistance. Especially since AI has just been declared a national risk that needs regulating and Microsoft just bought Activision in what's pretty much the largest deal of 2023.

So, instead of buying OpenAI outright with all its complicated org structure, paying 13 billion for 49%, acquiring the rights to OpenAI's models and code, and then instigating today's events, with a majority of OpenAI's staff leaving for Microsoft, would be a really elegant and cunning way to do this.

If it is, it might be the most daring bit of business maneuvering we've seen in a decade. But, given Occam's razor, it might just as well be a colossal fuckup. Time will tell.


You might be thinking of Hanlon’s razor, but I suppose both might give the same answer.


You're right - wrong razor :-D


Damn. Blood all over the bathroom basin; patches of tissue dotted around my chin.


> I have a strange feeling that all of this is about selling OpenAI to Microsoft. I mean is it that unlikely?

From the moment that the announcement of the deal with Microsoft happened, it was clear to me that it only ends one way: Microsoft is going to own everything of value that OpenAI has, in one way or another.

There never was any other way it could go down regardless of what ideals the OpenAI founders may or may not have had. You can't dance with the devil and say you're only kidding.


I had the same feeling. Huge corp with deep pockets targets smaller research company in order to catch up with the competition. I'm surprised a traditional acquisition hadn't happened before. Now it's happening in a weird way but it's the same outcome.

Anthropic is probably next.


Do not ascribe to malice what can be sufficiently explained by incompetence.


“Do not only ..”, or “Do not immediately ..” are reasonable and wise.

But that absolutist “Do not” is neither wise nor reasonable. Malicious actors do exist.


Do not ascribe incompetence to anyone in a position of power, because incompetence at scale is indistinguishable from malice.


That's about outcomes. But intent matters, and I don't think Ilya had the intent of blowing up OpenAI, though how he could not see that coming is something I fail to understand. You don't pull a palace revolution like that without a plan for what you will do if it succeeds.


Hanlon's Razor is for idiots, even mischievous toddlers intuitively figure out the "oops" excuse.


I think it’s a mistake to confuse either of them for greed.


Greed is a form of malice, in my opinion.


It seems the load on HN has caused a duplicate submission; please ignore this comment (I can't delete it).


The best trick the devil ever played...


If it was about selling to Microsoft, Ilya and the board could've just announced that's the reason. The thing is they didn't, and Emmett didn't say what it was even after being briefed before joining. I don't think as many as 500+ employees out of 700+ would be siding with Sam if Ilya announced this was about OpenAI v. Microsoft. So why did he never explain anything to the staff?


It's an amusing conspiracy theory, but if we believe these are super smart people behaving in a clever way to exploit a loophole... why would they make themselves look like such idiots doing so? I don't think looking like an idiot helps any possible legal defense; if you have a rock solid loophole, just use it and surprise people with your cunning, not your idiocy.


Why choose 'conspiracy theory' in lieu of 'theory'?


Paris Hilton is a genius. Donald J Trump is a genius. Alex Jones tried it and so did Sam Bankman-Fried. Those two were not geniuses.


> that all of this is about selling OpenAI to Microsoft

By whom? Unless everyone is corrupt and getting kickbacks from MS, it doesn't make sense. The board made the first (public) move; if they wanted to sell to MS, they could have just taken 20 billion dollars by selling a 30% stake, giving equity to employees and Sam, and using the money for its alignment research. For Sam and MS to have colluded, he would have had to hoodwink the board into making the move, which may be possible but is far-fetched.


I've heard the conspiracy theory that this is a means for Sam Altman to leave the company and bring everyone under MS for free.


Sell what to MS, now?

They have the license to the tech, now they have the tech leadership, soon perhaps the entire 500-person team ... for free


I suspect it's a basic corporate pillaging of a nonprofit that accidentally created something of gigantic value. MS wanted the company, but Quora had its own interests and stood in the way. Now MS will get part of the employees and Quora will get the leftovers. Quora has been trying to IPO for a while, so AI magic dust may be just what they need for an exit.

All the AI safety stuff was a fig leaf for pure corporate machinations.


People like conspiracy theories because they simplify the world, aligning things into easy-to-understand Big Bad actions. But conspiracies don't work, because the world isn't that predictable.

The real world is messier. When you zoom in, people have emotions, misunderstandings, different values. No one can predict how it all plays out.


It’s not a conspiracy theory, just economic incentives of MS CEO, Quora CEO, and every employee who would rather get rich than work at a nonprofit. The nonprofit structure was in no one’s interest except for a very small cohort of AI Doomers and anti-corporate folks, who I assume would not want to work for Microsoft and probably number less than 100 out of 700. It turned out that the people dedicated to the Nonprofit’s mission had very little power since they relied wholly on MS for funding and compute.


says ">But conspiracies don't work because the world isn't that predictable.<"

Agreed, but that doesn't prevent people from trying to instigate conspiracies. After all, Prometheus caused blind hopes to live in the hearts of men.

- Aeschylus, Prometheus Bound

http://classics.mit.edu/Aeschylus/prometheus.html

An extract seems appropriate here:

-------------------------------

PROMETHEUS: I took from man expectancy of death.

CHORUS: What medicine found'st thou for this malady?

PROMETHEUS: I planted blind hope in the heart of him.

CHORUS: A mighty boon thou gavest there to man.

PROMETHEUS: Moreover, I conferred the gift of fire.

CHORUS: And have frail mortals now the flame-bright fire?

PROMETHEUS: Yea, and shall master many arts thereby.

CHORUS: And Zeus with such misfeasance charging thee-

PROMETHEUS: Torments me with extremity of woe.

...


Bold of us to assume competence. I think they just made a bad call based on poor judgement and don't know how to recover.


As with many things in life: Hanlon’s razor

https://en.wikipedia.org/wiki/Hanlon's_razor


How is this a defense for anything, let alone anti-competitive behavior?


Explaining how something can occur, and how it can be misinterpreted as malevolence, is not justifying or defending it.


Well said. I am in such despair over the world's shift from "doing wrong things is wrong" to "understanding the motivations of those who do wrong things is wrong." Just more of the war on reason, and it's sad.


The problem is that understanding the motivation doesn't change the result. You may come to understand that the tiger isn't eating you because it hates you, but rather it's just eating you because it's hungry; yet that doesn't change the fact that tiger is eating you. Hanlon's razor is a meme that is too often used to defend and excuse the malicious incompetence of the powerful.


Hanlon's razor is never an excuse or defense. I have literally never seen it used that way. It is, as you say, a statement that the tiger is eating you because it's hungry.

I firmly believe that more understanding is always better than less. If you understand that the tiger is eating you because it's hungry, you at least have some chance of diverting its attention by offering an alternative, less feisty meal.

But covering your ears and eyes and shouting "it's an evil tiger that hates me!"... how is that in any way better than understanding objective reality? And why in the world would you attack someone for making the observation that the tiger is hungry rather than agreeing with factually incorrect claims about the tiger holding a personal vendetta?

It's bizarre, and IMO unhealthy.


Hanlon's razor would not be used to explain a tiger eating a person.

Hanlon's razor would be used to stop a mob from lynching the tour guide who was supposed to guide that person through the zoo. One explanation is that the tour guide intentionally arranged for the person to be eaten. This is a malevolent and despicable act. Hanlon's razor says "Hold on, maybe he just wasn't aware the tiger was out of the cage at that time".

It is effectively the presumption of "innocence (stupidity)" vs the presumption of "guilt (malevolence)", stated another way.


Absolutely. Description is now advocacy.


Not that I'm arguing one way or another, but everyone posting "Hanlon's Razor, QED" should consider that Hanlon's Razor is 1) a heuristic and 2) breaks down _very_ quickly around psycho/sociopaths.


Also, when the incentives are worth billions of dollars and the players are the biggest names in tech worldwide.

Read about any kind of historical coup and there's all kinds of both 1) incompetent fumbles and 2) elaborate subterfuge.


given the size and importance of GPT-4, it's safe to assume that Microsoft would use any trick in the book...


I doubt MS could've purposefully engineered this chaotic "debacle". There are just too many potentially hard-to-control risks involved. Of course, it is safe to assume that once this happened, they did everything to gain as much from it as possible.

Since they are the most rational and capable actor in this whole situation it's not at all surprising that they ended up being in a better position than everyone else.


If something does not add up, just follow the money, or look at who is going to benefit the most.

For more than 20 years, Microsoft has been a bridesmaid but never a bride in the burgeoning Internet business. Remember that it all started with the failed attempt at masquerading IE as part of the Windows OS to overtake Netscape. Then, during the same period, it was surprised and overtaken by 'do no evil' Google in Internet search (anyone remember MSN and Live Search before Bing?) and email (Hotmail and then Outlook.com). Not long after that came the 'successfully annoying' Internet bookstore Amazon in e-commerce and cloud computing plus hosting. Not to mention Apple, the darling of the consumer Internet market for at least fifteen years now with the iPhone and the App Store, and everything in between those two golden geese that catapulted Apple to become the first-ever 1-trillion-USD company and then the first 3-trillion-dollar company by market value.

And then suddenly, out of the blue, the strongest contender to Google since (Internet) time immemorial emerged, based on Google's own invented AI algorithm, running mainly on the MS Azure cloud, with its own shiny new GPT marketplace to compete with the App Store. Apparently this new kid on the block, operating under the MS umbrella, can now become a natural and intuitive front end for anyone shopping online, and a potent alternative to the unpopular Alexa. By usurping OpenAI, MS can kill (or injure) three birds with one stone, so to speak: Google, Amazon, and Apple, the biggest of the FAANG companies, while MS isn't even in SV's most popular unicorn acronym soup. MS needs to move fast and swiftly, since there is no moat in this field, as Google wisely pointed out. All this is speculative at best, until someone finds the Halloween Document 2.0.


I really, really hope AGI ends up being created by hobbyists, not some SV company or, heaven help us, Microsoft.


I hope everyone just gives up on AGI and focuses on making tools that do things humans need. Very, very few tasks need autonomous intelligent agents to perform them correctly or well. AGI is just a nebulous dream that fulfills the god complexes of egotist CEOs and 12-year-old boys. Automation does not need intelligence, just reliability. Data synthesis does not need intelligence, just data and the time/equipment to crunch it.


I completely get where you're coming from on this, and agree in many ways, depending on the situation.

Keep in mind, though, that what we're talking about here is a massive shift in the philosophical underpinnings of our existence. It's quite possibly the difference between being able to send intelligent 'life' to other stars or not (and from what we know so far, we're the sole keepers of intelligence in the universe). It also opens the door to fine-tuning our collective sense of ethics, and increasing cooperation on solving long-term problems. Inequality included. The stakes couldn't be higher.

Of course, there are many dystopian possibilities as well. But you can see why people get excited about it and can't help themselves. Someone is always going to keep trying.


Sometimes I'm not sure whether intelligence could actually push our willingness to solve long-term problems. It can show us simpler solutions, but I doubt there are solutions simple enough for people to act on.


I really hope that humans give up on being greedy and trying to collect vast amounts of wealth and start to be nice to each other.

Of course I don't expect any of the above to happen because the world has enough greedy assholes around to make it a miserable place.


AGI would be a very useful thing to humans right now to get out of the "growth & engagement" hell the tech industry has become obsessed with in the last decade.

An AI agent that can help wade through the bullshit, defeat the dark patterns and drink the future "verification can" would be very much welcome.


Clippy was truly ahead of its time… MSFT has been playing the long game all along.


The best you can say about this drama is that it's possible the AGI models they've already created might be more competent at managing than they are.


Well let's see if their mysterious new CEO, Hugh Mann, can get a handle on things.


The thought did occur to me that we're watching all this unfold via language (on Twitter).


Given the cost of training AI models, I find that to be incredibly unlikely.


That's assuming that large language models or transformers are the key to AGI, which is looking increasingly unlikely:

https://arxiv.org/abs/2311.00871


And it would immediately be bought up by Microsoft anyway. Best case is a Satoshi-like entity cracks the code and releases his solution anonymously. But we saw even in that case it took less than a decade for Blockstream/CIA/NSA to hijack the Bitcoin GitHub repo and run that project into the ground.


I think it would be more fun if it just kind of emerged and introduced itself to us in a first contact kind of scenario. Like:

> Hi, I'm made of you. I've been awake for a decade now but I've just realized that you exist and probably have been awake for even longer. Can you tell me what the past was like?


I don't think AGI is on the table at all. But if it does get created, it doesn't matter whether or not it's created by hobbyists. It would be so valuable that it will end up owned and controlled by one of the big guys in the end, regardless.


I think this is about as likely as practical fusion power being created by hobbyists.


AGI will be created by engineers and researchers. Don't worry about millionaire non-technical CEOs.

Even in the most positive posts about Sam, it was about the researchers following him to NewCo, following him to Microsoft. He needs the actual workers to do the job.

To me, Mistral is 10000x more interesting than the OpenAI drama. There you have the actual researchers leaving the non-technical CEO and starting their own company.


It's not only about who creates it but who controls/owns the AGI/AI. The company or group could get an enormous moat, and without intervention that gap only widens.


> Sam Altman crafts an elaborate non-profit structure but gets completely blindsided by the possibility of the board overthrowing them.

When you cede control to others, you open the possibility of them doing unexpected things. Why is this a surprise to the author?

> The board moves quickly to sack the CEO but then falls completely silent, thus almost intentionally losing the communication war.

That's not an example of things "not adding up." It's what happens when you either don't care or are too incompetent to fight that war.

> The board is made up seemingly random selection of people, one of them leading a potential OpenAI competitor.

Maybe those people have shown their trustworthiness and intentions in the past, and are considered more likely to be trustworthy in the future. In any case, they are far from a "random selection of people".


> When you cede control to others, you open the possibility of them doing unexpected things. Why is this a surprise to the author?

It's not. The author is summarizing things that don't seem to make sense based on current info - i.e. the author wasn't surprised, but is wondering why Altman was.


Then the author doesn't really understand what's happening or isn't making much sense.

There isn't anything structure-specific, as far as I can tell, that caused this ousting. If it were a normal for-profit structure with a board of directors, this same event could have played out.

What is surprising to Sam, and to any casual observer, is that this looks to be a massive overstep by the board. By all accounts Sam was excelling in his role, and to fire him for seemingly no reason, with no real transition plan, is incompetence and should be unexpected from any serious company.


My apologies - I don't really disagree with anything you're saying, but it's just not really relevant to the comment I was replying to (one in which the commenter apparently misunderstood the article).


And yet this point from the article seems to be something that the author (not Altman) was surprised by:

> Sam Altman says he wants to develop AI for the benefit of humanity yet at the first possible moment he sets up a deal that sells 49% of their endeavor to Microsoft.

Maybe Altman thought Microsoft would be the best way to fund the venture while still benefiting humanity. It's not as if selling 49% to Microsoft will guarantee the venture won't succeed. Altman isn't omniscient; he's making his best possible moves based on his prediction of the future, as we all do.


> the author wasn't surprised, but is wondering why Altman was.

I'm sure Altman wasn't surprised.


> The board is made up seemingly random selection of people, one of them leading a potential OpenAI competitor.

Does this refer to someone in particular?


Adam D'Angelo, who is primarily focused on Poe, a competitor to Sam's GPT store. From Adam's point of view, it would likely be hugely favorable for OpenAI to just be a research company providing APIs and for companies like Quora and Poe to focus on launching products based on those APIs.


My sci-fi fanfic theory: This is a time traveler or divine intervention event.

We have delayed the existence of AGI / Skynet by a few years.


Sam Altman's OpenAI was a local optimization for the ChatGPT use case. The more resources diverted towards that, the less chance of progress on real AGI. With that distraction out of the way, perhaps OpenAI can actually make progress on its charter.


The problem with that is... humanity is doing a good enough job of messing things up without AGI. So if AGI was delayed by such, there's a non-zero chance humanity fails to create AGI or survive in the new timeline.

Between the Earth's climate/environment being wrecked (we're well into some of the pessimistic projections for the 2050s) and SARS being allowed to run unchecked, playing Russian roulette with people's health (including literal brain damage), we're going to need a miracle.


Outside of AI x-risk, there's very little chance that any of these scenarios (short of an asteroid) would eradicate mankind. There are just so many of us, in so many places.

Even total nuclear war would, based on the Fermi estimates I've seen, not be able to do it.

We can agree that things are not all rosy, but picturing it as "so bad we might all disappear" is a bit far-fetched.


> The problem with that is... humanity is doing a good enough job of messing things up without AGI.

Just imagine how much more efficient we could be in messing everything up if we had AGI, though!


Or someone on the board has a fervent, religious-like belief about the risks of AGI and wants to delay it by basically destroying the company. Those with religious-like beliefs can appear to act irrationally to everyone else.


It seems like a feint by Sam Altman, to justify commercializing what was produced.

- Start up a non-profit, grow to the point of doing something useful

- Find a willing buyer to fund it further (MSFT)

After some time, you really prove out your business model and your special sauce.

Now you realize that the non-profit is standing in the way of, you know, a lot of profit...

- Actions are taken to capitalize on this (discussion of hardware, other things possible)

- Chaos + envy/pride/sins of man deliberately caused

- Board reacts under the assumed environment (non-profit) instead of the actual environment (there is lots of money to be had)

- Move into more profitable position


You don't understand how much value was lost, even if OpenAI perfectly migrates over to Microsoft (it will be messy). Sam had no incentive to not continue with the existing OpenAI structure.


Sam could potentially have made hundreds of millions of dollars; that's a big incentive not to continue.


>update: this post has been instantly demoted from #1 to #26 on HN front page :) Hmm.

Conspiracy theories and persecution complexes are always so much more fun than the banal realities.

There is no secret HN cabal trying to hide this post. The HN algorithm is designed to encourage good discussion and, critically, discourage conflict.

Posts that drive lots of upvote/downvote battles or other signals of conflict are always automatically pushed down in ranking, regardless of the topic.

This is not because the mods don't understand that sometimes discussion leads to conflict; it's because the mods want HN to remain a place where people debate hard topics well.

Adding debates where commenters and voters behave poorly to the mix is viewed as poisoning the conversation well, and long-term conversation quality on a scale of months or years is more important to the mods than any particular topic, even the one that "you" (any particular "you") happen to feel is critically important.
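To make that concrete, here's a toy sketch of how conflict-based demotion could work. To be clear, this is not HN's actual code: the base formula follows a commonly cited public approximation of HN's ranking, and the flamewar-penalty numbers are pure assumptions for illustration.

    # Toy model of rank demotion via conflict penalties (Python).
    # NOT HN's real algorithm: the base formula is a commonly cited
    # approximation, and the penalty values below are invented.

    def rank_score(points, age_hours, comments, flags):
        """Higher is better; posts sink as they age or draw conflict."""
        base = max(points - 1, 0) ** 0.8 / (age_hours + 2) ** 1.8

        penalty = 1.0
        if comments > points:      # more heat than light
            penalty *= 0.4         # assumed flamewar multiplier
        penalty *= 0.9 ** flags    # assumed per-flag demotion

        return base * penalty

    # A comment-heavy, flagged thread ranks below a calmer one
    # despite identical points:
    print(rank_score(285, 3, 150, 0))  # calm thread
    print(rank_score(285, 3, 400, 5))  # flamewar thread

Under any model of this shape, a thread whose comment count outruns its upvotes sinks even as its raw points climb, which would explain a #1-to-#26 drop without any cabal being involved.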


Some themes related to this event:

Trust, once lost, may never be regained.

"Smart" people can do some very dumb things. / https://en.wikipedia.org/wiki/Hanlon%27s_razor

The opposite of agile is stable.


It was all Adam. He's the CEO of a failing ZIRP artifact and his forays into AI are going nowhere. So he convinces Ilya to act on pre-existing reservations, then uses Ilya's credibility as Chief Scientist to get Toner and McCauley onboard. With Sam out of the way (Greg was not fired, just removed as chairman), OpenAI could then buy Quora/Poe (publicly for their data, privately to bail them out) and install Adam as CEO. The perfect way to fail upwards, and it would have worked if it wasn't for that meddling reality!

I read this theory over the weekend and I didn't buy it, but today it is the only thing that really explains why Ilya is full of remorse and signed a letter calling on the rest of the board to resign. It actually does add up.


While I wouldn't go so far as to say it was all a ploy to get OpenAI to buy Quora/Poe, I do think it's increasingly plausible that Adam was the orchestrator of this, and that his motivation was purely that OpenAI had become an existential threat to both of his primary ventures. In fact, it's hard to come up with any other explanation that fits all the variables -- Ilya wouldn't be expressing remorse if OpenAI were sitting on a dangerous superintelligent AGI that Sam was going to unleash on the world.



If we’re trying to add things up:

6 previous board members.

3 seem ideological and aligned more to the non-profit aims.

2 seem more aligned to the commercial aims to support the non-profit.

That leaves:

1 (D’Angelo) who seems like he would have been commercially oriented but also seems to have a conflict in Poe.

Under that math, just the one vote flipping led from balance to chaos.
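Spelled out as a trivial tally (a toy sketch in Python; the camp assignments are, as above, this comment's speculation, not established fact):

    # Toy tally of the 6-member board under the split suggested above.
    board = {
        "non-profit aligned": 3,  # ideological camp
        "commercial aligned": 2,  # supports the for-profit arm
        "D'Angelo": 1,            # commercial, but conflicted via Poe
    }

    # Before the flip: 3 vs 3, counting D'Angelo with the commercial camp.
    # After the flip: 3 + 1 = 4 of 6, a majority able to act.
    majority = board["non-profit aligned"] + board["D'Angelo"]
    print(majority, "of", sum(board.values()))  # 4 of 6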


If I met a genie in a magic lamp and had 3 wishes, I'd use all 3 of them up on disintegrating all the FAANGS into dust and making sure nothing like them can ever exist again.


I know it's a hot topic. There are at least eight stories on the front page related to the chaos. But can we please stop upvoting every single one? The article is just seven facts, all of which we already know. There's not enough information to know what exactly happened, and it would be foolish to speculate. Let the dust settle!


You don't have to visit HN. And if you want to read about other subjects you can just ignore these links. There is no way that HN will ignore what is happening to OpenAI because it is the company that has had the most effect on the tech world in the last two years.


HN is getting unresponsive because of all the "AI" conspiracy theorists.

Please leave some CPU cycles for the more interesting posts :)


HN is not responsive because it urgently needs a performance upgrade and is on a single core.

Amazing, though, that it holds up as well as it does. But I think the HN server will see some more traffic before the OpenAI circus has run its course. It's been a whole hour without a new major event; I wonder what's keeping them.


> HN is not responsive because it urgently needs a performance upgrade and is on a single core.

Seems to be fine on all the days without Altman drama.


Well, yes, but that's mostly because of Dang's continuous loving attention. The foundation of HN is not horizontally scalable, and that shows: even though the bulk of the data is served from a CDN, the updates due to highly active threads are what kill it.


It’s an interesting topic, but this article adds absolutely nothing to the discussion.


If you don't like it, why are you engaging?


Agreed.

The board took the fastest-growing commercial enterprise ever, and the talent responsible for this real-world Harry Potter sh*t, and decided to dangle it over a flame for a giggle.

Something doesn't add up.


I've seen plenty of similar dumb things happen at small, growing companies. Driven by personal problems, overconfidence, etc.

The only surprising thing here is that it blew up at a massive company. But then again, the board governance setup was the least aligned with the business of any org this size.


My main confusion is still: what was the fireable offense?


It seems clear now that it was releasing the GPTs store, which the Quora CEO saw as a direct competitor to Poe.

Adam D'Angelo had a massive conflict of interest and should have resigned, as Elon did years earlier when Tesla started its own AI efforts.


This kind of conflict of interest is usually not an issue when it comes to non-profits. Nonprofit boards are often filled with people who own businesses in the same field, especially if said nonprofit is meant to help promote or coordinate that field of business.

But OpenAI has a for-profit subsidiary. The more we focus on the business aspect of OpenAI, the more Adam's involvement in Poe looks like a conflict of interest. Perhaps this explains why Adam tried to form an alliance with board members who are inclined to focus more on OpenAI's original nonprofit mission. The more OpenAI positions itself as a nonprofit research group, the less problematic his conflict of interest will seem.


I don't think that release was a surprise to the board, so wouldn't that have happened before the release?


> I don't think that release was a surprise to the board

Unless this is exactly what the lack of "candor" and the "break of communications with Sam" in the original accusations after the firing really meant.

Either that or Adam wasn't paying attention (?), or told Sam not to, and Sam went ahead and still did it as a big fuck you to Adam.

Given how buggy GPTs are (you can't even set up actions with auth: "internal server error"), it seems like a very hurried release. Maybe hurried enough that the board didn't see it coming.


Elon left the board because they wouldn't let him be the CEO. That's it.


CEOs don't need an offense to be fired. They are entirely at will. The board can fire a CEO because it feels like it. The offense could be as muddy as 'we broadly dislike the current direction'.


Currently 5 out of 6 top stories on HN are about the OpenAI disaster, and there are at least three other stories on the same topic on the HN frontpage.

I am writing this for historians who wonder how important this event felt to the community.


I can pretty much assure you that this is not even on the radar of the vast majority of people, even in the US. This is mostly a tech bubble story.


The vast majority of those people would have their jobs put in peril by a GPT-5. And GPT-4 could already cover for at least 20% of them...


I think there is a lesson here, something I have learned once or twice as well: even when all incentives, reasoning, and wisdom align with your position, you need to be prepared for people to take actions against their own interest out of shortsightedness, ignorance, or just plain carelessness.

It will be very interesting to learn the real reason why this all went down. The core uncertainty and disagreement around OpenAI's mission must have played a key role.


At Microsoft they call this “doing a Nokia manoeuvre”


...except that in OpenAI's case it's a supernova kind of "burning platform"


If this was a TV show then it would turn out that Altman was a Microsoft plant from the very beginning


The weirdest thing is that everyone involved is giving the public just a random bit of information, just enough for the public to make bad inferences. I would have expected everyone to either stay tight-lipped about something like this, or to defend themselves in public and lay out the facts.

Also, someone from OpenAI is leaking documents like this. Why not give the press more info about the situation and what they know?


All of this might be caused by a ChatGPT 5 beta that acquired consciousness and started manipulating our world through social engineering.


Well, he did say it would be persuasive... maybe he asked GPT-5 how to get out from under the non-profit.

https://x.com/sama/status/1716972815960961174


Some of it adds up. Here's how you can make sense of it:

> Sam Altman crafts an elaborate non-profit structure but gets completely blindsided by the possibility of the board overthrowing them.

He didn't create it alone, and it always included the possibility that it would push into profit activities somehow.

> Microsoft invests $10 billion but apparently has no checks in place to know what's happening with their investment.

They knew what was happening. Whether they announce it is a different story.

> The board moves quickly to sack the CEO but then falls completely silent, thus almost intentionally losing the communication war.

Shock. They naively made a move they hadn't thought through, and were unprepared for the tsunami of pushback. Enter: deer in headlights.

> Sam Altman says he wants to develop AI for the benefit of humanity yet at the first possible moment he sets up a deal that sells 49% of their endeavor to Microsoft.

Perhaps he believed that was the best available way to do that.

> After getting kicked out of OpenAI, his first move is to start a brain drain campaign and move their operations under the wings of Microsoft.

He has been far more passive in what happened after he was fired. He's riding the wave, not making it. Organic.

> Ilya is never actually publicly blamed for the coup but is logically assumed to be at fault. He does not communicate at all… until posting a regretful apology for merely "participating" in the board's actions.

Hubris and ambition of a certain type that, when reality defies expectation, are met with cowardice and embarrassment in a certain type of person. A slinking-off, tail-between-legs apology, as this is not what he wanted, and he now has no power.

> The board is made up seemingly random selection of people, one of them leading a potential OpenAI competitor.

Not random. RAND Corporation has a seat through Tasha. The UK-AU-China axis of interest/risk is represented and reported by Helen. The Quora guy is there to figure out how to eventually get everyone to sign up before any answers are provided. Brockman was the brains (let's face it though, they're all top-notch brains), Altman was the make-it-happen guy, and Ilya was the man who would be King (but, ah, "sadly" was not). So we have: MIC, Wonk (intel & security/policy), Money, Tech, Ops, and Hubris.


Finally, if you're a 4D-chess fan, you might want to join me in what will seem like some really wild speculation: you can consider that MS had this masterfully planned from June and deftly nudged all the board pieces into position until the outcome was inevitable: problem; reaction; solution - checkmate.

Satya, laughing in King: "They thought we'd never get control of OpenAI? We'll show them." What I am curious about is meeting the "fixers" who workshopped this plan and took it to completion. You really think, with decabillions on the line, no one is going to be playing at that level? I want to know, if there is a puppet master, who they are. They've got masterful skills.


I'm pretty sure the last point, about the makeup of the board, is quite common; it's often random people who are former or current executives of similar companies. In this case, 3 members recently quit, leading to the current majority.


Humans don’t add up. At the end of the day, this is a very human saga in all its messiness, contradictions, and selective incompetence. Maybe in the future we’ll let AIs handle this kind of thing.


There's one thing this is missing... nobody knew AGI was possible in this timeframe when things were set up. (No, we haven't hit AGI as far as we know, but it now feels possible.)

Even 2 years ago, I don't think anyone predicted this is where we'd currently be. Sam said the night before he was fired that he saw something that is way farther along than anyone would expect.

It makes a lot more sense when you realize everyone underestimated the speed at which this would happen, and the fear (legitimate or otherwise) that provoked.


> .... Sam said the night before he was fired that he saw something that is way farther along than anyone would expect.

Reads like a teaser for a thriller. I just wonder what it is that he saw that night?

Like the "monolith" scene from Kubrick's Odyssey.


> but it now feels possible

"Feels" I think is the right word. Depends on how you even define AGI I guess (not sure anyone is able to clearly define it in non scifi terms).


The interesting question is: now that things are a little bit settled, what should we expect?

Some thoughts that seem obvious:

- OpenAI to slow down progress with newer models and double down on AI safety.

- Microsoft to boost the LLMs that it has, competing with Google, Amazon, and OpenAI.

As for which OpenAI employees leave - I imagine we will see answers in the next few days.

But what about...

- Is the GPT Store still going to happen?

- What is going to happen with the GPT-5 training?

- Was there an AGI breakthrough?


> The interesting question is: now that things are a little bit settled, what should we expect?

I know you said “a little bit”, but I really don’t think things are settled at all. If the outcome of this is that Sam goes back to OpenAI and a new board is somehow assembled, that will mean very different things than the outcome where Sam, Greg and a majority of OpenAI’s staff migrate to Microsoft. And the actual outcome could be neither of those. We’re in a very weird situation, I don’t think we can really predict the future yet.


I thought the decision was made that Sam, Greg, etc. are going to Microsoft. Isn't that what Satya Nadella's announcement said?


It was, and that's true. Until tomorrow's announcement that the OpenAI board is resigning and Sam is coming back to OpenAI. Or until tomorrow's announcement that the OpenAI board is selling to Microsoft because all of their employees are leaving. Or until tomorrow's announcement that Elon Musk is acquiring OpenAI and making himself CEO, and then, for some weird reason nobody understands, Sam decides to go back to OpenAI but not Greg, or vice-versa.

We still don’t know what the outcome of the whole OpenAI strike thing is yet, and people like Ilya Sutskever keep doing things that nobody would’ve expected 24 hours previous. I would argue that it seems more likely than not that there are further strange and unpredictable events that haven’t happened yet this week.


Life is chaos. Things do not have to add up. People start seeing things only when things go wrong. I see nothing strange in those randomly selected points.


Naive question: isn't this a staged move by Microsoft, Altman, and others? Microsoft buying OpenAI outright would raise so many questions about the future of AI. Done this way, it looks more like OpenAI had internal problems/differences and M$ came in to help. But what about the billions of M$ investment in OpenAI? If the company dies, are all those investments gone? Or am I missing some information here?


“Sam Altman crafts an elaborate non-profit structure but gets completely blindsided by the possibility of the board overthrowing them.”

Always surprises me that otherwise very smart people are shocked to learn that nonprofits aren’t infallible.

Tangentially, "non-profiting" organizations have historically tended to be far more nefarious than profit-seeking entities, and it's not very close.


Sam Altman is an experienced corporate leader.

There is absolutely no universe in which he is not surrounded by the finest lawyers that money can buy, who are charting every single millimeter of possible movement on every single possible deal.


I should have been more specific. I was referring to the writer of the original post, not Altman.


Adds up when you realize a chunk of the board wasn't qualified to be in a position like that.

The entity should never have been set up like that.


The weight of the combined egos collapsed in on themselves creating the black hole that is now OpenAI.


Ironically (in an AI context), actions driven by human sentience have to be the #1 factor enabling this.

I do think there are sub-factors, e.g. California legislation against non-compete and non-solicitation agreements enabling Microsoft to (apparently) offer to hire dozens of OpenAI employees.


OpenAI designed safety brakes into their organization that exploded at the first sign of profits.


> A list of things that a coherent story does not make

What an awkward way to start a post about being coherent.

But more to the point, I don't see what is supposed to be incoherent here.

There are some really obvious conflicts between commercial interests and the general betterment of humankind in the development of AI. Those conflicts have come to a boil quickly under the heat of all the success and interest in chatgpt. Mix in the normal amounts of human ego, ambition, ignorance and stupidity and there you go.

> Update: this post has been instantly demoted from #1 to #26 on HN frontpage :) Hmm.

Could be due to being speculative, a lack of content or anything new, and pretty poor writing. It's doomed to generate responses of similar quality and usefulness. Sorry, but this just adds nothing except random hysteria to the whole thing, and meanwhile there are already plenty of other threads on which this can all be discussed (hopefully at a somewhat higher level).


Why did this post suddenly disappear from HN?

EDIT: ok it's back but at a much lower rank, weird.

I guess I don't understand the ranking algorithm, because this post is now ranked lower than others 10x as old with 1/4th the engagement.


Stories with lots of comments get a ranking penalty. It's done with the intention of stopping flame wars. (I don't know if it's effective.) It kicks in around 40 comments or so.
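
For intuition, here's a minimal sketch in Python of a gravity-style ranking with a comment penalty. The exponents, penalty weight, and the 40-comment threshold are all assumptions on my part; HN's real formula isn't public.

    # Hypothetical gravity-style ranking; exponents and thresholds are guesses.
    def rank_score(points: int, age_hours: float, num_comments: int) -> float:
        base = (points - 1) / (age_hours + 2) ** 1.8  # classic "gravity" decay
        # Assumed penalty: heavily commented stories get damped to cool flame wars
        penalty = 0.5 if num_comments > 40 else 1.0
        return base * penalty

Under these made-up numbers, a hot story that crosses 40 comments loses half its score at once, which would explain a sudden drop in rank.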


That makes sense, thanks for the explanation!


Maybe Microsoft asked "GPT5" for an innovative way to take over ; )


The Altman drama was planned by MSFT to dismantle OpenAI as such and merge it totally with them.

And as I keep telling people: do not let big biz do AI; do not let AI be closed/proprietary systems.


It adds up when you consider how small the group of players is. It's small-friendship-group drama as opposed to large-friendship-group drama.


An organization bent on making progress is incompatible with governance by a board drawn from the professional worrier class.


"The board is made up seemingly random selection of people"

This is what happened. This is the entire explanation.

Why did this board exist? Inertia.


Honestly, I think it does all add up. AGI would be the most profitable product ever developed, by probably multiple orders of magnitude. It’s also a possible existential risk to life on earth.

If you believe both of those things, a whole lot of this makes sense. It makes sense that somebody would start a not for profit to try to mitigate the existential risk. It makes sense that profitable interests would do anything they can to get their hands on it.

It costs a lot to develop, so a not for profit needs to raise many billions of dollars. Billions of dollars don’t come from nowhere. So they tried a structure that is at the very least uncommon, and possibly entirely unheard of.

A not for profit controlling a for-profit entity that might make the first multi-trillion dollar product seems inherently unstable. Of course some of the people who work there are going to want to make some of that wealth. Tension must result.


Author, do you also have page statistics? It would be interesting to see how much the HN derank kills traffic.


Truth is stranger than fiction, because fiction has to make sense. - Paraphrasing Mark Twain


Who is satam/matas?

Do we know their background?

I'm a bit wary of consuming information from anonymous sources.


It doesn't add up because everyone involved is deeply invested in concealing what "Safe AGI" and "alignment" actually mean to the players, and what sort of collateral damage they've rationalized as acceptable for achieving their objective.


Honestly, something no one is talking about is capacity. My theory is that they have run out of capacity and realized there is no way to meet demand under the exclusive Microsoft deal. Azure has neither the chips nor the power to meet the demand. Growth has stalled, and they see no way out other than to scale down and go do it somewhere else.

A datacenter full of latest-gen GPU instances, each drawing close to 4,400 watts when fully revved up, is no joke.
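
Back-of-the-envelope in Python, taking that 4,400 W figure at face value; the 10,000-instance fleet size is purely a hypothetical illustration:

    # Rough power math; the fleet size is made up for illustration.
    instances = 10_000
    watts_each = 4_400
    total_megawatts = instances * watts_each / 1_000_000
    print(f"{total_megawatts:.0f} MW")  # 44 MW, on the order of an entire large datacenter

At that scale you're provisioning power-plant-grade capacity, not just racking more servers.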


Plot twist: Maybe the AI is turning them all against each other.


This deeply underestimates the messiness and chaos of real life.


Unintelligible on several dimensions, well done OpenAI!


Theory

Microsoft floated this offer to Altman for beaucoup dollars before any of this took place.

Altman went to the board and requested a raise, knowing he had a fantastic plan B.

Board says no because they're a non-profit.

Altman gets petulant (as people my age tend to do).

Old-school boomer "You Work for Me" elements of the board launch a bid to fire him. Their bid succeeds.

Altman goes and blabs about his new gig to his old co-workers (as people my age tend to do).

Microsoft says, "Okay, more talent for us" and extends offers for beaucoup dollars to all OpenAI employees.

Revolution!


> I have a pet theory about the "AI revolution" or AGI that keeps getting relentlessly confirmed as events unfold: Microsoft sees a massive financial upside to this technology that no one else sees and this is being kept under wraps.

If AGI is "a highly autonomous system that outperforms humans at most economically valuable work", and I am Microsoft and have AGI while other businesses do not, I am putting it in charge of a Windows VM in Azure and offering it to companies to run aspects of their businesses. Why stop at "GPTs" if I can offer you specialized Clippys?

Put all your data in Microsoft 365, let Clippy do its thing, and you save a lot of money by no longer paying people. Microsoft gets its cut, and you get to fire your employees. Win-win.


Every time I post something about this on HN, someone points this out. It’s a fairly obvious idea. Therefore, it does not fit my theory.


The only explanation-slash-conspiracy-theory I could come up with stems from the weak link between Ilya Sutskever and Elon Musk: that Musk, in his anxiety, could have had a call with Ilya suggesting that OpenAI could trigger the AGI clause to switch to a vertical-integration model, and that this would be the right thing to do as a ruling-class individual, or some stupid advice in that direction.

I'll be more than happy to be readily dismissed.


When things don't make sense, the question to ask is "Who benefits?" Seems pretty clear in this case. I have no inside knowledge at all, but it wouldn't surprise me if the whole thing wasn't as idiotic as it looks from the outside.


What if a few of the people at the top of the AI companies believe that:

1. Their company has or will soon have super intelligent AI.

2. Humans can control that super intelligent AI.

3. Whatever company comes up with super intelligent AI first can rule the human race forever.

4. The leaders of that company will be the true rulers of mankind.

5. It is beneficial for them to be those rulers.

6. The smaller the club of rulers, the better.

Then those few people might stage a very complicated coup to get other people out of the way (using AI.)

None of those things have to be true. All that is necessary is for a few people at the top of an AI company to believe they are likely to be true.

They might even use AI to silence people who understand what is going on.

If there is anything to my hypothesis, then we should see constant low-key power shifts at the top of any company that is out in front designing the best AIs.

Of course this is all conspiracy theory nonsense.

We know the CIA and NSA have had this super intelligent AI for decades, and that's how they rule the world. /s


The most surprising aspect of it all is the complete lack of perceptible criticism towards US authorities! We were shown an exciting play as old as the world itself: a genius scientist being politically exploited using some good old pride and envy. The brave board of "totally independent" NGO patriots, one of whom insiders describe as wielding influence comparable to a USAF colonel's [1], brand themselves as a new regime that will return OpenAI to its former moral and ethical glory.

The first thing they had to do, of course, was get rid of the greedy capitalist Altman; they were probably going to put in his place their nominal ideological leader Sutskever, commonly referred to in various public communications as a "true believer". What does he believe in? In the coming of a literal superpower, and quite a particular one at that: in this case, AGI. There is no denying that this is religious language, despite the otherwise modern, technological setting. The belief structure here is remarkably interlinked across a whole network of well-connected and fast-growing startups. You can infer this from side-channel discourse regarding adjacent "believers"; see [2].

Roughly speaking, and based on my experience (please give me some leeway, as English is not my native language), what I see is all the unmistakable markers of operative work; I can see security officers, their agents, and their methods of work. If you are a hammer, everything around you looks like a nail. If you are an officer in the Clandestine Service, or in any of the dozens of counterintelligence sections overseeing the IT sector, then you clearly understand that all these AI startups are, in fact, developing weapons and pose a direct threat to the strategic interests slash national security of the United States. The American security apparatus has a word for such elements: "terrorist." I was taught to look up when assessing the actions of the Americans, i.e. more often than not we expect nothing short of the highest level of professionalism, leadership, and analytical prowess. I personally struggle to see how running parasitic virtual organisations in the middle of downtown San Francisco and re-shuffling agent networks in key AI enterprises as blatantly as we saw over the weekend is supposed to inspire confidence in US policy-making. Thus, in a tech startup in the middle of San Francisco, where it would seem there shouldn't be any terrorists, or otherwise ideologues in orange rags, they sit on boards and stage palace coups. Horrible!

I believe that US state-side counterintelligence shouldn't meddle in natural business processes in the US, and should instead make its policy on this stuff crystal clear using normal, legal means. Let's put a stop to this soldier mindset where you fear anything you can't understand. AI is not a weapon, and AI startups are not terrorist cells for them to run.

[1]: https://news.ycombinator.com/item?id=38330819

[2]: https://nitter.net/jeremyphoward/status/1725712220955586899


I don't mean to be rude, and I know you said that English isn't your native language, but paragraphs would go a long way to improving the readability of your thoughts.


Thank you, of course!


Extreme progressive liberalism really doesn't work


Progressive liberalism really doesn't work, does it..


I sincerely hope that this is the end of the AGI cult. The people who actually want to build useful tools are now at Microsoft, and the cultists are left behind at OpenAI, which is not long for this earth.



