Microsoft Swallows OpenAI's Core Team – GPU Capacity, Incentives, IP (semianalysis.com)
238 points by rbanffy on Nov 20, 2023 | 121 comments



Maybe this is for the best. The ones who are in for the money can go to MS, and the ones with a higher calling can stick with OpenAI and we'll see who wins.

I'm convinced an actual, true open approach (ideally open source) will win out. Everyone thinks MS has executed a masterstroke, but more likely they've shot their load too early, with LLMs only being a signpost on the way to higher-order AI (AGI is an ill-defined fantasy).


Are you really expecting greed to lose? Corporate greed is how we got to where we are currently


This is such a complete misreading of what's happening. Not sure why I keep seeing this on HN.


That's just one view (as is mine), no one knows what's actually happening.

In my view Altman represents the 'let's get lots of money' side of things and not much else. The deals with MS, ME financiers, SoftBank, and a Jony Ive collab make that pretty clear.

Maybe it's not that simple, but I'd say it's broadly correct.


It seems reasonable to say that AGI will take a ton of resources. You'll need investors for power, GPUs, researchers, data, and the list goes on. It's a lot easier to get there with viable commercial products than handouts.

I'd be willing to bet that between Sam's approach and the theorized approach of the OpenAI board we're discussing, Sam's approach has a higher chance of success.


Since AGI isn't a thing, no one knows what it will look like or if it will even exist.

The biggest breakthroughs in science do not come from those with the most money. It's all ideas.


OTOH humans are a non-artificial GI, and we can use ourselves as an anchor for estimates of what we'd need for an artificial equivalent.

About 1000x the complexity of GPT-3 and much slower would be the best guess right now.
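For a rough sanity check on that figure (assuming, very loosely, one synapse ≈ one parameter, and the commonly cited ~1e14-1e15 synapse count for a human brain):

    # Back-of-envelope only; synapse counts are rough estimates
    # and a synapse is not a parameter in any rigorous sense.
    synapses_low, synapses_high = 1e14, 1e15
    gpt3_params = 1.75e11  # GPT-3: 175B parameters
    print(synapses_low / gpt3_params)   # ~570x
    print(synapses_high / gpt3_params)  # ~5700x

"About 1000x" falls right in the middle of that range.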


Looking at humans, how they're trained, and their wetware makes me believe that AGI, as most people understand it, i.e. a superhuman-like intelligence, will never exist. There will be powerful AI, but it won't be human-like in the way people think about it now.


That definition ought to be reserved for ASI (S meaning super) not AGI (G meaning general).

That said I agree "human like" is unlikely, although LLMs and diffusion models are much closer than I was expecting.


That's because their training source is human output. Human In Human Out (HIHO).


Even so, I was expecting more dissimilarities, or even just types of inappropriateness that are very human. Humans are a broad bunch; there's no reason the LLMs wouldn't just default to snarky and lazy, like the example from the OpenAI Dev Day of someone who fine-tuned on their Slack messages, asked it to write something, and got back "Sure, I'll do it in the morning".

Despite people calling them stochastic parrots and autocomplete on steroids, ChatGPT is behaving like it is trying to answer rather than merely trying to continue the text the user enters. I find this surprising.


Precisely. Breakthroughs are often cleverer than brute force, “throw more compute/tokens at it” approaches. Turning some crucial algorithm from O(n) to O(log(n)) could be an unlock worth trillions of compute time dollars.
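To put numbers on that (a toy sketch; n here is just a stand-in for whatever a hypothetical core training loop iterates over):

    import math

    # Hypothetical workload: items touched per pass of some core loop
    n = 1e9
    linear_ops = n               # O(n) algorithm
    log_ops = math.log2(n)       # O(log n) algorithm: ~30 steps
    print(linear_ops / log_ops)  # ~3.3e7, a thirty-million-fold reduction

Multiply a reduction like that by cluster-years of GPU time and the "worth trillions" framing stops sounding hyperbolic.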


If this were true, Yudkowsky's MIRI would have solved AGI a decade ago. Turns out you need money and lots of compute power, not just people sitting around talking about the issues.


> If this were true, Yudkowsky's MIRI would have solved AGI a decade ago.

Isn't MIRI focussed on “trustworthy reasoning”, not AGI more generally, and doesn't it see untrustworthy AGI as an undesirable thing to develop, even as an instrumental step?

So, literally, isn't “solving AGI” an explicit anti-goal?


The article makes some statements that feel like conjectures rather than facts. For example “likely many to move to msft” and more significantly “PPUs will be refreshed with MSFT stock and 10mil salaries”.


These are extremely safe conjectures.

84% of OpenAI employees have signed a statement that they will leave OpenAI if the board does not resign. The letter they signed states that Microsoft has made an open offer to any OpenAI employee. And Satya is almost certainly not going to spoil this moment with stingy packages. He is, after all, essentially acquiring a company that was valued at $90 billion, for nothing.


Yeah, even if MS have to pay out the remainder of the $1B they promised OpenAI (in cash and Azure credits), I'd say it's still a pretty sweet "acquihire". (Though I'm not sure how much of a clean room they will have to put everyone in.)

EDIT: And to continue my "thinking out loud": MS have no real need to cut ties with OpenAI (yet); who knows, OpenAI might just turn out fine after all this. As S.R. Hadden once said: why build one when you can have two at twice the price?


>Though, not sure how much of a clean room they will have to put everyone in

Why would they need a clean room at all? Part of their investment was almost completely unfettered access to OpenAI IP.


From the article: "Microsoft has full legal rights and access to the weights of the base GPT-4 model as well as the various fine-tuned versions and DALL-E 3." If that's correct, Microsoft pretty much owned them already.


Not really; weights are just the magic that tells the machine what to do. If they don't have access to the training mechanism and/or the machine, then they don't own OpenAI. I guess they are attempting to pirate the rest through employees; I hope they fail.


A lot of the SemiAnalysis posts are just conjecture (and the author's opinions) stated as fact. You can see this play out in their many Qualcomm vs ARM posts.

It's been brought up several times in the past, but people seem to respond more to the drama they put forward than to factually correct information.


+1, careful with this site. Like most of the coverage, it's maximalist horse-race stuff very far from the tech and facts on the ground. It often wildly exaggerates its knowledge; for instance, it is the premier source of the mind-virus that GPT-4 is a MoE model with a trillion parameters.

It always appears plausible on its face if you don't know much about the tech, and if you do know the tech, you still need to have inside sources to really know how far off it is. And even once you know the tech and have inside sources to confirm they make stuff up, people are very reluctant to believe it, so it continues its march unimpeded.


So, btw, was that "gravy train" nipped in the bud?

Was the majority of OpenAI's employees threatening to leave the beneficial and good outcome that you were expecting for the company?


Would gently point out this is abnormal behavior for HN. Been here 13 years and I don't think I can recall a time when a non-dead account was following people around to get followup on threads from days ago.

It's perfectly rational for me, a stranger who doesn't work at OpenAI, to say that if I worked at OpenAI, I'd be doing it for the "Open" part and wouldn't exactly enjoy the shift to commercialization - which, coincidentally, has been widely reported, now and previously, as being an issue for all employees, not just the board.

It's also perfectly reasonable for me, a stranger who doesn't work at OpenAI, to not be able to know that was outweighed by their appreciation of leadership.

I recommend taking things a bit easier; trying to make everyone else use your frame of reference of conversation as a scorable game with binary opinions is a losing battle. Been there. Better to, as dang says, come with curiosity, and learn from what you don't expect to hear.


> come with curiosity

Indeed, and I was curious as to your answer. There were a number of people a couple days ago who were talking about the mission, and how it's a non-profit, and how it was important to stay with that mission.

And I was curious to see what those people think now.


Nah, that's not the case, we can see your comment above and it's mean-spirited


You are the one talking about mean spiritedness, scorable games and battles, not me.

You can feel however you want about your previous opinion and your now new one.

But anyway I am glad that I got your answer to the question, which is that it seems like you no longer believe that this thing was beneficial to almost anyone involved, even those who believed in the mission.


People are hallucinating that this could go through easily in a $3T megacorp. Level 68 at MS makes about $700k TC and is equivalent in level to a Google L8. You think they're just going to bring in hundreds of people all at L10 or something? That's not how any of this works, as any ex-OpenAI engineers will soon learn.


They'll get a raise or other financial incentive for sure, but since they're all so willing to jump ship, it's more that they want to get away from the existing one and make a horizontal move. That said, everyone who has OpenAI on their employment history is set wage-wise for the foreseeable future, regardless of what they end up doing.


I doubt that; Satya Nadella literally met Sam Altman the next day (or night) after the firing.

The fact that this happened, and that Satya Nadella can make a move on the WEEKEND, is a pretty big deal.


This part seems like conjecture:

> What’s more important to understand is if Microsoft has legal direct access to all the data and code used for pre-training and RL. It is obviously all stored on Azure, but if the new Sam-led internal team can freely access that, they can basically start exactly where they left off without much of a hiccup. If they cannot get it, then we estimate that it could possibly lead to only a 4-6 month delay vs prior.

Does anyone actually know if MSFT has access to OpenAI's training data? Hard to imagine the human-created/curated portion of it being redone in 4-6 months.

I've read there's some limitation in the OpenAI-MSFT agreement of it being limited to "AI" vs "AGI" technology, but no details of how that determination is made. Maybe time for OpenAI to play the AGI card.


Also I really doubt taking all that data and doing whatever with it would be legal. I'd be really surprised if MS's license with OpenAI allows for unfettered access and unlimited use.


I am somewhat reminded of Mike Pondsmith's Cyberpunk world, where corporations effectively own the nation-states they are headquartered in.


I can recommend reading about the "Chaebol", which is effectively this in South Korea. Five corporations got so huge that they control politics, and the C-staff of those companies is basically immune to whatever the legal system throws at them, because they can always just threaten to fire so many of their workers that the politicians give in.


There's a name I haven't heard in a while. I'm so glad for the revival of that IP/content. From what I can tell, all the early-80s cyberpunk literature basically had the same theme, so the classic authors are all good reads as well. It must have been something about late 70s / early 80s USA that made everyone feel like we were on the verge of corporate takeover of government. Understandably, I suppose.


> It must have been something about late 70s / early 80s USA that made everyone feel like we were on the verge of corporate takeover of government.

Yes and no. At the time, Cyberpunk was a fringe movement by some rebel authors. Hence the "punk". Until then, mainstream science fiction was very much obsessed with nice utopias that used technology and communication for the benefit of society in far distant futures. Cyberpunk tore it all down and pointed at the possible nightmare of a not so distant future where all pervading technology leaves you cluelessly behind, crushed under the foot of megacorporate feudalism.


Yes, and now basically all mainstream sci-fi is cyberpunk and depressing. It'd be nice to go back to the mainstream being optimistic, or at least neutral.


You'd have to be a pretty damn good author to make that believable!


> It must have been something about late 70s / early 80s USA that made everyone feel like we were on the verge of corporate takeover of government. Understandably, I suppose.

The rise of mass media and opaque conglomerates, along with the poor economics of the late 70's through 80's that saw many (mostly) manufacturing jobs get nuked by boardrooms.


I listened to a college professor talking about Neuromancer, and he connected Republican president Ronald Reagan's "Reaganomics" to the theme of corporations taking over the world.



Sounds like Samsung.


Chaebol (Korea) and Zaibatsu (Japan). The former are still very much alive and kicking, while the latter were more or less snuffed out in the post-war period (as is my understanding).


I just heard a Korean proverb:

There are three unavoidable things in life: Death, Taxes and Samsung =)


Every time that guy says "generational wealth" I throw up in my mouth a bit.


Thing is, he's correct. Early shareholders of many unicorns have cashed out tens of millions of dollars post-IPO; that kind of wealth has no other name but generational, since even if you blow half of it on early-stage VC and a quarter on "hookers and blow", the remaining quarter will pay enough in yearly dividends that you and your entire offspring will never have to work a single day in their lives again.
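To put rough numbers on it (a minimal sketch; the $40M cashout and the 4% yield are made-up illustrative figures, not anyone's actual payout):

    cashout = 40e6                        # hypothetical post-IPO cashout
    remaining_quarter = cashout * 0.25    # after the VC bets and "hookers and blow"
    annual_yield = 0.04                   # rough dividend/withdrawal rate
    print(remaining_quarter * annual_yield)  # $400k/yr, without touching principal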

A side note: the fact that there are so many tech unicorns in the US is self-reinforcing, because the founders and early employees of IPO'd unicorns have usually invested their money into either their own venture or those of some random people they knew, and repeated the cycle. The rest of the world barely got anything from the insane valuation explosion in the tech sector of the last 15-20 years.


Well, yes, but you see, talking that way about "generational wealth" is meant to signal approval for wanting to create a permanent dynasty of rent seekers. Even a lot of billionaires disapprove of that.


>create a permanent dynasty of rent seekers

Living off inherited wealth isn't rent seeking; your dynasty doesn't need to collect any rents if you've left a large enough sum of money to them in a trust. They're living entirely off the money you earned fair and square.


> They're living entirely off the money you earned fair and square.

And who actually makes that money? Those who pay rent: either actual rent, if you invested in real estate, or one kind of fee or another that you(r kids) can set (almost) at will.

Getting to the point where you are a rent seeker is the core issue at the heart of SV-style capitalism. Use and burn enormous amounts of VC to destroy entrenched industry by price dumping (and in some cases, like taxis, actually providing a better service), then either acquire or destroy your competition, and finally you have complete control over the market, in a way that would require enormous amounts of money for anyone to even be anything close to a threat to you (and at that point, you acquire them). Additionally, use the income from your market stranglehold to pivot into new ventures, because you can do so with ease.

The best examples of how that played out are Microsoft, Facebook, Amazon, Google and, to an extent, Apple: utterly dominant in their respective fields, and all in control of one or more money spigots. And this is why these companies were and are valued at the valuations they have... investors assess companies on their chance of getting to such a dominant position, to then reap the benefits of the money spigot.


Deployment of capital is not the same thing as rent seeking.


Living off inherited wealth is kind of the economic definition of rent-seeking. Rent is a technical term in economics, going back to the times of lords and serfs.


> Even a lot of billionaires disapprove of that.

Yeah, a few of them are very high-profile about that [1], but they're a) an utter minority (there are ~3k billionaires vs the 236 who signed the Giving Pledge) and b) for most of them, even if they gave away 90% of their wealth, their kids would still be effectively landed gentry. It's a sham to placate the masses.

[1] https://en.wikipedia.org/wiki/The_Giving_Pledge


I might very well make my kids "landed gentry" myself, given the chance, but I wouldn't go around pretending it was a virtuous choice on my part. Nor would I write as though somebody else were evil because they happened, for largely unrelated reasons, to disrupt my plans for doing it.

Whereas this toady is so bent on having an overclass that he's doing that on behalf of others.

... and the pledge thing is the same, really. In order to have some hypocritical desire to signal adherence to some ideal, you first have to believe it's an ideal. Those people have no need to "placate the masses" in the sense of there being an imminent risk of torches and pitchforks. They're doing it at most for prestige... and prestige only counts if it's among people whose opinions you actually care about.


What is the point of generating wealth if not to pass it down to your children?

In your value system, is blowing it on yourself better?

EDIT: to be clear, the main point of generating wealth is to generate value for society. Society then pays you for that value with the wealth that you get... Now I am pointing out that on the personal level, helping your future generations is a totally admirable goal.


You're missing more than one other option there...


In a sense you are spending it on yourself. Because providing for your children makes you feel better. Wealth gives you the freedom to make that choice. Another person may decide to buy a massive yacht instead.

Perhaps a better question is to ask if great responsibility comes with great power. Because extreme wealth is great power. And money is not some mandate for power.


False dichotomy. You and your children are not the only humans to exist on earth.


>False dichotomy

I don’t think you know what this means


> In your value system, is blowing it on yourself better?

Maybe? Like, certainly it's nice and good to pass along something to your children. But giving them the ability to just sit around and play video games every day for the rest of their life? What's good about that? (Yes, I know, many inheritors end up doing productive things with their money.)

I would advocate for dropping the inheritance tax exemption to something much much lower, and then taxing nearly all (or even all) of an inheritance above that amount.

As much as I am not a fan of Bill Gates, I 100% approve of his plan to earmark pretty much all of his wealth for philanthropy before he dies. I am still uncomfortable with the idea that we have to depend on the largess of a few billionaires in order to direct resources to make the world better. But I think that's massively better than passing on tens or hundreds of millions of dollars to offspring who didn't do anything to earn it.

> to be clear, the main point of generating wealth is to generate value for society. Society then pays you for that value with the wealth that you get...

That sounds like an overly idealized view of how things work. Many people get super rich for reasons that have little to do with generating wealth for society. And even if they do generate wealth for society, their reward is usually comically greater than the wealth they generate.


It seems like we're approaching this from opposite value systems. I think about it as a dad. The most important thing for me is to ensure a good life for my descendants (and by good life I don't mean sitting around playing video games.)

If you cap my ability to do that, that removes the main driver of my motivation.

Why allow kids to benefit from their parents at all? E.g., why do immigrant kids get to benefit from their hardworking parents' ability to come to this country and build a life?


I found the $1e7+ notation funny.


is the type of wealth that makes land worth more in some places than in others

this gets really difficult when considering that the same unit of wealth also prices food, AND gambling


So, you threw up in your mouth twice.


They were especially flavorful, though.


What is the evidence of this: just retweets with emojis?

"""

There is a mass exodus of the core OpenAI team leaving and joining Microsoft. This new organization within Microsoft will get hundreds of technical staff from OpenAI.

"""


The letters signed, the retweets, the scores of people who "know someone".

The truth is, outside a select few, MSFT's grand scheme/plans haven't happened yet.

This shit was just announced. No one has jumped yet, as it doesn't exist yet.

I can see MSFT scooping up as much talent as possible now, rolling the dice, and cutting some heads a few quarters later.


If everything in this article is true, then it sounds like OpenAI is done:

- Lost all their key people

- Lost some or all of the handouts Microsoft was giving them, potentially more

- Microsoft has full rights (commercial?) to all of their IP


> The OpenAI board, which has no legal obligation to shareholders or really to anything besides AI safety

It’s really interesting to see this said as if it’s something terrible. That’s the entire reason this company exists.


It's not being said as if it's terrible, it's shining a light on the fact that OpenAI without a cutting edge product to attempt to practice AI safety on is something much tinier, much different, and altogether less important. Without that it becomes a sort of activist non-profit like the pause people, attempting to garner support or influence policy for a cause it doesn't have a direct stake in.


That’s not the case as long as they hold the state-of-the-art in the field. Which they do.


How do you propose they recoup the massive expenses of holding state-of-the-art in the field?


Great article that summarizes events to date & answers a lot of the questions I'd had about why Microsoft wouldn't be starting over again, by going this route.


Is Microsoft stuck giving OpenAI the $10B?

Or was it more of non-binding pledge?


Even if they do, it seems like a small price to pay for effectively acqui-hiring a crippling amount of OpenAI's staff.


Satya is looking like a genius here. How much has MS stock gone up since that tweet?


About 1.5%, so a bit, but not earth-shattering (although that does represent about $40bn in market cap).
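Quick back-of-envelope (taking MSFT's market cap as roughly $2.7T at the time):

    market_cap = 2.7e12   # approximate MSFT market cap, Nov 2023
    move = 0.015          # the ~1.5% move since the tweet
    print(market_cap * move)  # ~4.05e10, i.e. ~$40bn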


Was about to note the same. Low single-digit shifts in the stock of a multi-trillion dollar company can be misleading!


The majority of my wealth is in MSFT (for historical reasons...), but I never check the price. When I saw the tweets about it being down 2 percent, gasp, at the news of Sam leaving, it was still 15% higher than when I last checked. Now it's even higher.

Probably not a bad time to lock in some of those returns.


~1%; on their 2yr chart this weekend isn't even noticeable.


Also, GOOG with no particularly relevant news today has gone up more than MSFT since the markets opened this morning.


News relevant to MSFT is relevant to GOOG.


Nah, not yet. Hiring people on the spot is easy and a zero-risk move for Satya, and imho is a political move, not a talent move.

What he does with those two assets and their previous $13B investment will be what to watch for.


Is there any confirmation or sources for the claims in this article? Or is this just speculation? I haven't seen any confirmation that all of OpenAI's core team has joined MSFT, or that they are getting $10M packages


MSFT just announced custom Azure chips too, which likely also played into the "synergy" with Sam's desire to reduce reliance on Nvidia as computing capacity and demand scale with AI advancement.


They also just announced that they are using AMD MI300x chips as well.


Can't the best of you ML-devs here just go and join OpenAI?


I really hope there's a lawsuit against each and every member of that board. This whole thing is insane.


You get non-competes all over the place… how come they did not have one?


Most non-competes are unenforceable by law in California.

Though I believe sometimes some narrowly-defined non-competes are OK when it comes to executive-level roles, so it's unclear if Altman et al. could be affected by those.

But I also feel like someone with Altman's clout could have avoided signing something like that when he took the CEO job.


Outside of CA, MSFT will bankroll the fight. Where is the money going to come from for OpenAI to enforce it?


California banned non-competes


Do we actually know what the issue that motivated sama's departure was?


What do folks here make of Ilya's comments about human intelligence, and Elon Musk's fear mongering? Could it be that they've had a breakthrough?


"if you value intelligence above all other human qualities, you’re gonna have a bad time"

I assume this is meant to mean don't get too attached to intelligence as a uniquely human quality as it soon won't be (or maybe already isn't, based on his estimation).

Altman did indicate there'd been a recent breakthrough a couple of weeks ago (the 4th one in company history). Not sure why the secrecy in saying what it is (or what the prior 3 were). Maybe it can't be told without giving away HOW to achieve it, e.g. "when you train on data type XXX, YYY happens"?


I wonder if Satya told Sam he might take the reins in the future.


@sama is now a wagie for msft lol


> $10 million plus ($1e7+)

Who is this article written for? Who doesn't know what $10 million is but knows what $1e7 is?


Even more perplexing is that there is a reference to $10M right above that one, with no scientific notation, as well as references to $50B/$80B that don't use it either. Who knows. It's funny at least, though.


AI generated


Are you sure or are you just guessing?


This shit needs to die. I'll try to remember to ignore this blog in the future.


Ten MegaDollars.


reminds me of my favourite SI unit, the MegaGram (1 metric tonne)


Thank God no one can say “mebidollars”


1e-2 yards


Can OpenAI now make their stuff open source? ;-) Or better yet, public domain. It seems a shame to fail at their initial promise AND have it all go to Microsoft alone.


The AI safety people seem to generally be of the opinion that open source == less safe.

Ironically, it's almost like what you'd hear from MS in the 90s/2000s


> The AI safety people seem to generally be of the opinion that open source == less safe.

They (the moderate AI risk-of-slavery-or-extermination camp, which sees safety as a critical issue but sees the promise of AI benefit to humanity, if well-aligned, as significant enough that working for aligned AI is worthwhile, instead of just adopting an AI ban and bombing non-compliant datacenters) have always been of the opinion that open source is less safe than AI in the hands of a benevolent dictator with safety in mind, which is why their best-case scenario was them being the benevolent dictator gatekeeping AI.

OTOH, it's quite possible that they would see having a single entity purely interested in private gain being likely to acquire a commanding lead as an even greater risk than having the current SotA public (really, it's hard for me to see any other possibility consistent with their public views, other than seeing that threat as a reason to join the extremist Yudkowsky camp in anti-AI crusade mode).


> which is why their best-case scenario was them being the benevolent dictator gatekeeping AI.

That seems like a fool's errand, even before they accepted MS's money, with Google, Facebook, and even Amazon in play.

To be an independent alternative to big corporations, I can understand. But all that went out the window with Altman, his deal with MS, and the direction he chose to take OpenAI - effectively becoming the big corporation they were supposed to fight.


> That seems like a fool's errand, even before they accepted MS's money, with Google, Facebook, and even Amazon in play.

Agreed; my describing the history of public decision making by OpenAI is not the same as endorsing any of the decisions.

I've never thought closed gatekeeping was going to be an effective mitigation in this space, whether by intrusive government action or a private "benevolent gatekeeper".


> Can OpenAI now make their stuff open source?

Some kind of a leak is a real possibility IMO.


All it takes is one pissed off engineer. Let's go.


The timing is right. With 2/3 of the company grumpy, it'd be untraceable.


The whole point of OpenAI is to prevent AGI from becoming open source, so no.


No, the point of OpenAI is to mitigate AI risk while promoting development of publicly beneficial AI.

It initially took the view that actual openness was the best route for that. It later took the view that openness was more dangerous than having the most advanced progress behind the walls of a benevolent gatekeeper.

The set of options it faces has changed, and it's quite possible its view of the best option in the current context could change, again, because of that. (Also, who "OpenAI" is has changed over time, and changed rather radically recently, a process which is almost certainly not complete, which can also change decisions about the best way forward.)


OpenAI is done.

This is the biggest Silicon Valley power play I can recall in the last 3 decades.


Others just sound boring, i.e. all the takeovers of promising startups by bigger companies, with minimal drama and big cashouts for their founders.

There were some of the big lawsuits too I suppose? But those were (relatively) slow burners.


[flagged]


Man am I tired of that lazy as hell meme. Seriously, in what universe is this Microsoft's fault, and in what multiverse would they ever ever ever want to extinguish a major business opportunity in the middle of a massive boom instead of profit?


Unless there was a serious shadow conspiracy behind this all, it seems like this just kinda fell into Microsoft's lap.


It doesn't seem to be, Microsoft is rather the beneficiary of an internal OpenAI split. TFA has a good summary of the events to date, as sibling mentioned.


"Microsoft can likely claw back or not deliver quite a bit of what it had planned for OpenAI. These compute resources can be routed to the new internal team."

Ok sure internet guy. They'll just "claw back" based on which clause in the agreement? Oh the "don't fire the CEO clause". Really, which section and paragraph is that?

My conjecture on what'll probably happen is compute will be split somewhat unevenly between NewOpenAI and OldOpenAI. Because let's not forget, OldOpenAI actually has revenue and a raft of paying customers.




Search: