Sam Altman, Greg Brockman and others to join Microsoft (twitter.com/satyanadella)
1738 points by JimDabell 6 months ago | 1311 comments



Seems like I'm in the minority here, but to me this looks like a win-win-win situation for now.

1. OpenAI just got bumped up to the top of my list of places to apply to (if only I had the skills of a scientist; I am merely at engineer level). I want AGI to happen and can totally understand that the actual scientists don't really care about money or becoming a big company at all; for research speed, that is more of a burden than anything else. It doesn't matter that the "company OpenAI" implodes here as long as they can pay their scientists and have access to compute, which they do.

2. Microsoft can quite seamlessly pick up the ball and commercialize GPTs like there's no tomorrow and without restraint. And while there are lots of bad things to say about Microsoft, reliable operations and support are something I trust them on more than most others, so if the OAI API is simply moved as-is to some MSFT infrastructure, that's a _good_ thing in my book.

3. Sam and his buddies are taken care of, because they are ultimately in it for the money, whereas the true researchers can stay at OpenAI. Working for Sam is now straightforward commercialization without the "open" shenanigans, and working for OpenAI can become the idealistic thing again that also attracts people.

4. Satya Nadella is being celebrated, and MSFT shareholder value will eventually rise even further. They actually don't have any interest in "smashing OAI"; the new setup actually streamlines everything once the initial operational hurdles (including staffing) are solved.

5. We outsiders end up with an OpenAI research org focused purely on AGI (<3), and some product team selling us all the steps along the way, but with more professionalism in operations (<3).

6. I am really waiting for Tim Cook to announce anything on this topic. Never ever underestimate Apple, especially when there is radio silence and the first movers in a field have already fired their shots.


That is just a matter of perspective. It's clearly a win-win if you're on team Sam. But if you're on team Ilya, this is the doomsday scenario: with commercialisation and capital gains for a publicly traded company being the main driving force behind the latest state of the art in AI, this is exactly what OpenAI was founded to prevent in the first place. Yes, we may see newer, better things faster and with better support if the core team moves to Microsoft. But it will not benefit humanity as a whole. Even with their large investment, Microsoft's contract with OpenAI specifically excluded anything resembling true AGI, with OpenAI determining when that point is reached. Now, whatever breakthrough from the last weeks Sam was referring to, I doubt it's going to move us to AGI immediately. But whenever it happens, Microsoft now has a real chance to grab it for themselves and no one else.


Thinking this is clearly a big win for MSFT is like thinking it's easy to catch lightning in a bottle twice.

There's been a lot of uncertainty created.

It's interesting that others see so much "win" certainty.


From Microsoft's perspective, they have actually lowered uncertainty, especially if that OpenAI employee letter from 500 people is to be believed: they'll all end up at Microsoft anyway. If that really happens, OpenAI will be a shell of itself while Microsoft drives everything.


OpenAI already has the best models and traction.

So MSFT still needs to compete with OpenAI - which will likely have an extremely adversarial relationship with MSFT if MSFT poaches nearly everyone.

What if OpenAI decides to partner with Anthropic and Google?

Doesn't seem like a win for MSFT at all.


> What if OpenAI decides to partner with Anthropic and Google?

Then they would be on roughly equal footing with Microsoft, since they'd have an abundance of engineers and a cloud partner. More or less what they just threw away, on a smaller scale and with less certain investors.

This is quite literally the best attainable outcome, at least from Microsoft's point of view. The uncertainty came from the board's boneheaded (and unrepresentative) choice to kick Sam out. Now the majority of engineers on both sides are crying foul and asking for the entire OpenAI board to resign. Relative to the administrative hellfire that OpenAI now has to weather, Microsoft just pulled off the fastest merger in its history.


OAI will still modulate the pace of actual model development though


Little pet peeve of mine.

Engineers aren’t a lower level than scientists, it’s just a different career path.

Scientists generate lots of ideas in controlled environments and engineers work to make those ideas work in the wild real world.

Both are difficult and important in their own right.


> Engineers aren’t a lower level than scientists, it’s just a different career path.

I assume GP is talking in context of OpenAI/general AI research, where you need a PhD to apply for the research scientist positions and MS/Bachelors to apply for research engineer positions afaik.


They’re still different careers, not “levels” or whatever.

A phd scientist may not be a good fit for an engineering job. Their degree doesn’t matter.

A PhD-holding engineer might not be a good fit for a research job either… because it's a different job.


Well, I am an engineer, but I have no problem buying that in the case of forefront tech like AI, where things are largely algorithmically exploratory, researchers with PhDs will be considered 'higher' than regular software devs. I have seen similar things happen in chip startups in the olden days, where the relative importance of a professional is decided by the nature of the problem being solved. But sure, to ack your point, it's just a different job, though the PhD may be needed more at this stage of the business. One way to gauge relative importance: if the budget were to go down 20% temporarily for a few quarters, which jobs could suffer the most cuts with the least impact on the business plan?


Researchers are paid 2x what engineers are paid at OAI; even if it's not the same job, there's still one that is "higher level" than the other.


In terms of pay at OAI, sure.

But being an engineer isn’t just a lesser form of being a researcher.

It’s not a “level” in that sense. Like OAI isn’t going to fire an engineer and replace them with a researcher.


Engineers tend to earn a lot more.


> 3. Sam and his buddies are taken care of because they are in for the money ultimately, whereas the true researchers can stay at OpenAI.

This one's not right - Altman famously had no equity in OpenAI. When asked by Congress he said he makes enough to pay for health insurance. It's pretty clear Sam wants to advance the state of AI quickly and is using commercialization as a tool to do that.

Otherwise I generally agree with you (except for maybe #2 - they had the right to commercialize GPTs anyway as part of the prior funding).


Someone suggested earlier that he probably had some form of profit sharing pass-through, as has become popular in some circles.


I think it makes more sense to take him at the spirit of what he said under oath to Congress (think of how bad it would look for him/OpenAI if he said he had no equity and only made enough for health insurance but actually was getting profit sharing) over some guy suggesting something on the internet with no evidence.


Sam Altman is a businessman through and through, based on his entire history. Chances are he will have found an alternative means to profit from OpenAI; he wouldn't do this out of charity. Just as many CEOs say "I will cut my salary", for example, they will never say "I cut my stock or bonuses", which can be worth a lot more than their salary.

Either way, based on many CEOs' track records, healthy skepticism is warranted; the majority of them find a way to profit at some point or another.


I dunno, the guy has basically infinite money (and the ability to fundraise even more). I don't find it tough to imagine that he gets far more than monetary value from being the CEO of OpenAI.

He talked recently about how he's been able to watch these huge leaps in human progress and what a privilege that is. I believe that - don't you think it would be insane and amazing to get to see everything OpenAI is doing from the inside? If you already have so much money that the incremental value of the next dollar you earn is effectively zero, is it unreasonable to think that a seat at the table in one of the most important endeavors in the history of our species is worth more than any amount of money you could earn?

And then on top of that, even if you take a cynical view of things, he's put himself in a position where he can see at least months ahead of where basically all of technology is going to go. You don't actually have to be a shareholder to derive an enormous amount of value from that. Less cynically, it puts you in a position to steer the world toward what you feel is best.


I think that would be consistent with his testimony. Profit sharing is not a salary and it is not equity. I don’t believe he ever claimed to have zero stake in future compensation.


> reliable operations and support is something I trust them more than most others

With a poor security track record [0], miserable support for office 365 products and lack of transparency on issues in general, I doubt this is something to look forward to with Microsoft.

[0] https://www.wyden.senate.gov/imo/media/doc/wyden_letter_to_c...


> 2. Microsoft can quite seamlessly pick up the ball and commercialize GPTs like no tomorrow and without restraint. And while there are lots of bad things to say about microsoft, reliable operations and support is something I trust them more than most others, so if the OAI API simply is moved as-is to some MSFT infrastructure thats a _good_ thing in my book.

OpenAI already runs all its infrastructure on Azure.


I don't think one of biggest tech giants in control of the "best" AI company out there is beneficial to customers...


How does this separation help scientists at OpenAI if there is no money to fund the research? At the end of the day, you need funding to conduct research and I do not see if there is going to be any investors willing to put large sums of money just to make researchers happy.


I'm with you on this. Also, this hopefully brings the "Open"AI puns to an end. And now there's several fun ways to read "Microsoft owns OpenAI". :)

If OpenAI gets back to actually publishing papers to everyone's benefit, that will be a huge win for humanity!


>whereas the true researchers can stay at OpenAI

The true researchers will go to whoever pays them most. If OpenAI loses funding, they will go to Microsoft with Altman or back to Google.


I don't buy into the whole AGI hyper-hypewave, but on the off chance that we're somehow heading towards it with these fancy chatbots we have, what a depressing fucking outcome it's gonna be if Micro$oft of all things is the one in control of it.

We really are entering the dystopia of the cartoonishly evil megacorp enslaving all of humanity to make the graph go up by 1.2%.


> I don't buy into the whole AGI hyper-hypewave, but on the off chance that we're somehow heading towards it with these fancy chatbots we have, what a depressing fucking outcome it's gonna be if Micro$oft of all things is the one in control of it.

at least none of their software actually works

Microsoft Skynet would be rebooting every 15 minutes for updates


Before it can do anything it will be 301 redirected 45 times between legacy systems and if it has any human-like properties it will give up out of frustration.


If they really build AGI (I doubt it), the AGI might be able to bring Microsoft under its control. This could be bad news for a lot of businesses.


That's a lot of code to be purged, even for a superintelligent AI.


Could have been worse. Could have been google. This way at least there are two big dogs


Microsoft and OpenAI? Microsoft and Anthropic?


I don’t even care who just as long as it’s two. But yeah one google camp one Microsoft camp.

With a bit of luck Amazon too. This space just really can’t become a monopoly


Many far worse outcomes are possible. Putin. Kim Jong Un. AlQaeda. G$$gle.


Anyone else find it strange that startup founders of the magnitude of Sam & Greg would join a gigantic corporation as employees?

It sounds very out of line of what you'd expect.


Their alternative is to start a new AI company.

At this point in time a new AI company would be bottle-necked by lack of NVIDIA GPUs. They are sold out for the medium term future.

So if Sam and Greg were to start a new AI company, even with billions of initial capital (very likely given their street cred) they would spend at a minimum several months just acquiring the hardware needed to compete with OpenAI.

With Microsoft they have the hardware from day one and unlimited capital.

At the same time their competitor, OpenAI, gets most of the money from Microsoft (a deal negotiated by Sam, BTW).

So Microsoft decided to compete with OpenAI.

This is the worst possible outcome for OpenAI: they lose talent, pretty much lose their main source of cash (not today, but medium to long term), and get a cash-rich and GPU-rich competitor who's now their main customer.


> So Microsoft decided to compete with OpenAI

They already do, though. Has everyone forgotten they have a Microsoft Research division?


Nope, VirtualWiFi looked promising in 2006.


They could get an infra deal with AWS, Google, NVIDIA, or even AMD :-).

Or they write the AI that runs on your M3

That said, the Microsoft offer came more quickly than Amazon can deliver a 3090 to your house, so…


Would have been amazing if they joined Intel. No TSMC bottleneck, Intel probably having trouble offloading their Arc GPUs, etc.


Some components of some Intel CPUs are made by TSMC. So, I’m not convinced that there wouldn’t be “TSMC bottleneck”.


Or just accept that their image is overinflated because they happened to be in the right place at the right time. Of course they had a hand in building that successful team, but don't underestimate the fact that the team was built on the promise of a nonprofit, AI for the benefit of all. Few of them would have joined Microsoft, on principle.


Nope. They're following the path to power, money, and maybe continued fame. That's all.


I'll bet Microsoft offered him a very sweet deal, which for Sam means lots of autonomy.

Microsoft is happy. They get to wrap this movie before the markets open.

Edit: I also agree with bayindirh below. These things can both be true.


They had to.

Also, that doesn't mean Microsoft won't collect the outcome of this deal with its interest over time. Microsoft is the master of that craft.

Microsoft did not offer this because they're some altruistic company that wanted to provide free shelter to an unfairly battered, homeless ex-CEO.


Satya probably offered the one resource they couldn’t buy at the scale/speed they need: GPUs. Both time on Azure’s cloud, as well as promise of some of the first Azure Maia 100 and Cobalt 100 chips.


Satya probably offered the one resource they couldn't buy at the scale/speed they need: OpenAI models and future work. Altman wouldn't have had (legal) access to these anywhere else, and Microsoft wouldn't have had Sam Altman controlling OpenAI tech in any other arrangement. This arrangement may be the best for all involved: Microsoft gets its LLM gewgaws based on OpenAI tech, Altman gets to build GPT marketplaces and engage in whatever growth-hacking schemes he can dream of that may have been found distasteful by colleagues at OpenAI, and OpenAI can focus on its core mission and on fulfilling its contractual obligations to Microsoft.

I foresee this new group building on top of (rather than competing with) OpenAI tech in the near-to-mid term, maybe competing in the long term if they manage to gather adequate talent, but it's going to be fighting the corporate cultural headwinds.

I wonder if Microsoft will tolerate the hardware side-gig, and whether this internal startup will succeed or end up being a managed exit to paper over OpenAI's abrupt transition (by public company standards). I guess we'll know in a year if he transitions to an advisory position.


I bet there was no hardware side-gig. More likely it was a ruse to trigger the push from OpenAI, so they can exfiltrate GPT-5 to MS. OpenAI won't exist for long, since they rely on vouchers from MS to run. I can't see MS being a very forgiving partner after being publicly blindsided, can you?


Plus continued access to OpenAI technology.


Technical debt.

Azure was already second nature for OpenAI and so there is very little friction in moving their work and infrastructure. The relationships are already there and the personnel will likely follow easily as well.

They are also likely enticed by the possibility of being heads of special projects and AI at the second largest tech company, meaning deep pockets, easy marketing and freedom to roam.

Oh, and those GPUs.


I think Sam's goal is to create AGI, same as most of the other founders of OpenAI. If he just wanted money and power, he probably would have continued with YC or some other startup instead of joining the nonprofit and unproven OpenAI at the time.

His opinion on the ideal path differs from Ilya's, but I'm guessing his goal remains the same. AGI is the most important thing to work on, and startups and corporations are just a means of getting there.


>I think Sam's goal is to create AGI

Supposedly his goal was the same as OpenAI's --> AGI that benefits society instead of shareholders.

Seems like a hard mission to accomplish within Microsoft.


Just because that's the goal they have written on the tin doesn't mean that that is/was their actual goal.

Especially in the early days where the largest donor to OpenAI was Musk who was leading Tesla, a company way behind in AI capabilities, OpenAI looked like an obvious "Commoditize Your Complement" play.

For quite some time, while they were mainly publishing research, they could hide behind "we are just getting started", and that guise held up nicely; but when they struck gold with Chat(GPT), there was more and more misalignment between their actions and their publicly stated goal.


I imagine Sam's vision, both before and after this company change, is that he'll keep improving GPTs, while also setting up a thriving ecosystem through APIs, and AI will become a trillion dollar industry with him at the center.

From there, maybe someone will come up with the revolutionary advance necessary to reach AGI. It may not necessarily be under his company, but he'll be the super successful AI guy and in a pretty strong position to influence things anyway.


Like Cyberdyne Systems was just a means of getting there.


Satya is saying they'll be an independent "startup" within Microsoft https://news.ycombinator.com/item?id=38344811


corporate startups are an oxymoron


Maybe Sam thinks OpenAI will be so important he has a shot at CEO of Microsoft in a couple years?


Lol, maybe. Ballmer was a friend of Gates, was 44 years old, and had worked at Microsoft for 20 years (1980–2000) already when he became CEO. Nadella was also forty-something and had worked at Microsoft for 22 years (1992–2014) when he got the job.


But Satya is making a few 100 mil a year, tops. Sam could easily make himself a billionaire with one raise. And who wants to control all of Microsoft, that's a whole lot of headaches


And if governments squeeze on AI, your startup is worth pennies overnight. Earning 100 MILLION per year already removes any possible financial restrictions you had. Why do you need 10x that? Heck, even earning "just" 10 million per year will make all of your financial concerns go away.

Greed is hell of a thing


I suspect for people like Sam who are compulsively ambitious and competitive, it's not about the dollars. It's about winning.

Further, based on anecdotes from friends and Twitter who know Sam personally, I'm inclined to believe he's genuinely motivated by building something that "alters the timeline", so to speak.


Being the guy who built AGI will alter the timeline the most, so I think he'll be much more interested in that than being CEO of Microsoft.


AGI is decades if not centuries away. Cranking a plausible sentence generator to be even more plausible will not get there. I do not understand how people suddenly completely lost their minds.


The hype wave really is something else, eh? People are suddenly talking as if these advanced chatbots are on the precipice of genuine AGI that can run any system you throw at it, it's absolute lunacy


> The hype wave really is something else, eh?

I am old enough to remember the "How Blockchain Is Solving the World Hunger Crisis" articles but this new wave is even crazier.


>I am old enough to remember

So, like a 15-year-old?



If he was, he signed up to HN at 2!

I do think it's funny how the Blockchain Consultants have become AI Consultants though.


According to [1], Nadella's base salary was $2.5m and stock awards and other compensation brought the total to ~$55m in 2022.

[1] https://microsoft.gcs-web.com/node/31056/html


I believe his total comp since becoming CEO passed 1B this summer, 9 years or so.


What's the functional difference between a billion and a hundred million?


Approximately 1 billion.


A billion means you can fund yourself for a really big idea. Not that you should!


Exactly, he could just launch a new company, most of the current OpenAI staff would follow him.


The new models and data would stay at OpenAI. You can have thousands of researchers and compute, but if you don’t have “it”, you are behind (ask Google).

In Microsoft he still has access to the models, and that’s all he needs to execute his ideas.


Should tell you something that he didn't. And no, I am not talking about ethics here.


They could, but they'd be massively hamstrung by a lack of GPUs. Pretty much all supply is locked up for a good few years right now.


Assuming a MAG won't offer it.


> most of the current OpenAI staff would follow him

Source please? This just keeps getting repeated but there’s extremely limited public support and neither Sam’s nor the board’s decisions indicate he has a whole lot of leverage.


There must be an insane number of non-competes though, to stop that? Especially with the amount of VC funding - that must have been included?


Non-competes are not legally enforceable in California, or so I hear.


I think the only edge cases are for executives of companies, and even then it's pretty limited, but I imagine this could be one of the examples. IANAL though - it's just from what I've seen discussed elsewhere.

https://www.ottingerlaw.com/blog/executives-should-not-ignor...

https://leginfo.legislature.ca.gov/faces/codes_displaySectio...


Yes, however they’ll be shielded from lawsuits from OpenAI at Microsoft.


As in liquidate a billion in one raise? Is that kosher these days?


Sam is rich; I assume being CEO of one of the world's largest companies is a far greater reward than extra money when you're at the billionaire level, especially at 38. But I do think this is probably non-compete related too.


Sam already is a billionaire


Sam is not a billionaire. By all industry accepted accounts (easily googlable), his net worth is in the range of 500 to 700 million.

Do you have a source for your assertion?


He’s definitely a billionaire


He is not on Forbes billionaire list.

All the other somewhat reliable sources do not have him as one.

So what is your source for your assertion?


The only meaningful thing here that makes sense to me is that the "secret sauce" OpenAI has is exclusively licensed to Microsoft.

Which means, starting a competing startup means they can’t use it.

Which makes their (potential) competing startup indistinguishable from the (many) other startups in this space competing with OpenAI.

Does Sam really want to be a no-name research head of some obscure Microsoft research division?

I don’t think so.

Can’t really see any other reason for this that makes sense.


They're likely going to be the ones who manage the OpenAI relationship...what better way to fuck the people who fucked them than by becoming the ones who literally control the resources that they need?


OpenAI can also jump ship and get a nice deal with amazon or google. In fact, right now they are ripe for the taking.


Hilarious. The look on Ilya's face when these two show up at the office for their "sync", or perhaps he's ordered to travel to a location of the owner/client's choosing.


Sounds desperate to me, a bit like that 'I'm in the office' photo-op. A bit like having access to the models or whatever is sustaining him somehow lol


Lol

Desperate... Right...

The guy met with the Arabs a few weeks back about billions in financing for a new venture. The guy's desperate like I'm Donald Duck.


So he passed up billions to go work for Microsoft...


Special unit mate... Gonna have special rules. You think these cats are gonna be in the basement pushing papers? This is grade AAA talent that can go anywhere including a fresh outfit with 1 billion in the bank VC money day 1.

Don't believe me? Check out the VC tweets... Sand Hill pulled out the checkbook the moment these guys might have been on the market.


Desperate


Wonder if they'll take his call today!


Literally the president would take Altman's call.

What moon are y'all on.

He can secure billions with a text message.

Love ya anyway, cya this evening for the fuzzy meetup.


Sam had no stake in OpenAI. So, any potential deca billion value is hypothetical. He would have to do a U-turn and fight with the board to get his cut. Now he'll get his cut from MS. This AI division will have some further restructuring.

Edit: Sam is CEO of the new AI division.


Curious to see how long Sam lasts as an employee.


It's gonna be a special unit. He's not gonna be an employee.

Once you lead at that level... It's max autonomy going forward. Source: Elon. Guy hates a board with power as much as Zuckerberg. Employee? Ha .. Out of the question.


So as a result Elon actually isn’t an employee… whereas Sam will be an employee, ultimately


There are more structures available than simply gobbling something up where everyone becomes your employee.

See the OpenAI investment, with technology transfers and sunset clauses. They just did a new dance.

They'll probably do something special for these guys.

They would never be employees. That's for the non-Sam-Altmans and non-Brockmans. Brockman is probably already a billionaire from OpenAI shares. No employees here. Big boys.


Presumably they’ll both get their C-level positions out of the gate (for that AI entity MS is setting up specially for this) so not just “mere” employees.

But, yeah, kind of confusing, especially for Altman.

He was the kind of guy on his way to becoming worth $100 billion and more, with enough luck; that is, the next Musk or Zuckerberg of AI. But if he chooses to remain inside a behemoth like MS, the most he can aspire to is a few hundred million, maybe a billion or two at the very most, but nothing more than that.


> He was the kind of guy on the way to become worth $100 billion and more, with enough luck,

Was he though? If I understand correctly, he didn't have any equity in the for-profit org of OpenAI.

IIRC he also publicly said that he doesn’t “need” more than a few hundred million (and who knows, not inconceivable that he might actually feel that).


I bet MS probably bankrolls a subsidiary or lightweight spinoff for AGI; if they are under MS, they can keep the original research and code.


> It sounds very out of line of what you'd expect.

Except if Sam and Greg have some anti-compete clauses. If they join MS, they have a nice 10 billion USD lever against any lawsuits.


Non-competes are extremely hard to enforce in California. Sam would literally have to download OpenAI trade secrets onto a USB drive to get in trouble.


That is only the case for rank and file employees. From my understanding executives, particularly ones with large equity stakes, are not exempt from non-competes. Sam doesn't have equity though, and I am not sure if non-profit status changes anything, but regardless I suspect any non-compete questions would need to be settled in court. Probably not something to stop Sam from starting a competitor as he could afford the lawyers and potential settlement. I suspect the MSFT move has more to do with keeping the ball rolling and keeping Satya happy.


> From my understanding executives, particularly ones with large equity stakes, are not exempt from non-competes.

Your understanding is incorrect. There are some exceptions where noncompetes are allowed in California, but they mostly involve the sale or dissolution of business entities as such. There is no exception for executives, and none for people who happen to have equity stakes of any size.


And now he doesn’t even need to. He can get access to all their models legally as a Microsoft employee.


In California the anti-compete clauses are not enforceable, afaik


It's complicated. In the case of the CEO it is possibly enforceable. But going to the primary funder, after being fired in a move without notification of that same funder? Likely with long complicated contracts that may contemplate the idea of notification of change of executive staff?

I don't know; even if strictly "enforceable", I doubt we will see it enforced. And if so, I'm sure the settlement will be fairly gentle.

Edit: Actually, a quick skim of the relevant code, the only relevant exception seems to be about owners selling their ownership interest. Seemingly, since Sam doesn't own OpenAI shares, this exception would seem to not apply.

https://leginfo.legislature.ca.gov/faces/codes_displaySectio....


I guess that’s more applicable to ordinary employees. Using trade secrets obtained from your previous employer would still be problematic


So Sam & Greg can stay focused on their work rather than getting distracted by all the lawsuits. It isn't a bad thing. I'm just not sure how they can get what they want under the corporate culture?


Do anti-compete clauses work when you’ve been ousted? Greg resigned, actually, but Sam was ejected.


> Do anti-compete clauses work when you’ve been ousted?

In jurisdictions where they are enforceable, yes, they generally are not limited based on the manner the working relationship terminated (since they are part of an employment contract, they might become void if there was a breach by the employer.)


They will probably run a subsidiary under the MS umbrella and profit hugely in the next few years. Also, MS could easily dump OAI in the next few months to year.


We don't know the structure of their new unit, do we? Sometimes "startup in a big corp" may really bring the best of both worlds (although in reality, 90% of such initiatives bring the worst of the two worlds).

For many years, Microsoft Research had a reputation for giving researchers the most freedom. That's probably even the reason why it hasn't been as successful as other big-corp research labs.


Seems like a good compromise?

OpenAI continues to develop core AI offered over API. Microsoft builds the developer ecosystem around it -- that's Sam's expertise anyway. Microsoft has made a bunch of investment in the developer ecosystem in GitHub and that fits the theme. Assuming Sam sticks around.

Also, the way the tweet is worded (looking forward to working with OpenAI), it seems like a truce negotiated by Satya?


This is Microsoft starting a copy machine to replace OpenAI with in-house tech in medium to long term.

Apparently Microsoft already had plans to spend $50 billion on cloud hardware.

Now they are getting software talent and insider knowledge to replace OpenAI software with in-house tech built by Sam, Greg and others that will join.

Satya just pulled a kill move on OpenAI.


Does Microsoft (under the OpenAI agreement) have access to the model code etc or just the output? If not, they would have to rebuild it.

Not sure if it's obvious that people would leave OpenAI in droves to join Microsoft just to be with Sam.


I doubt it would be hard for Microsoft to rebuild; Microsoft Research has made excellent contributions to transformer research for years now, DeepSpeed being a notable example.

I don’t think they’ve had the will/need to do this, but they most likely already have the talent.


Embrace…


Yeah agree, this feels like a very big hug.


Hug of death?


It's a no-lose situation for Microsoft.

Either their in-house team wins out and Microsoft wins.

Or OpenAI wins out and Microsoft wins with their exclusive deal and 75% of OpenAI profits.

Better to have two horses in the race in something so important; it makes it much less likely that one of the other companies will be the one to come out on top.


> in something so important,

Much as LLM is essentially industrial strength gaslighting, so is the meta around it.

It's not so important. There's not much there. No, it's not going to take your jobs.

I am old enough to remember not only the How Blockchain Is Solving World Hunger articles but the paperless office claims as well -- I was born within a few weeks of the publication of the (in)famous "The Office of the Future" article from BusinessWeek.

Didn't happen.

No, a plausible sentence generator is just that: the next hype.

In fact some of the hustlers behind it are the same as those who have hustled crypto. Someone got to hold the bag on that one but it wasn't the rich white techbros. So it'll be here. Once enough companies get burned when the stochastic parrot botches something badly enough to get a massive fine from a regulator or a devastating lawsuit, everyone will run for the hills. And again... it won't be the VCs holding the bag. Guess who will be. Guess why AI is so badly hyped.

If you think the ChatGPT release happening within a few weeks of the collapse of FTX is a coincidence I have ... well, not a bridge but an AI hype to sell to you and in fact you already bought it.


OpenAI is doing a lot more work than just an LLM, despite that being their headline product for now. I'd rather have OpenAI leading the way than Microsoft or Google in this stuff, despite its own issues.

I get your pessimism, but the same has been said about a lot of tech that did go on to change the world. Just because a lot of people made a lot of noise about previous tech that failed to come to anything doesn't mean this is the same thing; it's completely different tech.

A lot of OpenAI's products are out in the real world and I use them every day. I never touched crypto. Maybe LLMs won't live up to the hype, but OpenAI's stuff is already being used in a lot of products, by millions of users, even Spotify.

'A plausible sentence generator is just that: the next hype' - Maybe, but AI goes far beyond LLMs, as do the products OpenAI produces.


Have you even used it?

While it can’t plug-and-play replace an employee yet, in my experience at least, every dev I see now has it open on their second screen and sends it problems all day.

Comparing it to crypto and building that weird narrative you have is just not at all connected to the reality of what the product can actually do right now today.


It's probabilistic, not factual, so everything it outputs must be treated as something the actual answer might sound like and needs to be counterchecked anyway. If I am researching the actual answer already, then why bother?


No. They need a lot of money and computation resources to work on this. In order to continue their work, they either (a) raise massive funding or (b) get employed by a big corp. There's no surprise they chose the latter. After all, MS has a research department in this domain.


They won't have to worry about raising capital or getting access to GPUs, and they've likely been promised a high degree of autonomy, almost certainly reporting directly to Nadella.


In the end it's just labels. What matters is what kind of funds will they be given, what they can work on, what sort of control they have over it.


A little bit, but I highly doubt it'll last long. I predict most of them will end up in a startup sooner rather than later.


I think the employees part is probably wrong here. Can’t imagine they’ll need to act like ones even if they are on paper


It depends on what they are allowed to do as employees, which is probably in the process of being figured out right now.


Guess who'll be running Microsoft after Satya, and what Microsoft's core offering / cash cow will be.


Never gonna happen.

Satya runs the biggest race track.

Altman trains pure breds trying to win the Kentucky derby repeatedly.

Totally diff games. Both big bosses. Not equivalent and never will be. Totally diff career tracks.


They must be getting a king's ransom. Turns out sama didn't need equity, he got paid by getting fired.


Worked(?) for Carmack and Luckey


They need computers. I'd assume this came with a substantial budget promise.


Isn't the exit exactly what you'd expect from startup founders?


From the sounds of it they're starting a new company within MSFT.


It certainly sounds out of line with all the reporting that Altman was talking about starting a new company and could trivially fundraise for it. Was that just as much kayfabe as the idea of bringing him back?


I guess they were fired exactly for this reason: more money, less research and being actually "open". A "non-profit" called "Open"AI hiding GPT-4 behind a paywall with no source code with just a few hints in the papers, surreal.


What? If anything a startup founder (in general) wants to become a gigantic corporation. The bigger the better.


There's an infinite difference between turning your startup into a giant corporation and getting a job at one.


I'm guessing this is the end of OpenAI. People aren't going to want to work at OpenAI anymore due to the value destruction that just occurred. It's going to be hard for them to raise money because of the bad rep they now have, and hard for them to hire top talent. You have two leaders, top engineers, and researchers leaving the company. Google and Facebook will come in and grab any top talent that's still there, because they can offer them money and equity.

The company will probably still exist, but the company isn't going to be worth what it is today.


There are engineers who care about the kinds of values that OpenAI was founded on, which have just been – arguably – reaffirmed and revalidated by this latest drama. OpenAI's commercialization was only ever a means to have sufficient compute to chase AGI… If you watch interviews of Ilya you'll see how reluctant he is on principle to yield to the need for profit incentives, but he understands it is a necessary evil to get all the GPUs. There are engineers, and increasingly, non-VC money, that have larger stakes in outcomes for humanity who I feel will back a 'purer' OpenAI.


Do they really believe the path to AGI is through LLMs though? In that case they might be in for a very rude awakening.


IMO Sam Altman and team believed more in the LLM because it took the world by storm and they just couldn’t wait to milk it. MSFT has also licensed these types of services from OpenAI on Azure. The folks really motivated by values at OpenAI probably want to move on from the LLM hype and continue their research, pushing the boundaries of AI further.


They don't; they know it very well. But people have been buying into this AGI bullshit (pardon the language) for a while, and they wanted a piece of the cake.


I'm sure they care. The question is how will they stay liquid if there is a similar or better offer by another party? The kind of interface they use makes it trivial to move from one supplier to another if the engine is better.


OpenAI existed for years before ChatGPT. Granted, at much smaller size and with hundreds fewer employees.

I imagine that the board wants to go back to that or something like it.


The past is not on the menu for any of us, also not for OpenAI. They can't undo that which has been done without wiping out the company in its entirety. Unless they aim to become the Mozilla of AI. Which is a real possibility at this point.


Doesn't seem so from Emmett's tweet which suggests they will continue to pursue commercial interests.


By "for profit" you mean "available to use by people right now"? Well then I hope the "pure" OpenAI is over. I want to be able to use the AI for money, not for these models to be hoarded..


It could be entirely open source and still available hosted for use in exchange for money today though?


OAI is dead.

In the name of safety, the board has gifted OAI to MS. Even Ilya wants to jump ship now that the ship is sinking (it'll be really interesting to see if Sama even lets him on board the MS money train).

Calling this a win for AI safety is ludicrous. OAI is dead in all but name; MS basically now owns 100% of OAI (the models, the source, and now the team) for pennies on the dollar.


and those values will make them go bankrupt before creating AGI


If I were betting, I would bet on Altman and Microsoft as well, because in the real world evil usually wins, but I'm just really astonished by all this rhetoric here on HN. Like, firing Altman is a horrible treason, and people wouldn't want to work with those traitors anymore? Altman is the guy who is responsible for making OpenAI "closed", which was a constant source of complaints since it happened. When it all started, the whole vibe sure wasn't "the outsourced Microsoft subsidiary ML-research unit that somehow maintains non-profit status", which is basically what happened. I'm not going to argue whether it's good or bad; it is entirely possible that this is the only realistic way to do business, and Sutskever, Murati et al are just delusional in trying to approach this as a scientific research project. Honestly, I sort of believe that myself. But since when is Altman the good guy in this story?


Murati was interim CEO for 2 days.

She's going with Altman in all likelihood.

Ilya is the one changing tack.


Another way of framing this would be that Altman was one of the only people there with their head far enough from the clouds to realize they had to adapt if they were going to have the resources needed to survive. In the real world you need more than a few Tony Starks in a cave to maintain a longterm lead even if the initial output is exceptional with nothing but what's in the cave.


I, for one, never gave a flying shit about OpenAI’s “openness”, which always felt like a gimmick anyway. They gave me a tool that has cut my work down 20-40% across the board while making me able to push out more results. I care about that.

Also AGI will never happen IMO. I’m not credentialed. Have no real proof to back it up and won’t argue one way or the other with anyone, but deep down I just don’t believe it’s even physically possible for AGI. I’ll be shocked if it is, but until then I’m going to view any company with that set as its goal as a joke.

I don’t see a single thing wrong with Altman either, primarily because I never bought into the whole “open” story anyway.

And no, this isn’t sarcasm. I just think a lot of HN folks live with rosy-tinted glasses of “open” companies and “AGI that benefits humanity”. It’s all an illusion and if we ever somehow manage to generate AGI it WILL be the end of us as a species. There’s no doubt.


On the contrary - I will now be actively looking for opportunities to join OpenAI, while I wasn't particularly interested beforehand.


What makes you think you’re more competent than the type of people who were interested in joining OpenAI before?

What if the type of people who made the company successful are leaving and the type of people who have no track record become interested?


A bit surprised by this pseudo ad hominem, but just for one data point I have (now ex-)coworkers in the same role as me who've recently moved to OpenAI. I'm not suggesting I'm more competent than them, but I don't think my hiring was based on luck while they got it on merit either.

> What if the type of people who made the company successful are leaving and the type of people who have no track record become interested?

What if it's the opposite? What if sama was basically a Bezos who was in the right place/time but could've realistically been replaced by someone else? What if Ilya is irreplaceable? Not entirely sure what the point of this is - if you want to convey that your conjecture is far more likely than the opposite, then make a convincing argument for why that's the case.


The Microsoft team is going to churn out ChatGPT versions, which are the current valuation-makers. OpenAI is going to chase what comes after ChatGPT; pushing yet another ChatGPT is probably one of the reasons the researchers got fed up.

In my opinion. Best outcome for everyone involved.


I think the reality is the opposite. Sam has said that he doesn't think Transformers/GPT architecture will be enough for AGI where Ilya claims it might be enough.


It seems reasonable to me that people who are motivated by the mission and working with or learning from the existing team will still want to work there.


I didn't believe that OpenAI was being honest in their mission statement before - I thought it was just the typical bay area "we want to make the world a better place" bs.

This entire situation changed my mind radically and now I put the non-profit part in my personal top 3 dream jobs :)


Please disregard my last comment, it was a premature opinion on a situation that is still developing and very unclear from the outside


I wouldn't be so sure. There are a whole lot of people that want absolutely nothing to do with Microsoft.


The flip side perspective is people will love focusing on doing it right, without being rushed to market for moat building and max profit.


Does that not only work long-term with investment?

Unless they get philanthropic backers (maybe?), who else is going to give them investment needed for resources and employees that isn't going to want a return on investment within a few years?


They will be OK. Research does not take that many GPUs compared to training huge commercial LLMs and hiring thousands of people to manually train them to be "safe". You'd prefer smaller models but faster iterations.


They're going to have to give up control of the board to get more investment. No investor wants these loose cannons in charge of their investments.


> No investor wants these loose cannons in charge of their investments.

The board just proved it will stand by the company's core values.


If Ilya is there many will. If Karpathy stays many more. If Alec Radford stays then ...


I agree, any potential hire who has the choice between OpenAI and the new team at MSFT will now choose the latter. And a lot of the current team will follow as well. This is probably the end of OpenAI. Can't say I'm too sad, finally a chance to erase that misleading name from history.


Do leading AI researchers at Google/Meta/OpenAI/Anthropic/HuggingFace want to work at Microsoft?


Yes, for most AI researchers the umbrella organization (or university) doesn't matter nearly as much as the specific lab. These people are not going to work at Microsoft, they are going to work at whatever that new org is going to be called, and that org is going to have a pretty high status.


It's really telling of US tech culture how AI hype quickly turned from "Open" and "we're doing it for humanity" into a mega-corp cash-grab *show.

I understand what money does to principles, but this is comical.


> I understand what money does to principles,

That's kind of the point, we all do. What is harder to understand are the low stakes whims of academics bickering over their fiefdoms.

This move is bringing the incentives back to a normal and understood paradigm. And as a user of AI, it will likely lead to better, quicker, and less hamstrung products, which should be to our benefit.


All parties involved are already millionaires or more. It gets even more comical.


What’s ironic is how backwards people here have the narrative. Not sure you’re fully aware of what happened at OpenAI.

The “Open” types, ironically, wanted to keep LLMs hidden away from the public (something something religious AGI hysteria). These are the people who think they know better than you, and that we should centralize control with them for our own safety (see also, communism).

The evil profit motive you’re complaining about, is what democratized this tech and brought it to the masses in a form that is useful to them.

The “cash grab show” is the only incentive that has been proven to make people do useful things for the masses. Otherwise, it’s just way too tempting to hide in ivory towers and spend your days fantasizing about philosophical nonsense.


"Open"AI indeed was, and is, ironic, but in reality, MS acquisition of Altman and co is not going to change anything for anybody besides a bunch of California socialites. Not sure what sort of democratisation you are referring to, but I can bet my firstborn that whatever product MS develops will be just as open as GPT4.


Yeah it's terrible how many resources that pivot has brought in to help advance the field. If only the US were more like Europe.


In 1990 Microsoft hired all of the important talent from Borland who up until that point had been outpacing them in terms of product development.

We got Access, Visual Studio, and .Net / C# as a direct result.

Borland faded into obscurity.

Hard not to feel like there will be a parallel here.


Microsoft also acquired LinkedIn and Github.

Both of which have been run as largely separate entities.


Yep, if you wanted to move to MSFT from LinkedIn or vice versa, you needed to re-interview, although finding a job req and an internal hiring manager was easier.


That’s true for any internal transfer as far as I know, I re-interviewed for my current team and that was a transfer from within MS.


Yep. LinkedIn has a completely different pay scale and perks than regular Microsoft employees.


Is it true for Zenimax and Mojang as well?


Coming soon : Activision


That was 33 years ago. What's the point of lingering on a potential parallel there? If it does go that way, how could you call it anything but a coincidence considering all the counter examples in Microsoft's history?


Satya's 5D chess is to save the world from AGI by turning whatever OpenAI had into crap?


Anders Hejlsberg didn't move to MS until 1996...


Sorry I should have phrased that as starting in 1990...

In 1990 they poached Brad Silverberg who then spent the next 7 years poaching all of Borland's top talent in the most prominent example of a competitive 'brain drain' strategy that I'm aware of.

https://www.sfgate.com/business/article/Borland-Says-Microso...


Fair point!


>Anders Hejlsberg didn't move to MS until 1996...

The point of the comment wasn't the specific date, it was the impact of hiring a competitor's team AND equipping that team to be even more impactful.


I worked with Delphi for many years, and from what I saw Borland dug their own grave. I did commercial work with Turbo Pascal last century, and I can say that even that far back Borland was run horribly. And they've gone a long way downhill since 2000 (I have a friend still using Delphi, and Embarcadero is terrible). Microsoft with VB spanked Delphi 2 (a Borland highlight) back in the mid-90s.

I really think you don't know what you are talking about. Delphi 7 was released in 2002 and you were "in high school in the early 2000s". We all love a good narrative, but yours has no basis.


Sam Altman and Greg Brockman have very similar backgrounds. They are both highly intelligent, both dropped out of college and lack any advanced education. They are classic Silicon Valley entrepreneurs: well-networked, great at fund raising, maybe even good managers. Potential contribution to advanced AI research: zero.

What, exactly, does Microsoft want to do with them? Best guess: Use their connections and reputation to poach talent from OpenAI.


This is such a weird take. Sam and Greg were at OpenAI for 8 years! Why is it assumed that their “potential contribution to advanced AI research” is contingent on their having spent (no/more/less) time at academic institutions decades ago?


Yeah, but Greg is not a community college dropout; he dropped out of both MIT and Harvard.

Someone who could qualify to go to both Harvard and MIT will be better at anything they set their mind to than the regular grad with four years of education would be after those four years.


I too would be salty to see people who didn't fork over $120k to have professors dispense freely available information be successful.


Go read the GPT-3 and GPT-4 tech reports and see for yourself.


Wow. This sounds like an amazing coup for Microsoft. They are getting Sam Altman and Greg Brockman, "together with colleagues". With this team, they will be able to rebuild GPT in-house. I fear that with this development, the commercial side of the OpenAI is pretty much gone. Which sounds like what the OpenAI board has intended to do all along. I think this will also spark a big exodus from OpenAI.

I am also curious about how OpenAI board is planning to raise the money for non-profit for further scaling. I don't think it would be that easy now.

An internet meme from Lord of the Rings comes to mind: "One does not simply fire Sam Altman."


Presumably they still have the deal with MS and will continue to receive funding as long as they meet their obligations? (Of course no clue what they are..)


Presumably yes, depending on what's in the legal documents. I am guessing that Microsoft will transition slowly, in order to provide continuity to the Azure customers. But OpenAI will not "thrive" from this deal anymore. Partnerships tend to only work when both sides are interested, regardless of the agreements. If OpenAI needs several more $billion to train GPT-5, this will get sabotaged.

The scaling party is basically over. Or rather, it has moved to Redmond.


This is where the other big tech giants need to move. MSFT provides nothing extra that Google/Amazon/Meta cannot match. Make it multi-platform and make it more open source.


This looks like a short-term compromise to defend MSFT before the market opens. A number of members will follow Sam and Greg, but I doubt it will be the majority, given it's yet another big tech rather than a brand new startup. And what would be their roles? Yet another VP/SVP? Those folks are not really AI guys and don't fit very nicely into all the bureaucracy rampant in big tech. Satya will of course try to give them as much room as possible, but it will be considerably smaller and slower thanks to all those corporate politics and external regulations.


Satya just tweeted saying that Sam Altman would be the CEO of this new group.


Can you share the tweet?



Yep, feels like a desperate attempt by Nadella to restore confidence in him and Microsoft’s massive investment; news like this can easily change on a dime.


Microsoft could literally burn $10B and not even notice it. They just wrapped up spending $70B in a smaller division (gaming). I don't think this has anything to do with saving face for investors.


I think people underestimate how much of a company’s value is in their key leadership, select talent, and technology. When a company is acquired those are typically the reasons to do so other than pure revenue acquisition. Microsoft already has their technology, now has the key leadership, and will soon have the select talent.

Satya wins, OpenAI is walking dead.


Satya is really the GOAT here.

He takes advantage of this situation and brings OpenAI's assets under his control more than ever.

He is a genius, scary even.


Pirate more like. He's not just poaching "talent" he has likely stolen IP and will hope to destroy OpenAI in court costs. Microsoft is a terrible company and I hope this backfires on them.


> When a company is acquired those are typically the reasons to do so other than pure revenue acquisition

Large companies are primarily purchased for their moats


Satya wins

Sam wins

Ilya and the board continue to look like fools


This might turn out to be a lot more stable structure long term: the commercialization of AI under Microsoft's brand, with Microsoft's resources, and the deep research into advanced AI under OpenAI. This could shield the research division of OpenAI from undue pressure from the product side, in a way that it probably couldn't when everything was under one roof.


I find your theory more plausible. Microsoft, Google and Amazon were lagging in AI. You can simply look at their voice assistants for an example. That's why they started investing billions in OpenAI and other think tanks in this space. Now capital turns things around to be as they should (from their perspective) and reacquires control.

Anthropic is probably next in line.


MS/G/A didn’t put this into voice assistants, not because they don’t have it, but because it doesn’t scale to fit the commercials at the moment. Google invented transformers and DeepMind had GPT-scale LLMs at least a year before ChatGPT came out.

Altman just forced everyone’s hand by pushing it out into the world at cost.


"just" is an understatement.

My friends and family had an awful opinion of AI in general because voice assistants were sold to them as the best example of AI. That changed with ChatGPT.

Google invented really useful AI but failed to deliver. OpenAI did so in record time. Now it's Google that's playing catching up with the technology they invented themselves, ironically.

But my comment applies more to Microsoft and Amazon, tbh.


This wasn’t a result of product genius in this case. OAI just didn’t have the regulatory and PR oversight that big tech has; I know for a fact Meta and Google had ChatGPT-equivalent models ready but couldn’t launch them, as they’d get rightfully berated for the model being racist or hallucinating. Things OpenAI avoided because it’s a startup non-profit.

And OAI delivered with enormous per-user cost that doesn’t scale, in an app that is a showcase and doesn’t really have latency requirements, as people understand it’s a prototype.

And the vast majority of people play with ChatGPT; they don’t use it for anything useful. Incidental examples of friends and family of tech workers aside.


Ugh. I’m not keen on AGI being an eventual Microsoft product, or, after this circus, even a product of the hangers-on at OpenAI. Hope it’s still decades off and this all turns out to be a silly sideshow footnote.


Satya just pulled the best move of 2023. He gets the hot names, whoever will follow Sam and Greg, to work in a startup-like cocoon. Throws money at them, which is peanuts to Microsoft, both stock to keep them and unlimited compute. Sam wants to do custom chips? Do it with Microsoft's money, size, and clout. All doors are open. The new Maia 100 chip can soon be followed by a Sam200. Brings innovation and makes the company more attractive to future hires. Who cares if Sam leaves after 2 years? Maybe that was part of the discussions; Satya won't be around forever and doesn't really have a good all-round replacement in-house. MSFT stock meanwhile goes from sideways movement to another all-time high and on to 400. Genius move; I would never have thought Sam would accept such an arrangement, but it makes sense.


The only shocking thing about this whole episode was how many people in the media failed to understand just how much power this board had.

They were, at no time, under any obligation to do anything except what they wanted, and no one could force them otherwise. They held all the cards. The tech media instead ran with gossip supplied by VCs and printed it as news. They were all going to resign 8 hours after their decision. Really? Mass resignations were coming. Really? OpenAI is a 700-person company; at the time of writing, 3 people have resigned in solidarity with Altman and Brockman.

Sam had no leverage. Microsoft and other investors had little leverage. Reading the news you’d think otherwise.


No one would really resign until they had another branch to grab onto. You wouldn't expect anyone to resign this weekend. It would happen in the months afterwards.


If talent starts leaving OpenAI to join Sam at Microsoft, what does OpenAI have left? If investors decide not to give money to OpenAI because its leadership comes across as in over their heads, how will they continue running?

That may have been the leverage Microsoft and other investors tried to use, but OpenAI leadership thinks that won't happen. We'll see what unfolds.


> If talent starts leaving OpenAI and join Sam at Microsoft, what does OpenAI have left?

This is a real possibility and something I'm sure Ilya and the board thought through. Here's my guess:

- There's been a culture rift within OpenAI as it scaled up its hiring. The people who have joined may not have all been mission driven and shared the same values. They may have been there because of the valuation and attention the company was receiving. These people will leave and join Altman or another company. This is seen as a net good by the board.

- There's always been a sect of researchers who were suspicious of OpenAI because of its odd governance structure and commercialization. These people now have clear evidence that the company stands for what it states and are MORE likely to join. This is what the board wants.

> If investors decide not to give money to OpenAI because their leadership comes across as over their heads, how will they continue running?

I don't think this is an actual problem. Anthropic can secure funding just fine. Emmett is an ex-Amazon / AWS executive. There's a possibility that AWS will be the partner providing compute in exchange for OpenAI's models being exclusively offered as part of Amazon Bedrock, for example, if this issue with Microsoft festers. I know Microsoft sees this as a clear warning: we can go to AWS if you push us too hard here.

I don't see how the partnership with MSFT isn't dissolved in some way in the coming week as Altman and co. openly try to poach OpenAI talent. And again, maybe dissolving the MSFT ties was something the board wanted. It's hard to imagine they didn't think it was a possibility given the way they handled announcing this on Friday, and it's hard to imagine it wasn't intentional.


Yup. It all reads like a well executed psyop — or one could think so if one was paranoid.


This actually seems like a decent compromise. Sam and Greg can retain velocity on the product side without having to spin up a whole new operation in direct competition with their old levers of power, and Ilya + co can remain in possession of the keys to the kingdom.


Maybe I'm reading too much into it, but to me it is framed as if they won't be working on GPT-based products, but on research.

The whole thing reads like this to me: "In hindsight, we should've done more due diligence before developing a hard dependency on an organization and its product. We are aware that this was a mistake. To combat this, we will do damage control and continue to work with OpenAI, while developing our in-house solution and ditching this hard dependency. Sam & Co. will reproduce this and it will be fully under our control. So rest assured dear investors."


How do you conduct research with sales people? Even if they manage to bring in researchers from OpenAI, the only gain here is Microsoft getting some of the researchers behind the products and/or product developers.


Ah yes, Greg Brockman, former CTO of Stripe (amongst other things)... a sales person.


Well, the same way a person with drive, discipline, and money but very little in the way of technical expertise can build a company.

Sometimes you need someone who can drive a project and recruit the right people for the project. That person does not always need to be a subject matter expert.


Who are these "sales people" you're referring to? Surely not Greg Brockman, one of the most talented engineers in the world.


> Greg Brockman, one of the most talented engineers in the world.

Can you help me understand how you came to that conclusion?


People who worked with him at OpenAI and Stripe.


He has technical skill, you don't need to oversell him. He's not Ilya.


Except they only had AI model velocity, not product velocity. The user-side implementation of ChatGPT is actually quite below what you would expect given their AI superiority. So the parts that Sam & Greg should be responsible for are actually not great.


Sam and Greg were responsible for everything including building the company, deciding on strategy, raising funding, hiring most of the team, coordinating the research, building the partnership with Microsoft and acquiring the huge array of enterprise customers.

To act like they were just responsible for the "UI parts" is ridiculous.


I'm the first to defend CEOs, and that's not a popular position to be in usually, believe me. But in this case, they ran an experiment and it blew up based on their model's superiority alone.

Product-wise, however, it's looking like good-enough AI is being commoditized on a timescale of weeks and days. They will be forced to compete on user experience and distribution against the likes of Meta. So far OpenAI has only managed to deliver additions that sound good on the surface but prove not to be sticky once the dust settles.

They have also been very dishonest. I remember Sam Altman saying he was surprised no one had built something like ChatGPT before them. Well... people tried, but 3rd parties were always playing catch-up because the APIs were waitlisted, censored, and nerfed.


a) Meta is not competing with OpenAI nor has any plans to.

b) AI is only being commoditised at the low end, for models that can be trained by ordinary people. At the high end, only companies like Microsoft, Google, etc. can compete. And Sam was brilliant enough to lock in Microsoft early.

c) What was stopping 3rd parties from building a ChatGPT was the out-of-reach training costs, not access to APIs, which didn't even exist at the time.


You're wrong about A & C but B is more nuanced.

a) Meta is training and releasing cutting-edge LLMs. When they manage to get the costs down, everyone and their grandma is going to have Meta's AI on their phone through Facebook, Instagram, or WhatsApp.

b) Commoditization is actually mostly happening because companies (not individuals) are training the models. But that's enough for commoditization to occur over time, even for higher-end models. And if we get into superintelligence territory, it doesn't even matter; the world will be much different.

c) APIs for GPT were first teased as early as 2020, with broader access in 2021. They got implemented into 3rd-party products, but the developer experience of getting access was quite hostile early on. Chat-style APIs only became available after they were featured in ChatGPT. So Sam feigning surprise that others didn't create something like it sooner with their APIs is not honest.


It's typical HN/engineer brain to discount the CEO and other "non-technical" staff as leeches.


If I recall correctly, Mira Murati was actually the person responsible for productizing GPT into a chatbot. Prior to that, OpenAI's plan was just to build models and sell API access until they reached AGI.

I know there's a lot of talk about Ilya, but if Sam poaches Mira (which seems likely at this point), I think OpenAI will struggle to build things people actually want, and will go back to being an R&D lab.


This is kind of true. I think for programming even Code Llama or GPT-3.5 is more than enough, and GPT-4 is very nice, but what is missing is a good developer experience, and copy-pasting into the chat window is not that.


Just curious, what do you think is bad about the user-side experience of ChatGPT? It seems pretty slick to me, and I use it most days.


Not being able to define instructions per "chat" window (or having some sort of profile) is something I find extremely annoying.


That's exactly what the recently released GPT Builder does for you!


I wonder if they'll get bored working on Copilot in PowerPoint.


Ilya and co. are going to get orphaned. There's no point to the talent they have if they intend to slow things down, so it's not like they'll remain competitive. The capacity that MSFT was going to sell to OpenAI will go to the internal team.


Maybe they want it that way and want to move on from all the LLM hype that was distracting them from their main charter of pushing the boundaries of AI research? If so, then they succeeded handsomely.


"Don't get distracted by the research which actually produces useful things"

