All: this madness makes our server strain too. Sorry! Nobody will be happier than I when this bottleneck (edit: the one in our code—not the world) is a thing of the past.
I've turned down the page size so everyone can see the threads, but you'll have to click through the More links at the bottom of the page to read all the comments, or like this:
https://news.ycombinator.com/item?id=38344196&p=2
https://news.ycombinator.com/item?id=38344196&p=3
https://news.ycombinator.com/item?id=38344196&p=4
etc...
Have to give it to Satya. There's a thin possibility that Microsoft would have to write off its whole $10B (or more?) investment in OpenAI, but that isn't Satya's focus. The focus is on what he can do next. Maybe recruit the most formidable AI team in the world, removed from the shackles of an awkward non-profit owning a for-profit company? Give enough (cash) incentives and most of OpenAI's employees would have no qualms about following Sam and Greg. It will take time for sure, but Microsoft can now capture an even bigger slice of THE FUTURE than was possible with the OpenAI investment.
It's easy to cherry-pick examples from an era where Microsoft wasn't the most successful. The current leadership seems competent and the stock growth of the company reflects that.
"... In the two years since the acquisition announcement, GitHub has reported a 41% increase in status page incidents. Furthermore, there has been a 97% increase in incident minutes, compared to the two years prior to the announcement..."
Speaking as someone who uses GitHub multiple times a day, I think I've only actually noticed one or two outages in the past year. On the other hand, I've used several of the beta features that have come out, including Copilot and the evolving GitHub Actions.
That's quite a goalpost shift. The original claim was that Microsoft ruins companies. Your rebuttal to LinkedIn as a counterexample is that they haven't made it better. This does not support the claim that they've ruined it.
> GitHub has been significantly less reliable since Microsoft bought it and Actions has been a disaster of an experience.
Not unreliable enough to be a problem though, and Actions seems to be a decent experience for plenty of people.
The simple fact with GitHub is that it is _the_ primary place to look for, or post, open source code, and it is the go-to platform for the majority of companies looking for a solution to source code hosting.
Your comment about LinkedIn is true, but where is the nearest competition in its space?
MS is destined to be a substantially better owner than the previous ones. You're right that it may be too early to predict the financial success, but I am very happy to see MS as the new owner of Activision, no matter what happens.
> only to enable GitHub to do greater things, without disrupting user experience?
Excuse you? Greater where? GitHub was an amazing revolution, unique of its kind. Microsoft didn't kill it but didn't make it even 1% better for the users, just turned it into a cash cow. LinkedIn is currently a PoS.
I think it can be argued that giving free private repos to users is a 1% increase. Or what about private vulnerability reporting for open source projects? And so on. GitHub has gotten a lot of new free functionality since Microsoft bought it. It sounds like you just have not been paying attention.
Edit: Nevermind, I see you refer to Microsoft as M$. That really says it all.
Despite porting it from Java to C++, Bedrock (Microsoft's rewrite of Minecraft) somehow has worse performance and more bugs than vanilla Minecraft. (Also, a bunch of it is somehow in JavaScript?)
The problem with arguing with people who cherry-pick is that even after you provide examples, they will generally respond by cherry-picking your examples instead of acknowledging the actual point you made.
Except the founders weren't included in the Skype deal. Microsoft has two of OpenAI's founders, and they're highly motivated to show that OpenAI is nothing without them, which in time it may prove to be. I eagerly await Sam and Greg shipping a product in, say, two years' time.
Meanwhile Microsoft wins if OpenAI stays dominant and wins even bigger if Sam and Greg prevail. Some day soon they may teach this story at Harvard Business School.
- Github Purchase, Linkedin Purchase
- Aligned Microsoft towards "openness" culturally
- VS Code + Typescript
- Partnership with OpenAI, which might make Bing actually be used
might be missing some more, but Satya is like an S-tier CEO, compared to Sundar, who doesn't seem very good at his role.
Did MS do anything with Linux before Satya? At present I believe that the bulk of their Azure hosts are running Linux - their own distro. And AFAIK it is successful.
Teams is dominating only because they bundle it with their other services. In my personal opinion, as someone who loved Skype before Microsoft got involved with it, Teams is complete trash, and impossible to work with in a business environment.
> Teams is dominating only because they bundle it with their other services.
This is a win from Microsoft's perspective. They don't have to have the best group messenger around, but having a significant office product being dominated by another company would be a massive risk to Microsoft, and Teams has prevented that.
Not every swing is going to be a home run. Billion dollar investments sound like a lot but not for companies of this size. They are small to medium sized bets.
What’s the evidence behind “they got the best part of their team?”
It seems to me roughly all of the value of OpenAI’s products is in the model itself and presumably the supporting infrastructure, neither of which seem like they’re going to MSFT (yet?).
Eh, seems like an ambitious read, and obviously if they actually wanted to give Sam leverage it would’ve required saying “I will leave if he’s not reinstated,” not a more generic statement of solidarity.
Yeah this is much much less ambiguous than the Twitter things. At least answers my question of, "is there actually that much support for Altman?" Now the second question, much more important and still ambiguous IMO, is whether these people will actually resign to do this. The letter just says they "may" resign, which leaves really the last thing you want in an ultimatum like this: ambiguity.
I really don't understand the argument here: why are Altman and Brockman the most formidable AI team? I would wager a substantial sum that Altman has not touched anything technical (let alone related to AI) in a very long time. He certainly showed he is a very good operator, networker and executor, but that doesn't give you the technical expertise to build state-of-the-art AI.
If he manages to get a significant number of the OpenAI engineers to jump ship, maybe, but even for those who are largely motivated by money, how is MS going to offer the same opportunity as the equity they joined OpenAI for? Are they going to pay them >$1M salaries?
> I really don't understand the argument here, why are Altman and Brockman the most formidable AI team?
Recruiting. At the end of the day, that's the most important job a CEO has. If they can recruit the best AI people, they're the most formidable AI team.
> Are they going to pay them >$1M salaries?
I would wager very heavily that they are. My guess is Satya more or less promised Sam that he'd match comp for anybody who wants to leave OpenAI.
If I were an SE/MLE at OpenAI, and I had a choice between the nonprofit OpenAI and MS, I'd follow Sam to MS. This is assuming I had profit-sharing contracts in place.
There's a current fashion for tech "leaders" (bosses, really) to try to imbue in their staff a kind of cultish belief in the company and its leader. Personally, I find these efforts extremely offputting. I'm thinking of the kind of saccharine corporate presentations from people like Adam Neumann and Elizabeth Holmes; it evidently appeals to some kinds of people, but I run a mile from cults.
My guess is that a lot of the people that will follow Sam and Greg are that kind of cult-follower.
"The cynicism that regards hero worship as comical is always shadowed by a sense of physical inferiority." - Yukio Mishima. You reveal more here about your own psychology than those who have a mission that they believe in and are passionate about. It's always easy to criticise from the sidelines.
I don't get it either. It's akin to claiming that by hiring an Oracle executive you can build the best database tech. A little stretch, but still. Chances are I'll never understand how things like that work, because there must be a few truths about humans my mind resists believing.
My uneducated guess is that OpenAI really screwed up the PR part and Microsoft's current claims are more on the overall damage control / fire suppression side.
I'm not sure I follow this chain of arguments, which I hear often. So, a technology becomes possible, that has the potential to massively disrupt social order - while being insanely profitable to those who employ it. The knowledge is already out there in scientific journals, or if it's not, it can be grokked via corporate espionage or paying huge salaries to the employees of OpenAI or whoever else has it.
What exactly can a foundation in charge of OpenAI do to prevent this unethical use of the technology? If OpenAI refuses to put it to some unethical use, what prevents other, for-profit enterprises from doing so? How can private actors stop this without government regulation?
Sounds like Truman's apocryphal "the Russians will never have the bomb". Well, they did, just 4 years later.
I think the last couple decades have demonstrated the dangers of corporate leadership beholden to the whims of shareholders. Jack Welch-style management, where the quarterly numbers always go up at the expense of the employee, the company, and the customer, has proven to be great at building a house of cards that stands just long enough for a select few to make fortunes before collapsing. In the case of companies like GE or Boeing, the fallout is the collapse of the company or a "few" hundred people losing their lives in plane crashes. In the case of AI, the potential for societal-level destructive consequences is higher.
A non-profit is not by any means guaranteed to avoid the dangers of AI. But at a minimum it will avoid the greed-driven myopia that seems to be the default when companies are beholden to Wall Street shareholders.
I don't think cherry-picked examples mean much. But even so, you don't seem to be answering the question, which was "how will being a non-profit stop other people behaving unethically?"
Look up the reason OpenAI was founded. The idea was exactly that someone would get there first, and it better be an entity with beneficial goals. So they set it up to advance the field - which they have been doing successfully - while having a strict charter that would ensure alignment with humanity (aka prevent it from becoming a profit-driven enterprise).
In your world yes, but in another, nonprofits are able to do research that the government should not, cannot, or is too inefficient to ever get working.
I'm no embarrassed billionaire, but there is a place for both.
Isn't Microsoft in breach of contract here? Not by the letter (the parties hadn't foreseen such an event, so there won't be anything about this explicitly in the contract). But one could argue that MS isn't acting in good faith and is acting counter to the purpose of the agreement with OpenAI.
The argument would go something like this:
MS were contractually obliged to assist OpenAI in their mission. OpenAI fired Altman for what they say is hindering their mission. If MS now hires Altman and gives him the tools he needs, MS is positioning itself as an opponent to OpenAI and its mission.
I am sure Sutskever knows OpenAI as an economically competitive entity has been living on borrowed time. This is a global arms race and this tech will bleed out everywhere. Implementing LLMs is not rocket science per se, and there are multiple places in the world this work can be done.
The bottleneck right now is mostly compute, I think, and OpenAI does not have the resources or expertise to alleviate that bottleneck on a timescale that can save them.
Since the board was never clear about what Altman did, you could flip the parties and your breach-of-contract argument holds about as much water. Plus MS can resort to the playground "they started it" argument.
For Microsoft, a 2% loss in stock value on this news on Friday was $60 billion, so writing off $10B and giving another $50B to form a team is still a great deal.
For Sam, he got more than what he was asking for and a better prospect of becoming CEO of Microsoft when Satya leaves. Satya led the cloud division, which was the industry growth market at that time, before becoming CEO, and now Sam is leading the AI division, the next growth market.
Ilya still lost in all of this: he managed to get back the keys of a city from Sam, who now got the keys to the whole country. Eventually Sam will pull everyone out of the city into the rest of his country. Microsoft just needs a few OpenAI employees to join them. They just need data and GPUs; OpenAI has reached its limits for getting more data and was begging for more private data, while Microsoft holds the world's data. They will just give a few offers to businesses, or free Microsoft products, in return for using their data, or use their own. I think it's the end for OpenAI.
The $10 billion was a potential investment. They transfer that in tranches, so a lot of it is still in MS's bank. They already have access to GPT-3/4/Turbo + DALL-E 2/3. Plus, with its hordes of lawyers, it will be an uphill battle for OpenAI to make MS lose.
Sure, they can, but that would be against all the safe-alignment values they are pushing. They'll lose billions in current and potential investment and will spend their lives in lawsuits. Also, the government may not like giving away cutting-edge tech to China.
Satya simply had to move quickly to restore shareholder confidence. I'm not convinced that it's actually desirable for Microsoft to be fully in the driving seat. Hopefully the new division will have autonomy.
Microsoft will not have actually paid $10B as a single commitment; in fact, the financials of OpenAI appear alarming, judging from recent web chatter. OpenAI is possibly close to collapse financially as well as organizationally.
Whatever Satya does will be aimed at isolating Microsoft and its roadmap from that, his job is actually also on the line for this debacle.
The OpenAI board have ruined their credibility and organization.
Agreed, Satya is a first-rate executive; other than Gwynne Shotwell at SpaceX, I can't really think of anyone in the same league.
There was a lot of discussion on HN the past few days regarding the importance (or lack thereof) of a CEO to an organization. It may be the case that most executives are interchangeable and attributing success to them is not merited, but in the case of the aforementioned, I think it is merited.
This whole weekend will probably be a case study in both Corporate Governance (Microsoft may look bad here for not anticipating the problem) and Negotiation (a masterclass by Satya: gave Ilya what he wanted and got most of OpenAI's commercial potential anyway).
As much as I dislike Microsoft: they played this exactly right. No boardseat: no culpability or conflict of interest, catch the falling pieces and reposition themselves stronger. What makes you say they didn't anticipate the problem? If they had anticipated it I don't see what else they could have done without making themselves part of the problem.
1. When they invested in OpenAI it had a more mature board (in particular Reid Hoffman), and afterwards they lost a few members without replacing them. That was probably something Microsoft could have influenced without making themselves part of the problem.
2. They received a call one minute before the decision was made public. That shouldn't happen to a partner that owns 49% of the company you just fired a CEO from.
Yes, but both of those are not Microsoft's doing but the OpenAI board's doing. You don't just get to name someone to a board without the board agreeing to it, and normally this happens as a condition of, for instance, an investment or partnership.
Nadella was rightly furious about this, the tail wagged the dog there. And this isn't over yet: you can expect a lot of change on the OpenAI side.
Yes, that probably was a mistake, it should have come with more protections. But I haven't seen any documents on the governance other than what is in the media now and there is a fair chance that MS did have various protections but that the board simply ignored those.
Don't forget, MS has a board as well. One Satya reports to, the same way Sam reported to the OpenAI one. Potentially losing 10 billion is not something the board will just shrug off.
Yup back pats from the board to Satya. Only 10 billion to get their foot in the door at OpenAI and now they can ransack all their talent. How many billions would it cost to develop that independently? What a saving.
You seem to have missed the entire point of the comment you’re replying to.
The money was promised in tranches, and probably much of it in the form of spare Azure capacity. Microsoft did not hand OpenAI a $10B check.
Satya gives away something he had excess of, and gets 75% of the profits that result from its use, and half of the resulting company. Gives him an excuse to hoard Nvidia GPUs.
If it goes to the moon he’s way up. If it dies he’s down only a fraction of the $10B. If it meanders along his costs are somewhat offset, and presumably he can exit at some point.
If the board is unhappy about that they are idiots and should not be board members.
Absolutely no one could have predicted Sam being removed as CEO without anyone's knowledge until it happened.
But regardless, a $10B investment has yielded huge results for MS. MS is using OpenAI tech; they aren't dependent on the OpenAI API or infrastructure to provide their AI in every aspect of MS products.
That $10B investment has probably paid itself back already, and MS is leveraging AI faster than anyone else and has a stronger foothold on the market.
If the board can't look past what $10B got them, I wouldn't have faith in the board.
Given it's Microsoft we're talking about, it's more likely they use it to find new and novel ways to shove Edge, OneDrive, Teams and Bing down your throat whenever you use any of their products.
TBH we are living in the outcome of the $10B investment. Google is in a weaker position in search, with egg on their face. Microsoft appears (with or without ChatGPT) uniquely positioned to monopolize this new AI future we're heading into, with or without OpenAI as a company.
Yes, directly, the $10B investment in the company itself may be a write-off. But it's not just about that.
Microsoft got Copilot. They were first to establish the brand. OpenAI technologies let them do it. I don't know how much the Copilot brand cost, but right now, when you're thinking about AI-assisted programming, Copilot is the first thing that comes to mind. So they probably got something in return.
Not only GitHub Copilot: the general Copilot integrations announced at Ignite for Microsoft 365 and other apps mean a much deeper, full-on assistant integration for the whole ecosystem.
> For business and for the consumer. They can retire Bing search at this point, making it Microsoft Copilot for Web or something.
Nah it would make it too understandable. It's Microsoft, they'll just rename Bing to Cortana Series X 365. And they'll keep Cortana alive but as a totally different product.
I would say this is a better outcome for what remains of OpenAI.
A new startup would have created more of an exodus than Microsoft. Doubt many brilliant researchers would want to be employee number 945728123 at Microsoft when the market is theirs at this moment.
I did not say they don't have them, and it's precisely because they do have them that they are less likely to attract the kind of people that make a difference. Less room to move and less room to be distinguished. Case in point: these people did not join that renowned group in the first place but joined OpenAI, an obscure, not-renowned group, and I guarantee you it's not because MS was not interested.
> There's a thin possibility that Microsoft would have to write-off its whole $10B (or more?) investment in OpenAI
How so? I don't get the hype.
OpenAI trained truly groundbreaking models that were miles ahead of anything the world had seen before. Everything else was really just a sideshow. Their marketing efforts were, at best, average. They called their flagship product "ChatGPT", a term that might resonate with AI scientists but appears as a random string of letters to the average person. They had no mobile app for a long time. Their web app had some major bugs.
Maybe Sam Altman deserves credit for attracting talent and capital, I don't know. But it seems to me that OpenAI's success by and large hinges on their game-changing models. And by extension, the bulk of the credit goes to their AI research/tech teams.
I have the complete opposite perspective. Their initial api went live sometime late 2020. They have done a fantastic job scaling, releasing features while growing the business at a rate we have not seen many times before.
Maybe the next move is an open offer to any OpenAI employees to join Sam’s team at their current compensation or better.. call it the ‘treacherous 500’ or something.
I dunno man. Doing innovation from inside Microsoft might be more difficult than if they had just formed a new startup. Microsoft as a brand has the stench of mediocrity upon it. Large companies are where ideas and teams go to die, or just rest and vest.
Was GPT-4 a success due to the brilliance of OpenAI's tech team vs first-mover advantage and good GPU deals with MS? I might be missing something here, but to me nothing about this technology feels like rocket science (obviously, there is a lot of nuance, yada yada, but nothing that seems intractable). I have a strong suspicion that the reason Amazon, Google and so on are not particularly interested in building GPT-scale transformers is that they know they can do it anytime - they are just waiting for others to pave the path to actually good stuff.
>I have a strong suspicion that the reason Amazon, Google and so on are not particularly interested in building GPT-scale transformers is that they know they can do it anytime - they are just waiting for others to pave the path to actually good stuff.
Google has been hyping Gemini since the spring (and not delivering it).
Amazon at least wants to also be making money renting out H100-H200 instances in AWS, so they may be intentionally only using some of their hardware for themselves.
> possibility that Microsoft would have to write-off its whole $10B
There was an article that came out over the weekend that stated that only a small part of that $10B investment was in cash, the vast majority is cloud GPU credits, and that it has a long time horizon with only a relatively small fraction having been consumed to date. So, if MSFT were to develop their own GPT4 model in house over the next year or so they could in theory back out of their investment with most of it intact.
Depends on the term sheet behind that. That, and how MS is accounting for its minority stake in OpenAI. If they have to write off the value, it doesn't matter how they paid for it.
> recruit the most formidable AI team in the world, removed from the shackles of an awkward non-profit owning a for-profit company
This massively increases the odds we’ll see AI regulated. That isn’t what Altman et al intended with their national press tour—the goal was to talk up the tech. But it should be good in the long run.
I also assume there will be litigation about what Sam et al can bring with them, and what they cannot.
MS also has its own ML teams and is probably capable of replicating a lot of OpenAI without OpenAI.
Like some Googlers have mentioned - aside from GPU requirements, there isn't much else of a moat, since a lot of ML ideas are presented and debated relatively freely at NeurIPS, ICML and other places.
Choice? Are you framing this as though the whole situation didn't go pretty well toward msft's favor?
Now they get 40 percent of OpenAI talent and 50 percent of the for-profit OpenAI subsidiary.
Pretty sure when the market opens you'll see confirmation that they came out on top.
It's a win for everyone honestly. Anthropic split all over again but this time the progressives got pushed out vs the conservatives leaving voluntarily.
They couldn't keep nice under the tent. Now two tents.
Little diff because this time an investor with special privileges made a new special tent quick to bag talent.
Easy decision for MSFT. No talent to competitors. Small talent pool. The other big boys were already all over that. Salty bosses at other outfits. No poach for them. Satya too clever and brought the checkbook, plus already courted the cutest girls earlier for a different dance. Hell, he was assisting in the negotiation when the old dance got all rough and the jets started throwing hooks about safety and scale and bla bla, we all know the story.
Satya hunts with an elephant gun with one of those laser sights and the auto trigger that fires automatically when the crosshair goes over the target. RIP Sundar. 2 rounds for Satya. One more and I feel bad for Google... Naw... Couldn't feel bad for Google. Punchable outfit. They do punchable things. We all know it... I'm just saying it.
It's pretty naive IMO to think Google isn't going to come out with something that threatens OpenAI or Microsoft. It seems that "they didn't do it yet so they won't ever" is the majority opinion here, but they have a ton of advantages when they finally do.
What? I didn't say anything about the likelihood of competition to the state of the art.
You are imagining I fall in a crowd you've observed. Maintaining state of the art, of course, is a constant battle.
Google could be top dog in 2 weeks. Never insinuated otherwise. (Though I predict otherwise, if we're gonna speculate.)
It's not even relevant, because each big firm is specializing to a degree. Anthropic is going for context window and safety... Bard is all about Google priorities... Etc.
> most of OpenAI employees would have no qualms about following Sam and Greg. It will take time for sure
By all accounts, OpenAI is not a going concern without Azure. I could see Tesla acquiring the bankrupt shell for the publicity, but the worker bees seem to be more keen on their current leader (as of last week) than their prior leader. OpenAI ends with a single owner.
However it's a nice way to deal with the whole "open" AI issue: first you create a non-profit to create open AI systems; then when you hit a marketable success it turns into a "capped profit"; and finally, all the people from that capped profit leave en masse and transfer their acquired know how to a for-profit company.
This might have been a reasonable and workable solution for all parties involved.
Context:
---------
1.1/ Ilya Sutskever and the Board do not agree with Sam Altman's vision of a) too-fast commercialization of OpenAI AND/OR b) too-fast progression to the GPT-5 level
1.2/ Sam Altman thinks fast iteration and commercialization are needed in order to make OpenAI financially viable, as it is burning too much cash, and to stay ahead of the competition
1.3/ Microsoft, after investing $10+ billion, does not want this fight to slow the progress of AI commercialization and fall behind Google AI etc.
a workable solution:
--------------------
2.1/ @sama @gdb form a new AI company, let us call it e/acc Inc.
2.2/ e/acc Inc. raises $3 billion as a SAFE instrument from VCs who believed in Sam Altman's vision.
2.3/ OpenAI and e/acc Inc. reach an agreement such that:
a) GPT-4 IP is transferred to e/acc Inc.; this IP transfer is valued as an $8 billion SAFE instrument investment from OpenAI into e/acc Inc.
b) Microsoft's existing 49% share in OpenAI is transferred to e/acc Inc., such that Microsoft owns 49% of e/acc Inc.
c) the resulting "lean and pure non-profit OpenAI" with Ilya Sutskever and the Board can steer AI progress as they wish; their stake in e/acc Inc. will act as a funding source to cover their future research costs.
d) employees can move from OpenAI to e/acc Inc. as they wish, with no anti-poaching lawsuits from OpenAI.
It's more than that, OpenAI had many people aligned with the decel agenda, MSFT managed to take the accel leadership and likely their supporters. Does anyone know any large AI competitors that don't have a big decel contingent? Also interesting that META took the opportunity to close one of their decel departments on Saturday.
Really? These two did not do the technical work, but they hired, managed, and fundraised.
They won't necessarily be able to attract similar technical talent because they no longer have the open non-profit mission nor the lottery-ticket startup PPO shares.
Working on AI at Microsoft was always an option even before they were hired; not sure if they tip the scale.
> There's a thin possibility that Microsoft would have to write-off its whole $10B (or more?) investment in OpenAI
Hiring Altman makes sure that MSFT is still relevant to the whole Altman/OpenAI deal, not just a part of it. Hiring Altman thus decreases the possibility of having to write off its investment.
> Microsoft would have to write-off its whole $10B (or more?) investment in OpenAI
Not sure why you didn't research before saying that! It was $10B committed, not a cash handover of that amount. Also, the majority of that is Azure credits.
You do realize that Microsoft uses OpenAI IP for all of its AI products, of which there are at least two dozen that they released this year. In what universe do you make the connection that they would write it off and go to a different, inferior and less reliable model provider? It would never happen.
>>There's a thin possibility that Microsoft would have to write-off its whole $10B (or more?) investment in OpenAI, but that isn't Satya's focus. The focus is on what he can do next. Maybe, recruit the most formidable AI team in the world, removed from the shackles...
That's a slightly flamboyant reading.. but I agree with the gist.
A slim chance of a total write-off... that was always the case. This decision does not affect it much. The place in the risk model where most of the action happens is in less dramatic effects on more likely bands of the probability curve.
MSFT cannot be kicked off the team. They still have all of the rights to their OpenAI investment no matter who the CEO is.
Meanwhile, Microsoft is clearly competing with, participating in, and doing business with OpenAI. The hierarchy of paradigms is flexible... Competing appears to have won.
I agree that direct financial returns are the lesser part of the investment case for MSFT... and the other participants. That's pretty much standard in consortium-like ventures.
At the base level, OpenAI's IP is still largely science, unpatentable know-how and key people. MSFT have some access (I assume) to OpenAI's defendable IP via their participation in the consortium, or 49% ownership of the for-profit entity. Meanwhile, OpenAI is not so far ahead that pacing them from a dead start is impossible.
I also agree, that this represents a decision to launch ahead aggressively in the generative AI space.
In the latter 2000s, Google had the competence, technology, resources and momentum to smash anyone else on anything World Wide Web.
They won all the "races." Google have never been good at turning wins into businesses, but they did acquire the wins handily. Microsoft wants to be that for the 2020s.
Able to replicate everything, for the new paradigm OpenAI's achievements probably represent.
The AI spreadsheet. The LLM email client. GPT search. Autobot jira. Literally and proverbially.
At least in theory... Microsoft is or will be in a position to start executing on all of these.
Sama, if he's actually motivated to do this... he's pretty much the ideal person on planet Earth for that task.
I'm sure it takes a lot to motivate him. OTOH, CEO of Microsoft is a realistic prize if he wins this game. The man is basically Microsoft the person. I mean that as a compliment... sort of.
One way or another, I expect that implementing OpenAI-ish models in applications is about to commence.
Companies have been pleading for chatbot customer support for years. They may get it soon, but so will the customers. That makes for a whole new thing in the place where customer support used to exist. At least, that is the bull case.
That said, I have said a lot. All speculative. And probabilistic, even where my speculations are correct. These are not really predictions. I'm chewing the cud.
All of the naysayers here seem convinced this is Altman and Microsoft looking to destroy OpenAI.
Normally I am the cynic but this time I’m seeing a potential win-win here. Altman uses his talent to recruit and drive forward a brilliant product focused AI. OpenAI gets to refocus on deep research and safety.
Put aside cynicism and consider Nadella is looking to create the best of all worlds for all parties. This might just be it.
All of the product focused engineering peeps have a great place to flock to. Those who believe in the original charter of OpenAI can get back to work on the things that brought them to the company in the first place.
Big props to Nadella. He also heads off a bloodbath in the market tomorrow. So big props to Altman too for his loyalty. By backing MS instead of starting something brand new he is showing massive support for Nadella.
Reading the statement, I am doubtful that Microsoft and OpenAI can continue their business relationship. I think the most aggressive part of this is the "[they will be joining] together with colleagues" clause. He is basically openly poaching the employees of a company with which he supposedly has a very close cooperation. This situation seems especially difficult since Microsoft basically houses all of OpenAI's infrastructure. How can they continue a trust-based relationship like this?
In the end it’s all about business, and it’s not in Microsoft’s interest to destroy OpenAI. It’s in Microsoft’s interest to keep the relationship warm, because it’s basically two different philosophies that are at odds with each other, one of which is now being housed under Microsoft R&D.
For all we know, OpenAI may actually achieve AGI, and Microsoft will still want a front row seat in case that happens.
> He is basically openly poaching the employees of a company that he supposedly has a very close cooperation with
Not doing that would be participating in illegal wage suppression. I'm not sure how following the law means OpenAI and MSFT can't continue a business relationship.
I know I’m not qualified to make that observation, but what exactly makes you think you are? Can you share what information you’re using to make such a confident determination?
My simple take would be the credits for GPT-3.5/GPT-4/GPT-5. The key engineers were part of those that have seemingly moved to Microsoft. I personally think Ilya is brilliant. I absolutely don't think he's the _sole_ brilliant mind behind OpenAI. He wasn't even one of the founders. He's a very brilliant and powerful mind and likely will be critical in the breakthroughs that lead to AGI. That said, AGI feels like one of those "way off in the distance" ideas that might be 5, 10, or 100 years away. I tend to think that GPT-x is several orders of magnitude from AGI and this drama was silly and unneeded. GPT-5/6/7/8 aren't likely to destroy the world.
Agreed, I think this is an awesome outcome. We now have an extremely capable AI product organization in-house at each of Microsoft, Meta, and Google, and a couple strong research-oriented organizations in Anthropic and OpenAI. This sounds like a recipe for a thriving competitive industry to me.
I wonder how this will all work out in the end (and the excitement around all of this is a little reminiscent of AOL buying Time Warner).
For one, I'm not sure Sam Altman will tolerate MS bureaucracy for very long.
But secondly, the new MS-AI entity presumably can't just take from OpenAI what they built there; they need to make it again.
This takes a lot of resources (which MS has) but also a lot of time to provide feedback to the models; also, copyright issues regarding source materials are more sensitive today, and people are more attuned to them: Microsoft will have a harder time playing fast and loose with that today than OpenAI did 8 years ago.
Or, Sam at MS becomes OpenAI's biggest customer? But in that case, what are all those researchers and top scientists who followed him there going to do?
Altman reporting to Nadella is certainly going to be a fascinating political struggle!
Part of me thinks that Nadella, having already demonstrated his mastery over all his competitor CEOs with one deft move after another over the past few years, took this on because he needed a new challenge.
I'd wager Altman will either get sidelined and pushed out, or become Nadella's successor, over the course of the next decade or so.
I think you overestimate the technical part. Just speculating (no inside knowledge, not an expert), but I would assume that the models are pretty "easy" and can be coded in a few days. There are for sure some tweaks to the standard transformer architecture, but I guess the tweaks are well known to Sam and co.
The dataset is more challenging, but here MSFT can help - since they have Bing and GitHub as well. So they might be able to take a few shortcuts here.
The most time-consuming part is compute, but here again MSFT has the compute.
Will they beat GPT-4 in a year? Guess not. But they will come very close to it, and maybe it would not matter that much if you focus on the product.
What I meant is, assuming that you are using PyTorch / JAX, you could most likely code up the model pretty fast. Just compare it to Llama: sure, it is far behind, but the Llama model is under 1000 lines of code and pretty good.
There is tons of work for the training, infra, preparing the data and so on. That would, I guess, result in millions of lines of code. But the core ideas and the model itself are likely thin, I would argue. So that is my point.
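To make that concrete, here is a minimal sketch of a GPT-style decoder block in PyTorch. The layer sizes and choices below are illustrative placeholders, not OpenAI's actual architecture, but it shows how little code the core model really is:

    # Minimal sketch of a GPT-style (decoder-only) transformer block in PyTorch.
    # Hyperparameters are illustrative placeholders, not any vendor's real config.
    import torch
    import torch.nn as nn

    class TransformerBlock(nn.Module):
        def __init__(self, d_model=768, n_heads=12, d_ff=3072, dropout=0.1):
            super().__init__()
            self.ln1 = nn.LayerNorm(d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads,
                                              dropout=dropout, batch_first=True)
            self.ln2 = nn.LayerNorm(d_model)
            self.mlp = nn.Sequential(          # position-wise feed-forward network
                nn.Linear(d_model, d_ff),
                nn.GELU(),
                nn.Linear(d_ff, d_model),
                nn.Dropout(dropout),
            )

        def forward(self, x):
            # Causal mask: True marks positions a token is NOT allowed to attend to.
            seq_len = x.size(1)
            causal_mask = torch.triu(
                torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device),
                diagonal=1,
            )
            h = self.ln1(x)                    # pre-norm before self-attention
            attn_out, _ = self.attn(h, h, h, attn_mask=causal_mask,
                                    need_weights=False)
            x = x + attn_out                   # residual around attention
            x = x + self.mlp(self.ln2(x))      # residual around the MLP
            return x

    # Usage: a GPT-style model is essentially token + position embeddings,
    # a stack of these blocks, and a final projection back to the vocabulary.
    x = torch.randn(2, 16, 768)                # (batch, sequence, d_model)
    print(TransformerBlock()(x).shape)         # torch.Size([2, 16, 768])

The real effort, as the comment above says, goes into the data pipeline, distributed training infrastructure and evaluation, not into these few dozen lines of model code.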
I don't think so, MSR is more like OpenAI, a research think tank. MSR doesn't create products, they create concepts. I think Sam wants to create products. I think it would also be a difference in velocity to market.
What about the people who got paid equity for the past few years of work and now might see all of their equity intentionally vaporized? They essentially got cheated into working for a much lower compensation than they were promised.
I get that funny-money startup equity evaporates all the time, but usually the board doesn't deliberately send the equity to zero. Paying someone in an asset you're intentionally going to devalue seems like fraud in spirit if not in law.
There is probably a lawsuit here, I would not disagree, but I don't think the board will have too much trouble arguing that they didn't intentionally send the equity to zero. I certainly haven't seen any of them state that that was their intention here. But the counter-argument that they should have known that their actions would result in that outcome may be a strong one.
But I think it is probably sufficient to point to the language in the contracts granting illiquid equity instruments that explicitly say that the grantee should not have any expectation of a return.
But I think this is an actual problem with the legal structure of how our industry is financed! But it's not clear to me what a good solution would even be. Without the ability to compensate people with lottery tickets, it would just be even more irrational for anyone to work anywhere besides the big public companies with liquid stock. And that would be a real shame.
Equity has value because it is a share of future profits. If the board comes out and says they never intend to make a profit and will fire any CEO who tries…
A for profit company can simply be the vehicle required to take investments that the non profit was forbidden. I doubt the non-profit parent company board ever intended the sub to be a runaway profit maker and anyone going to work for a sub of a non-profit would probably be aware that the potential there was capped compared to traditional for profit corps.
I say this as someone with 20 years of Mozilla employment, the first couple in the non-profit Mozilla Foundation and then about 18 years in the taxable subsidiary. The sub is technically taxable, so "for profit", but it was never created to make people rich, but rather to allow Mozilla to reap some profits and grow its size and influence, which it did, reaching about 30% browser market share.
The structures were similar but likely different in material ways, as there was zero equity at MoCo, nevertheless, if you go to work for an arm of a non-profit, expecting to get rich, you're probably not reading the fine print carefully enough.
> anyone going to work for a sub of a non-profit would probably be aware that the potential there was capped compared to traditional for profit corps
I think this is where this all went off the rails. It's very clear that a huge percentage of the staff (I think the last numbers I saw was that over 85% of the staff had signed the letter urging the board to resign) were hired with incredibly big compensation packages, predicated on the giant equity valuation. It is not surprising that those people did not turn out to be there due to being big believers in the mission of the non-profit, or that they expected those compensation packages to be real.
The board would counter that that equity was for a stake in a non-profit open source research company and the board was simply steering the ship back towards those goals.
I suppose I don't see the case where large numbers of OpenAI employees follow these two to Microsoft. Microsoft can't possibly cover the value of the OpenAI employees equity as it was (and imminently to be), let alone what could have potentially been. There is a big difference between being on a rocket ship and just a good team at a megacorp.
I’m going to go out on a limb and guess that going forward there won’t be much investor interest in OpenAI.
And if you separate out the products from OpenAI, that leaves the question of how an organization with extremely high compute and human capital costs can sustain itself.
Can OpenAI find more billionaire benefactors to support it so that it can return to its old operating model?
Microsoft has access to almost everything OpenAI does. And now Altman and Brockman will have that access too.
Meanwhile, I imagine their tenure at MSFT will be short-lived, because hot-shot startup folks don’t really want to work there.
They can stabilize, use OpenAI’s data and models for free, use Microsoft’s GPUs at cost, and start a new company shortly, of which Microsoft will own some large share.
Altman doesn’t need Microsoft’s money - but Microsoft has direct access to OpenAI, which is currently priceless.
Saying it and doing it are very different things. Many huge, lumbering companies have a “startup” lab. Few have done anything of note, and typically it’s because the reasons that made the company move slow and not take risks don’t magically disappear because you’re in a different part of the org chart.
Microsoft is not just any huge, lumbering company, though. It has probably the best history of research of any pure software company (leaving aside IBM etc): Microsoft Research funded Haskell behind the scenes for years, they had a quantum computing unit in 2006, and already in 2018 were beating the field in AI patents and research:
If anything, the examples in that tweet show the opposite. GitHub and Mojang have both done lots of things that wouldn't have happened if they weren't now part of Microsoft, especially GitHub, which is only "GitHub" by name at this point; none of the original spirit is still there.
Source for the below: Worked at Skype before and after the MS acquisition.
MSFT's control isn't as "hard" as you portray it to be. At the senior leadership level they're pretty happy to allow divisions quite a lot of autonomy. Sure there are broad directives like if you support multiple platforms/OSes then the best user experience should be on "our" platform. But that still leaves a lot of room for maneuverability.
Soft control via human resources and company culture is a whole other beast though. There are a lot of people with 20+ years of experience at Microsoft who are happy to jump on job openings for middle-management roles in the "sexy" divisions of the company - the ones which are making headlines and creating new markets. And each one that slides on in brings a lot of the lifelong Microsoft mindset with them.
So yeah working within MS will be a very different experience for Altman, but not necessarily because of an iron grip from above.
My view on that (which was from very low on the totem pole) is that the acquisition happened at a time where Skype's core business model (paid calling minutes) was under existential threat. Consumer communications preferences had started to go from synchronous (calling) to async (messaging) even before the acquisition came through. While Skype had asynchronous communications in a decent place (file transfer in the P2P days was pretty shaky but otherwise consumer Skype was a solid messaging platform), there was no revenue there for us.
Then the acquisition happened at a time when Microsoft presented a lot of opportunities to ship Skype "in the box" to pretty much all of MS' customers. Windows 8, Xbox One and Windows Phone (8) all landed at more or less the same time. Everybody's eyes became too big for their stomachs, and we tried to build brand new native experiences for all of these platforms (and the web) all at once. This hampered our ability to pivot and deal with the existential risks I mentioned earlier, and we had the rug pulled out from under us.
So yes I think the acquisition hurt us, but I also never once heard a viable alternative business strategy that we might have pivoted to if the acquisition hadn't happened.
The game studios under Xbox run quite independently with the most extreme example being Mojang with Minecraft which still releases all their games on Playstation/Nintendo consoles too. But the other studios are also very independent based on all the interviews (though they don't in general release their games on Playstation or Switch)
As I understand it, GitHub is also run very independently from Microsoft in general.
GitHub operates independently of Microsoft. (To Microsoft's detriment... they offer Azure DevOps, which is their enterprisey copy of GitHub, with an entirely different UX and probably a different codebase.) They shove the Copilot AI everywhere now, but it still seems to operate fairly differently.
They didn't really fold LinkedIn into anything (there are some weird LinkedIn integrations in Teams, but that's it).
Google seems to me much worse in this respect; all Google acquisitions usually become Googley.
> GitHub operates independently of Microsoft. (To Microsoft's detriment... they offer Azure DevOps, which is their enterprisey copy of GitHub, with an entirely different UX and probably a different codebase.)
GitHub Actions is basically Azure Pipelines repackaged with a different UI, so I don't think they mind much.
Highly unlikely. Instead they'll be working on internal Windows AI tools for chatbots and random AI features in Windows. We all lose in this situation.
There’s no chance Sam is joining Microsoft to be some “VP of AI” to drive strategy like that. He’s going to be driving some new business where he’ll be able to move quickly and have a ton of control.
I've read a decent amount of predictions about this and had not actually seen this one or considered it until I read about it happening.
I think the predictable thing would have been a new company with new investment from Microsoft. But this is better; it's a bit like magical thinking that MS would want to just throw more money at a new venture and essentially write off the old one. This solution accomplishes similar things, but gives more to Microsoft in the trade by bringing that "new company" fully in-house.
I said this elsewhere, but think the timeline is longer than that. Either Sam and Satya will butt heads and Altman will be sidelined, or it will be a good partnership, and he'll be on the shortlist as a successor when Nadella's run naturally comes to a close. But that second path is longer than a couple years.
That was my first thought too: Didn't occur to me as a solution, and it seems to square the circle brilliantly. It struck me that this is why people who are CEOs of mammoth companies have the jobs they have, and not me :)
If hot-shot startup folks don't want to work there, why would they even go there? If you're flat broke, you need a job; they're not flat broke, they don't need a job. The deal at MS is worth it or it's not; it's not something they need to decide over the course of a weekend... unless it's what they were already not being candid about.
It is the only way they can continue the work they have already contributed to at OpenAI. Otherwise, it would mean they spend months or up to a year training their own model, which in this arms race isn't feasible, with viable competitors like Anthropic closing the gap quickly. This was the only way forward. I'm sure Sam Altman + Greg were offered an incredibly lucrative deal and autonomy.
It's been 2 days; they haven't even heard all the possible offers. Microsoft hasn't offered anybody autonomy since billg granted it to himself. Even Myhrvold never did anything autonomous till he resigned and wrote a cookbook. The closest thing to autonomous in Microsoft was neilk breathing enough new life into 16-bit Windows to get them to abandon OS/2.
>Sam is already post money rich. Lucrative isn’t in this equation
I totally agree, except stupid-lucrative is still in the equation - like Elon Musk rich - not because of the money, but because it says "my electric cars did more to stop global warming than anything you've done".
Whether this round of AI turns into AGI doesn't precisely matter; it's on the way and it's going to be big - who wouldn't want their name attached to it?
Seems like in the minority here, but for me this is looking like a win-win-win situation for now.
1. OpenAI just got bumped up to my top address to apply to (if I had the skills of a scientist; I am only at engineer level). I want AGI to happen and can totally understand that the actual scientists don't really care about money or becoming a big company at all; that is more a burden than anything else for research speed. It doesn't matter that the "company OpenAI" implodes here as long as they can pay their scientists and have access to compute, which they do.
2. Microsoft can quite seamlessly pick up the ball and commercialize GPTs like no tomorrow and without restraint. And while there are lots of bad things to say about Microsoft, reliable operations and support is something I trust them on more than most others, so if the OAI API is simply moved as-is to some MSFT infrastructure, that's a _good_ thing in my book.
3. Sam and his buddies are taken care of, because they are in it for the money ultimately, whereas the true researchers can stay at OpenAI. Working for Sam is now straightforward commercialization without the "open" shenanigans, and working for OpenAI can now become the idealistic thing again that also attracts people.
4. Satya Nadella is being celebrated and MSFT shareholder value will eventually rise even further. They actually don't have any interest in "smashing OAI", but the new setup actually streamlines everything once the initial operational hurdles (including staffing) are solved.
5. We outsiders end up with an OpenAI research org focused purely on AGI (<3), and some product team selling all the steps along the way to us, but with more professionalism in operations (<3).
6. I am really waiting for when Tim Cook announces anything about this topic in general. Never ever underestimate Apple, especially when there is radio silence, and when the first movers in a field have fired their shots already.
That is just a matter of perspective. It's clearly a win-win if you're on team Sam. But if you're on team Ilya, this is the doomsday scenario: with commercialisation and capital gains for a stock-traded company being the main driving force behind the latest state of the art in AI, this is exactly what OpenAI was founded to prevent in the first place. Yes, we may see newer, better things faster and with better support if the core team moves to Microsoft. But it will not benefit humanity as a whole. Even with their large investment, Microsoft's contract with OpenAI specifically excluded anything resembling true AGI, with OpenAI determining when this point is reached. Now, whatever breakthrough in the last weeks Sam was referring to, I doubt it's going to move us to AGI immediately. But whenever it happens, Microsoft now has a real chance to grab it for themselves and no one else.
From Microsoft's perspective, they have actually lowered uncertainty. Especially if that OpenAI employee letter from 500 people is to be believed, they'll all end up at Microsoft anyways. If that really happens OpenAI will be a shell of itself while Microsoft drives everything.
> What if OpenAI decides to partner with Anthropic and Google?
Then they would be on roughly equal footing with Microsoft, since they'd have an abundance of engineers and a cloud partner. More or less what they just threw away, on a smaller scale and with less certain investors.
This is quite literally the best attainable outcome, at least from Microsoft's point of view. The uncertainty came from the board's boneheaded (and unrepresentative) choice to kick Sam out. Now the majority of engineers on both sides are calling foul on OpenAI and asking for their entire board to resign. Relative to the administrative hellfire that OpenAI now has to weather, Microsoft just pulled off the fastest merger of their career.
> Engineers aren’t a lower level than scientists, it’s just a different career path.
I assume GP is talking in context of OpenAI/general AI research, where you need a PhD to apply for the research scientist positions and MS/Bachelors to apply for research engineer positions afaik.
Well, I am an engineer, but I have no problem buying that in the case of forefront tech like AI, where things are largely algorithmically exploratory, researchers with PhDs will be considered 'higher' than regular software devs. I have seen similar things happen in chip startups in the olden days, where the relative importance of a professional is decided by the nature of the problem being solved. But sure, to ack your point, it is just a different job, though the PhD may be needed more at this stage of the business. One way to gauge relative importance: if the budget were to go down 20% temporarily for a few quarters, which jobs would suffer the most loss with the least impact to the business plan?
> 3. Sam and his buddies are taken care of, because they are in it for the money ultimately, whereas the true researchers can stay at OpenAI.
This one's not right - Altman famously had no equity in OpenAI. When asked by Congress he said he makes enough to pay for health insurance. It's pretty clear Sam wants to advance the state of AI quickly and is using commercialization as a tool to do that.
Otherwise I generally agree with you (except for maybe #2 - they had the right to commercialize GPTs anyway as part of the prior funding).
I think it makes more sense to take him at the spirit of what he said under oath to Congress (think of how bad it would look for him/OpenAI if he said he had no equity and only made enough for health insurance but actually was getting profit sharing) over some guy suggesting something on the internet with no evidence.
Sam Altman is a businessman through and through, based on his entire history. Chances are, he will have found an alternative means to profit on
OpenAI, and he wouldn't do this as "charity". Just as many CEOs say "I will cut my salary", for example, they will never say "I will cut my stock or bonuses", which can be a lot more than their salary.
Either way, based on many CEOs' track records, healthy skepticism should be involved; the majority of them find ways to profit from it at some point or another.
I dunno, the guy has basically infinite money (and the ability to fundraise even more). I don't find it tough to imagine that he gets far more than monetary value from being the CEO of OpenAI.
He talked recently about how he's been able to watch these huge leaps in human progress and what a privilege that is. I believe that - don't you think it would be insane and amazing to get to see everything OpenAI is doing from the inside? If you already have so much money that the incremental value of the next dollar you earn is effectively zero, is it unreasonable to think that a seat at the table in one of the most important endeavors in the history of our species is worth more than any amount of money you could earn?
And then on top of that, even if you take a cynical view of things, he's put himself in a position where he can see at least months ahead of where basically all of technology is going to go. You don't actually have to be a shareholder to derive an enormous amount of value from that. Less cynically, it puts you in a position to steer the world toward what you feel is best.
I think that would be consistent with his testimony. Profit sharing is not a salary and it is not equity. I don’t believe he ever claimed to have zero stake in future compensation.
> reliable operations and support is something I trust them more than most others
With a poor security track record [0], miserable support for Office 365 products and a lack of transparency on issues in general, I doubt this is something to look forward to with Microsoft.
> 2. Microsoft can quite seamlessly pick up the ball and commercialize GPTs like no tomorrow and without restraint. And while there are lots of bad things to say about Microsoft, reliable operations and support is something I trust them on more than most others, so if the OAI API is simply moved as-is to some MSFT infrastructure, that's a _good_ thing in my book.
OpenAI already runs all its infrastructure on Azure.
How does this separation help scientists at OpenAI if there is no money to fund the research? At the end of the day, you need funding to conduct research, and I do not see any investors willing to put in large sums of money just to make researchers happy.