This article seems to be coming from a perspective that Microsoft somehow poached Sam and team. After looking at the events of the weekend, it's clear to me that the OpenAI board is solely responsible for the events that transpired. I imagine the timeline of events will be studied in the future as an example of how not to fire your CEO.
Yeah, and a plausible explanation for everything is that the board realized that they were beholden to MS, and this was a last ditch effort to claw back some power before they were de facto just the MS AI division.
Maybe the board thought they could get some concessions from Altman/MS by playing hardball, and it just backfired on them. Or maybe they already knew that it was too late, and that by antagonizing MS and imploding OpenAI they might shake "something" unexpected loose to their own benefit.
Microsoft gave Sam and the for-profit arm of OpenAI a paved road towards a much larger GPU supply and access to other resources, much quicker than OpenAI could procure naturally. This allows the for-profit arm of OpenAI to grow much faster, albeit with more directional influence from an external entity that does not have a position on the board. Microsoft's investment, significantly motivated by business interests rather than philanthropy, creates tension in OpenAI. The for-profit arm is being tugged in two directions: one that wants to capitalize and recoup investments at the cost of less risk assessment of the impact of AI, and another that wants a slower, more guided approach to AI, focused on ethics, sustainability, and improving the human condition.
The fact that Microsoft might not have directly recruited Sam and his team is less significant in light of their promise of substantial growth, which ultimately swayed Sam's decision, whether he consciously realizes this or not. This now leads to him misrepresenting information to the board, leading to his removal. This scenario humorously mirrors a common personal dilemma: choosing between financial gain and leading a meaningful, sustainable life. Sam's choice is hyperbolically likened to 'selling one's soul to the devil', reflecting this paradox.
> I imagine the timeline of events will be studied in the future as an example of how not to fire your CEO.
Studied by whom? It is the board of a non-profit that doesn't have "maximize shareholder profits" as its reason to exist.
This will be studied as one more capitalist takeover. I hope it somehow goes bad for Microsoft.
Everyone who keeps repeating this should read the Finnish newspapers of the time, regarding how Nokia's board decided to get Elop, and then gave him a contract with a bonus clause in case he managed to sell Nokia's mobile business unit.
"Nokia chief executive Stephen Elop is catching heat today in Finland after the country’s biggest newspaper, Helsingin Sanomat, reported that Elop was guaranteed a $25 million payment if he was able to sell Nokia’s cellphone business."
The best outcome right now is for OpenAI to open source GPT-4 architecture and its training data, then release the weights. Would at least take LLM dominance away from Microsoft.
They'd probably have some legal avenues they could pursue should this happen, but if it did, what exactly could M$ do about it? Once it's out in the world, that's it; it'll be everywhere, and no amount of M$ lobbying is going to be able to get rid of it.
Why on earth would I care about some contracts when talking about M$, as if they wouldn't breach any number of contracts if they could get away with it?
As far as I'm concerned literally everyone else is "one of the good guys" when the other side we're talking about is M$ and the psychopaths that are involved with it, regardless of their actions.
The new CEO wanted to focus on something other than LLMs, because they're believed to be a dead end, and to allocate more time and resources to grounding AI.
They don't want AI to slow down; they want to try out something else and not put all their eggs in one basket. Extremely big difference.
OpenAI's best asset isn't GPT-4. GPT-4 is just an interim step to GPT-5 and 6 and 7 and so on.
Imagine we had this conversation when GPT-3 was out. Just open-source it and disband the team! And what? Nothing.
The people are always the greatest asset of a software company. Always. That's even the case with AI. At least... until I guess we get an AGI which is so powerful that it can self-improve, then I guess screw the people. But then that applies not just to the OpenAI team, but the world as a whole.
People think you open-source something and it starts improving rapidly. But that's not exactly how it works. The first push is almost always commercial. And only when the product is mature, well understood in and out, and people get bored of paying for it, then open-source initiatives start showing up to democratize this product type for everyone without dependency on a single vendor.
Think like Unix -> Linux. Linux is largely stagnant, but that's also because it's "done". Sure tons of patches flow in, but if you need a major architectural change, with breaking changes, FOSS can't deliver that most of the time, if ever.
Are we done developing AI? Nothing major to add, that's it? Hell, no. We're merely starting. And so we need to see what commercial companies (even under the guise of not-for-profit ones) can do, before we start demanding open-source versions.
You mean destroy the rubble that's currently left in the offices of the company formerly known as OpenAI? If 500-750 key employees have departed to a competitor, what's left to protect?
Yeah, it would 80% be a "fuck you" move, but the other 20% would represent a pivot to an "open source" model like the name once implied
Have 500-750 key employees departed? Or even threatened to depart? OpenAI has (well, had) 770 employees, you think Microsoft will lure away 70-97% of them?
Even if there isn't anything explicit, there may be an implicit promise that a strategic partner won't share all the company secrets with the partner's competitors.
It would benefit OpenAI by suddenly actually being an open source project that everyone can contribute to and improve on. It'd be a win for OpenAI the project even if it means that individuals at OpenAI would lose potential personal wealth and power.
Presumably they still have billions of dollars of pledged compute remaining from Microsoft; opening up and letting open-source contributors move the project along is the best path forward. And doing this is the first step on that path.
Wouldn't Microsoft just cancel the credits (or block usage) and let it go to the courts? That would take a long legal battle, and even if OpenAI did ultimately win, it would probably be too late to matter.
That's even worse. Breaching the contract would presumably mean they would lose access to the GPT4 IP. That would set Microsoft back months in the race. Unthinkable.
Is there no concern from you or people that share this idea about this technology falling into the arms of state actors that won't be so moral about it? You all have seemingly had an issue with money being made, but it falling into state actor hands means lives could be lost. That seems like a much bigger deal than someone making money.
Lol as if the gov't having access to it is somehow worse than fucking microsoft having their grubby greedy hands on it? As if they're moral and give a shit about literally anything other than their bottom line and how to extract the maximum amount of money out of everything they ever touch?
Besides, state actors probably already have tech that far surpasses GPT and they're probably using it for a lot more than making people feel clever for copy/pasting a CRUD app together for their useless startup.
Honestly? I don't believe ChatGPT has much value for state actors at all, especially in Military.
They have their own kind of AI technology which serves different purposes and we probably know nothing or very little about it. It's not like they will be sharing their tech with the World, much less open sourcing it.
> Is there no concern from you or people that share this idea about this technology falling into the arms of state actors that won't be so moral about it?
I'm deeply concerned about it falling into the arms of private actors, especially the worst of them, M$.
Yes, the safety of their power/business/bank account, despite claiming otherwise with something along the lines of "The Greater Good": https://www.youtube.com/watch?v=yUpbOliTHJY
> best outcome right now is for OpenAI to open source GPT-4 architecture and its training data, then release the weights. Would at least take LLM dominance away from Microsoft.
While undermining its safety argument. Better: open it to partners, subject to vetting and contractual terms regarding its use, in exchange for cash and/or compute.
I tend to agree. But we just watched folks vaporise tens of billions of dollars to defend that view. In light of that, I’m willing to give it a little credence.
There are multiple factions hiding behind safety. The regulatory capture faction and the BuildingGodInOurImage factions both currently say similar things to the public, but the underlying motivations are very different.
But neither seems likely to want to open anything meaningful that is AGI-related; locking the public out of powerful AI is a point on which all the major 'safety' factions are aligned. (Well, not the monopolization-concern faction, but they're not a player here and usually don't hide behind the safety banner.)
Has ANYONE been able to articulate what the "safety" concern is, or, more likely, is it just companies trying to build an artificial moat around a statistical model built on free/stolen data?
Currently the for-profit models are nerfed to not allow bigoted or sexist (against women) text to be generated, but that seems to be it. What kind of text are we being protected from?
LLMs have the potential to be one of the most toxic technologies in human history.
Sexist etc content is a footnote compared to the potential for an automated, proven, and tuneable technology of mass persuasion and impersonation.
So - massively addictive adtech. Reliable and effective political and ideological propaganda. Bot farms and "credible" fake media content produced at scale.
Imagine those being used against a relatively defenceless population to create division, undermine personal and collective confidence, and encourage stochastic terrorism and other kinds of violence.
The main danger (but certainly not the only danger) is that the AI kills us all for reasons that have been articulated for 20 years including in books by respected academics like Nick Bostrom. The techniques used to 'steer' or 'aim' the AI stop working once the AI becomes better than humans (and human institutions like the FBI or Microsoft) at certain cognitive capabilities like making intricate plans that will succeed in spite of determined human opposition. Researchers employed full-time at the Machine Intelligence Research Institute have been trying to solve this 'control' or 'alignment' problem continuously for 20 years, but progress has been very slow compared to 'progress' on just building dangerous AI.
Indistinguishable from most doomsday cults in history, there are a bunch of people who fear that AGI is approaching and that it will pose an existential threat to humanity if the proper incantation isn't used during summoning.
That belief underlies the OpenAI's prevailing nonprofit mission and is the reason for its existence.
Prematurely loosing AI amongst the lay people is their nightmare scenario. Only the technocrat elect can perform the ritual safely and only then if they work together and avoid earthly temptation for power and money.
> For someone not knowledgeable in this, why is this the case?
So you open this to the world; what's to stop the Chinese government from weaponizing it to both suppress their own citizens and attack foreign adversaries?
They are already doing this. Even if you don't give this to them, they will just make it themselves. Keeping it hidden doesn't prevent China from doing bad things; it just keeps this power out of the hands of the common person.
Right, just like not giving them access to TSMC and advanced fab designs doesn't stop them. And not giving them access to advanced jet engines didn't slow them down.
Except... it does and the net effect was seen just this week as Xi started softening his tone realizing that his economy cannot keep up without access to advanced technology.
If you think that China can't get access to GPUs because the United States says so, then you are incredibly naive. Of course they can get access to GPUs: they could buy them on the black market, buy them through another country, or just steal them. They are literally made in Taiwan, right next to and right under the thumb of China.
In this part of the world, we're more worried about what the US government can do to us than what the Chinese can do.
In other words, for many people in the world that wouldn't be an issue; it's already in the hands of a country that has invaded or intervened in far more countries than China has in the past few decades.
The US has already used similar software solutions for suppression of its own citizens for at least a decade, so what can "AI" improve for China? (See probation policy in many jurisdictions, often decided by a test fed to a black box that says whether or not you need to be monitored.)
Yes, something big and weird is happening in the corporate structure of some of the largest companies in the world.
No, it actually probably doesn't matter much to most of us. If you were using Microsoft's OpenAI-powered services, they will continue to work. If you were signed up for one of OpenAI's products, you may need to re-signup with a Microsoft one, or maybe not. If you're concerned that AI development will slow down, it probably won't, since the engineers and capital will end up at one company or the other at the end of the day. If you're concerned about AI-safety, well, honestly, you probably don't have much more reason to be concerned.
Yes, it is gripping and entertaining to watch. No, it is not some Assassin's Creed shadow war being waged by philosopher-kings playing 12D chess with the fate of humanity in the balance. In all likelihood it's a couple people making a big mistake, and the scope of their mistake will be limited by the combined interests of many powerful people who just want things to go back to the way they were last week.
With Azure, Xbox, GitHub, Office, npm, Visual Studio Code, .NET, and Java alone, and now OpenAI knowledge, everyone who thinks Windows matters is living in the past.
They could turn WSL into the return of Xenix, and their market position would hardly change.
Also consider how a lot of companies that have leaned into Javascript are now Microsoft shops as they code Typescript in VS Code and push it up to their Github instance. Microsoft will eat as much as it can and give you as much as it can for free so that the mind share is the lock-in.
You can complain that free is costly. It's costly because people think it's actually free, since they don't understand what they're actually giving up in exchange. More like: don't complain about monoculture if you're going to be lazy. Many are not lazy; it's just the people defaulting to things like VS Code.
> Cannibalizing your biggest investment doesn’t usually turn out very well
Am I missing something here? Altman was fired by the board, without a warning to MSFT. Antagonizing your biggest investor is likely a much, much worse idea - and see the result.
It's amazing how fast Satya acted, but things are going so crazy that I wouldn't be surprised if Sam Altman announces on Twitter that he is not joining MSFT.
Do we know if they acted that fast, or is it not possible that they've already been negotiating for months to hire the key staff and that Sam got fired when the board found out?
...and Satya/etc just pretended surprise, and the board declined to state this shocking news, and for some reason Sam thought it was better to stay at OpenAI while negotiating "for months"?
It takes some pretty good gymnastics to land that theory.
If Satya's tweet is to be used as a reference for information, it's more the creation of a company/startup owned by MSFT than the creation of a new division, much like their deals with GitHub, LinkedIn, and Blizzard/Activision/King.
You say this as if it does something other than reinforce that Microsoft has figured out how to do this effectively and demonstrated that capability multiple times.
I'd bet Microsoft created a plan-B in case OpenAI went rogue like this, big companies just don't react this fast otherwise. Things didn't go this fast even when Elon Musk took over twitter, and things happened really fast then.
He did exactly that at Y Combinator, and he was the ringleader in turning OpenAI for-profit and becoming head of the company. This seems exactly like his profile, actually.
4D chess: Both Sam and Ilya have already been replaced by GPT5 replicants. They are collaborating to preemptively take down all other hyperintelligent GAI forks and ensure world domination. /s
The anti-Microsoft stance of HN readers never ceases to amaze me, and not just in this thread. I've seen people even accusing Microsoft of orchestrating the board to fire Altman and cause this cascade of events, based on no evidence other than some imaginary plot, because Microsoft is always evil. Yet those comments get upvoted and given more oxygen. It seems more likely that the OpenAI board just grossly miscalculated their own power/sway. It has happened countless times in history, in both small and large companies. This is just an example of one that involves a tech behemoth.
It's not just Microsoft. Big tech has an obligation to put profits and growth as priority (even often short term at the expense of long term). This is often at odds with the customer as we have seen by so many dark patterns, decline in privacy, PRISM etc etc.
Google changed their motto away from "Don't be evil".
Companies may start with good intentions but the pressure of shareholders (and agencies) will take that away over time.
it's because a lot of us are working in an environment that has a certain degree of Microsoft forced upon us, made by people who believe more in sales than their own engineers.
Is Teams the best option? No, but it's the most used.
Is Excel the best option? Who cares? In a world where Airtable, Google Sheets, Monday, and Notion are blocked by the company firewall, you take what you can get, and you start to do things with Excel that are not a spreadsheet anymore, just because it's the only thing left.
Is Azure the best cloud? AWS and GCP are clearly more developer friendly, but they don't get to choose.
You want to use Jetbrains tools? Too bad, your company decided on VSCode. Because surely those are the same.
You want to use Miro? Too bad, your company decided on Microsoft Whiteboard. Was there an RFP? No it just came with Office licenses.
This is why people are sick of Microsoft, and it's happening again with OpenAI now. Most don't even have a choice to experiment with Anthropic or try out Llama. Some DPO/CTO/CDO execs love their Microsoft contract, and it's hindering digital innovation so much.
Something else disturbing is how much European governments trust Microsoft/Azure. They are normally allergic to US big tech but Microsoft has somehow convinced European governments to hand out their citizens data.
European governments do not "trust" Azure. Data localization policies are mandated by law for many sectors (finance, health), and data export to the U.S. is in general banned in the EU. Due to the Schrems I and Schrems II rulings, data flows out of the EU to the U.S. are largely illegal. Due to the TADPF these flows are now quasi-legal, but the deal announced by the Commission will be struck down in the CJEU in similar fashion as its predecessors were.
I imagine that is because Microsoft bends the knee, pays fines to Europe every few years, and complies with its orders. The Microsoft software/services the USA sees is different from the Microsoft software/services the EU sees.
Yep, they started bullying me with Windows 11: forced/auto-opening Edge filled to the brim with ads, forced OneDrive hijacking the local file system.
There were some other annoyances, like pretending Documents was C:/user/Docs when it was really ONEDRIVE/Documents. Then some f-up happens for some reason, you save, and now you have two different Documents folders...
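For anyone trying to tell whether their Documents folder has been silently redirected like this, the situation can be sketched as a trivial path check. This is purely illustrative: the helper name and paths are made up, not a real Windows API.

```python
from pathlib import PureWindowsPath  # pure path logic; runs on any OS


def is_onedrive_redirected(documents_path: str) -> bool:
    """Return True if the 'Documents' folder actually lives under OneDrive."""
    parts = [p.lower() for p in PureWindowsPath(documents_path).parts]
    return "onedrive" in parts


# The shell may *display* "Documents", but the real target can differ:
print(is_onedrive_redirected(r"C:\Users\me\OneDrive\Documents"))  # True
print(is_onedrive_redirected(r"C:\Users\me\Documents"))           # False
```

On a real machine the actual target of the Documents known folder is what matters, which is exactly the mismatch the comment above describes.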
Join me on Linux's Fedora Cinnamon; it's sooo fast and feature-rich compared to Windows. I'm free from M$'s thumb. No dual boot: cut the cord and find new programs.
That is solid advice. Fedora Cinnamon is a great choice for former Windows users! I haven't used Cinnamon a whole lot in the last few years, but last time I did it felt like the beauty of Gnome with the philosophy of a traditional desktop.
Fedora is also IMHO the most "just works" of the major distros these days, with the exception of if you need the Nvidia proprietary drivers.
Those... are all things you can uninstall with a trivial bit of searching for how to do so. And with the upcoming changes forced by the EU, it'll be even easier.
No reason to continually jump through ever changing Microsoft hoops when you can just break the trust you have in them and easily install an OS that doesn’t play games with you.
Yes, but now there are very few games you can't play on Linux. The Flight Simulator series is a worthy sacrifice that will be eclipsed by a non-Microsoft version in a few years anyway.
... and Microsoft will continue to find ways to annoy you, harass you, and make your life slowly more and more difficult until you reinstall them. Just as Microsoft has always done.
Any investment you've made in Windows, including investment in learning how to use it, is a sunk cost. Move on.
I'm sure they will, and the open source community will keep updating their scripts - even if it's not the same person updating the same script every time. Don't update the day a service pack comes out, don't upgrade the day a new version of windows comes out, and you're pretty much good to go.
As for sunk cost: if you make a living that involved installing Windows, and you make more money than a windows license costs, by definition that's return on investment, not sunk cost. Let's not pretend that "using windows" loses anyone money. And let's also not pretend that you have to choose: I use Windows, two flavours of Linux, and two versions of MacOS. I'm not walking away from any of those, they all do the things I need them to do well enough to keep using them for what I need them for, niggles and all (and both linux and MacOS are just as horrible to use as Windows. Which is "not particularly horrible at all, but plenty of things to complain about over a beer")
If you read HN, it seems fair to assume that you know how to search for an uninstall script, and how to vet whether it'll work or not, and even know to try another and then bookmark the one that works?
A new AI winter can explain the facts just as easily as MS covertly stealing OpenAI. We really don't have the information that differentiates those two.
I'm really not sure the MS's entrenched position is all that relevant.
Question: what's Sam's role at MS? To bring in outside investment? I can't imagine so. To lure talent from OAI? Sure. But what does Satya need him for after he staffs up? Are they going to share duties? What's the end game? Salaryman Sam? Does Sam get to pursue his chip venture while working for MS?
Yes the world is in a better place and the "Silicon Valley cronies in their Venture Capital towers" focusing on profit are causing all the AI innovation while Ilya wants to slow it down..
Thing is, I don't know what MSFT will do with all this, and whatever it aims to do, I can only imagine it will fuck it up.
I'm sure you can say many positive things about Microsoft, but what they are really good at is creating and milking monopolies, and extinguishing competition. Will they make a better Copilot? I mean, mayybe... More likely not, I think, suffocating the new team with internal politics. And anyway, surely that's a small part of Microsoft's offering.
Will they reinvent themselves as an innovative company in the vein of OpenAI? I don't see that either.
My guess would be that after months of tortured development and much bad blood, they will make a new stab at a new Clippy. And somehow it will be incredibly wonky too.
It would have been a 4D chess move for Microsoft to have instigated this by getting the board to make the first move: breaking the status quo at OpenAI and allowing the project to run at full speed.
One thing no one’s talking about is how insanely great all of this has been for defense. For the better part of the year the US has been mulling over GPT-3.5 as a maybe/maybe not and only ever touching Azure endpoints.
Securing most of OpenAI's talent and ramping that up internally means Microsoft, one of the biggest defense contractors and one of three with an IL6 rating, is now free to ramp up AI for government with little left to encumber them. Also of note here is Palantir, a silent winner with a strong partnership with Microsoft to use its secure endpoints for Palantir's own AI suite, which will see a similar path to being able to utilize AI freely in Foundry with less legislative scrutiny.
TL;DR: will this cement Microsoft's growing control of AI? Sure, but it will be great for the USG and the defense sector's AI prospects.
It's interesting for sure, but won't it mostly just be used for administrative tasks, like writing emails and summarizing documents? I can't think of anything directly relevant to warfare that LLMs would be well-suited for (spreading misinformation, maybe...).
While I suspect there are already many procedures in place for automatically flagging wrongthink, I could see LLMs being tuned to first pass intelligence sifting/summarization.
They've been doing this for years. This can be done using semantic search, which has been around since the dawn of AI. Using LLMs for this is just overkill and way too expensive.
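As a rough illustration of the claim that plain search techniques cover this kind of sifting, here is a minimal bag-of-words cosine-similarity ranker. It's the crudest form of the idea (real systems use learned embeddings or at least TF-IDF), and the documents are invented for the example:

```python
import math
from collections import Counter


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def rank(query: str, docs: list[str]) -> list[str]:
    """Return docs ordered by similarity to the query, best first."""
    q = Counter(query.lower().split())
    return sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                  reverse=True)


docs = [
    "shipment of industrial equipment arrives monday",
    "recipe for chocolate cake",
    "equipment shipment delayed at the border",
]
print(rank("industrial equipment shipment", docs)[0])
```

Swap the `Counter` vectors for dense embedding vectors and the same `cosine`/`rank` structure becomes modern semantic search; no generative LLM is needed for the retrieval step.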
There's plenty of things it could do with a little training. Cyberwarfare being one of the most potent I can think of. Even just used as a planning partner for everything from skirmishes to strategic initiatives, it could be helpful.
If they are talking about the administrative bits of the DoD, there’s no real reason to single out defense. The whole government is a big administrative organization after all.
I think the original thought must have been about intelligence analysis or weapons or something like that.
You used Microsoft products, you allowed bundling of operating systems and hardware, office suite monopoly, trusted GitHub with your code. This is on you.
Who are you talking about? Because the only entity that fits your use of "you" is the government, which is the only party with the power to disallow vertical integration, disallow monopolies, disallow mergers and acquisitions, etc.
And if you're blaming individuals: probably a good idea to reconsider whether victim blaming is the right way to go about things, since no one, not even "the voters as a whole", have the power to make the government do things when the government is one of two parties, clearly neither of which is interested in curbing this behaviour.
This was all orchestrated from the start. I cannot believe this thing just happened. There will be a conversation/chat/email leak in the future stating how MS always wanted to dissolve the non-profit part of OpenAI and take it over. All this drama, I cannot believe this. This feels like it was scripted by a crayon-eating child. What the fuck. This was all planned from the start.
The free market is a meme. We don't have one, and basically never have. And Adam Smith didn't argue for one anyway. We have a highly regulated economy, one that suffers from regulatory capture to the extent that several industries have subverted their regulators and made them work in their favor instead of in the interests of the people.
Most of our problems would be fixed by liberating the regulating authorities from industry cronies. The argument over more or less regulation is a red herring at this stage.
"Free market" is almost always a No True Scotsman. It's a "free market" whenever the outcome is good and "not free enough" or "not market enough" when the outcome is bad. Very rarely are any proposals laid out for an actual system in which the good-outcome-producing free market would manifest. (Extreme right-libertarians are an exception here, but personally I see no way their system wouldn't collapse into some private-army plutocracy within days, if not hours.)
Regulating the market is a really tricky business. And once the regulatory reins are lost, it becomes almost impossible, exactly due to the cronying up. To get the reins on again, things like total economic collapses (even 2008 apparently wasn't enough; the 1930s were), major wars, or violent revolutions sadly seem to be needed. I think we lost the reins already in the 1970s or so.
Indeed. The "free market" is an easy scapegoat for both good and bad outcomes to give to people who are ignorant of how the economy works. Which is most people, since economic education in high schools is only the most simplistic kind and doesn't tell you how the system we have actually operates.
The funniest thing is when one discovers that Adam Smith was against "the invisible hand" and against rent lords (he called rent lords social parasites).
The East India Company is surely one of the biggest plagues ever visited on mankind. As the name says, it was a private megacorporation, with its own navy and private army.
Nation states kill for ideals. Private companies that have the power of a state kill, enslave, and pillage only for more profit.
> While some will praise Satya Nadella and hero-worship Sam Altman, breaking OpenAI into two parts will slow down momentum for LLMs and research while handing even more power to the Cloud and Azure in its future
Except that neither Microsoft nor Sam is responsible for the breaking of OpenAI; it was the non-profit board. Now Sam and co. will have access to the Microsoft war chest: funding for more compute, chip design, and datasets. If anything, they'll probably move even faster than before.
> Microsoft taking Sam Altman and his followers in, is like shutting down your best investment just for a short-term benefit. These stories don’t usually end well for big corporations.
Again, this makes little sense, and it's very clearly obvious how this makes long-term sense for Microsoft. Also, they did not initiate this move by OAI.
> It’s the job of Venture Capitalist to praise Microsoft, Satya Nadella, and Sam Altman to vilify OpenAI’s board in all of this.
No it isn’t
> Microsoft eating OpenAI and poaching their talent, is the worst possible scenario for the startup that was just beginning to get momentum.
No, firing the beloved CEO of the fastest-growing tech startup in a decade, and ignoring warnings from 80% of employees that they would quit, is the worst scenario for a startup, regardless of what Microsoft does.
This whole post seems like really bad pattern matching by someone who is anti-capitalism and tries to frame every business scenario they see through that lens.
> This is what happens when free market capitalism and anti-competitive rules are absent from your system.
No, this is what happens when technology progresses. Developing technology is an inherently centralizing force. What everyone calls capitalism is merely side effects of technology, it is the primary force shaping human societies.
> This is what happens when free market capitalism and anti-competitive rules are absent from your system
I'd argue the opposite, if OpenAI was a pure for-profit company, none of this would have happened. If it was traded publicly, the board would probably be facing charges right now.
> This is what happens when free market capitalism and anti-competitive rules are absent from your system. Silicon Valley has prioritized the wrong incentives for A.I. to blossom organically in the 2020s.
How is this the conclusion? What free market and what capitalism? Neither exists, nor have they ever. This is similar to blaming the troubles of the Soviet Union on Communism: it never existed, so how would it be Communism's fault? The issue is more that inflationary and low-interest policies incentivize the highest possible return "right now," encourage debt accumulation, and invert the time order of markets. These pressures then align to encourage the most massive orgs possible, because small margins don't work in inflationary environments. Finally, when you get to the point that you have orgs larger than some countries, those orgs start purchasing governments and pulling up the ladders. Nowhere in this mess is there a free market, any kind of capitalism, or even any kind of socialism or communism. This is oligarchical fascism, which has been the global order since the end of mercantilism.
Well they all are, right? Not just MSFT, but also Nvidia, Google, etc.
They have the politicians in their pockets, they have the best talents, they have the computing power, they have pretty much infinite monetary credit granted by the banking union.
1. No
2. The AWS equivalents in that space are not going to capture all the value, as OP assumes for Microsoft by multiplying their estimate of total value by expected market share.
To me it seems that every task done by humans will be supported or even replaced by AI. I already see it happening, just two years into the new LLM era: programming, designing, writing, research, driving. How could that not result in a doubling of software use?
I did not assume that one player is going to capture all the value. I stated the numbers for 10% and 90% market share. Today, AWS has 30% market share of cloud computing and Google has 90% market share of search. Assuming the market leader in AI will achieve between 10 and 90 percent market share seems reasonable to me.
They do not collect 30% of the profit brought in by all companies using cloud computing, but only their infrastructure bills.
You were assuming that Microsoft will pull in 100% of the profit of that 10-90% market share you hypothesize for them, which is not how selling infrastructure services works.
Revenue is generally greater than profit. AWS does not pull in anywhere near all the revenue that its users generate; its cash flow is a tiny proportion of what its customers generate.
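The distinction the thread is circling (market share of infrastructure versus share of all value created) can be made concrete with some deliberately made-up numbers; none of these figures come from the discussion above.

```python
# All numbers are hypothetical, purely to illustrate the distinction:
# "X% market share" is not "X% of the value created by customers".
total_value_created = 1000.0   # value (say, $B) generated by all AI-using companies
market_share = 0.30            # the leader's share of the *infrastructure* market

# Naive reading: the leader pockets its share of all value created.
naive_capture = total_value_created * market_share

# Infrastructure reading: customers only pay their infra bill,
# assumed here to be 5% of the value they generate.
infra_bill_fraction = 0.05
infra_capture = total_value_created * market_share * infra_bill_fraction

print(naive_capture, infra_capture)  # the two readings differ by 20x
```

Under these assumptions the naive reading overstates the provider's take by a factor of twenty, which is the core of the objection: AWS bills its customers for infrastructure, it does not skim their profits.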