OpenAI is racing against two clocks: the commoditization clock (how quickly open-source alternatives catch up) and the monetization clock (their need to generate substantial revenue to justify their valuation).
The ultimate success of this strategy depends on what we might call the enterprise AI adoption curve - whether large organizations will prioritize the kind of integrated, reliable, and "safe" AI solutions OpenAI is positioning itself to provide over cheaper but potentially less polished alternatives.
This is strikingly similar to IBM's historical bet on enterprise computing - sacrificing the low-end market to focus on high-value enterprise customers who would pay premium prices for reliability and integration. The key question is whether AI will follow a similar maturation pattern or if the open-source nature of the technology will force a different evolutionary path.
The problem is that OpenAI doesn't really have the enterprise market at all. Their APIs come closer, in that many companies (primarily Microsoft) use them to power features in other software, but OpenAI isn't the one providing end-user value to enterprises with those APIs.
As for ChatGPT, it's a consumer tool, not an enterprise tool. It's not really integrated into an enterprise's existing toolset, it's not integrated into their authentication, it's not integrated into their internal permissions model, and the IT department can't enforce any policies on how it's used. In almost all ways it doesn't look like enterprise IT.
This reminds me why enterprises don't integrate OpenAI products into their existing toolsets: trust is the root reason.
It's hard to trust that OpenAI won't take enterprise data to train the next model, in a market where content is the most valuable asset, compared to office suites, cloud databases, etc.
Microsoft 365 has over 300 million corporate users who trust it with email, document management, collaboration, etc. It's the de facto standard in larger companies, especially in banking, medicine, and finance, which have more rigorous compliance regulations.
The administrative segments that decide to sell their firstborn to Microsoft all have their heads in the clouds. They'll pay Microsoft to steal their data and resell it, and they'll defend their decision-making beyond their own demise.
As such, Microsoft is making the right choice in outright stealing data for whatever purpose. It will have no real consequences.
A flick of an IT policy switch disables that, as at my organization. That feature was instead intended to snag single, non-corporate user accounts (still horrible, but I mean to convey that MS at no point in all that expected a company's IT department to actually leave that training feature enabled in policy).
It doesn't need to / it already is – most enterprises are already Microsoft/Azure shops. Already approved, already there. What is close to impossible is using anything non-Microsoft, with one exception – open source.
They betrayed their customers in the Storm-0558 attack.
They didn't disclose the full scale, and charged customers for the advanced logging needed for detection.
Not to mention that they abolished QA and outsourced it to the customer.
Maybe they aren't, but when you already have all your documents in sharepoint, all your emails in outlook and all your databases VMs in Azure, then Azure OpenAI is trusted in the organization.
For some reason (mainly because Microsoft has orders of magnitude more sales reps than anyone else) companies have been trusting Microsoft with their most critical data for a long time.
For example when they backed the CEOs coup against the board.
With AI-CEOs - https://ai-ceo.org - This would never have happened because their CEOs have a kill switch and mobile app for the board for full observability
OpenAI's enterprise plan explicitly says that they do not train their models on your data. It's in the contract agreement, and it's also visible at the bottom of every ChatGPT prompt window.
It seems like a damned if you do, damned if you don't. How is ChatGPT going to provide relevant answers to company specific prompts if they don't train on your data?
My personal take is that most companies don't have enough data, and not in sufficiently high quality, to be able to use LLMs for company specific tasks.
The model from OpenAI doesn’t need to be directly trained on the company’s data. Instead, they provide a fine-tuning API in a “trusted” environment. Which usually means Microsoft’s “Azure OpenAI” product.
But really, in practice, most applications are using the “RAG” (retrieval augmented generation) approach, and actually doing fine tuning is less common.
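To make the RAG pattern concrete, here's a toy sketch: retrieve the most relevant internal documents for a query and stuff them into the prompt, so the base model never has to be trained on company data. The bag-of-words cosine similarity is a cheap stand-in for a real embedding model, and the documents and query are made up:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    # Stuff retrieved context into the prompt instead of fine-tuning.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Enterprise tier includes SSO and a 99.9% uptime SLA.",
    "The cafeteria opens at 8am.",
    "Pricing: Team plan is $25 per user per month.",
]
print(build_prompt("What does the enterprise tier include?", docs))
```

The point of the pattern is that company data only ever appears in the prompt at inference time; the model weights are untouched.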
> The model from OpenAI doesn’t need to be directly trained on the company’s data
Wouldn't that depend on what you expect it to do? If you just want, say, Copilot, text summarization, or help writing emails, then you're probably good. If you want to use ChatGPT to help solve customer issues or debug problems specific to your company, wouldn't you need to feed it your own data? I'm thinking: for "Help me find the correct subscription for a customer with these parameters," you'd need ChatGPT to know your pricing structure.
One idea I've had, from an experience with an ISP, would be to have the LLM tell customer service: Hey, this is an issue similar to what five of your colleagues just dealt with, in the same area, within 30 minutes. You should consider escalating this to a technician. That would require more or less live feedback to the model, or am I misunderstanding how the current AIs would handle that information?
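That ISP scenario doesn't necessarily need live retraining; it can be done outside the model entirely, by comparing the new ticket against a rolling window of recent tickets. A hypothetical sketch (the field names, token-overlap similarity, and thresholds are all assumptions, with Jaccard overlap standing in for embedding similarity):

```python
from datetime import datetime, timedelta

def jaccard(a, b):
    # Token-set overlap as a cheap stand-in for embedding similarity.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def should_escalate(ticket, recent, window=timedelta(minutes=30),
                    threshold=0.5, min_similar=5):
    # Flag when enough similar tickets from the same area arrived recently.
    similar = [t for t in recent
               if t["time"] >= ticket["time"] - window
               and t["area"] == ticket["area"]
               and jaccard(t["text"], ticket["text"]) >= threshold]
    return len(similar) >= min_similar

now = datetime(2024, 12, 6, 10, 0)
recent = [{"area": "north", "time": now - timedelta(minutes=i * 5),
           "text": "internet connection down no sync"} for i in range(5)]
new = {"area": "north", "time": now,
       "text": "internet connection down cannot sync"}
print(should_escalate(new, recent))  # True
```

The LLM's role would just be to phrase the suggestion to the agent; the clustering itself is plain data plumbing, which is why it can run on live data without touching model weights.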
100% this. If they can figure out trust through some paradigm where enterprises can use the models but not have to trust OpenAI itself directly then $200 will be less of an issue.
> It's hard to provide trust to OpenAI that they won't steal data of enterprise to train next model
Bit of a cynical take. A company like OpenAI stands to lose enormously if anyone catches them doing dodgy shit in violation of their agreements with users. And it's very hard to keep dodgy behaviour secret in any decent sized company where any embittered employee can blow the whistle. VW only just managed it with Dieselgate by keeping the circle of conspirators very small.
If their terms say they won't use your data now or in the future then you can reasonably assume that's the case for your business planning purposes.
Lawsuits over the legality of using someone's writing as training data aren't the same thing as saying you won't use someone as training data and then doing so. They're different things. One is people being upset that their work was used in a way they didn't anticipate, and wanting additional compensation because a computer reading their work is different from a person reading their work. The other is saying you won't do something, doing it anyway, and lying about it.
It's not that anyone suspects OpenAI of doing dodgy shit. Data flowing out of an enterprise is very high risk, no matter what security safeguards you employ. So they want everything inside their cloud perimeter and on servers they can control.
IMO no big enterprise will adopt chatGPT unless it's all hosted in their cloud. Open source models lend better to the enterprises in this regard.
> IMO no big enterprise will adopt chatGPT unless it's all hosted in their cloud
80% of big enterprises already use MS Sharepoint hosted in Azure for some of their document management. It’s certified for storing medical and financial records.
Cynical? That’d be on brand… especially with the ongoing lawsuits, the exodus of people and the CEO drama a while back? I’d have a hard time recommending them as a partner over Anthropic or Open Source.
It's not enough for some companies that need to ensure it won't happen.
I know for a fact a major corporation I do work for is vehemently against any use of generative A.I. by its employees (just had that drilled into my head multiple times by their mandatory annual cybersecurity training), although I believe they are working towards getting some fully internal solution working at some point.
Kind of funny that Google includes generative A.I. answers by default now, so I still see those answers just by doing a Google search.
I've seen the enterprise version with a top-5 consulting company, and it answers from their global knowledgebase, cites references, and doesn't train on their data.
I recently (in the last month) asked ChatGPT to cite its sources for some scientific data. It gave me completely made up, entirely fabricated citations for academic papers that did not exist.
The behavior you're describing sounds like an older model's behavior. When I ask for links to references these days, it searches the internet and gives me links to real papers that are often actually relevant and helpful.
I don’t recall that it ever mentioned whether it did or not. I don’t have the search on hand, but from my browser history I did the prompt engineering on 11/18 (perhaps there's a new model since then?).
I actually repeated the prompt just now and it actually gave me the correct, opposite response. For those curious, I asked ChatGPT what turned on a gene, and it said Protein X turns on Gene Y as per -fake citation-. Asking today if Protein X turns on Gene Y ChatGPT said there is no evidence, and showed 2 real citations of factors that may turn on Gene Y.
So sorry to offend your delicate sensibilities by calling out a blatant lie from someone completely unrelated to yourself. Pretty bizarre behavior in itself to do so.
As just another example, ChatGPT said that in the Okita paper they switched media on day 3, when if you read the paper they switched the media on day 8. So not only did it fail to generate the correct reference, it also failed to accurately interpret the contents of a specific paper.
I’m a pretty experienced developer and I struggle to get any useful information out of LLMs for any non-trivial task.
At my job (at an LLM-based search company) our CTO uses it on occasion (I can tell by the contortions in his AI code that aren't present in his handwritten code; I rarely need to fix the former).
And I think our interns used it for a demo one week, but I don’t think it’s very common at my company.
Won’t name my company, but we rely on Palantir Foundry for our data lake.
And the only thing everybody wants [including Palantir itself] is to deploy at scale AI capabilities tied properly to the rest of the toolset/datasets.
The issues at the moment are a mix of IP on the data, insurance on the security of private clouds infrastructures, deals between Amazon and Microsoft/OpenAI for the proper integration of ChatGPT on AWS, all these kind of things.
But discarding the enterprise needs is in my opinion a [very] wrong assumption.
Very personal feeling, but without a datalake organized the way Foundry is organized, I don’t see how you can manage [cold] data at scale in a company [both in term of size, flexibility, semantics or R&D]. Given the fact that IT services in big companies WILL fail to build and maintain such a horribly complex stack, the walled garden nature of the Foundry stack is not so stupid.
But all that is the technical part of things. Markets do not bless products. They bless revenues. And from that perspective, I have NO CLUE.
This is what's so brilliant about the Microsoft "partnership". OpenAI gets the Microsoft enterprise legitimacy, meanwhile Microsoft can build interfaces on top of ChatGPT that they can swap out later for whatever they want when it suits them
I think this is good for Microsoft, but less good for OpenAI.
Microsoft owns the customer relationship, owns the product experience, and in many ways owns the productionisation of a model into a useful feature. They also happen to own the datacenter side as well.
Because Microsoft is the whole wrapper around OpenAI, they can also negotiate. If they think they can get a better price from Anthropic, Google (in theory), or their own internally created models, then they can pressure OpenAI to reduce prices.
OpenAI doesn't get Microsoft's enterprise legitimacy, Microsoft keep that. OpenAI just gets preferential treatment as a supplier.
On the way up the hype curve it's the folks selling shovels that make all the money, but in a market of mature productionisation at scale, it's those closest to customers who make the money.
$10B of compute credits on a capped profit deal that they can break as soon as they get AGI (i.e. the $10T invention) seems pretty favorable to OpenAI.
I’d be significantly less surprised if OpenAI never made a single $ in profit than if they somehow invented “AGI” (of course nobody has a clue what that even means so maybe there is a chance just because of that..)
Leaving aside the “AGI on paper” point a sibling correctly made, your point shares the same basic structure as noting that any VC investment is a terrible deal if you only 2x your valuation. You might get $0 if there is a multiple on the liquidation preference!
OpenAI are clearly going for the BHAG. You may or may not believe in AGI-soon but they do, and are all in on this bet. So they simply don’t care about the failure case (ie no AGI in the timeframe that they can maintain runway).
OAI probably does through their API, but I agree that ChatGPT is not really an enterprise product. For the company, the API is the platform play; their enterprise customers are going to be the likes of MSFT, Salesforce, Zendesk, or say Apple to power Siri. These are the ones doing the heavy lifting of selling and making an LLM product that provides value to their enterprise customers, a bit like Stripe/AWS. Whether OAI can form a durable platform (vs their competitors or in-house LLMs) is the question here, or whether they can offer models at a cost that justifies the upsell of the AI features their customers offer.
That's why Microsoft included OpenAI access in Azure. However, the current offering is quite immature, so companies are using several pieces of infra to make it usable (rate limiting, better authentication, etc.).
> As for ChatGPT, it's a consumer tool, not an enterprise tool. It's not really integrated into an enterprises' existing toolset, it's not integrated into their authentication, it's not integrated into their internal permissions model, the IT department can't enforce any policies on how it's used. In almost all ways it doesn't look like enterprise IT.
What, according to you, is the bare minimum it would take for it to be an enterprise tool?
SSO and enforceable privacy and IP protections would be a start. RBAC, queues, caching results, and workflow management would open a lot of doors very quickly.
Have used it at 2 different enterprises internally; the issue is price more than anything. Enterprises definitely do want to self-host, but for frontier tech they want frontier models, for solving complicated unsolved problems or building efficiencies into complicated workflows. One company had to rip it out for a time due to price; I no longer work there, so I can't speak on whether it was reintegrated.
Decision making in enterprise procurement is more about whether it makes the corporation money and whether there is immediate and effective support when it stops making money.
I don't think user submitted question/answer is as useful for training as you (and many others) think. It's not useless, but it's certainly not some goldmine either considering how noisy it is (from the users) and how synthetic it is (the responses). Further, while I wouldn't put it past them to use user data in that way, there's certainly a PR/controversy cost to doing so, even if it's outlined in their ToS.
In an enterprise, long content or whole documents will be poured into ChatGPT if there's no policy limitation from the company, and that can be meaningful training data.
At the very least, there's a possibility this content can be seen by OpenAI staff reviewing bad cases, so privacy concerns still exist.
No, because a lot of people asking you questions doesn't mean you have the answers to them. It's an opportunity to find the answers by hiring "AI trainers" and putting their responses in the training data.
Yeah it's a fairly standard clause in the business paid versions of SaaS products that your data isn't used to train the model. The whole thing you're selling is per-company isolation so you don't want to go back on that.
Whether your data is used for training or not is an approximation of whether you're using a tool for commercial applications, so a pretty good way to price discriminate.
I wonder if OpenAI can break into enterprise. I don't see much of a path for them, at least here in the EU. Even if they do manage to build some sort of trust as far as data safety goes (and I'm not sure they'll have much more luck with that than Facebook had trying to sell that corporate thing they did, still do?), they will still face the very real issue of having to compete with Microsoft.
I view that competition a bit like Teams vs anything else. Teams wasn't better, but it was good enough and it's "sort of free". It's the same with the Azure AI tools: they aren't free, but since you don't exactly pay list pricing in enterprise they can be fairly cheap. Copilot is obviously horrible compared to ChatGPT, but a lot of the Azure AI tooling works perfectly well, and much of it integrates seamlessly with what you already have running in Azure. We recently "lost" our OCR for a document flow, and since it wasn't recoverable we needed to do something fast. Well, Azure Document Intelligence was so easy to hook up to the flow it was ridiculous.

I don't want to sound like a Microsoft commercial. I think they are a good IT business partner, but the products are also sort of a trap where all those tiny things create the perfect vendor lock-in. Which is bad, but it's also where European enterprise is at, since the "monopoly" Microsoft has on its suite of products makes it very hard not to use them. Teams again is the perfect example, since it "won" by basically being a 0 in the budget even though it isn't actually free.
Man, if they can solve that "trust" problem, OpenAI could really have an big advantage. Imagine if they were nonprofit, open source, documented all of the data that their training was being done with, or published all of their boardroom documents. That'd be a real distinguishing advantage. Somebody should start an organization like that.
The cyber security gatekeepers care very little about that kind of stuff. They care only about what does not get them in trouble, and AI in many enterprises is still viewed as a cyber threat.
One of the things that I find remarkable at my work is that they block ChatGPT because they're afraid of data leaking, but Google Translate has been promoted for years, and we don't really do business with Google. We're a Microsoft shop. Kind of a double standard.
I mean, it was probably a jibe at OpenAI's transition to for-profit, but you're absolutely right.
Enterprise decision makers care about compliance, certifications, and "general market image" (there's probably a proper English term for that). OpenAI has none of that, and they will compete with companies that do.
Sometimes I wish Apple did more for business use cases. The same https://security.apple.com/blog/private-cloud-compute/ tech that will provide auditable isolation for consumer user sessions would be incredibly welcome in a world where every other company has proven a desire to monetize your data.
Teams winning on price instead of quality is very telling of the state of business. Your #1/#2 communication tool being regarded as a cost to be saved upon.
It’s “good enough” and integrates into existing Microsoft solutions (just Outlook meeting request integration, for example), and the competition isn’t dramatically better, more like a side-grade in terms of better usability but less integration.
You still can't copy a picture out of a teams chat and paste it into an office document without jumping through hoops. It's utterly horrible. The only thing that prevents people from complaining about it is that it's completely in line with the rest of the office drone experience.
In my experience Teams is mostly used for video conferencing (i.e. as a Zoom alternative), and for chats a different tool is used. Most places already had chat systems set up (Slack, Mattermost, whatever) (or standardize on email anyway), before video conferencing became ubiquitous due to the pandemic.
And yet Teams allows me to seamlessly video call a coworker. Whereas in Slack you have this ridiculous "huddle" thing where all video call participants show up in a tiny tiny rectangle and you can't see them properly. Even a screen share only shows up in a tiny rectangle. There's no way to increase its size. What's even the point of having this feature when you can't see anything properly because everything is so small?
Seriously, I'm not a fan of Teams, but the sad state of video calls in Slack, even in 2024, seriously ruins it for me. This is the one thing — one important thing — that Teams is better at than Slack.
Consider yourself lucky; my team uses Skype for Business. It's Skype except it can't do video calls, or calls at all. Just a terrible messaging client with zero features!
I’m not sure you can, considering how broad a term “better” is. I do know a lot of employees in a lot of non-tech organisations here in Denmark wish they could still use Zoom.
Even in my own organisation Teams isn’t exactly a beloved platform. The whole “Teams” part of it can actually solve a lot of the issues our employees have with sharing documents, having chats located in relation to a project and so on, but they just don’t use it because they hate it.
Email, Jitsi, Matrix/Element: many of them, e2e encrypted and on-premise. No serious company (outside of the US) which really cares about its own data privacy would go for MS Teams, which can't even offer a decent user experience most of the time.
> I wonder if OpenAI can break into enterprise. I don’t see much of a path for them, at least here in the EU.
Uhh they're already here. Under the name CoPilot which is really just ChatGPT under the hood.
Microsoft launders the missing trust in OpenAI :)
But why do you think copilot is worse? It's really just the same engine (gpt-4o right now) with some RAG grounding based on your SharePoint documents. Speaking about copilot for M365 here.
I don't think it's a great service yet, it's still very early and flawed. But so is ChatGPT.
Agreed on the strategy questions. It's interesting to tie back to IBM; my first reaction was that openai has more consumer connectivity than IBM did in the desktop era, but I'm not sure that's true. I guess what is true is that IBM passed over the "IBM Compatible" -> "MS DOS Compatible" business quite quickly in the mid 80s; seemingly overnight we had the death of all minicomputer companies and the rise of PC desktop companies.
I agree that if you're sure you have a commodity product, then you should make sure you're in the driver seat with those that will pay more, and also try and grind less effective players out. (As a strategy assessment, not a moral one).
You could think of Apple under JLG and then being handed back to Jobs as precisely being two perspectives on the answer to "does Apple have a commodity product?" Gassée thought it did, and we had the era of Apple OEMs, system integrators, other boxes running Apple software, and Jobs thought it did not; essentially his first act was to kill those deals.
The new pricing tier suggests they're taking the Jobs approach - betting that their technology integration and reliability will justify premium positioning. But they face more intense commoditization pressure than either IBM or Apple did, given the rapid advancement of open-source models.
The critical question is timing - if they wait too long to establish their enterprise position, they risk being overtaken by commoditization as IBM was. Move too aggressively, and they might prematurely abandon advantages in the broader market, as Apple nearly did under Gassée.
Threading the needle. I don't envy their position here. Especially with Musk in the Trump administration.
The Apple partnership and iOS integration seems pretty damn big for them - that really corners a huge portion of the consumer market.
Agreed on enterprise - Microsoft would have to roll out policies and integration with their core products at a pace faster than they usually do (Azure AD, for example, still pales in comparison to legacy AD feature-wise; I am continually amazed they do not prioritize this more).
Except I had to sign in to OpenAI when setting up Apple Intelligence. Even though Apple Intelligence is doing almost nothing useful for me right now, at least OpenAI's numbers go up.
Right now Gemini Pro is best for email, docs, calendar integration.
That said, ChatGPT Plus is a good product and I might spring for Pro for a month or two.
ChatGPT through Siri/Apple Intelligence is a joke compared to using ChatGPT's iPhone app. Siri is still a dumb one trick pony after 13 years of being on the market.
Supposedly Apple won't be able to offer a Siri LLM that acts like ChatGPT's iPhone app until 2026. That gives Apple's current and new competitors a head start. Maybe ChatGPT and Microsoft could release an AI phone. I'd drop Apple quickly if that becomes a reality.
Well one key difference is that Google and Amazon are cloud operators, they will still benefit from selling the compute that open source models run on.
For sure. If I were in charge of AI for the US, I'd prioritize having a known good and best-in-class LLM available not least for national security reasons; OAI put someone on gov rel about a year ago, beltway insider type, and they have been selling aggressively. Feels like most of the federal procurement is going to want to go to using primes for this stuff, or if OpenAI and Anthropic can sell successfully, fine.
Grok winning the Federal bid is an interesting possible outcome though. I think that, slightly de-Elon-ed, the messaging that it's been trained to be more politically neutral (I realize that this is a large step from how it's messaged) might be a real factor in the next few years in the US. Should be interesting!
Fudged71 - you want to predict openai value and importance in 2029? We'll still both be on HN I'm sure. I'm going to predict it's a dominant player, and I'll go contra-Gwern, and say that it will still be known as best-in-class product delivered AI, whether or not an Anthropic or other company has best-in-class LLM tech. Basically, I think they'll make it and sustain.
Somehow I missed the Anduril partnership announcement. I agree with you. National Security relationships in particular creates a moat that’s hard to replicate even with superior technology.
It seems possible OpenAI could maintain dominance in government/institutional markets while facing more competition in commercial segments, similar to how defense contractors operate.
Now we just need to find someone who disagrees with us and we can make a long bet.
It feels strange to say, but I think the product moat looks harder than the LLM moat for the top 5 teams right now. I'm surprised I think that, but I've assessed so many LLMs in the last 18 months, and they keep getting better, albeit more slowly, and they keep getting smaller while losing less quality, and the tooling around them keeps getting better.
At the same time, all the product infra around using, integrating, safety, API support, enterprise contracts, data security, threat analysis, all that is expensive and hard for startups in a way that spending $50mm with a cloud AI infra company is not hard.
Altman's new head of product is reputed to be excellent as well, so it will be super interesting to see where this all goes.
One of the main issues that enterprise AI has is the data in large corporations. It's typically a nightmare of fiefdoms and filesystems. I'm sure that a lot of companies would love to use AI more, both internally and commercially. But first they'd have to wrangle their own systems so that OpenAI can ingest the data at all.
Unfortunately, those are 5+ year projects for a lot of F500 companies. And they'll have to burn a lot of political capital to get the internal systems under control. Meaning that the CXO that does get the SQL server up and running and has the clout to do something about non-compliance, that person is going to be hated internally. And then if it's ever finished? That whole team is gonna be let go too. And it'll all just then rot, if not implode.
The AI boom for corporations is really going to let people know who is swimming naked when it comes to internal data orderliness.
Like, you want to be the person that sells shovels in the AI boom for enterprise? Be the 'cleaning lady' for company data and non-compliance. Go in, kick butts, clean it all up, be hated, leave with a fat check.
Did not know that stack, thanks.
From my perspective as a data architect, I am really focused on the link between the data sources and the data lake, and the proper integration of heterogenous data into a “single” knowledge graph.
For Palantir, it is not very difficult to learn their way of working [their Pipeline Builder feeds a massive spark cluster, and OntologyManager maintains a sync between Spark and a graph database. Their other productivity tools then rely on either one data lake and/or the other].
I wonder how Glean handles the datalake part of their stack. [scalability, refresh rate, etc]
ChatGPT's better analogy is Google. Once people use enough Google, they ain't gonna switch unless the alternative is a quantum leap better and comes with scale. On the API side things could get commoditized, but it's more than just having a slightly better LLM in the benchmarks.
There exists no future where OpenAI both sells models through API and has its own consumer product. They will have to pick one of these things to bet the company on.
That's not necessarily true. There are many companies that sell both end-user products and B2B products. There are a million specific use cases that OpenAI won't build dedicated products for.
Think Amazon that has both AWS and the retail business. There's a lot of value in providing both.
AI can be used for financial gain, to influence and lie to people, to simulate human connection, to generate infinite content for consumption,... at scale.
In the early days of ChatGPT, I'd get constantly capped, every single day, even on the paid plan. At the time I was sending them messages, begging to charge me $200 to let me use it unlimited.
The enterprise surface area that OpenAI seems to be targeting is very small. The cost curve looks similar to classic cloud providers, but gets very steep much faster. We started on their API and then moved out of the OpenAI ecosystem within ~2 years as costs grew fast, and we see equivalent or better performance with much cheaper and/or OS models combined with pretty modest hardware. Unless they can pull a bunch of Netflix-style deals, the economics here will not work out.
The "open source nature" this time is different. "Open source" models are not actually open source, in the sense that the community can't contribute to their development. At best they're just proprietary freeware. Thus, the continuity of "open source" models depends purely on how long their sponsors sustain funding. If Meta or Alibaba or Tencent decide tomorrow that they're no longer going to fund this stuff, then we're in real trouble, much more than when Red Hat drops the ball.
I'd say Meta is the most important player here. Pretty much all the "open source" models are built on Llama in one way or another. The only reason Llama exists is because Meta wants to commoditize AI in order to prevent the likes of OpenAI from overtaking them later. If Meta one day no longer believes in this strategy for whatever reason, then everybody is in serious trouble.
> OpenAI is racing against two clocks: the commoditization clock (how quickly open-source alternatives catch up) and the monetization clock (their need to generate substantial revenue to justify their valuation).
Also important to recognize that those clocks aren’t entirely separated. Monetization timeline is shorter if investors perceive that commodification makes future monetization less certain, whereas if investors perceive a strong moat against commodification, new financing without profitable monetization is practical as long as the market perceives a strong enough moat that investment in growth now means a sufficient increase in monetization down the road.
Has anyone heard of or seen it used anywhere? I was in-house when it launched to big fanfare from upper management, and the vast majority of the company was tasked with creating team projects utilizing Watson.
Watson was a pre-LLM technology, an evolution of IBM's experience with the expert systems which they believed would rule the roost in AI -- until transformers blew all that away.
Am I the only one who's getting annoyed of seeing LLMs be marketed as competent search engines? That's not what they've been designed for, and they have been repeatedly bad at that.
> the commoditization clock (how quickly open-source alternatives catch up)
I believe we are already there at least for the average person.
Using Ollama I can run different LLMs locally that are good enough for what I want to do. That's on a 32GB M1 laptop. No more having to pay someone to get results.
For development, PyCharm Pro's latest LLM autocomplete is just short of writing everything for me.
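For anyone curious about the local setup: a running Ollama server exposes a small REST API on localhost:11434. This sketch only builds the request payload (the model name is just an example of one you might have pulled; actually sending it requires the server to be running):

```python
import json

# Request shape for Ollama's local generate endpoint
# (POST http://localhost:11434/api/generate).
def build_payload(model, prompt):
    return json.dumps({
        "model": model,       # e.g. a model pulled with `ollama pull`
        "prompt": prompt,
        "stream": False,      # return one JSON object instead of a stream
    })

payload = build_payload("llama3.2", "Summarize: meeting moved to Friday.")

# To actually send it (needs a running server), something like:
#   req = urllib.request.Request("http://localhost:11434/api/generate",
#                                data=payload.encode(),
#                                headers={"Content-Type": "application/json"})
#   print(json.loads(urllib.request.urlopen(req).read())["response"])
print(json.loads(payload)["model"])
```

Nothing leaves the machine, which is exactly the data-control point made elsewhere in this thread.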
"whether large organizations will prioritize the kind of integrated, reliable, and "safe" AI solutions"
While it may be "safe" in terms of output quality control, SaaS is not safe in terms of data control. Meta's Llama is the winner in any scenario where it would be ridiculous to send user data to a third party.
Yes, but how can this strategy work, and who would choose ChatGPT at this point, when there are so many alternatives, some better (Anthropic), some just as good but way cheaper (Amazon Nova) and some excellent and open-source?
Microsoft is their path into the enterprise. You can use their so-so enterprise support directly or have all the enterprise features you could want via Azure.
There really aren't a lot of open-source large language models with that capability. The only game changer so far has been Meta open-sourcing Llama, and that's about it for models of that caliber.