Ask HN: Are GPT wrapper companies "AI" companies?
11 points by janalsncm 4 months ago | 23 comments
I’m wondering if there is a consensus on this one. I see many companies have AI in their name and call themselves AI companies, but when I look into how their product works, it is mostly just making API calls and prompt engineering. Is this what an “AI company” is?

And as a follow up question, what would you call a company that was training original ML models (not even just LLMs)? Is that also an “AI” company and if so, should we draw a distinction?




Disclaimer: I'm building an AI wrapper company

That's an interesting question, and initially I read it as the common critique I hear: are AI wrapper companies actual companies? My answer is yes: as long as you provide value and someone is paying, you are a company.

Now, are they "AI" companies? I'm leaning towards no. It depends on how much AI you are doing. If you provide a nice UI with just an API call to get the answer, then no. If you do your own embedding, vector DB querrying, re-ranking, not sure, but still leaning no. But if you start optimizing your embedding model, maybe play with fine-tuning LLMs, then yes.

It's not black and white and you can't blame startups for trying to appear bigger than they are.


This is like saying a web company is a DB company because they just built their software with a DB as their backend.

It's silly nomenclature: some VCs and the entrepreneurial community started labelling these AI companies "GPT wrappers" to make them seem less useful, when in reality what matters is whether they have users who are happy with the service.

And let's all be clear on this: at the moment, except for Meta (which uses its GPUs for its recommendation algorithm), all AI companies using H100s are losing money, and lots of it.

I just read an article saying OpenAI will spend $7B this year to make $1B in ARR, and they are literally the leader in the space (well, Claude is better at the moment, but OpenAI is expected to release a better model, since they started training it a while ago).

Every AI company doing LLMs is a default-dead company on its way to bankruptcy. The ones that provide the API layer are even more likely to end up in Chapter 11, given the high training and staffing costs.


One or two (or 5) will be in the next FAANG, tho. And the current acronym can easily lose meaning.


It's a complex issue. I've seen many companies use "AI" as a buzzword, but true "AI companies" often develop novel models or algorithms, not just wrap existing APIs.

I choose to embrace it and that's why my company is literally called AnotherWrapper lol


If it's more expensive + less reliable + slower, then it's simply a wrapper.

The real companies should be better on every angle: cheaper, faster, higher-quality output, more reliable. It doesn't matter if it's simply "prompt engineering". You can add value with exponential backoff, or by switching to a different AI provider when one is down or slow (as they usually are).
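A minimal sketch of that reliability layer (retry with jittered exponential backoff, then fall back to a second provider). The call_primary/call_fallback functions are placeholders for whatever SDKs you actually use:

    import random
    import time

    def call_primary(prompt):    # placeholder: e.g. OpenAI
        raise RuntimeError("provider down")

    def call_fallback(prompt):   # placeholder: e.g. Anthropic or a self-hosted model
        return "fallback answer"

    def complete(prompt, retries=4):
        delay = 1.0
        for _ in range(retries):
            try:
                return call_primary(prompt)
            except Exception:
                time.sleep(delay + random.uniform(0, 0.5))  # jittered exponential backoff
                delay *= 2
        return call_fallback(prompt)  # primary still failing: switch providers

    print(complete("hello"))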

There's a lot of excitement around RAG because it lets startups compete on that faster + cheaper + quality axis. There's also domain expertise: a skilled writer can make a better writing app than some random person with good prompting skills. Perplexity is probably adding extra caching layers in front of the AI so they can serve answers cheaper and faster.
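To illustrate the caching-layer idea (this is speculation, not Perplexity's actual architecture), here's a toy version keyed on a hash of the normalized query; a real system would use something like Redis with expiry:

    import hashlib

    _cache = {}

    def cached_answer(query, generate):
        key = hashlib.sha256(query.strip().lower().encode()).hexdigest()
        if key not in _cache:
            _cache[key] = generate(query)  # pay for the model call only on a cache miss
        return _cache[key]

    # usage (call_llm is whatever generation function you already have):
    # cached_answer("who won the 2022 world cup?", call_llm)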


Do they make money? If they had to pay the true cost of GPT, would they still be in business?


This is a good point. In a couple of years (or sooner) they’ll be looking for a smaller, cheaper API solution and someone will make a fortune providing it.


I don't think anyone will make a fortune providing cheap AIs at a loss. In the end it might just be AWS Bedrock+llama.


A cheap classifier would actually be profitable, since it would be millions of times less expensive than an LLM. These companies are using Lamborghinis to run errands when a 2004 Corolla would be just as good.
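A sketch of that "2004 Corolla" option: a tiny scikit-learn text classifier (here for hypothetical support-ticket routing) that needs no GPU and no per-token API bill. The training data is made up for illustration:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts  = ["refund my order", "the app crashes on login", "change my shipping address"]
    labels = ["billing", "bug", "account"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)

    print(clf.predict(["I want my money back"]))  # runs anywhere, costs basically nothing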


OpenAI sells a cheap classifier-tier product (babbage-002) at $0.40/MTok, while gpt-4o-mini is $0.15/MTok. Multimodal AI is now cheap enough to run on ads.

They might still be subsidizing some of the cost, but maybe it's more like LEDs vs. incandescent bulbs at this point: the power of quantum lighting, cheaper and better.
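Back-of-the-envelope arithmetic at the list prices quoted above (the 500-token request size is an assumption for illustration):

    PRICE_PER_MTOK = {"babbage-002": 0.40, "gpt-4o-mini": 0.15}  # USD per 1M tokens

    tokens_per_request = 500  # assumed request size
    for model, price in PRICE_PER_MTOK.items():
        print(f"{model}: ${tokens_per_request / 1_000_000 * price:.6f} per request")
    # babbage-002: $0.000200, gpt-4o-mini: $0.000075 -- small enough that ad revenue could cover it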


Are they not paying the true cost through the API today?


No, or OpenAI etc would be profitable. I believe they're all heavily subsidized by investors right now in a race to cement the first-mover advantage.

Now if they ran their own Llama instance or similar on their own machines, that would be closer to the true cost? But I doubt the quality is sufficient, or that it can keep up with the latest training.


It's possible the unit economics aren't that bad; it's just that they're paying people a lot. Every one of the FAANG companies was criticized for burning money, and now they're some of the most profitable companies in the world, first mover or not. AI is a little weird in that OpenAI has just vastly accelerated its competitors.


The true cost for OpenAI includes almost a decade of salaries and office space, since before COVID, along with a very hefty training bill. Even if inference were free for them, it would take a while before they're profitable.


That must've been a wild time to be working there. Imagine going to work thinking one day in a few short years it's going to become one of the world's most important companies. I'd be walking back from lunch thinking "oh man, oh man..." the whole time, lol.

And then to actually see it all happen, rise to the top, only for a failed coup to almost dismantle your organization, and then to survive THAT and keep going. Must've been nuts. I hope some of them write a memoir about it later.


A company is a company. But I don't think they are anything special. I see them like a company whose entire business model is the Instagram API.


We built a software product that writes code. Even though it's obviously very intelligent, it is not based on an LLM at all. We are going back and forth on whether to characterize it as AI for this very reason. The term is being heavily overused right now.


We built a generative AI product that makes product design 10x faster. Yes, we're using AI models and making API calls but the functionality of the product is still generative AI in nature. It is generating something new.


An LLM is like an engine, and an AI application is like a car. I believe most AI companies will be wrapper companies: they build the car; they don't necessarily need to build the engine from scratch.


Coke is just a wrapper company for Sugar and Water companies.

If it sells at margins above its cost of production, it's a money-making machine. That's a company right there.


It is not only "just making API calls". In many cases you need to do more than that: RAG, prompt construction, a data pipeline for the whole process, etc.


Is there no true AI company out there?


No they aren’t. They are software companies.



