GPT4 is up to 6 times more expensive than GPT3.5 (openai.com)
192 points by behnamoh on March 14, 2023 | hide | past | favorite | 131 comments


"Please use the original title, unless it is misleading or linkbait; don't editorialize." - https://news.ycombinator.com/newsguidelines.html

If you want to say what you think is important about an article, that's fine, but do it by adding a comment to the thread. Then your view will be on a level playing field with everyone else's: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...


Sometimes you have to abbreviate to keep a title under the character count.

Sometimes what is in the page's <title/> tag is not the same as the readable title on the article ... and since many news aggregators link using the contents of the <title/> tag, which is what caught your attention, it is not unreasonable to use that.

And something I've observed recently is titles on news articles being updated after they are published ... political pressure on editors ¯\_(ツ)_/¯ ... it usually manifests as a "watering down" of the original title.

This doesn't justify wild re-writes ... but "original title" is not as clear cut as one might wish.


> Sometimes you have to abbreviate to keep a title under the character count.

Correct!

> Sometimes what is in the page's <title/> tag is not the same as the readable title on the article ... and [...] it is not unreasonable to use that.

Also quite correct!

> And something I've observed recently is titles on news articles being updated after they are published

Also entirely correct. (I don't know if it's pressure - sometimes they're just correcting things - but we don't really need to know why.)

> This doesn't justify wild re-writes ... but "original title" is not as clear cut as one might wish.

Absolutely right!


The title of my product's launch post a couple months ago - a labor of love - was nerfed, despite being backed by benchmarks.


HN's guidelines don't call for using the original title in every case; there are quite a few exceptions. Most likely your title broke one of those rules.

For example, it's certainly possible for a title to be linkbait even when it is backed by benchmarks. But I'd have to know which submission it was to answer in detail!


Worth remembering GPT3 was 10-20 times more expensive 2yrs ago. There's a super-Moore's-law learning curve going on here, and I suspect in 1yr GPT4.5 will be the same cost as GPT3.5 today.

It's insane... feels like iPhone 3G to iPhone 4 level quality improvement every year.


GPT-4 is a scary improvement over 3.5, especially for handling code. It will be the literal definition of awesome when these models get a large enough context space to hold a small-medium sized codebase.

I've been playing around with it for an hour seeing what it can do to refactor some of the things we have with the most tech debt, and it is astounding how well it does with how little context I give it.


There are already some cool projects that help LLMs go beyond the context window limitation and work with even larger codebases, like https://github.com/jerryjliu/llama_index and https://github.com/hwchase17/langchain.


The fundamental techniques that they use are highly lossy and far inferior to ultra-long-context models where you can do it all in one prompt. Hate to break it to you and all the others.


> Hate to break it to you and all the others.

Jeez. Their comment is quite obviously a complementary one in response to the limitation rather than a corrective one about the limitation.


The methods they employ are to improve the context being given to the model irrespective of the context length. Even when the context length improves these methods will be used to decrease the search space and resources required for a single task (think about stream search vs indexed search).

I’m also curious which paper you are referencing that finds that more context, versus more relevant context, yields better results?

A good survey of the methods for “Augmented Language Models” (CoT, etc.) is here: https://arxiv.org/pdf/2302.07842.pdf
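The "decrease the search space" idea can be sketched in a few lines. This is a toy illustration, not actual llama_index or langchain code: real tools use learned embeddings from a model, whereas here a bag-of-words vector and made-up code chunks stand in so the example is self-contained.

```python
import math
import re
from collections import Counter

def embed(text):
    # Bag-of-words "embedding": token counts stand in for a real vector.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def build_index(chunks):
    # Embed every chunk once, up front (the "indexed search" part).
    return [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(index, query, k=2):
    # At query time, score chunks against the query and keep the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(item[1], q), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

chunks = [
    "def parse_config(path): reads the YAML config file",
    "def send_email(to, subject): wraps the SMTP client",
    "class RetryPolicy: exponential backoff for network calls",
]
index = build_index(chunks)
context = retrieve(index, "how do we send an email?", k=1)
```

Only the top-scoring chunks get pasted into the prompt, so the token budget is spent on relevant context rather than the whole codebase.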


Where can someone find and try ultra-long context length models?

Any links?


The longest one that is generally available is always going to be yourself :)


My context model is getting shorter and fuzzier.


… but still the weights are increasing ;)


The only thing holding this back now is lack of enough context. That’s the big nut to crack. How do you hold enough information in memory at once?


GPT4 supports 32k tokens, which I guesstimate would be ~25k code tokens and perhaps ~2.5k lines of code. So already enough to work on a whole module of code at the same time. If you're using small microservices, it might already be good enough.


Also, if you were doing it personally you'd probably not take the whole codebase, but look at function signatures/docs as you get further away from the code you care about. While there are probably benefits in clever systems for summarising and iteratively working on the code, you can get away with just cramming a load of context in.

I have been playing with GPT4 and it's good. It's easy to imagine a bunch of use cases where even the maximal cost of a few dollars ($1.80 ish for the context alone) for a single query is worth it. If it's good enough to save you time, it's very easy to cross the threshold where it's cheaper than a person.
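The "signatures as you get further away" approach above is easy to sketch. A rough illustration, assuming Python source files; the regex and helper names are made up for the example:

```python
import re

# Keep full source for the file being worked on, but collapse every other
# file to its def/class lines, so far-away code costs only a few tokens.
SIG_RE = re.compile(r"^\s*(?:def |class ).*", re.MULTILINE)

def skeleton(source):
    """Return only the function/class signature lines of a module."""
    return "\n".join(m.group(0).strip() for m in SIG_RE.finditer(source))

def build_context(focus_source, other_sources):
    # Full text for the focus file, signatures only for everything else.
    parts = [focus_source] + [skeleton(src) for src in other_sources]
    return "\n\n".join(parts)

helper = (
    "def slugify(title):\n"
    "    return title.lower()\n"
    "\n"
    "class Cache:\n"
    "    def get(self, key):\n"
    "        return None\n"
)
```

A real system would also keep docstrings, but even this crude version lets a much bigger slice of a repo fit into a fixed context window.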


The 32k tokens version should do fairly well on such codebase. I don’t know if it’s the one used in ChatGPT.


I’m not even kidding when I say I just did 6h or so of work in around 45 minutes


I'm worried it's going to decimate the already wobbly jr market - who will be the sr devs of tomorrow, wrangling the LLM output, after it replaces the jr's job of coding all day?


All programmers that use AI just got a boost. I tried it and it is amazing; it writes complex code on demand like I've never seen before. But the competition also gets the boost. So now we have to compete with devs armed with GPT-4. Humans are still the differentiating factor: which company will use AI better?


Certainly it will.


Do you let it refactor your code base? How do you feed the code to it? Just copy paste?


Yup, there’s a bit of a learning curve in figuring out how to get it to output the most useful stuff with the fewest prompts.

Figure out what your goal is, then start by giving a wide context; at first it will give wrong answers due to lack of context. With each wrong answer, give it some more context for whatever you think its proposed solution is missing most. Eventually you get 100% working code, or something so close to it that you can easily and quickly finish it yourself.

This is one strategy, but there are many that you can use to get it to reduce the burden of refactoring.


I can't get it to run a single query today. Are you on the paying plan?


That's incorrect information. The original GPT3 price was about x3 times its current price, not 10-20 times.

https://the-decoder.com/openai-cuts-prices-for-gpt-3-by-two-...


I think he means it was 30x more than gpt-3.5 is today.


Yeah, but that's also 30x worse. ChatGPT is much worse-performing than davinci-003.


yeah, I used to be able to create valid units for WarStuff with the original chatgpt, but now all the units it creates are full of hallucinated traits and special rules that aren't in the source.


If you try davinci-003, it will definitely be able to do it.


No, something’s off, and I’m interested in finding out what


I don't think so. Try to convince ChatGPT to reply with "perform action: email" whenever you send a message that looks like you want to send an email.

It just refuses to do it consistently. It might work once or twice per conversation, but soon it'll be asking you for SMTP credentials.

It's a 1.3B-parameter model; we can't expect much from it. It's a wonder it's as good as it is.
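For what it's worth, a common workaround is not to rely on the model emitting the marker consistently, but to scan each reply for it and fall back to plain chat otherwise. A hypothetical sketch (the "perform action:" convention is just the one from the comment above, not any real API):

```python
import re

# Look for a "perform action: <name>" marker anywhere in the model's
# reply; if absent, treat the reply as ordinary chat text.
ACTION_RE = re.compile(r"perform action:\s*(\w+)", re.IGNORECASE)

def dispatch(reply):
    match = ACTION_RE.search(reply)
    if match:
        return ("action", match.group(1).lower())
    return ("text", reply)
```

This way an occasional refusal degrades to a normal chat message instead of breaking the integration.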


Keep in mind OpenAI is able to lose money on GPT*. Inference for the same predictive power will indeed improve (Facebook's recent model is a step in the right direction) but efficiency of running on given hardware likely won't follow moore's law for long.


Are there truly top-tier ML-optimized chips? I would imagine the hardware space still has room to grow, if only because most of the brainpower has been focused on x86, etc.


If you believe Nvidia market speak, they took AI and used it to optimize their A100 chips when creating the H100 line, so we're already using artificial brainpower on the problem.


There are analog neural net accelerators that can get around lots of existing limitations, but still have a few of their own that need to be figured out.


It's $3.80 for the full 1000 prompt tokens plus the 32k-token-long context. But OpenAI doesn't have a monopoly on long context: RWKV is free and open source and has a virtually unlimited context (as long as you remind it every once in a while). However, ChatGPT really cannot be matched at this point, except perhaps by LLaMA Alpaca 7B

EDIT: I accidentally typed Alpine; it's Alpaca. https://github.com/tatsu-lab/stanford_alpaca


There is also https://open-assistant.io by LAION.


Alpaca is supposed to be roughly comparable to GPT-3.5, which means it is probably far worse than GPT-4.


But also gpt-3.5-turbo is far worse than davinci-003.


By what metric? For most stuff 3.5 is better in my experience. Assuming you are trying to complete an actual task rather than have fun.


3.5-turbo is already worse than 3.5-legacy; you can try this with ChatGPT Plus. Use the normal (turbo) model and then the legacy model, and you will almost certainly feel a difference in quality.

Turbo prioritizes speed (probably due to cost?); the legacy model has far higher quality of output. You can confirm this on Reddit, where none of the ChatGPT Plus userbase seems happy with turbo and the general recommendation is to switch back to legacy.

I tried porting a mini project from davinci to the new turbo api and quickly reverted it, turbo output is a lot more all over the place and very hard to get into something useful. It’s a fun chatbot though and still great for simple tasks

You get what you pay for, and there’s a reason 3.5-turbo is so cheap


I've tried to make an assistant with ChatGPT, because it's cheaper, and it's abysmally worse than davinci-003 at following instructions and reasoning.


6 times? My reading of these prices is that it's 45x more expensive (for 8K) and 90x more expensive (32k)


The title compares GPT3.5 (not ChatGPT) with GPT4. Although the question is: If ChatGPT can do almost anything GPT3.5 can do, then why is it priced almost 10x less? Is it because GPT3.5 is less "censored"?


> Although the question is: If ChatGPT can do almost anything GPT3.5 can do, then why is it priced almost 10x less?

ChatGPT is GPT-3.5; with specific fine tuning / chat-oriented training, and without customer fine-tuning (at least, currently) available. It’s the particular GPT-3.5 interface that OpenAI wants people to preferentially use, so the price structure artificially encourages this.


The ChatGPT API is using a smaller model than the original "GPT 3.5". The original, presumably larger, model is used by Legacy ChatGPT.


Wait is that true? I couldn't find anything that verifies this. Do you have a source?

If it is in fact true, doesn't it fall into the "false advertising" category? That is, OpenAI using ChatGPT for demonstration purposes but then charging for an API that pretends to be ChatGPT while in fact being based on a much smaller model?


The normal ChatGPT interface also switched to the cheaper model, and lots of people complained when they switched (though not enough to make a big media splash). With ChatGPT Pro you can select between the new and the "Legacy" model (and now GPT4 as a third option)


Didn't the CEO say during the GPT-4 demos that humans were involved to do the captions during all the image tests (using Be My Eyes volunteers), and that's why it took some time to show the Discord examples?

If so, it could explain some costs indirectly.


No, the opposite. GPT-4 is an “assistant” you can ask for help.


Wait did they just use Be My Eyes volunteers as AI labellers without telling them? Or were they told? Pretty unethical if not!


Maybe it's something else?

I am genuinely not sure what they mean; this is the timecode in the announcement: https://www.youtube.com/live/outcGtbnMuQ?feature=share&t=651

It seems to say that they use Be My Eyes "to make the product better", so it could be humans in the loop during captioning, or maybe they are using volunteer data from Be My Eyes as a training set?

Or is it completely the opposite, and Be My Eyes is using the OpenAI APIs?

My guess is that it is both?


If I were to speculate, OpenAI is doing two types of optimizations:

1) Cost

2) Function

I suspect ChatGPT is 90% a cost-optimized version of GPT3, and 10% a function-enhanced one.

GPT3 was the first major public wow-able model, and I suspect it was not running on any sort of optimized infrastructure.


yeah there is no way they'd want that kind of traffic unless the expense knobs were turned down.

you might have 10 million queries on this thing per day. if it's just 1c/query that's already $100k/day.


I'm more impressed that it's less than 3 times as expensive as Davinci GPT3. In usecases where most tokens are in the context even only 50% more expensive, while allowing 4x as much context. And you can pay a 100% surcharge to get another fourfold increase in context size.

It only looks expensive when compared to the GPT-3.5-Turbo ChatGPT offering which is incredibly cheap (or alternatively Davinci is overpriced by now)
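To make the comparison concrete, here is the arithmetic using the March 2023 per-1K-token prices ($0.02 for davinci, $0.03/$0.06 prompt/completion for GPT-4 8K, $0.06/$0.12 for 32K); treat the numbers as assumptions and check OpenAI's pricing page before relying on them:

```python
# Per-1K-token prices (USD), March 2023.
PRICES = {
    "text-davinci-003": {"prompt": 0.02, "completion": 0.02},
    "gpt-4-8k":         {"prompt": 0.03, "completion": 0.06},
    "gpt-4-32k":        {"prompt": 0.06, "completion": 0.12},
}

def request_cost(model, prompt_tokens, completion_tokens):
    p = PRICES[model]
    return (prompt_tokens * p["prompt"]
            + completion_tokens * p["completion"]) / 1000

# A context-heavy request: 7,000 prompt tokens, 500 completion tokens.
davinci = request_cost("text-davinci-003", 7000, 500)  # $0.15
gpt4 = request_cost("gpt-4-8k", 7000, 500)             # $0.24
```

For this request GPT-4 8K comes out at $0.24 versus $0.15 for davinci, a 1.6x premium rather than 45x, precisely because context-heavy requests are dominated by the cheaper prompt rate.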


Can you please explain how GPT-4 is cheaper than davinci GPT3. Maybe I do not understand the pricing here…


It's not. I think you've perhaps misread "less than 3 times as expensive as" as "3 times less expensive as" or similar. It's more expensive but not more than 3x the cost.


If this is public I can only imagine what Google has internally. Does DeepMind compete in this space, or is it fundamentally different from an LLM like Bard?

With their massive codebase and already deep investment in AI/ML, I'm pretty sure Google and likely MS already have the ability to do massive refactoring, validate it using tests, reiterate, train, rinse and repeat.


This is good mostly for improving quality of software and reducing technical debt. But not much good for innovation.


> do massive refactoring, validate it using tests

Google do have damn good mass code change tools, for humans, without AI.


Well just the other day people were shocked at how cheap GPT3.5 was.


And website still says "openai."


Maybe they mean “open” as in “open your wallet”?


At least they didn't call it FreeAI


Should they not be able to make money off a world changing new technology they bet billions on?

Does your company offer all its products for free?


I think it was borderline defensible for them to not release the weights, but now they aren't even telling people the size of the model.

From their website:

>Our mission is to ensure that artificial general intelligence benefits all of humanity


I pay for ChatGPT Plus and have paid a fair bit in OpenAI API fees; I don't care that they are making money. The problem is the openwashing, not the monetization.


The problem with the name isn't that they're charging. It's that they're implying that they're something in the ballpark of open source when they're not.


OpenTable isn't Open Source. Open, as a word, has more than one context.


But OpenTable was never open source to begin with, so that association was never made.

Open AI started with a commitment to being open source (thus the "open"), but then they changed their minds and went closed source. I think in this context, keeping the "Open" in their name is a bit deceptive.

It's not the biggest issue in the world, admittedly, but it does leave a bad taste in the mouth.


Oh! That's new information for me. That case does feel like a bait-'n'-switch.


OpenTable references restaurants tables and times that are still open to be reserved.

What contextual definition of "open" does OpenAI have? A website open to subscribers?


An open API you can build products off of. Consider the alternative. They could not let anyone use their models and just release AI products (their own search engine, their own version of Teams, their own self driving car software, their own photoshop competitor, etc).


This is revisionist history. OpenAI very much started as a (somewhat) open source (for whatever definition of "open" makes sense for LLMs) initiative.


I’m not arguing they stuck with their original position. I’m saying the name is still reasonable. I don’t care if they changed positions.


> A website open to subscribers?

You can scoff at this but Google's models weren't available at any price before today.


Designer leather jackets boutiques world wide are rejoicing.


I remember as a young gamer, asking my Dad whether Nvidia or Intel was a bigger company. It seemed obvious to me, since people always bragged about their GPUs online, but never the CPU, that Nvidia is the more important company.

At that time, Nvidia was worth like 1% of Intel.

Today, regarding online discussions, nothing has changed. People still brag about GPUs, never the CPU. But now Nvidia is worth 5 times as much as Intel.


I feel like this is comparing apples and oranges a bit, depending on when you were a kid. Even within the GPU market, Nvidia wasn't a huge player for a while. It's not just that GPUs are more important today, but that Nvidia went from being a small player in the GPU market to essentially being the GPU market. At the same time, Intel's presence in the CPU market has never been lower. Most "computers" are now smartphones running ARM processors, data centers are using AArch64 more and more, there's a slowdown in businesses and consumers buying computers, and AMD is taking a huge share of the X64 market.

Likewise, Nvidia is priced at 102x earnings while Intel is priced at 14x earnings. Some of that might be that people are anticipating the GPU market to be where all the money is in the future, but a decent amount of that is pessimism about Intel's place in the market.

You're not even comparing the CPU market to the GPU market. You're comparing what was possibly the 3rd-ranked GPU maker back when you were a kid, against an Intel that then dominated with 90-95% of CPU sales, with an Nvidia that takes 90% of the GPU market's profits today, against an Intel that is more of a bit player in the CPU market.

Some of Nvidia's size today is because they beat all the other graphics card companies (yes, ATI/AMD is still around, but Nvidia's marketshare is above 85%). Some of that comes from GPUs becoming a much bigger factor in many things like AI. But when comparing Intel and Nvidia today, Intel is still getting double the revenue and profits of Nvidia. This is despite the fact that most CPUs have moved away from their architecture, despite the fact that they've had at least half a decade of terrible engineering performance, despite AMD starting to kick their butt and taking serious marketshare on their architecture, etc.

I don't want to say that GPUs aren't important, but Nvidia being worth more than Intel probably says more about Nvidia executing well while Intel stumbles than anything else. If you were asking your father this in 1998, Nvidia was just releasing the RIVA TNT, 3dfx was still king, and Intel was so dominant it looked like they'd be unstoppable forever. Now you're looking at an Intel that doesn't even look like the king of their own architecture, has been having foundry issues for years, and an Nvidia that no other GPU maker has been able to touch. If another GPU maker beats Nvidia and Nvidia becomes smaller than Intel, does that mean that GPUs aren't important any more? No.

The problem isn't your conclusion that GPUs are important today, but the logic getting there. People bragging about things isn't necessarily a great barometer for future success. Lots of things are trendy and lots of companies come and go. Everyone was talking about Pogs and Beanie Babies at one point. That hasn't made them last. People were talking about smartphone/handheld devices since the 80s, but almost all of those companies aren't in that market today (Palm, Handspring, Blackberry, Microsoft). Instead, Apple won that market. Heck, you could have asked your father if 3dfx were bigger than Intel and seen them go bankrupt a few years later. Where's Matrox and S3? This is an anecdote that just feels like survivorship bias. Yep, you asked about the GPU maker that won the market and the CPU maker that would face big declines (and still Intel's revenue/profits are double Nvidia's).


Thanks for the detailed reply.

My point was that CPU performance seemed to have stopped being the differentiating factor in computers a long time ago. That was evident even in the 2000s. It was impossible to make a CPU that ran much faster than competitors, and most of the innovation appears to be in power consumption, which isn't very exciting.

That's why gamers bragged about GPUs, because GPUs always made a massive difference. This also applies to AI today: GPUs make all the difference, CPUs are just a commodity. Intel became stuck in a commoditized business, while Nvidia was always bleeding edge.


Can GPT be used for something like feeding it a 2000-page PDF of pure text (not English) and asking questions about its contents?


Not yet, but my bet is that we will be able to in the near future. The gpt-4-32k (with a 32K context window) allows for about 52 pages of text


This supports up to 2000 pages: https://www.chatpdf.com/


No. In the near future they will support ~50 pages.


Not without other tooling. Things like langchain and llama_index would be good starting points. An approach would be to use llama_index to create embedding vectors for each section of the pdf, then you query and it gets a vector for your query -> gets the context -> puts it into gpt + your query -> returns the result.

I've seen people say it's better to ask gpt for a fake answer and then use the embedding of that answer to search (so you're looking for context that looks like the answer). I don't know if that's supported in those tools.
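The "fake answer" trick (sometimes called HyDE, for hypothetical document embeddings) is straightforward to sketch. A toy illustration where a word-overlap score stands in for real embeddings, and fake_answer() stands in for an actual LLM call:

```python
import re
from collections import Counter

def embed(text):
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def overlap(a, b):
    # Crude similarity: shared token count stands in for cosine similarity.
    return sum(min(count, b[token]) for token, count in a.items())

def fake_answer(question):
    # Stand-in: a real system would ask the LLM to guess an answer here.
    return "The warranty period is twelve months from the purchase date."

def best_chunk(chunks, question):
    # Search with the guessed answer's embedding, not the question's:
    # a guessed answer looks more like the passage holding the real one.
    probe = embed(fake_answer(question))
    return max(chunks, key=lambda chunk: overlap(embed(chunk), probe))

chunks = [
    "Section 3: shipping is handled by a third-party carrier.",
    "Section 7: the warranty period is 24 months from the date of purchase.",
    "Section 9: returns must be initiated within 14 days.",
]
context = best_chunk(chunks, "How long is the warranty?")
```

Note the guess is wrong (twelve months, not 24) yet still retrieves the right passage, which is the whole point of searching with a hypothetical answer.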


Is it conceivable that ChatGPT could review itself and produce a better revision than a human could?


ChatGPT, unlikely. But with that said, we're already using AI optimizers to make better versions of AI chips like the H100, so make of that what you will.

Current GPT context is too low for this, I believe. We'll need something a little closer to AGI, and I'm unsure whether a cyclic working memory (come up with an output, review the output, review that output, output) will get us to AGI itself. We still need selection functions that bind AI to the real world and not the dream state it lives in. Maybe just hooking it up to compilers and allowing it to run test functions will get it a lot farther in self-learning, but again, no one seems to know at this point.


Genuinely curious as to why ChatGPT could not, so I asked it.

Note the final bullet point. I posit that if it is possible, it will happen (regardless of ethics), because the risk of not doing it (when your adversary will) outweighs the risk of doing it.

"You say an instance of the GPT cannot experiment with new architectures, but could it if a) it was modified by the developers and b) it was given access to sufficient computational resources?"

Response: "In theory, if a GPT model were to be modified by developers to include capabilities for analyzing, designing, and testing new architectures, and if it were given access to sufficient computational resources, it could potentially experiment with new architectures. This would involve incorporating additional machine learning and deep learning algorithms and methods, such as reinforcement learning, evolutionary algorithms, or neural architecture search, which can be used to explore new model designs.

However, there are a few caveats to consider:

- Developing such a system would require significant expertise in AI research, as well as an understanding of the specific problems that need to be addressed.
- The GPT model is a complex system, and adding the ability to self-improve or create new architectures could make the model even more complex and challenging to manage.
- The computational resources required to develop, test, and implement new architectures are substantial, and optimizing resource allocation would be a critical concern.
- Ensuring the safety and ethical use of such a self-improving AI system would be an essential consideration, as it could potentially lead to unintended consequences or malicious use.

Although it is theoretically possible to create a GPT-like model with the ability to experiment with new architectures, it would require significant advancements in AI research, development, and ethical considerations."


That's one of the definitions of the Singularity. Assuming it doesn't quickly hit diminishing returns, at that point in time, you would get a so-called "fast takeoff" of superintelligence.


Exciting thought!


*alarming


This may be because they recently found ways to make GPT3.5 much more efficient.

https://openai.com/blog/introducing-chatgpt-and-whisper-apis


Does anybody have their hands on it? Is it actually 6 times better at performing tasks, or are we paying more for improved corporate-friendly bumper bowling?


Not the API, but if you pay for ChatGPT Plus you can try it out there.

It's very, very impressive so far - I've tried a bunch of things I've previously run against GPT-3 and got noticeably better results - maybe not a 2x or 6x multiple in "quality" (depending on how you measure that) but still very clearly improved.

Posted a few examples here: https://fedi.simonwillison.net/@simon/110022949941148725


Thanks for the reply. I'm not all that impressed by "what is"/information retrieval tasks; is it any better at "thought" tasks like explanatory reasoning and ideogenesis? For instance: "How might an agoraphobic kleptomaniac satisfy both of their desires simultaneously?"


That's a tough question even for me. The answer it did give me was to steal something online. Which was better than anything I was thinking. What did you have in mind?


Ah, interesting! It's a tough question and not one that has an obvious answer, which is what makes it a good test, it requires a little creativity. In my tests, ChatGPT/3 can't/won't answer it.


GPT-3.5 for me also didn't answer the question, and also gave me a MUCH longer response. I guess score a win for GPT-4.


I tried again with the new ChatGPT update released today (not GPT-4), and with some coaxing it suggested using a drone to go out and steal while staying at home. Not bad.


Steal from a friend's house?


You'd still have to go outside to get to said friend's house.

I think GPT4 gave the better answer here.


Those coffee puns are surprisingly good.


It's difficult to be sure exactly what six times "better" means, but I wouldn't expect something to necessarily have to be six times better to be six times more valuable. I wouldn't expect a pitcher making $30MM/yr to be 6x better than one making $5MM/yr, but I could buy that they were 6x more valuable to have on your team.


If you've used Bing Chat, you've used GPT-4: https://blogs.bing.com/search/march_2023/Confirmed-the-new-B...


Bing Chat is more useful in many cases because it provides references that connect to its responses. I don’t like the mobile UI, or that it doesn’t save your chats for you. Maybe I should try the web interface.


heavily lobotomized version of GPT-4


GPT-4 has a knowledge cutoff from 2001. Bing can talk about current events.


You mean 2021.


2021*


In my anecodotal experience, GPT-4 is at least a million times better than GPT-3.

It's like night and day.


How are you trying it? I’m a ChatGPT Plus subscriber but don’t see any change.


You should have the option when you start a new chat to use default or 4


Yes now it gave me that. Not sure why it didn’t before.


In what contexts is the difference the most stark?


I imagine this price will come down over time. OpenAI has repeatedly said that they don't yet have the infrastructure in place to handle too much load and expect to be "severely capacity constrained", so I assume the high pricing and usage caps on the ChatGPT version are there to keep the load manageably low for the moment.


I imagine those dollars from papa Nadella are buying LOTS of capacity as we speak.


It's not $ constrained.

The existing cloud providers don't have 10,000 A100s lying around to ramp up at any time. There's a very limited number of A100s and H100s on the market. TSMC is printing them like candy, but it still takes time to ramp up.

Azure also needs time to install them in datacenters. These A100 stacks probably produce monstrous amounts of heat compared to normal CPU racks.


You're not wrong, but I think many people still have an intuition informed by a pre-silicon-capacity-crunch era; it cannot be taken for granted that the demand for GPUs/TPUs will be met over the next few years.


Why is the 32K context only twice as expensive as the 8K context?

Are they using sparse attention or something? I don't think flash attention on its own can explain it.

EDIT: Oh right, the cost is per token, so if you actually fill the context it is 8x more, which makes much more sense.
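The arithmetic behind the edit, assuming the announced per-1K prompt rates of $0.03 (8K model) and $0.06 (32K model):

```python
# Cost scales with tokens actually sent, so a maxed-out 32K prompt costs
# 8x a maxed-out 8K prompt even though the per-token rate only doubles.
cost_8k_full = 8_000 * 0.03 / 1000    # $0.24
cost_32k_full = 32_000 * 0.06 / 1000  # $1.92
ratio = cost_32k_full / cost_8k_full  # 8.0
```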


Ask it the other way around, "Why is the 8K context only twice as cheap as the 32K context?" and your original answer is clearer: because they think the demand curve supports it.

Price is not determined by cost, but by how much people are willing to pay.


No, I'm just an idiot. Since the price is per token, the actual ratio is more like 8x (assuming you fill both contexts), which is plausibly the ratio between the costs.


In the end the cost for you is how many tokens are consumed; if you can get more value out of one request with a 32k context than from multiple requests with a lower one, then it’s cheaper (and faster).

Some problems also require a larger context to be able to solve.


The title should be changed to “GPT4 is up to 90x more expensive than GPT3.5”


No surprise here, this is Classic Microsoft & OpenAI.

If you know anything about business models this is by design.

The minute that ChatGPT's pricing came close to free, it was obvious they had a way better model.

It was never about OpenAI having found a more efficient way to reduce costs, it was to price out competitors like Google, Meta, Open Source, etc with a "good-enough" breakthrough model.

Then introduce an expensive superior corrected model.


I think you mean “up to 60” ($0.12 / 1K tokens as opposed to $0.002).


For clarification, $0.002 is the ChatGPT API price, not the text-davinci-003 price ($0.02).


Ah, OK. thanks for pointing that out.


The OpenAI pricing models seem to be decided by ChatGPT.


How so? They seem reasonable to me.

You have the models which are pretty good that are cheap, and the models which are far ahead of the competition which are expensive.



