Hacker News
Ask HN: Is paid ChatGPT Plus worth it?
27 points by JaDogg on Nov 18, 2023 | 83 comments
Last time I checked there was a limit to access GPT-4 even after paying.

GPT-3.5, Bard, and Bing are free -- is there any real advantage, especially considering the limited access for individual users (even after paying)?

Now there is even a waitlist to pay for it. I am hitting the capability wall of the free version regularly.

If you are using it, are you not running into the 50 per month limit? Is that enough for you? << seems like 50 per month is wrong; it is 50 per 3-4 hours, but it seems to change sometimes. I could probably hit a 50 per 3-4 hours limit trivially when I'm focused. I have been throttled off Hacker News/Reddit/etc. for posting too fast, for example.

(UK based if that matters)

PS. I'm using Copilot. I see that it works fine as an autocomplete for Python. I tried using it for C++, but I had to delete chunks of it as I ran into segmentation faults (very easily). It is definitely good at generating tests, though.




As a counter-point, I paid for it for three months immediately after its launch, had a lot of fun, and then cancelled my subscription.

Yes, GPT-4 is very noticeably smarter than GPT-3.5 or any of the competitors like Llama 2. I rank almost all LLMs at the level of a high school student, and GPT-4 at the level of a graduate student just starting out as a junior employee.

However, I don't need a "junior" assistant; I need more of "me", which GPT-4 can't currently emulate.

If I were to hire a flesh & blood human junior tech, that would be with the expectation that the junior would learn with time and pick up the specific techniques and approaches I prefer to use. Currently, GPT models cost an absurd amount of time, effort, and money to customise, and there's little chance that this would result in the equivalent of a trained employee.

These days I don't write much code, and the code that I write I can bang out faster than I can explain what I want to ChatGPT.

I've also found that the act of coding is a part of the learning process. I can't really understand something until I've taken it apart myself and put it back together again.

I keep telling my customers: I can teach it to you, but if I learn it on your behalf, you'll end up knowing nothing yourself.


Personally I'm on the wait list, mainly because it cracked a problem in seconds that one hour plus on stackoverflow hadn't solved. I didn't go to ChatGPT first because (erroneously) I figured the issue was too esoteric for it to crack.

Specifically, following an update to Samba 4.13 our diskless DOS systems failed to map their drives on the server. ChatGPT came up with the answer first time, with a bunch of specific settings in smb.conf, unclear which did the trick but it worked at first attempt (after numerous failed SO-inspired guesses).

The day before, a co-worker sent a draft of a Web UI layout that was clearly doable but well beyond my CSS abilities. Four hours later, with Copilot assistance, the web page (a complex form) got a big thumbs up.

Today, I asked a question about how to solve a problem I'd struggled with for some months. No useful answers on SO; lots of net-poking and thinking got me a working solution that seemed pretty cool. And... I asked ChatGPT; on the first try it gave me CSS that was definitely better than my efforts.

On the other hand, in my specific domain of expertise, the answers are either egg-sucking or trivial.

So (for me, personally) the real boost is in areas I have a little knowledge and can figure out the right question to ask an AI.


Yes, I have this experience with GPT-3.5, where my code is superior when it actually matters. But I'm not sure if GPT-4 will help me with that. There is a waitlist to pay now.

I recently tried to use it to write a .scad parser. I ended up writing it myself, as the code it generated was useless. (This was even in Python, and I tried rephrasing and giving it multiple examples.)

It also loves to import libraries, which I do not really like; I prefer to avoid dependencies as much as I can.


It seems to me people’s expectations vary a lot.

For me ChatGPT has replaced a lot of Google searches and skimming stackoverflow just to get some details for a small obstacle in my workflow.

Even if answers are not a hundred percent correct all of the time, it saves so much time to get to “good enough”.

I use it to generate shell scripts, SQL queries and general boilerplate like configuration files.

A couple of days ago I went from “I’d like to have a local DNS server running to test one aspect of my code” to a running docker container of CoreDNS with the configuration I needed for my tests in a matter of minutes. The same task would have taken forever had I googled for the information myself and stitched together snippets from the docs until it worked the way I wanted.

I run into the message limit occasionally, but 50 messages per 3 hours is more than enough most of the time.
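For intuition, a cap like "50 messages per 3 hours" typically behaves as a sliding window: each message only counts against you until it is 3 hours old. A hypothetical Python sketch (not OpenAI's actual implementation):

```python
from collections import deque

class SlidingWindowCap:
    """Hypothetical sketch of a "50 messages per 3 hours" cap.

    Not OpenAI's actual implementation -- just the usual way such
    limits behave: each send is timestamped, and a new message is
    allowed only if fewer than `limit` sends happened in the last
    `window` seconds.
    """

    def __init__(self, limit=50, window=3 * 60 * 60):
        self.limit = limit
        self.window = window
        self.sent = deque()  # timestamps of recent sends

    def try_send(self, now):
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) < self.limit:
            self.sent.append(now)
            return True
        return False
```

Under this model, hitting the cap just means waiting until your oldest message ages out of the window, which matches the short waits people describe.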


I see, this makes sense to me. But I wanted to try it to see if it is sufficient... and there is a waitlist for paid now. (Perhaps the recent drama has something to do with that.)


Bing Chat (Choose "Creative Mode" or "Use GPT-4" depending on UI) is GPT-4 with retrieval augmented generation using the Bing Search engine. It seems to be tuned a little differently, but I haven't found it any better or worse than GPT-4 with browsing enabled.

Note that if you don't pick "Creative Mode" or "Use GPT-4", you get Microsoft's own LLM.

There is a new GPT-4 Turbo available from OpenAI that is ahead of GPT-4 on many benchmarks:

https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar...

My guess is that it will soon be incorporated into Bing Chat. So if money is tight, I would stick with Bing Chat (with the appropriate mode turned on). Note that Bing Chat is very different from vanilla Bing Search.

Bard and GPT 3.5 are decidedly inferior to GPT-4. I wouldn't waste my time using them.

Github Copilot is also markedly inferior to GPT-4 in generating code based on instructions.

Sometimes I get a little impatient waiting for Bing Chat to generate an answer. If the question is not too complicated, I have found perplexity.ai (w/o GPT-4) to be low latency and high factuality. In fact when searching for research papers on a topic, I have often found it superior to the alternatives.


Not sure if you've used Bing Chat recently. It's performing worse and worse over time. One pattern that I keep running into in almost every conversation is that it doesn't acknowledge the question and just answers something else. When I clarify myself and ask for a better answer it starts repeating itself verbatim, again and again, and there's no way to get it out of this.

Additionally Bing Chat has a limit of messages per chat, which means you lose your context over and over again, which is a significant impediment when context is important to the problem at hand.

Another problem is that Bing Chat is much more likely to search online than ChatGPT-4. ChatGPT searches when it is logically necessary, like when asked for the latest news. Bing almost always searches, and then IGNORES its own knowledge in order to give out a poor digest of what it read online. The problem is that what it read online is often not relevant to the question, because it depends on how accurate the search query was. And if I just wanted to read search results, I'd go search myself.


Yes, I have been using it the last few days. I haven't noticed the failure case of Bing Chat not answering the question at all. Are you sure you had the modes I recommended turned on? Try reporting the problems to Mikhail Parakhin, the Microsoft CEO of Advertising & Web Services, on Twitter. Most of his messages are about bugs and features in Bing Chat and generative AI products.

https://twitter.com/mparakhin?lang=en

Definitely agree that Bing Chat's message limit per conversation needs to be kept in mind. I haven't found that to be a problem in practice, since I start a new topic with the most relevant recent context. OpenAI's GPT-4 seems to be doing some smart summarization behind the scenes since you can keep going.

I agree that there are occasions when the search results confuse Bing Chat, and GPT-4 (w/o browsing) performs better.

Your thoughts about where GPT-4 excels over Bing Chat are quite insightful! It would be great to see some example prompts/questions where you feel that the quality has degraded over time. I am sure Parakhin would be interested as well.


Thank you for the pointers. Frankly, I mix up the modes a lot, usually trying to get a good answer, and since it's been a few weeks (I kind of got annoyed and stuck to ChatGPT-4 from then on) I can't immediately recall if Creative Mode does that specifically. My memory is that all modes do, but that may be wrong. So I'll follow your advice, and I've noted who to ping on Twitter, cheers :)


If you were logged into Bing, those prompts may be in your history. They can be viewed using the Edge browser.

A few weeks ago, I had spotty service with Bing Chat where it would keep resetting the conversation which I assumed was due to load. In general all these LLM services are in constant flux because they are tuning both the models and UI. They feel like alpha quality products in terms of stability.


>Github Copilot is also markedly inferior to GPT-4 in generating code based on instructions.

Is that still true? I noted a significant improvement with the chat function. Especially the ability to mark sections of code for review/discussion is something that you cannot easily do with ChatGPT4.


Github Copilot Chat is in beta and is marketed as a separate product. I suspect it is using a later generation (and better) underlying model. I used it many months ago, and I agree it is much better than vanilla Github Copilot. I haven't done a careful comparison to GPT-4, however. As you point out, the integration with the IDE does save a lot of time. Developers should definitely try it out.

https://docs.github.com/en/copilot/github-copilot-chat/about...


I cannot pay even if I want to. :( There is a waitlist. Are people just using the OpenAI APIs, or ChatGPT Plus?


The differences between 3.5 and 4 are substantial, at least for development (you didn't provide your use case). I'm using it every day, all day. My IDE (cursor.sh) has a wrapper around the API. It's a game changer for me. I was able to launch my SaaS twice as fast, and the code is more reliable. (I'm not a senior developer, while GPT-4 gives senior-level solutions most of the time.)


Do you run into the limit of 50 per month? Or is there a way to pay more to get more GPT-4? How exactly are you using it? (50 per 3-4 hours is the correct limit; it still seems small.)


50 per 3 hours isn't so bad when I am using it. Writing a prompt + the response takes a minute, then it takes a couple minutes on average to go over the response and only then do I end up asking again. In using it I've only ended up hitting the limit a few times, and most of those times it's been a few minute wait.

API access can get you more if you would like, you would just have to use a different UI or make your own.


50 per month? You mean every 3 hours?


yes you are correct. I seem to have imagined that.


"Hallucinated" it, perhaps?


haha yeah


After the recent updates to the UI, the old 50 messages every 3 hours warning went away. Though I haven't really tried pushing it past that limit to tell if that restriction is still there.


It still is, and I run into it sometimes but only when using it repeatedly


Paying for GPT-4 is worth it.

ChatGPT Plus is one way of doing so and works fine. There is no 50 per month limit; no idea where you got that from. The limit is 50 per 3 hours, but I've honestly never hit it.

You can also just get an API key and use it that way via a number of different tools, wrappers, and frontends. Depending on your use case, that may be better or worse.


"No 50 per month limit; no idea where you got that from. The limit is 50 per 3 hours." --- ah, I seem to have hallucinated that :(.

So this might actually be fine. OK


Hell yes!

I currently do Java, Spring Boot and Rust development.

ChatGPT is at least 100 times better than Google Search for helping you with Hibernate/JPA stuff, especially related to eager fetching and joins of child entities with child entities with child entities. I recently struggled with an issue where ChatGPT helped me find a solution for using the ILIKE operator in PostgreSQL with Hibernate, which doesn't support that operator. I implemented a workaround (https://olavgg.com/show/how-to-use-spring-boot-jpa-criteriab...)

It is crazy how fast I learn about new things that I don't know how to search for and most of the examples ChatGPT spits out do work 100%. If not, you're already 95% there and only need to apply a minor change! The power of ChatGPT makes me at least 100% more productive. I can do twice as much every day as I now have a lot less "blockers" wasting my days. This also makes programming a lot more fun :D


Are you not running into the limit for the paid version? How are you working within the 50 per month limit? (Sorry, 50 per 3-4 hours, human error)


per 3 hours and usually not

but I also have a second subscription if I do


Indeed I have made an error. Updated.


I use Copilot for Python, and I wouldn't use ChatGPT for code completion or anything but writing large portions of code or functions. I generally use Copilot for code.

I use ChatGPT for writing letters and emails. It saves me tons of time. I also use it to generate quick emails and other things. I really like it, and it's quick and convenient. I'm sure there are alternatives that I could run locally, and I would like to avoid the restrictions that I run into every once in a while.

However, I don't think $20 a month is bad for writing letters and I prefer not to use a search engine that's trying to generate links when I'm just trying to write emails.

I generally don't use it for facts, I just tell it the facts in a few sentences and it cleans it up and generates the whole email and it's great.

I would say the nice thing about paying $20 a month is that you get a reliably good service, you aren't continually switching and second-guessing it, and they continually upgrade it and add new features.


Thanks. Here is my conclusion:

- The majority of people find it useful at the price point. (Only 2 seem to disagree out of all the comments here.)

- Instead of using ChatGPT, directly using the OpenAI API might make more financial sense, and I might need to write a wrapper on top of their APIs (or use one that is already there).

- Limits are 50 or 40 per 3 hours. API to test that: https://chat.openai.com/public-api/conversation_limit (1 person says they have not hit the limit and there seems to be no limit for them; I cannot test that, unfortunately.)

- There is a waitlist for ChatGPT Plus, so I cannot pay even if I want to. :(

- Almost everyone agrees GPT-4 is a lot better. (High school vs. graduate level, as someone pointed out.)

- There seems to be a cost dashboard with the ability to set limits for the API.


As a developer, I'd pay $100 a month for it if they allow 32k+ contexts. At least half my code is now generated from prompts.


Use the API and have unlimited use that you pay for.


I see... so that might be better for my use case.


They have 128k contexts right now, and you do not even have to pay $100 for that.


It has 128k context, but in my experience it's very poor at utilizing it. As in it stops paying attention to large chunks of the context (i.e. doesn't factor them into its answers) unless you specifically point it out.

This also affects GPTs (personas). It starts out following the instructions, then gradually reverts to default GPT over the course of the conversation. Then when you ask why that's happening, it's like "oops, sorry, I'm totally <persona> now, see?" And then 5-6 messages later it forgets again.


The limit is not per month but per 3 hours.

Though they change the limit based on the current load. It has been very low in the past and I hit the limit once or twice since the availability of ChatGPT plus.

You can check the current limit in this JSON document: https://chat.openai.com/public-api/conversation_limit

You don't need to be logged in.


This is not accurate. The JSON shows 50 per 180 minutes. I and many others are getting 40 per 180 minutes right now.


Same for me. It may be a recent UI bug with the limit message being hardcoded in the internationalisation.


Yes you are correct. I have updated original question thank you.


Well, I signed up for Copilot initially but held out on GPT Plus. I cancelled Copilot pretty quickly as I wasn't really using it much, but then I signed up for ChatGPT Plus.

It is worth it because of better access (no more "we're busy now") and GPT-4.

Also, in my take, yes, ChatGPT is worth it if you're working as a full-stack tech person and you're often asked to accomplish tasks with software that is popular but unknown to you. Then, instead of searching through pages and pages of crappy docs (or when such docs do not exist), you can just ask ChatGPT and then verify (because it will be wrong a lot, but you can often get it to verify its own answers).

For example, let's say you're very familiar with Jenkins and GitHub, but your client asks you to port their CI/CD pipelines to on-site GitLab. Then, considering that a lot of GitLab Enterprise is barely documented, you'll spend lots of time experimenting. Having ChatGPT at hand to speed this up is very beneficial.


What you get from ChatGPT Plus depends on your needs and how much value you derive from it. In addition to increased usage limits and priority access, ChatGPT Plus offers advanced customization options. Plus might be worth your money if you frequently use ChatGPT for business, education, or personal projects that require more usage than the free version allows. Also, if you value faster response times and priority support, it may be a good option for you. Think about your usage patterns and whether the additional features are worth the cost.


Long story short, it depends on your needs. For me, after using tools like Advanced Data Analysis and photo recognition, it would be really, really hard to live without -- especially the photo recognition, since I live in a foreign country. It has become a part of my workflow in many different things, and although there are other models like you've mentioned, the overall experience both on desktop and mobile isn't nearly as good as what OpenAI provides.


My concern is that I will run into the 50 per month limit. Are you running into that?

"nearly as good as what openai provides" -- do you mean what they provide for corporate clients?

(50 per 3-4 hours, not 50 per month; sorry about that, human error)


What is this 50/month limit you keep talking about? You need to get your facts right. That's quite the weird behavior: to come to a technical forum and ask to be convinced to use the most groundbreaking tech of at least 10 years based on wrong assumptions.


I thought I updated the post. Yes I'm aware it was 50 per 3-4 hours. I have made a human error.


It's not 3-4h; it was 3h, but it's been gone for months -- at least there are months when I don't get a message about reaching it, and I use it nonstop.


perhaps that is why we have a waitlist to purchase it now.

what does this say to you? https://chat.openai.com/public-api/conversation_limit


For the people using VS Code, how do you "integrate" it in your workflow?

Are you using an extension (with an API key) or copy/pasting in the web chat?


I have seen some third-party plugins, but I'm reluctant to try them.


Usually no - it's often better to get API access and install a custom frontend (lots of examples on GitHub - hosted or self hosted web apps, Signal/Whatsapp/Discord integrations, command line tools, ...). You'll pay for what you use - I typically pay $5-$10 per month, and that's with some code that uses GPT from Python scripts and generates most of the cost.
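For anyone curious what those wrappers boil down to: below is a minimal sketch of a pay-per-use call to OpenAI's chat completions endpoint, using only the Python standard library. The `ask` helper is hypothetical; it assumes your key is in the `OPENAI_API_KEY` environment variable and that the documented `https://api.openai.com/v1/chat/completions` endpoint and response shape are unchanged.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt, model="gpt-4"):
    """Build the HTTP request for a single chat completion."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + os.environ.get("OPENAI_API_KEY", ""),
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(payload).encode(), headers=headers
    )

def ask(prompt):
    """Send the prompt and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the API you're billed per token rather than per message, so there's no 40/50-message cap -- only whatever spending limits you configure.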


We share the account and split the cost. I have shared the account with like 10 people. Yes, you can see what other people write, but that is fine for us.


Is there some near-real-time cost viewer there, if you have used this already? I'm pretty sure I can hit the 50 messages per 3-4 hours limit.


Yes, there's a cost dashboard, and as far as I can tell it is near real time. There are also cost limits, both configurable by you and enforced by the platform (since they sell this service on monthly credit - you pay at the end of the month). BTW, you also get access to other OpenAI API endpoints, like Whisper transcription or DALL-E.
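Since API billing is per token, a back-of-the-envelope cost check is easy to script. The rates below are placeholders, not current prices -- plug in the per-1K-token rates from OpenAI's pricing page:

```python
def estimate_cost(prompt_tokens, completion_tokens,
                  price_in_per_1k, price_out_per_1k):
    """Estimate the dollar cost of one API call.

    Input (prompt) and output (completion) tokens are billed at
    separate rates, quoted per 1,000 tokens.
    """
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k
```

For example, a 1,500-token prompt with a 500-token reply at hypothetical rates of $0.03 in / $0.06 out is `estimate_cost(1500, 500, 0.03, 0.06)`, about $0.075.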


That seems useful. Thank you.


I bought it for the first three months, then cancelled because I didn't use it frequently enough. Now I've resubscribed, mainly because I needed image assets, and DALL-E 3 + GPT-4 is a really cool way to generate image ideas.

I do believe that it might be cheaper for many developers to pay per request using the API platform and use one of the many frontend clones with your API key.


How easy is it to cancel? Can you get it for one month, then cancel and get another month later?


It’s literally a single click under your account settings.


If you use enough to justify the cost, right?

I ran the numbers and it saves me a few hours a week on average, so it really pays for itself. If you don't use it then it doesn't make sense.


Do you run into the limit of 50 per 3-4 hours? Has that changed now? I see there is also a waitlist to join the paid version.


This is going to be a rather Scandinavian opinion, but it's worth it when your company pays for it -- with the exception that you need to actually be capable of using it for your work without sharing trade secrets or breaking the GDPR and so on.

I'm a fairly senior developer and I use it quite a lot. I think 3.5 can do most of the things you need most of the time, but 4 and the APIs help you save a lot of time. It's not really world-changing in that it's still not very good at actually writing code that couldn't be auto-generated before LLMs, but it's just sooooooo much more efficient at doing all that auto-generation than what you had available before, and with the APIs it integrates fairly seamlessly into many IDEs. Just remember to deny it access to any code that's actually secret. I work in the energy sector, so we actually have some of that, but 90% of what I've worked on over the decades could frankly have been open source without any great issues. But again, it'll depend on what you do. If you're in HR, contract management, or similar, you'll probably not be able to legally share much of what you do with it.

But like I started out saying, get your company to pay for your subscription if you use it for work. It's a tiny fee compared to a lot of the other money your company already spends on you.


I'm not using it for office work. I just want to use it for personal projects that I want to build.


I’m not sure I would pay for it for private projects over just using 3.5, but everything I said about its efficiency bonus is still true, so it’s probably largely down to you. I think it’s worth it, but not enough for me to buy it, at least not yet and at its price. I have friends who pay for it no questions asked for private use, so I’m probably in the minority.


What exactly is scandinavian about this? I didn't get the reference.


Not spending your own money to buy tools that help you at work. I have the impression, and that could be totally wrong, that it would be far more "normal" for Americans to do so to get ahead.


It's absolutely worth it. It's like the difference between using Notepad and a proper IDE.


Do you run into the limit of 50 per month? Has that changed now? I see there is also a waitlist to join the paid version. (Sorry, 50 per 3-4 hours; it is still small.)


It’s a limit per 3 hours; the limit used to be 25, then 50, then down to 40 this week. And yes, I do run into it, so I take a break or use the API.


Updated.


Yes, it's a generational step up from 3.5 and is actually useful in the day-to-day.


It's the cheapest sub by far of all the ones I'm subbed to, if I measure by value gained from it.


How are you working within the limitation of 50 per month? (Sorry, 50 per 3-4 hours)


I don't need more than that. Also, for some things, I use the API.


Unless they changed it recently, it's not 50 per month but per 3 or 4 hours?


It is 40 per 3 hours, it was just changed a day or two ago.


Yes you are correct. I have made an error. Original post is updated.


It’s cheaper to get an API key and use a client such as MacGPT.


Yes. It’s not too much for a useful tool.


Are you running into the 50 per month limit for Plus users to access GPT-4? (Sorry, 50 per 3-4 hours)


Yeah. GPT-3.5 is useless, 4 is just on the margin of being better than using my old workflow.

Definitely worth the money. (Never run into the limit)


GPT-3.5 and open-source models need more RAG and prompt engineering, but they can be as good as GPT-4.


Honestly, if ChatGPT-4 charged $200 per month, by now I would probably also pay it.

Even at $500 I would still think about it.


no


yes



