
The argument that this plan includes more compute power may be true, but this is also a pricing tactic known as the decoy effect, or anchoring. Here's how it works:

1. A company introduces a high-priced option (the "decoy"), often not intended to be the best value for most customers.

2. This premium option makes the other plans seem like better deals in comparison, nudging customers toward the one the company actually wants to sell.

In ChatGPT's case, that looks like:

Option A: Basic Plan - Free

Option B: Plus Plan - $20/month

Option C: Pro Plan - $200/month

Even if the company has no intention of selling the Pro Plan, its presence makes the Plus Plan seem more reasonably priced and valuable.

While not inherently unethical, the decoy effect can be seen as manipulative if it exploits customers’ biases or lacks transparency about the true value of each plan.




Of course this breaks down once you have a competitor like Anthropic serving similarly priced Plans A and B for their equivalently powerful models; adding a more expensive decoy Plan C doesn't help OpenAI when their Plan B pricing is primarily compared against Anthropic's Plan B.


Leadership at this crop of tech companies is more like followership. Whether it's 'no politics', or sudden layoffs, or 'founder mode', or 'work from home'... one CEO has an idea and three dozen other CEOs unthinkingly adopt it.

Several comments in this thread have used Anthropic's lower pricing as a criticism, but it's probably moot: a month from now Anthropic will release its own $200 model.


Except Anthropic actually has the ability to deliver $200/month in value, whereas OpenAI lost the plot a long time ago.

Not a single one of OpenAI’s models can compete with the Claude series; it’s embarrassing.


> Not a single one of OpenAI’s models can compete with the Claude series; it’s embarrassing.

Do you happen to have comparisons available for o1-pro, or even o1 (non-preview), that you could share, since you seem to have tried them all?


Even o1?


As Nvidia's CEO likes to say, the price is set by the second best.

From an API standpoint, it seems like enterprises are currently split between Anthropic and OpenAI, and most are willing to use substitutes. For the consumer, ChatGPT is the clear favorite (better branding, better iPhone app).


It might not affect whether people decide to use ChatGPT over Claude, but it could get more people to upgrade from their free plan.


An example of this is something I learned from a former employee who went to work for Encyclopedia Britannica 'back in the day'. I actually invited him back to our office so I could understand and learn exactly what he had been taught (noting, of course, that this was before the internet, when info like that was not as readily available...).

So they charged (as I recall from what he told me; I could be off) something like $450 for shipping the books (I don't recall the actual amount, but it seemed high at the time).

So the salesman is taught to open the sales pitch with a set of encyclopedias costing, at the time, let's say $40,000, some 'gold-plated version'.

The potential buyer laughs, and the salesman then says 'plus $450 for shipping!!!'.

They then move on to the more reasonable versions costing let's say $1000 or whatever.

As a result of that first high-priced example (in addition to the positioning you are talking about), the customer is set up to accept the shipping charge (which was relatively high).


This is called price anchoring.


This is also known as the Door-in-the-face technique[1] in social psychology.

[1]: https://en.m.wikipedia.org/wiki/Door-in-the-face_technique


That’s a really basic sales technique, much older than the 1975 study. I wonder if it went under a different name, or if this was a case of studying and then publishing something that was already well known outside of academia.


Wouldn’t this be an example of anchoring?

https://en.wikipedia.org/wiki/Anchoring_effect


Believe it or not, it can be multiple things at once.


I use GPT-4 because 4o is inferior. I keep trying 4o but it consistently underperforms. GPT-4 is not working as hard anymore compared to a few months ago. If this release said it allows GPT-4 more processing time to find more answers and filter them, I’d then see transparency of service and happily pay the money. As it is I’ll still give it a try and figure it out, but I’d like to live in a world where companies can be honest about their missteps. As it is I have to live in this constructed reality that makes sense to me given the evidence despite what people claim. Am I fooling/gaslighting myself?? Who knows?


Glad I'm not the only one. I see 4o as a lot more of a sidegrade. At this point I mix them up and legitimately can't tell: sometimes I get bad responses from 4, sometimes from 4o.

Responses from gpt-4 sound more like AI, but I don't seem to have had as many issues as with 4o.

Also, 4o's habit of just spitting out a ton of information, or rewriting the entire code, is frustrating.


GPT-4o just fails to follow instructions and starts looping for me. Sonnet 3.5 never does.


Yes the looping. They should make and sell a squishy mascot you could order, something in the style of Clippy, so that when it loops, I could pluck it off my monitor and punch it in the face.


But you are not getting nothing; there is actual value if you are able to use that much and are consistently hitting the limits of the $20 plan.



