Hacker News | pmdr's comments

Every single word domain seems to have become some new AI company.

I still get briefly confused when I see a post on here about X, only to realize they're talking about Twitter, and not the display server.

There are 26 letters and millions of words; people should choose other ones.



So we're supposed to believe that removing humans from customer support will lead to better outcomes?

> Ensure you only pay for the value Sierra delivers with outcome-based pricing.

Yeah... that won't last.


With advances in AI you would've thought the priority would be automating as much as possible of the non-human-facing work and doubling down on meaningful customer relationships - but no.

The point is just to get hold of the process and make it impossible to move away from them. Then they'll jack up the prices like we've never seen. Then it will be "people are actually cheaper, why aren't we using them?" - but you can't move off the platform; they own your IP even though they said they wouldn't, because they updated their ToS without anyone noticing last month, and here we are.

Their secret is that they have hordes of fake AI customers who will call into their client's AI customer support and respond to surveys saying they were extremely happy with the support, so the client has to pay for perfect simulated outcomes.

ai skeptic fanfic evolves in fascinating ways every day

This isn't specific to AI; this is just the dark-arts startup valuation playbook. It's an AI extension of gaming the metric 'what is the ratio of "active" accounts to validated human DAUs?'

just wait until you read the ai "optimist" fanfic

true. we'll see how many ai cos become profit printers a few years from now

AI customer support is trash and everyone hates it, but it makes the Wall St numbers go up, so it's a good thing.

AI support generally sucks but I actually wouldn't mind if everyone used it for the initial call routing portion. Beats an IVR tree or waiting for someone to just redirect your call to the real queue.

I respectfully disagree with the initial routing point. I very strongly prefer a traditional tree to “I’m your voice assistant! In a few words, tell me how I can help!”.

The tree is structured and gives me an immediate sense of how to map my task to the support offering. If I’m calling, I probably have an issue that I can’t self-serve resolve via the customer portal or whatever, so walking the tree lets me get an idea of who can help.

The “voice assistant” gives me no sense of what the system is capable of or how to take advantage of those capabilities. So I’m left guessing at phrases or functions based off of the assumption that there’s still some kind of tree-like structure that’s been abstracted away. Same outcome, more cognitive overhead, plus I usually have to shout in my best William … Shatner … impression to get it to understand me.


The other side is that if you already know the tree, you can automate dialing the right tones to get where you need to go, if you call it often enough.

If you're calling it an "AI assistant" then it's probably not the type of system I was talking about, and I probably don't like it either. AI call routing is keeping an IVR tree's functionality but having the call system do the work of mapping your request to a number in the tree. Anything more than that is getting into a different kind of AI.

E.g. instead of waiting for the IVR tree to be read out to find out you needed to press 4 for the shipping department, the AI asks "Please state the department you wish to connect to or reason for calling" and you just say "shipping" (or however much of a life story you want to give it), and it's the call system's job to figure out where in the menu that is. For repeat calls, once you know it's AI call routing, you can just say "shipping" right as the call starts, the same as you'd have known to press "4" the second time around an IVR tree, except you don't have to remember the random digits.


ime it's very implementation-dependent

but even a simple impl that answers questions can knock out something like 50% of callers who are tech-illiterate, at a hundredth of the cost. It's just strictly better economics, and better for those customers.


I broadly agree though I have noticed that it seems to be getting a bit better. I hate how patronizing pretty much every LLM tends to be, but at least I've noticed now that the AI support is better at figuring out what it is I actually want.

That said, my life hack for these things to get escalated to a human is to just keep saying or typing curse words. Usually that triggers a "connect to human" flow. I can't promise it will always work, but I can say it has worked every time I have tried it.


I hate waiting on hold for 30 minutes even more.

The domain name is ripe to respawn as the name of some new AI company that no one really knows what it does and that has nothing to do with search. Agentic something something.

Because the same rocket man this crowd was worshipping a decade ago is bad now. And by extension, everything anyone who works for him does must also be bad and evil.

ChatGPT would conveniently throw an error when asked about allegations against Sam. Claude doesn't like openclaw, refusing requests or charging extra if it sees the word.

IMO Elon's manipulation is nothing compared to that.


Forcing an LLM to have extreme right-wing behavior has a much bigger negative impact on society than not liking openclaw or not answering questions about Sam.

Also Grok saying it's Mecha Hitler is somehow worse than OpenAI/Anthropic's use by the DoD.

I wonder how long until this post is flagged/"shadowbanned". Such was the fate of almost all of Ed's posts on HN, with little surprise as to why.

People who don't adjust their prior outlook in light of newer data may not be the best fit around here. I'm OK with that.

What is the newer data?

Extensively discussed elsewhere in this thread. Just start at the top and start reading comments.

Can you summarise? I only reached your comment after scrolling past all the others and I still don't have the answer.

Is the new data that models are more useful for coding than they once were?


Cost of tokens goes down over time. Like by a lot. And it will continue to do so.

Imagine being in 2003 and saying compute costs won’t go down. That’s Ed lol.

EDIT: Some quick research on this so you guys have actual numbers: https://gist.github.com/dwaltrip/a037be938d2b5ecc8b8b238736e....

There are multiple separate angles that all contribute to token costs going down: chip improvements, engineering improvements for running inference in general, AI architecture and training advances that give similar intelligence in a smaller model, improvements in the quality of the training data, data center design / economies of scale, networking and rack-level improvements that are multiplicative with chip advancements, and so on...

If you analyze the situation for 5 minutes, it's blindingly obvious that price-per-token will continue to improve. And there's a very similar case for intelligence-per-token as well.

And don't get me wrong -- I have many concerns about how this is all unfolding and how it will impact society. But let's get our basic facts straight.


I read through some of the sources in your link, but they don't paint as clear a picture as you claim. Yes, the cost of inference appears to be coming down, but we so far don't really know why that is and what the largest contributing factors are. With other costs rising (e.g. the rising cost of training, the cost of inference scaling with number of parameters, and reasoning models using more and more tokens), it means we can't yet make any certain claims about long-term economic viability. There just isn't enough data yet.

Taking a look at the sources in your link, MIRI's "Observations About LLM Inference Pricing" report [0] seems like one of the least biased (forgive me if I don't believe everything a16z has to say about the economics of AI).

Some choice quotes from the report:

"Imagine you went to the gas station and the price was $4.00, and you look across the street at another gas station and the price is $40.00 — that’s basically the situation we currently see with LLM inference."

"Overall, LLMs do not appear to be priced like other commodities."

"It's possible that some providers are slightly modifying a model that they are serving for inference, for example by quantizing some of the computation"

"Unfortunately, it is difficult to make strong conclusions about the underlying costs of LLM inference because prices range substantially across providers. The data used in this analysis is narrow, so I recommend against coming to strong conclusions solely on its basis."

Another source you linked, Don't Panic Labs[1], seems to agree with Zitron:

"It is a little unclear as to why the price per token is dropping, and I am still a little worried that the price per token will, at some point, go up."

"According to another graph at Epoch AI, the cost to train a model doubles every eight months. This tends to align with the common wisdom that we are getting a really good deal right now, while everyone is fighting to build market share."

If your inference costs come down due to quantization, that doesn't count, since you're cutting costs by offering a worse service, and there's only so much of that you can do before your customers walk away. If your inference costs come down due to subsidization, that doesn't count either, since that obviously won't last forever. If your inference costs come down but your training costs double every eight months, that poses a significant problem for your business. And if your argument to that is "training costs won't continue to increase at this rate forever" - well, inference costs won't continue to come down at this rate forever, either.

From what I can tell, there still isn't enough data to draw a strong conclusion either way.

[0]: https://techgov.intelligence.org/blog/observations-about-llm...

[1]: https://dontpaniclabs.com/blog/post/2025/12/02/the-price-per...


That sounds like a reading comprehension skill issue? In which case I don't see why me summarizing would move the needle.

But if it helps, no, the data being discussed is surrounding the economics of running inference and R&D, nothing to do with the utility of models for coding.


Yours is the first from the top to mention this. You might want to consider the physical location of your comment before telling people to read the thread. We could do without the rudeness, too.

> A $20 subscription 2 years ago is not providing the same level of intelligence you're getting today.

That subscription was then and is now likely still subsidized.


For all we know, there could be 10 people paying for a ChatGPT subscription and barely using it for every power user, enough to subsidize that power user _and_ still have money left over for profit.

Oh they'd be sure to let us know if that were the case.

Why would the AI companies advertise that most of their users do not use their subscription in full??

People tend to believe OpenAI and Anthropic can become profitable any time they like; the only thing they'd need to do is stop training newer/better models. Source? Sam & Dario, of course (trust us, bro). It may be true (if they sell access at API prices) or it may not, but the scenario where training stops is simply unrealistic at this point.

> I assume targeting big centralized networks such as X and Facebook is good enough.

Exactly, and make non-anon networks the norm enough so that most people will never trust anything said on fringe social networks.

