
> LLMs have no real sense of truth or hard evidence of logical thinking.

Most humans don't have that either, most of the time.




Then we already have access to a cheaper, scalable, abundant, and (in most cases) renewable resource, at least compared to what a few H100s cost. Take good care of them, and they'll probably outlast a GPU's average lifespan (~10 years).

We're also biodegradable.


Humans are a lot more expensive to run than inference on LLMs.

No human, especially no human whose time you can afford, comes close to the breadth of ChatGPT's book knowledge, or the number of languages it speaks reasonably well.


I can't hold an LLM accountable for bad answers, nor can I (truly) correct it (with current models).

Don't forget to take into account how damn expensive a single GPU/TPU actually is to purchase, install, and run for inference. And that's to say nothing of how expensive it is to train a model (estimated to be in the billions for the latest models in the cited article, which likely doesn't include the people involved and their salaries). And I haven't even mentioned the environmental impact of all that power consumption; there's a reason nuclear plants are becoming popular again (which may actually be one of the few good things to come out of this).


Training amortises over countless inferences.

And inference isn't all that expensive, because the cost of the graphics card also amortises over countless inferences.
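
To make that concrete, here's a back-of-envelope sketch in Python. Every figure in it (card price, service life, throughput, utilisation) is an assumption I picked for illustration, not a measured number:

    # Sketch of how hardware cost amortises over inferences.
    # All figures are illustrative assumptions, not measurements.

    GPU_PRICE_USD = 30_000     # assumed price of one H100-class card
    LIFETIME_YEARS = 5         # assumed useful service life
    TOKENS_PER_SECOND = 1_000  # assumed throughput across batched requests
    UTILISATION = 0.5          # assumed fraction of time serving traffic

    seconds = LIFETIME_YEARS * 365 * 24 * 3600 * UTILISATION
    total_tokens = seconds * TOKENS_PER_SECOND
    print(f"hardware cost: ${GPU_PRICE_USD / total_tokens * 1e6:.2f} per million tokens")
    # ~ $0.38 per million tokens under these assumptions: the card's
    # purchase price becomes a rounding error at scale.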

Human labour is really expensive.

See https://help.openai.com/en/articles/7127956-how-much-does-gp... and compare with how much it would cost to pay a human. We can likely assume that the prices OpenAI gives will at least cover their marginal cost.
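
A rough per-word comparison, again with assumed figures (the per-token price is a placeholder in the spirit of that pricing page, not a quoted rate; the wage and drafting speed are guesses):

    # Assumed inputs; swap in real figures to redo the comparison.
    PRICE_PER_1M_TOKENS_USD = 10.0  # assumed API output price
    HUMAN_WAGE_USD_PER_HOUR = 20.0  # assumed hourly wage
    HUMAN_WORDS_PER_HOUR = 500      # assumed drafting speed
    TOKENS_PER_WORD = 1.3           # common rule-of-thumb conversion

    llm_cost_per_word = PRICE_PER_1M_TOKENS_USD / 1e6 * TOKENS_PER_WORD
    human_cost_per_word = HUMAN_WAGE_USD_PER_HOUR / HUMAN_WORDS_PER_HOUR
    print(f"LLM:   ${llm_cost_per_word:.6f} per word")
    print(f"Human: ${human_cost_per_word:.6f} per word")
    print(f"ratio: {human_cost_per_word / llm_cost_per_word:,.0f}x")
    # ~3,000x cheaper per word under these assumptions; even if the
    # numbers are off by an order of magnitude, the gap remains large.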



