
> 6. Human performance on a task isn’t an upper bound on LLM performance

Is that true? LLMs are trained on the work of humans: an LLM that learned all of the content of those articles would at best yield output equal to that of the humans who wrote them.

The reasons the author gives seem unfounded to me:

> First, they are trained on far more data than any human sees

If the human writer has access to Google (as everyone in modern society does), this point is moot.

> In addition, they are often given additional training using reinforcement learning before being deployed.

If this is human-in-the-loop RL, then the upper bound would still be the human training it. If it isn't, refer to #1.
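
For what it's worth, RL without a human in the loop concretely means the reward comes from a program rather than a person. A toy sketch of the idea (plain REINFORCE; the two-token vocabulary, reward function, and step size are all invented for this illustration, not how any real LLM is trained):

    # Toy REINFORCE loop with a programmatic reward: no human labels anywhere.
    # Vocabulary, reward, and step size are invented for this sketch.
    import math, random

    SEQ_LEN = 8
    theta = [0.0, 0.0]          # logits for tokens 'a' (0) and 'b' (1)

    def sample_token():
        m = max(theta)
        exps = [math.exp(t - m) for t in theta]
        z = sum(exps)
        probs = [e / z for e in exps]
        tok = 0 if random.random() < probs[0] else 1
        return tok, probs

    def reward(seq):
        # verifiable, machine-computed reward: fraction of 'a' tokens
        return seq.count(0) / len(seq)

    for step in range(2000):
        seq, grads = [], [0.0, 0.0]
        for _ in range(SEQ_LEN):
            tok, probs = sample_token()
            seq.append(tok)
            for k in range(2):
                # d log pi(tok) / d theta_k = one_hot(tok)_k - probs_k
                grads[k] += (1.0 if k == tok else 0.0) - probs[k]
        R = reward(seq)
        for k in range(2):
            theta[k] += 0.1 * R * grads[k]   # REINFORCE update

    print(theta)   # the logit for 'a' ends up far above the logit for 'b'

The only ceiling in a loop like this is whatever the reward function can measure, which is exactly the point under debate.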



I don't agree with you. Human knowledge itself is not bound by the knowledge that came before it, so why would AI be any different? This is more where AGI will shine, but I still think LLMs can produce novel innovation.

There's also a difference between having access to all previous works in a given genre and being able to 'comprehend' them when generating music. A human simply doesn't have the ability to process and recall all of that when creating their own music, and an AI could surpass them there. Merely having access to information is different from using it.


> If the human writer has access to google (everyone in modern society) this point is moot.

But a single human can only read so much information in a finite amount of time (their life). Sure, the same thing applies to LLMs, but the theoretical limits are orders of magnitude higher.

And while this probably isn't what you were talking about, it's also pretty clear that any time-limited task favours LLMs. There's no human on earth that could write a non-trivial short story in 10 seconds, regardless of quality level.


> If the human writer has access to google (everyone in modern society) this point is moot.

Does not follow at all. There's a vast difference between being able to search for and internalize a tiny, tiny sample of all the available information, and literally having sampled and synthesized all of it.

> If it isn't, refer to #1.

Meaning what? LLMs can be post-trained with reinforcement learning against automated reward signals, in the same spirit as the self-play that allowed AlphaGo to reach superhuman levels in Go.
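
To make the AlphaGo comparison concrete, here's what self-play looks like in miniature: tabular Q-learning teaching itself Nim (10 sticks, take 1-3, taking the last stick wins). Every number below is an arbitrary toy choice, not anything AlphaGo actually used:

    # Self-play in miniature: Q-learning on Nim, no human games in the data.
    import random

    N = 10
    Q = {(s, a): 0.0 for s in range(1, N + 1) for a in (1, 2, 3) if a <= s}
    alpha, eps = 0.5, 0.2

    def actions(s):
        return [a for a in (1, 2, 3) if a <= s]

    def pick(s):
        if random.random() < eps:                        # explore
            return random.choice(actions(s))
        return max(actions(s), key=lambda a: Q[(s, a)])  # exploit

    for episode in range(20000):
        s = N
        while s > 0:
            a = pick(s)
            s2 = s - a
            if s2 == 0:
                target = 1.0   # we took the last stick: win
            else:
                # negamax: the opponent moves next with the same Q-table,
                # so our value is the negative of their best value
                target = -max(Q[(s2, b)] for b in actions(s2))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2             # hand the position to the opponent

    for s in (5, 6, 7, 9, 10):  # optimal play leaves a multiple of 4
        print(s, "->", max(actions(s), key=lambda a: Q[(s, a)]))

After enough episodes the greedy policy always leaves the opponent a multiple of 4 sticks, which is perfect play, despite no human game ever appearing in the training signal.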



