
Yes and no.

If you had access to the training corpus text, you could be more methodical. Even then, it's still guesswork, because that's what LLMs are: inference models.

And that's why your criticism applies to LLMs on the whole. They are a personified black box, and the only way we can study them is by feeding them prompts and making inferences from the black box's output.

...or we could stop limiting ourselves and build a constructive understanding of the technology itself; but apparently no one is interested in doing that...

So let's keep checking its SAT score! Yeah, magic is real, and it can totally pass the bar exam or whatever!
