Hacker News

This LLM hype is out of control: they literally bullshit on any novel input (admittedly there isn't much of that), and they're wrong 15-20% of the time across all inputs.

This isn't a "we'll just add more context" problem, nor "we'll just add more instructions and params", nor "our multi-head transformers will auto-attention the tits off your query". This is search over probability maps. It is structurally unable to be intelligent, no matter how many models you chain together or how much compute you throw at it. But it can sure mimic that shit, which is why LPs are going to lose their shirts and GPs are going to struggle to raise follow-on funds.
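To make the "search over probability maps" framing concrete, here's a toy sketch. Everything in it is hypothetical and hand-built for illustration (`PROB_MAP`, `generate`, the vocabulary); a real LLM computes these conditional distributions with a neural network over the whole context, not a one-token lookup table, but generation is still repeated sampling from a next-token distribution:

```python
import random

# Hypothetical "probability map": for each current token, a distribution
# over possible next tokens. Hand-built for illustration only.
PROB_MAP = {
    "the": {"cat": 0.6, "dog": 0.3, "llm": 0.1},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "llm": {"hallucinated": 0.9, "sat": 0.1},
    "sat": {"<eos>": 1.0},
    "ran": {"<eos>": 1.0},
    "hallucinated": {"<eos>": 1.0},
}

def generate(start: str, rng: random.Random) -> list[str]:
    """Autoregressive sampling: at each step, draw the next token from
    the distribution keyed by the last token, until <eos>."""
    tokens = [start]
    while tokens[-1] != "<eos>":
        dist = PROB_MAP[tokens[-1]]
        choices, weights = zip(*dist.items())
        tokens.append(rng.choices(choices, weights=weights, k=1)[0])
    return tokens

print(generate("the", random.Random(0)))
```

The point of the sketch: nothing in the loop checks whether the output is true. The sampler happily emits whatever sequence the probabilities favor, which is the mechanism behind confident-sounding bullshit on novel inputs.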

Let's move on.
