
LLMs are nothing more than statistical token prediction engines: feed them bullcrap and they will statistically produce bullcrap. There is no novel thought behind them, and they're only marginally more complicated than your phone keyboard's next-word predictor.
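To make the "next-word predictor" comparison concrete, here is a minimal toy sketch of autoregressive next-token prediction. It uses a hand-written bigram table (the kind of counting a phone keyboard might do) rather than a neural network, so the table, words, and probabilities are purely illustrative; a real LLM replaces the lookup with a transformer producing a distribution over tens of thousands of tokens conditioned on a long context.

    # Toy sketch of next-token prediction (illustrative, not an actual LLM).
    import random

    # Hypothetical bigram "model": P(next word | current word).
    bigram_probs = {
        "the":    {"cat": 0.5, "dog": 0.3, "market": 0.2},
        "cat":    {"sat": 0.7, "ran": 0.3},
        "dog":    {"ran": 0.6, "sat": 0.4},
        "market": {"crashed": 1.0},
    }

    def predict_next(word: str) -> str:
        """Sample the next token from the conditional distribution."""
        dist = bigram_probs.get(word, {"the": 1.0})
        tokens, probs = zip(*dist.items())
        return random.choices(tokens, weights=probs, k=1)[0]

    # Autoregressive generation: each prediction is fed back in as context.
    token = "the"
    output = [token]
    for _ in range(4):
        token = predict_next(token)
        output.append(token)
    print(" ".join(output))

The point of the sketch is the loop at the bottom: generation is just repeated sampling from a learned conditional distribution, with the output appended to the context and fed back in.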

So, yeah, they'd slot in seamlessly in place of a McKinsey alum sitting in a C-suite.





