
People overly impressed by LLMs haven't spent a lot of time trying to make them actually useful.

When you do, you learn that they're talented mimics but still quite limited.



I think the reason to be impressed is that they do things that were previously not possible. And they are absolutely directly useful! Just not for everything. But it seems like a very fruitful line of research, and it's easy to believe that future iterations will improve significantly and quickly. There's no sense worrying about whether GPT-4 is smarter than a human; the interesting part is that it demonstrates we have techniques that may eventually get you to a machine that is smarter than a human.


This. LLMs have a surface that suggests they're an incredibly useful UI. That usability is like the proverbial handful of water, though: when you start to really squeeze it, it just slips away.

I'm still not convinced that the problem isn't me though.


Part of me wonders, though: could we "just" connect up an inference engine and voila? We could really be on the cusp of general AI. (Or it could be a ways off.) That's a bit frightening in several ways.


I kind of expected AI to be AI - and not a mirror.


Meaning it wouldn't necessarily be human-like?


That'd be a start, I think. Or at the very least, if it was in a room alone there would be something there.


Yes, I actually think this is true. See my above comment, which supports your claim.


This also applies to humans.


Most humans can mimic, but they can also describe their first-hand experience of being scared or happy or heartbroken or feeling desire, etc.



