I think the reason to be impressed is that they do things that were previously not possible. And they are absolutely directly useful! Just not for everything. But it seems like a very fruitful line of research, and it's easy to believe that future iterations will improve significantly, and quickly. There's no sense worrying about whether GPT4 is smarter than a human; the interesting part is that it demonstrates we have techniques that may be able to get us to a machine that is smarter than a human.
This. LLMs have a surface that suggests they're an incredibly useful UI. That usability is like the proverbial handful of water though - when you start to really squeeze it, it just slips away.
I'm still not convinced that the problem isn't me, though.
Part of me wonders, though: could we "just" connect up an inference engine and voila? We could really be on the cusp of general AI. (Or it could be a ways off.) That's a bit frightening in several ways.
When you do, you learn that they're talented mimics but still quite limited.