AGI is defined as human-level performance across a wide variety of tasks. ChatGPT achieves that. It's somewhat stupid and limited (no vision, hearing, manipulation), but still, in principle, it's AGI.
I don't want to argue about the semantics or definition of AGI, since that is a rabbit hole that I don't believe contributes anything to the subjective danger that people are feeling.
The only thing I want to say about ChatGPT and other LLMs is that they don't know what they are doing. Give me something that knows what it's doing, and perhaps more importantly, what it wants to do. Then I'll acknowledge my personal obsolescence, if not at that point then in quick succession. But until then, no.
Right now everything is in such a grey area that we could take this discussion in many directions. Writing it off because it doesn't mean what it says misses the point, in my opinion. The point is that anyone who knows how to write the right questions can work with ChatGPT to develop things they couldn't before, improve their own writing, and potentially replace their need to consult others, for a quick list.
You are the one saying that belief behind the writing matters. What if the audience decides to upvote ChatGPT instead of you? Who says conviction is essential?
I would also contest you by suggesting that the fearful ones are the ones who feel a need to downplay the threat, or, to rephrase it, the advanced usefulness of current versions, let alone near-future ones.
And I realize ChatGPT could have written this post quicker than I did, including a version that contains my typo patterns, as well as a more concise and grammatically improved version...
It is unsurprising to find wisdom in 570GB of human-generated text.
It is unsurprising to find useful information being returned when a statistical process is used to extract value from that text. It's a similar process to how search engines work, except more costly, and the results read more naturally.
However, if you choose to believe that the above process signifies the development of an artificial intelligence that will start to have its own consciousness, then good for you.
Again, I'm not saying ChatGPT is useless or won't replace many jobs; it is amazing in its own right. It's like a refined and more useful Google, which is huge. I'm only arguing that it is not AGI and will not develop into AGI; arguing for its usefulness in other areas is not arguing against me - no strawman please. As for myself, I'm not too worried for my job from this angle (there are plenty of other angles to worry about, that is), for as with all tooling that develops, it benefits the people who can best put said tool to work.
It is not in any sense whatsoever AGI. AGI is an unsolved problem. You are defining it to be something it’s not. I don’t know where this idea comes from.