Well, I found the exact reverse: people saying LLMs are useful actually need them to be useful, so they can boast about their "prompt engineering" "skill" (i.e. typing a sentence) and "AI knowledge". I saw a caricature of this a few hours ago on LinkedIn, from a "data guy" claiming devs who don't use AI are going to be replaced by those who do. Yet it was very clear from his replies to comments that he had never written code and wasn't in a position to give an opinion on the matter, especially one as extreme and rude as the one he wrote.
Both your and the GP's observations (and many more) can be true simultaneously.
Some people are quick to dismiss any new technology as useless; others are quick to hail it as the thing that will take everyone's jobs within months (and may consider that a good or bad thing) or solve any number of humanity's hard problems.
Usually one or the other will be seen as slightly more accurate in retrospect, but since both ultimately come from a knee-jerk reaction to something new, with rationalizations bolted on to support their respective case, most of these can be safely ignored.
Remove all the parroting of technical jargon, the wishful thinking, the appeals to morality, etc., and you essentially have two crowds arguing over why this time the roulette ball will surely land on red/black (with everybody forgetting about the zero/green).