I think this talk [0] by Jodie Burchell explains the problem pretty well. In short: you are right that for a given task, only the outcome matters. However, as Burchell shows, AI is sold as being able to generalize. I understand this as the ability to transfer concepts between dissimilar problem spaces. Clearly, if the problem space and/or the concepts need to be defined beforehand for AI to perform the task, there’s little generalization going on.
Then those salesmen need to be silenced. They are selling the public AGI when every scientist says we don't have AGI; at best, through iterative research, we may approach it.
Describing some service/product in grandiose terms and misrepresenting its actual use cases, utility, and applicability, claiming it'll solve all your ills and put out the cat, has been a grift for as long as there have been salesmen. Silencing such salesmen would probably be a net gain, but it's hardly new and probably isn't going to change, because the salesmen bear no responsibility for following through on the promises they make or imply. They closed the sale and got their commission.
[0] https://youtu.be/Pv0cfsastFs?si=WLoMrT0S6Oe-f1OJ