Since "gets anything wrong, ever" is the current goalpost for AGI (per the Gary Marcus methodology), we have to measure human intelligence with the same yardstick. Since the author of this article misunderstood the GPT release process, they have proven they are a non-sentient pile of trash brain, ready to be processed into hamburger.
Actually ... that's a reasonable goalpost, in my opinion.
Yes, humans make careless mistakes. But humans make careless mistakes mostly because A) their brains have to reconstruct information every time they use it, or B) they are tired.
LLMs, as piles of linear algebra, have neither excuse: their training data is literally baked into their weights, and linear algebra does not get tired.