That is a ridiculous exaggeration. Carmack was clever enough to gain roughly a one-year performance advantage over his competitors with the Doom engine by using Binary Space Partitioning, a technique first applied to 3D graphics in 1969, before he was born. The Quake engine got a significant performance boost from Michael Abrash, a specialist in code optimization.
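For context on what that technique buys you, here is a minimal 2D Python sketch of the BSP traversal idea (the names and the 2D simplification are mine, not Doom's actual code): a single in-order walk of the tree, keyed on which side of each splitting line the camera sits, emits geometry in an occlusion-safe back-to-front order with no per-pixel depth test.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Node:
        plane: tuple                    # (a, b, c) for the splitting line ax + by + c = 0
        polygons: List[str]             # geometry lying on this splitting line
        front: Optional["Node"] = None  # subtree where ax + by + c > 0
        back: Optional["Node"] = None   # subtree where ax + by + c < 0

    def side(plane, point):
        a, b, c = plane
        x, y = point
        return a * x + b * y + c        # > 0 means the 'front' half-space

    def back_to_front(node, camera, out):
        """Painter's-algorithm ordering: always recurse into the far side first."""
        if node is None:
            return
        if side(node.plane, camera) >= 0:    # camera is in the front half-space
            back_to_front(node.back, camera, out)
            out.extend(node.polygons)
            back_to_front(node.front, camera, out)
        else:                                # camera is in the back half-space
            back_to_front(node.front, camera, out)
            out.extend(node.polygons)
            back_to_front(node.back, camera, out)

    # Tiny hypothetical level: three parallel walls, camera at x = 2.
    root = Node((1, 0, 0), ["wall at x=0"],
                front=Node((1, 0, -5), ["wall at x=5"]),
                back=Node((1, 0, 5), ["wall at x=-5"]))
    order = []
    back_to_front(root, (2, 0), order)
    print(order)  # safe painting order: later entries may overdraw earlier ones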
No, he didn't, and that is not a claim that he would ever make himself.
We don't have the laws of AGI the way we had the laws of optics (Asimov notwithstanding). Tons of research effort was poured into the wrong avenues in vision (hand-tuned HOG features, transforms, optical flow analysis) and in ML (support vector machines, computational learning theory) until a chain of breakthroughs hit on the right mathematical approach for vision and for supervised learning more generally.
We have some mathematical approaches to try with AGI (e.g. policy optimization and Q-learning in reinforcement learning), but the equations are plagued with fundamental issues (e.g. reward sparsity, easily gamed artificial objectives).
Carmack optimized some very difficult equations when he worked on graphics, but in AGI we still don't have the right equations to optimize.
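To make the reward-sparsity complaint concrete, here is a minimal tabular Q-learning sketch in Python on a made-up chain environment (everything here is illustrative, not any particular system): the reward is zero on almost every transition, so the update rule has nothing to propagate until value estimates slowly bootstrap back from the goal.

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1
    N_STATES, GOAL = 10, 9            # states 0..9; reward only at state 9
    ACTIONS = (-1, +1)                # step left or right

    Q = defaultdict(float)            # Q[(state, action)] -> value estimate

    def step(state, action):
        nxt = max(0, min(N_STATES - 1, state + action))
        reward = 1.0 if nxt == GOAL else 0.0   # sparse: zero almost everywhere
        return nxt, reward, nxt == GOAL

    for episode in range(300):
        state = 0
        for _ in range(200):                   # cap episode length
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                # break ties randomly so the untrained agent still moves
                best = max(Q[(state, a)] for a in ACTIONS)
                action = random.choice([a for a in ACTIONS if Q[(state, a)] == best])
            nxt, reward, done = step(state, action)
            # Q-learning update: with reward == 0 on most steps, learning
            # depends entirely on value bootstrapping back from the goal.
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = nxt
            if done:
                break

    greedy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)]
    print("greedy action per state:", greedy)  # mostly +1 once values propagate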
That's ridiculous. A dog can't draw a dog, therefore a human can't draw a human?
As an example, consider all of these ASCII stick figures:
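     O      O     \O/     o
    /|\    -|-     |     <|>
    / \    / \    / \    / \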
Clearly, it isn't necessary to deeply comprehend the essential nature of humanity in order to draw one.
Knowing how a mind works well enough to build one is an immense task. A better analogy would be building a human body from scratch, which is also something humans can't do.
The reason "dog brain can't design an artificial dog brain" is a useful contribution is that it gives people an intuitive understanding of a complex truth: things can't fully model themselves. Dogs can't fully understand dogs. People can't fully understand people.
It's plausible to me that humans can evolve something akin to AGI. It's also plausible to me that a vast number of humans working together will manage to stumble into creating AGI. But I see no reason to think that humans have the intellectual capacity to understand a human-level mind well enough to build one intentionally.
If so, it implies a sort of intelligence-completeness: past some threshold, a species can accomplish any physically possible objective.
Or maybe we will peter out before achieving practical fusion, quantum gravity, or AGI. Either way, "X can't create X" is a silly argument. Humans create humans; there are whole websites dedicated to that.
"Humans create humans" via what bio evolution found.