The argument in the paper (that AGI through ML is intractable because the perfect-vs-chance problem is intractable) sounds similar to the uncomputability of Solomonoff induction (and AIXI, and the no free lunch theorem). Nobody thinks AGI is equivalent to Solomonoff induction. This paper is silly.
NP-hardness was a popular basis for arguments for/against various AI models back around 1990. In 1987, Robert Berwick co-wrote "Computational Complexity and Natural Language," which argued that NP-hard NLP models were too inefficient to be correct. But given the multitude of ways in which natural organisms learn to cheat any system, it's likely that myriad shortcuts will arise to make even the most inefficient computational model sufficiently tractable to gain mindshare. After all, look at Latin...
Even simple inference problems are NP-hard (k-means, for example). I think what matters is that we have decent average-case performance (and sample complexity). Most people can find a pretty good solution to traveling salesman problems in 2D. Not sure if that should be chalked up to myriad shortcuts or domain specialization. Maybe there's no difference. What do you have in mind re Latin?
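The 2D TSP point is easy to demonstrate: even a naive greedy heuristic gets tours far better than chance on random Euclidean instances, despite worst-case NP-hardness. A minimal sketch (my own toy example, not from the paper; `nearest_neighbor_tour` is just an illustrative name):

```python
import math
import random

def nearest_neighbor_tour(points):
    """Greedy nearest-neighbor heuristic for 2D TSP.

    Not optimal, but on random Euclidean instances it typically lands
    within a modest constant factor of optimal -- decent average-case
    performance on an NP-hard problem.
    """
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    # Total length of the closed tour (returns to the start).
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
greedy = tour_length(pts, nearest_neighbor_tour(pts))
chance = tour_length(pts, list(range(len(pts))))  # arbitrary ordering
print(greedy < chance)  # greedy beats the chance-level tour by a wide margin
```

That gap between worst-case hardness and typical-case tractability is the same one humans seem to exploit when eyeballing a good tour.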