I stopped thinking that humans were smarter than machines when AlphaGo won game 3. Of course we still are in many ways, but I wouldn't make the unfounded claims that this article does--it sounds plausible, but it never explains how humans can be trained on bodies of work and then synthesize new ideas either. Current AI models have already made discoveries that eluded humans for decades or longer. The difference is that we (falsely) believe we understand how the machine works, and thus it doesn't seem magical the way our own processes do. I don't know that anyone who's played Go and appreciates the depth of the game would bet against AI--all it needs is a feedback mechanism and a way to try things to get feedback. Now the only great unknown is when it can apply this loop to its own underlying software.
That's like saying we know how we think because we understand how neurons fire and accumulate electric charge through chemical changes. We have little idea how the information is encoded or what is represented. We're still trying to get models to explain themselves because we don't know how they arrive at a response.