So what happens if (when?) we come up with a system that does everything a human can do, better than a human, but doesn't contain any 'crazy insights', just a bunch of incremental improvements on what we have now?
Does that mean we aren't intelligent? Or does it mean that the system isn't intelligent but that we are "because we do it differently"? Or do we accept at that point that intelligence is composed of simple building blocks interacting in complex ways (which we already know, if we eschew Cartesian dualism)?
We would declare it intelligent. Certainly the people of today would call it intelligent. What I suspect you are claiming is that the people of that future would not call it intelligent, and that this is the basis for arguing why that objection is not valid today. But that extrapolation to the future is just your speculation.
Or, most likely in my view, that isn't in fact possible, and the system we eventually arrive at that does it _will be_ extremely different from what we have now.
Although I actually think we'll never make a system that does _everything_ a human can do, simply because that would be silly.
And of course, there also has never been a human who can do everything that humans can do, so this bar is way too high anyway.