Right, but what you're describing is 'not being able to do math'. Like, if I've memorized a multiplication table and can give you any result that's on the table but can't multiply anything that wasn't on the table, I can't do multiplication.
It depends on how you see it. I agree with you, generally, but in the limit, if you memorised all possible instances of multiplication, then yes, you could certainly be said to know multiplication.
I've not just come up with that off the top of my head, either. In PAC-Learning (the closest thing we have to a theory in machine learning) a "concept" (e.g. multiplication) is a set of instances, and a learning system is said to learn a concept if it can correctly label each of a set of testing instances by membership in the target concept, with arbitrarily small probability of error. Trivially, a learner that has memorised every instance of a target concept can be said to have learned the concept. All this is playing fast and loose with PAC-Learning terminology for the sake of simplification.
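To make the memorisation point concrete, here's a toy sketch (in Python, purely for illustration, not real PAC formalism): a "learner" that has simply memorised labelled instances of the concept "x times y equals z" and can only label queries it has already seen.

```python
def make_lookup_learner(examples):
    """examples: dict mapping instances (x, y, z) -> True/False membership labels."""
    table = dict(examples)

    def predict(instance):
        # Perfect on memorised instances, clueless on everything else.
        return table.get(instance, None)  # None = "never saw this, no idea"

    return predict

# Training set: every entry of a 10x10 multiplication table, plus one negative example.
train = {(x, y, x * y): True for x in range(1, 11) for y in range(1, 11)}
train[(3, 4, 13)] = False

learner = make_lookup_learner(train)
print(learner((7, 8, 56)))     # True  -- it was on the table
print(learner((12, 12, 144)))  # None  -- off the table, so it can't "multiply"
```

On the finite table it labels everything correctly, which is exactly why the "in the limit, memorisation is learning" argument has some bite.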
The problem, of course, is that some concepts have infinite sets of instances, and that is the case with arithmetic. On the other hand, it's maybe a little disingenuous to require a machine learning system to be able to represent arithmetic over all its infinitely many instances, since there is no physical computer that can do that either.
Anyway, that's how the debate goes on these things. I'm on the side that says that if you want to claim your system can do arithmetic, you have to demonstrate that it has something we can all agree is a recognisable representation of the rules of arithmetic as we understand them; for instance, the axioms of Peano arithmetic. Though that's admittedly a bit unfair to deep learning systems, which can't "show their work" in this way.
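For what I mean by a "recognisable representation of the rules", here's a minimal sketch (again Python, just illustrative): multiplication defined by the Peano-style recurrences x * 0 = 0 and x * succ(y) = x * y + x, rather than by a table of memorised cases.

```python
def add(x, y):
    # Peano-style addition: x + 0 = x; x + succ(y) = succ(x + y)
    return x if y == 0 else add(x, y - 1) + 1

def mul(x, y):
    # Peano-style multiplication: x * 0 = 0; x * succ(y) = (x * y) + x
    return 0 if y == 0 else add(mul(x, y - 1), x)

print(mul(12, 12))  # 144 -- works for inputs it has never "seen", because the rules generalise
```

Something like this is a representation we can inspect and agree is the rules of arithmetic; the trouble is that a deep learning system has no obvious way to exhibit an equivalent.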