Obviously they are not. One is a linear operator; the other is a data structure for implementing computations with that operator. The same distinction extends to all tensors.
It's like saying "queues are not just lists". That is true and also neither insightful nor helpful.
I don't see it as mystifying or complicated, what am I missing?
The reason we carve out a category of tensors, even when you could in principle just define ad-hoc functions on vectors and call it a day, is that we notice commonalities across a number of objects: properties that are invariant with respect to coordinate changes. Machine learning generally does not use this invariance at all, and has arrays which happen to be tensors for largely unrelated reasons. Calling them tensors makes more sense than calling, say, real numbers tensors, but less sense than calling reals reals.
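For concreteness, here is a minimal NumPy sketch of the invariance in question (the variable names are mine, purely illustrative): a bilinear form is a genuine (0,2)-tensor, so its component matrix must transform under a change of basis so that the scalar it computes stays the same, while a typical ML array carries no transformation law at all.

```python
import numpy as np

# A bilinear form g(u, v) = u^T G v is a (0,2)-tensor:
# its component matrix G depends on the choice of basis.
G = np.array([[2.0, 1.0],
              [1.0, 3.0]])

u = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])

# Change of basis: the columns of P are the new basis vectors
# expressed in old coordinates.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Components of the same abstract vectors in the new basis.
u_new = np.linalg.solve(P, u)
v_new = np.linalg.solve(P, v)

# Transformation law for a (0,2)-tensor: G' = P^T G P.
G_new = P.T @ G @ P

# The scalar g(u, v) is invariant under the coordinate change ...
assert np.isclose(u @ G @ v, u_new @ G_new @ v_new)

# ... whereas a raw ML array (say, a grid of pixel intensities)
# has no associated basis and hence no transformation rule.
pixels = np.random.rand(2, 2)  # just numbers in a grid
```

The assert passing is the whole point: the matrix of components changes, but the tensor (the underlying multilinear map) does not. Nothing analogous constrains `pixels`.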
Sure, but if you’re working with lists you shouldn’t call them queues. Similarly, if you’re working with mere multidimensional arrays, don’t call them tensors.