From a data structure perspective: you mean like an n-dimensional array? That's what Matlab does. Matrices (2D arrays) are just a special case of arrays in Matlab.
I suspect that by "typed" they mean something like that, but where the axes/dimensions/whatever you call them are themselves typed, i.e. you can't naively take a product along the time dimension of A with the batch dimension of B.
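For a concrete picture of what that buys you, here's a minimal sketch using PyTorch's (experimental) named tensors; the axis names 'time', 'batch', and 'feature' are just illustrative. Mismatched names are rejected even when the raw shapes happen to line up:

    import torch

    # Two tensors whose raw shapes match, but whose axes mean different things.
    a = torch.randn(8, 16, names=('time', 'feature'))
    b = torch.randn(8, 16, names=('batch', 'feature'))

    # Elementwise ops require names to unify, so this raises a RuntimeError
    # instead of silently combining a time axis with a batch axis.
    try:
        c = a + b
    except RuntimeError as e:
        print(e)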
I've explored this space a bit, and I don't think the perfect solution is here yet. In my view, static checking should be the goal. It's harder to iterate quickly when your network dynamically fails 30 layers later due to some edge case where you forgot to take a transpose. Even that is definitely much better than silently succeeding when the dimensions happen to be the correct size, though!
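To illustrate that silent-success failure mode, here's a toy NumPy sketch: with a square weight matrix, a forgotten transpose is shape-valid everywhere and the bug only shows up in the values.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 64))  # square, so W and W.T have the same shape
    x = rng.standard_normal(64)

    y_right = W @ x    # intended
    y_wrong = W.T @ x  # forgot the transpose (or added a spurious one)

    # No shape error anywhere; the results just quietly disagree.
    print(np.allclose(y_right, y_wrong))  # False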
https://github.com/deepmind/tensor_annotations seems promising. I've developed (but haven't released) a mypy plugin for PyTorch that does something similar on top of the "Named Tensor" dynamic feature (which itself isn't well supported yet).
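The rough idea behind that kind of static checking (this is not tensor_annotations' actual API, just the general phantom-type pattern; Tensor2 and matmul here are hypothetical names) is to carry axis labels as type parameters so mypy rejects mismatches before anything runs:

    from typing import Generic, TypeVar

    class Axis: ...
    class Time(Axis): ...
    class Batch(Axis): ...
    class Feature(Axis): ...

    A0 = TypeVar('A0', bound=Axis)
    A1 = TypeVar('A1', bound=Axis)
    A2 = TypeVar('A2', bound=Axis)

    class Tensor2(Generic[A0, A1]):
        """A 2-D tensor whose axis labels exist only for the type checker."""

    def matmul(a: Tensor2[A0, A1], b: Tensor2[A1, A2]) -> Tensor2[A0, A2]:
        ...  # would wrap the real runtime op

    x: Tensor2[Time, Feature] = Tensor2()
    w: Tensor2[Feature, Batch] = Tensor2()

    y = matmul(x, w)  # fine: inferred as Tensor2[Time, Batch]
    z = matmul(w, x)  # mypy error: Batch and Time don't unify, caught statically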
I'm also excited by the ways in which including this in some sort of "first-class" way could make tensor semantics simpler, which I'm assuming xarray allows? Many of the operations enabled by http://nlp.seas.harvard.edu/NamedTensor are quite nice, and similar ideas can let you write more generic code (i.e. code that works automatically on tensors with and without a batch dimension).
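As a sketch of that genericity with xarray (reductions there go by dimension name rather than position; the function name here is just made up), the same code handles batched and unbatched inputs unchanged:

    import numpy as np
    import xarray as xr

    def normalize_over_time(x: xr.DataArray) -> xr.DataArray:
        # Works whether or not x has a 'batch' dimension: the reduction and
        # the broadcasting are both done by name, not by axis position.
        return (x - x.mean(dim='time')) / x.std(dim='time')

    single = xr.DataArray(np.random.randn(10), dims=('time',))
    batched = xr.DataArray(np.random.randn(4, 10), dims=('batch', 'time'))

    print(normalize_over_time(single).dims)   # ('time',)
    print(normalize_over_time(batched).dims)  # ('batch', 'time')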