In this context, tensors are just multidimensional arrays. The easiest way to understand them is by analogy with matrices.
Matrices are 2d arrays, and matrix decomposition uses methods inspired by geometry to analyze data in matrix form. Even though the matrix in question might have no intrinsic geometric meaning (e.g. a matrix of which individuals like which beers), the geometric methods still give useful results.
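As a sketch of this idea, here is a small made-up "who likes which beer" matrix decomposed with the SVD, a geometric tool (rotations and axis scalings) that still pulls useful structure out of non-geometric data. The ratings themselves are invented for illustration.

```python
import numpy as np

# Hypothetical ratings matrix: rows = people, columns = beers.
ratings = np.array([
    [5.0, 4.0, 1.0, 0.0],
    [4.0, 5.0, 0.0, 1.0],
    [1.0, 0.0, 5.0, 4.0],
    [0.0, 1.0, 4.0, 5.0],
])

# SVD factors the matrix into rotations and scalings, even though
# "people x beers" has no geometric meaning.
u, s, vt = np.linalg.svd(ratings, full_matrices=False)

# Keep only the two largest singular values: a rank-2 approximation
# that captures the two "taste clusters" hiding in the data.
approx = u[:, :2] @ np.diag(s[:2]) @ vt[:2, :]

# The low-rank version reproduces the ratings closely.
err = np.linalg.norm(ratings - approx) / np.linalg.norm(ratings)
print(round(err, 3))  # → 0.154
```

The small relative error shows the 4x4 table is essentially two-dimensional: two latent "taste" directions explain almost all of it, which is exactly the kind of non-geometric insight these geometric methods deliver.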
Similarly, tensor decompositions are inspired by geometry, but they are also a simple way to extract information from a 3d array. The most basic tensor decomposition (the CP decomposition) approximates a tensor a_ijk by sum_l b_li * c_lj * d_lk.
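To make the formula concrete, here is a minimal sketch (with invented sizes and random factor matrices) that builds a 3d array exactly from CP factors and checks one entry against the explicit sum:

```python
import numpy as np

rng = np.random.default_rng(0)
L, I, J, K = 2, 4, 5, 6  # rank and tensor dimensions (made up for illustration)

# Factor matrices: b is L x I, c is L x J, d is L x K.
b = rng.standard_normal((L, I))
c = rng.standard_normal((L, J))
d = rng.standard_normal((L, K))

# Assemble the tensor from the factors: a_ijk = sum_l b_li * c_lj * d_lk.
a = np.einsum('li,lj,lk->ijk', b, c, d)
print(a.shape)  # → (4, 5, 6)

# Spot-check one entry against the explicit sum over l.
i, j, k = 1, 2, 3
entry = sum(b[l, i] * c[l, j] * d[l, k] for l in range(L))
print(np.isclose(a[i, j, k], entry))  # → True
```

In practice one goes the other way, fitting b, c, d to a given tensor a (e.g. by alternating least squares), but the reconstruction above is exactly the model being fit.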
tl;dr don't read too much into the geometric side of tensors. Data science applications often have no geometric interpretation, but the techniques can still work.