When I was in high school, I learned linear algebra from a similar textbook, but it wasn't until college (and that was an engineering linear algebra course!) that I got any sort of understanding of what it's really about.
Computer science was not a prerequisite to that work in optimization.
Optimization in operations research, and much of applied math more generally, is awash in important algorithms developed without reference to computer science; consider the Dantzig simplex algorithm of about 1949, which predates computer science as a recognized academic field.
Such applied math includes A. Wald's work on sequential testing in statistics, the fast Fourier transform, Gram-Schmidt orthogonalization, Gaussian elimination, Gauss-Seidel iteration, the Hungarian method for maximum matching, Runge-Kutta methods for ordinary differential equations, and much more.
I don't know much about linear algebra, but I had to learn some when my boss wanted me to do unsupervised classification. Maybe the book just considers it a good application of the material.
What do old books have to do with what should be included in new books?
Chapters 18 and 19 are about nonlinear least squares. I think ideas from outside the nominal linear algebra domain (like k-means, or nonlinear LS) can be helpful in the way sex between people with different genomes is: it's a mechanism for exchanging ideas.
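To make the nonlinear LS case concrete: it's arguably a near neighbor of linear algebra, since a standard solver like Gauss-Newton does nothing but solve a linear least-squares problem at each iteration. Here's a generic sketch (not anything from the book; the exponential model and data are made up for illustration):

    import numpy as np

    def gauss_newton(r, J, x0, iters=20, tol=1e-10):
        # Minimize ||r(x)||^2 by repeatedly solving the *linear*
        # least-squares problem  J(x) dx ~= -r(x).
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            dx, *_ = np.linalg.lstsq(J(x), -r(x), rcond=None)  # linear LS step
            x = x + dx
            if np.linalg.norm(dx) < tol:
                break
        return x

    # Toy problem: fit y = a * exp(b * t) to noisy data.
    t = np.linspace(0.0, 1.0, 50)
    y = 2.0 * np.exp(-1.5 * t) + 0.01 * np.random.default_rng(0).standard_normal(50)
    r = lambda p: p[0] * np.exp(p[1] * t) - y                # residual vector
    J = lambda p: np.column_stack([np.exp(p[1] * t),         # Jacobian of r
                                   p[0] * t * np.exp(p[1] * t)])
    print(gauss_newton(r, J, x0=[1.0, -1.0]))  # converges to about [2.0, -1.5]

The nonlinear part is just evaluating r and its Jacobian; everything that does real work is a linear least-squares solve.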
While fair in general, the point is that k-means has nothing to do with linear algebra beyond the fact that both use numbers sometimes. It would be like sandwiching in a chapter on RSA crypto because it's interesting and number-y, even though it has no connection to linear algebra.
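For concreteness, here's a bare-bones k-means loop (a generic sketch, nothing from the book); the only linear-algebra-adjacent operations in it are squared Euclidean distances and coordinate-wise means, and the rest is bookkeeping:

    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        # Initialize centroids as k distinct data points chosen at random.
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            # Assignment step: nearest centroid by squared Euclidean distance.
            dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
            labels = dists.argmin(axis=1)
            # Update step: each centroid becomes the mean of its points
            # (an empty cluster keeps its old centroid).
            new_centroids = np.array([
                X[labels == j].mean(axis=0) if (labels == j).any() else centroids[j]
                for j in range(k)
            ])
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return labels, centroids

    # e.g.: labels, C = kmeans(np.random.default_rng(1).standard_normal((200, 2)), k=3)

Whether norms and means count as "linear algebra" is exactly the dispute, but there's no matrix factorization, no solving of linear systems, nothing of that sort in it.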
Besides, a lot of introductory math textbooks are thinly veiled introductions to subjects the authors think are important.