I've never seen such a complex subject as linear algebra presented with such clarity and simplicity.
A string diagram is also equivalent to a program in a generalized-stack language (in the stronger sense of equivalence up to difference in the topological arrangement of the wires, not up to the full graphical rules). Unlike a pure stack language, you can operate on values at any point in the stack, not just the top. The four operations are: drop an element from the stack, push a zero to the stack, remove two elements from the stack and push their sum to the stack, remove an element from the stack and push two copies of it to the stack. Obviously any program in this language computes a linear function of its initial stack (there is a full homomorphism to matrices). A stack language captures the same ability of an input to be consumed that a string diagram does.
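To make the point concrete (my own sketch, not from the original comment; all names are made up), the four operations can be modeled in Python, and any program built from them computes a linear function of its initial stack:

```python
# The four operations of the generalized-stack language, modeled on a
# Python list (top of stack = end of list). These names are hypothetical.

def drop(stack):
    # discard the top element
    return stack[:-1]

def zero(stack):
    # push the constant 0
    return stack + [0]

def add(stack):
    # pop two elements, push their sum
    return stack[:-2] + [stack[-2] + stack[-1]]

def copy(stack):
    # pop one element, push two copies of it
    return stack + [stack[-1]]

def run(program, stack):
    # apply the operations left to right
    for op in program:
        stack = op(stack)
    return stack

# Example: [x, y] -> [x + y, x + y]
program = [add, copy]
print(run(program, [3, 4]))  # [7, 7]

# Linearity: acting on basis vectors recovers the matrix of the program
print(run(program, [1, 0]))  # [1, 1]
print(run(program, [0, 1]))  # [1, 1]
```

Running a program on each basis vector reads off the columns of the matrix it denotes, which is the homomorphism mentioned above.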
There's a language, Joy, that has combinators that allow you to work deeper in the stack, e.g. "dip":

    a b c [F] dip
    a b F c

    dipd == [dip] cons dip

    a b c [F] dipd
    a b c [F] [dip] cons dip
    a b c [[F] dip] dip
    a b [F] dip c
    a F b c
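A minimal Python model of this (my own sketch, not Joy itself; quotations are approximated as Python functions that mutate a list-based stack) reproduces the trace above:

```python
# Stack = Python list, top of stack = end of list.
# A "quotation" [F] is modeled as a function that mutates the stack.

def dip(stack):
    # x [F] dip : save x, run F underneath it, restore x
    f = stack.pop()
    saved = stack.pop()
    f(stack)
    stack.append(saved)

def cons(stack):
    # x [F] cons : build the quotation [x F]
    f = stack.pop()
    x = stack.pop()
    def g(s, x=x, f=f):
        s.append(x)
        f(s)
    stack.append(g)

def dipd(stack):
    # dipd == [dip] cons dip : run the quotation two elements down
    stack.append(dip)
    cons(stack)
    dip(stack)

# Example: double the element two places below the top
double = lambda s: s.append(s.pop() * 2)

stack = [10, 'b', 'c', double]
dipd(stack)
print(stack)  # [20, 'b', 'c']
```

Note that dipd here is literally built from [dip] cons dip, mirroring the derivation rather than reaching under the stack directly.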
I think it's a "Category Theory" paradigm language, whatever that might mean.
Joy resembles the point-free form of Haskell from Conal Elliott's "Compiling to categories" (http://conal.net/papers/compiling-to-categories/)
How? AFAICT, the string diagram contains far too much incidental information for doing linear algebra. It contains even more information than a fully parenthesized tree of additions, because it even contains information about how to evaluate duplicated expressions (e.g. it tells you whether, in (x+y)+(x+y), you should evaluate (x+y) twice or only once and then reuse the value). If we wrote a string diagram as an ordinary linear system, the whole question of fullness and faithfulness would be dead obvious.
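A trivial illustration of that distinction (my own sketch): both strategies compute the same value, and the diagram is the thing that records which one you meant:

```python
# Two ways to evaluate (x+y)+(x+y). The results are equal, but a
# string diagram distinguishes the wirings: one "add" node whose
# output is copied, versus two separate "add" nodes.

def shared(x, y):
    s = x + y        # one addition, result copied and reused
    return s + s

def duplicated(x, y):
    return (x + y) + (x + y)   # the addition performed twice

print(shared(2, 3), duplicated(2, 3))  # 10 10
```

For linear algebra only the value matters, which is why this extra wiring information is incidental.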
Blog post from Google: https://ai.googleblog.com/2019/06/introducing-tensornetwork-...
and an official website about the diagrams: https://tensornetwork.org/
Penrose notation is very high level: edges correspond to tensor dimensions, joining corresponds roughly to tensor contraction. When you're working with multiple complicated multi-dimensional tensors, it can be a beautiful way to express things.
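As a rough illustration (my own, using numpy's einsum, not taken from the article): joining an edge of one tensor to an edge of another corresponds to summing over the shared index.

```python
import numpy as np

# In Penrose notation each edge of a box is one tensor index, and
# joining two edges contracts (sums over) that shared index.
A = np.random.rand(2, 3, 4)   # a tensor drawn with three edges
B = np.random.rand(4, 5)      # a tensor drawn with two edges

# Joining A's last edge to B's first edge: sum over the shared index k.
C = np.einsum('ijk,kl->ijl', A, B)
print(C.shape)  # (2, 3, 5)
```

The result is a tensor whose edges are exactly the dangling (unjoined) edges of the diagram.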
This notation seems to be expressing something a bit lower level: it seems that edges correspond to values and dots correspond to copy/add. So, roughly, the diagrams visually express linear functions. (I didn't read the full post, so it's possible they expand beyond this initial meaning.) Diagrams of a similar style are used in other contexts (e.g. expressing the weights of neural networks), often as a pedagogical tool.
I looked at this page, the whole site, and even slept on the questions involved. My experience as a student, and from doing a bit of math tutoring and a bit of math teaching, is that linear algebra is one of the simplest "real" math courses a student encounters. Matrices are elaborate, but they're just the repeated application of pretty simple operations - and the geometric intuition of matrices as transformations (e.g. rotations) of linear spaces is both useful and one of the easier intuitions to get, among "basic" higher-math intuitions.
Now, this page comes along with the overall message, "Wow, linear algebra is hard to understand, here's a simpler way to approach it", and then [a stream of stuff no undergraduate could possibly get, and that I struggle with despite having an MA in math] (also a structure that's not at all related to the concrete geometric interpretation of linear algebra). And, after scanning the thing's structure, I think I've at least come away with what the motivation involved is.
The thing that's "complicated" and "unintuitive" to our author is that the usual development of linear algebra begins with operations on individual vectors, describing the mechanics in detail. It then jumps to the qualities of sets of vectors and how one subspace can be orthogonal to another subspace.
So, basically, it takes informal set theory as implicit scaffolding for the entire theory. And if you are a mathematician (or one kind of mathematician), set theory is a grungy, undesirable thing. The mechanics of computation are also grungy. I'm tempted to say everything is grungy until you reach a supreme level of abstraction. But the computation + sets approach definitely constructs the structure bottom-up. What the author is doing is constructing things top-down. Composing algebraic operations as strings constructs the equivalent of subspaces at the same time it's describing what addition "is". Everything gets constructed from one "piece".
I feel like I can see the appeal of this, but I can't actually see it producing a thing useful to me at the levels of abstraction I like to work with.
It also makes me think how set theory is a bit like the object-orientation of mathematics. It's a messy, do-anything, constructive exercise. I can understand the urge of someone who likes their basis to be enlightening to found their structures on things of great apparent elegance. Perhaps I'm pessimistic enough to believe that one is always going to face "grunge".
Strings are the canonical example for Chu spaces.