First author here. I agree with your points; we were constrained by the format and (especially) the writing style expected by the journal we're submitting to. The Methods sections contain more explicit explanations, as well as other analyses.
According to the authors, these "parametric matrix models" (PMMs) outperform:
* commonly used (zero- or low-parameter) regression models like XGBoost, random forests, kNN, and support vector machines on a variety of regression tasks, and
* DNNs with 10x to 100x more parameters on small-scale image classification tasks like MNIST variants, CIFAR-10, and CIFAR-100 -- albeit with a lot of feature engineering.
It looks promising, but I cannot find a link to the authors' code for replicating their experiments.
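For anyone wondering what a PMM actually looks like: as I understand the paper's description, the model is a matrix whose entries depend on the input through learnable parameter matrices, and the prediction is a spectral quantity such as the smallest eigenvalue. Below is a minimal PyTorch sketch of that idea. To be clear, this is my reconstruction, not the authors' implementation (which, again, I can't find); the affine form `M(x) = A + sum_i x_i * B_i`, the smallest-eigenvalue readout, and all the names (`TinyPMM`, the matrix size `n`, the toy target) are my assumptions.

```python
# Sketch of a PMM-style regressor: my reading of the paper's setup, NOT the
# authors' code. Assumed form: a symmetric matrix affine in the input,
# M(x) = A + sum_i x_i * B_i, with the smallest eigenvalue as the prediction.
import torch

class TinyPMM(torch.nn.Module):
    def __init__(self, in_dim: int, n: int = 8):
        super().__init__()
        # Learnable matrices: a bias matrix A and one B_i per input feature.
        self.A = torch.nn.Parameter(0.1 * torch.randn(n, n))
        self.B = torch.nn.Parameter(0.1 * torch.randn(in_dim, n, n))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Build M(x) = A + sum_i x_i * B_i for each sample in the batch.
        M = self.A + torch.einsum("bi,inm->bnm", x, self.B)
        # Symmetrize so the eigenvalues are real and eigvalsh applies.
        M = 0.5 * (M + M.transpose(-1, -2))
        # Predict the smallest eigenvalue (eigvalsh returns ascending order).
        return torch.linalg.eigvalsh(M)[..., 0]

# Toy usage: fit y = -||x||^2. (The smallest eigenvalue of a matrix that is
# affine in x is concave in x, so a concave toy target is a fair test.)
torch.manual_seed(0)
x = torch.randn(256, 3)
y = -(x ** 2).sum(dim=1)
model = TinyPMM(in_dim=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")
```

The appealing part, if this reading is right, is that the whole hypothesis class is "spectra of structured matrices," which is a very different inductive bias from stacked nonlinearities and may explain the parameter efficiency they report.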
It reads like the authors skip to the implications before clarifying the design.
Also, a stylistic sidenote: narrower columns of text are much easier to read; newspapers and journals do this for good reason.