The whole field is interesting because a lot of problems can be phrased as optimization over non-negative polynomials (e.g. finding Lyapunov functions for dynamical systems, which gives you stability results). The problem? Optimizing over non-negative polynomials is NP-hard: the cone of non-negative polynomials is convex, but it has no known tractable description. On the other hand, we know SoS polynomials are (a) non-negative and (b) easy to optimize over (optimizing over them is equivalent to a semidefinite program, which is a convex program and therefore “easy” to solve), so we optimize over those instead.
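To make the SoS-to-SDP connection concrete (a minimal numpy sketch, not how a real solver works): a polynomial p(x) is SoS exactly when p(x) = z(x)ᵀ Q z(x) for some positive semidefinite “Gram” matrix Q, where z(x) is a vector of monomials. The SDP searches for such a Q; below we just verify a Q found by hand for p(x) = x⁴ + x² + 1, using z = (1, x, x²).

```python
import numpy as np

# Hand-picked Gram matrix certifying p(x) = x^4 + x^2 + 1 is SoS:
# with z(x) = (1, x, x^2), p(x) = z(x)^T Q z(x) for Q = I,
# i.e. p(x) = 1^2 + x^2 + (x^2)^2.
Q = np.eye(3)

def z(x):
    return np.array([1.0, x, x**2])

# Q is positive semidefinite (all eigenvalues >= 0)...
assert np.all(np.linalg.eigvalsh(Q) >= 0.0)

# ...and z(x)^T Q z(x) reproduces p(x), so p is certified non-negative.
for x in np.linspace(-3.0, 3.0, 61):
    p = x**4 + x**2 + 1
    assert abs(z(x) @ Q @ z(x) - p) < 1e-9
print("certified")
```

An SDP solver automates exactly this: it treats the entries of Q as decision variables, the coefficient-matching as linear equality constraints, and Q ⪰ 0 as the conic constraint.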
The catch, of course, is that there are non-negative polynomials which are not SoS (the Motzkin polynomial is the classic counterexample), but it turns out the SoS class is large enough to be useful in many cases.
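For a concrete look at that gap (a numerical sketch, not a proof): the Motzkin polynomial m(x, y) = x⁴y² + x²y⁴ − 3x²y² + 1 is non-negative everywhere, which follows from AM-GM on the three monomials x⁴y², x²y⁴, and 1, yet it provably has no SoS decomposition. A grid evaluation at least illustrates the non-negativity:

```python
import numpy as np

# Motzkin polynomial: non-negative on all of R^2 (by AM-GM),
# but provably NOT a sum of squares of polynomials.
def motzkin(x, y):
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

xs = np.linspace(-2.0, 2.0, 201)
X, Y = np.meshgrid(xs, xs)
vals = motzkin(X, Y)

# The minimum over the grid is (numerically) zero, attained at |x| = |y| = 1.
assert vals.min() > -1e-9
```

So an SoS-based certificate would simply fail on this polynomial even though it never dips below zero.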
The authors of the current paper show that, in fact, you do not need to solve a semidefinite program (SDP), but instead can phrase the original problem as a sequence of linear programs (LPs). To give you an idea of the gap: the best bounds right now for SDPs are roughly O(n^3) in practice (with large leading constants), where n is the number of variables (assuming a small number of constraints), while for linear programs the corresponding cost is roughly O(mn^2), where m is the number of constraints.
For large n, the blowup in computational time is spectacular. To give you an idea: solving a linear program with a thousand variables and roughly ten thousand constraints takes around 3 seconds on my 2015 13” MBP. Solving an SDP with 10 variables and ten thousand constraints takes upwards of twenty minutes (those 10k constraints are really just one matrix equality constraint, but the matrices in it are 100×100).
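To see the LP side for yourself (a toy sketch with scipy's `linprog`; the sizes here are much smaller than the ones quoted above, and the random data is just an assumption for illustration): a modest inequality-constrained LP solves essentially instantly.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n, m = 200, 400  # variables, inequality constraints (toy sizes)
A = rng.standard_normal((m, n))
b = A @ np.ones(n) + 1.0  # chosen so x = 1 is strictly feasible
c = rng.standard_normal(n)

# minimize c^T x  subject to  A x <= b  and  0 <= x <= 2
res = linprog(c, A_ub=A, b_ub=b, bounds=[(0.0, 2.0)] * n, method="highs")
assert res.success
```

Scaling n and m up by an order of magnitude keeps this comfortably in seconds; there is no off-the-shelf SDP solver in scipy to compare against, which itself says something about the relative maturity of LP tooling.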
Of course, having structure in your SDP can make life a lot easier (it turns out sparse SDPs are extremely fast to solve if the sparsity pattern is a special kind called chordal sparsity). For further fun, a lot of NP-hard problems can be phrased as simple non-negative polynomial programs, and their respective SoS relaxations form what is called a “hierarchy” of relaxations. You can get whatever accuracy you want on these problems, except that you pay with an exponential blowup in the number of variables or constraints.
Apologies in advance for the (likely many) typos since I’m walking and typing this on my phone.
EDIT: one of the professors I’ve taught for has a great set of slides on SoS and the corresponding relaxations. If you have a little bit of background in optimization, I would highly recommend them!
Sorry... what is NP?
Point being: it is possible to generate trajectories using SoS, but we have much better algorithms for that. Rather, the hard part is certifying that the controllers following those trajectories are stable, especially once they become more complicated than simple controllers whose stability is easily proven (PID, for example).