"PL theory", the field of study, contains many, many theories of programming. There is no single lambda calculus; there are several (e.g. the untyped lambda calculus, the simply typed lambda calculus, System F, and Martin-Löf's dependent type theory, among others), and not all of them are Turing-equivalent (a lack which can itself be a useful feature). There are also systems not based on the lambda calculus at all (e.g. concatenative calculi, which correspond roughly to languages like Forth). In fact, from a quick perusal, it seems that Nock is a minor variation on a graph rewriting system with the graphs restricted to DAGs, so... a studied and established theory of computation.
Similarly, there are techniques for representing code as data that are quite widely known and understood in programming language theory: e.g. Danvy's defunctionalization, the use of arrows to make the structure of computation explicit, and stage-separation mechanisms such as Lisp macros. These, too, are not ignored in the field of PL theory.
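To make the defunctionalization point concrete, here is a minimal sketch (my own toy example, not anything from Nock or Hoon): instead of passing first-class functions around, every function we might construct is represented as a plain data value, and a single `apply` interpreter gives those values their meaning. The names `AddN`, `Compose`, and `apply` are invented for illustration.

```python
from dataclasses import dataclass

# First-order data standing in for the closures we would otherwise create.
@dataclass
class AddN:        # represents: lambda x: x + n
    n: int

@dataclass
class Compose:     # represents: lambda x: f(g(x))
    f: object
    g: object

def apply(fn, x):
    """The single interpreter that gives the data its meaning as a function."""
    if isinstance(fn, AddN):
        return x + fn.n
    if isinstance(fn, Compose):
        return apply(fn.f, apply(fn.g, x))
    raise TypeError(f"not a defunctionalized function: {fn!r}")

# Code is now data: this value can be stored, inspected, serialized, or
# sent over a wire, yet still "run" via apply.
add_three_then_five = Compose(AddN(5), AddN(3))
print(apply(add_three_then_five, 0))  # 8
```

The payoff is exactly the one the paragraph above alludes to: the program's control structure becomes an ordinary value you can analyze, which is the well-studied route to "code as data" in the PL literature.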
Effectively, my criticism is that your practical understanding of "PL theory" amounts to a mostly-remembered reading of "Types and Programming Languages," and so you are accidentally reinventing things poorly and describing them in an ad-hoc way. Now, Nock and Hoon and the others are quite interesting and impressive, and I plan to keep following them, but I still assert that, rather than standing on the shoulders of giants as you could be, you've dismissed those giants as windmills and are trying to make the climb on your own.
Arguably there are a lot of folks reinventing things poorly in ad-hoc ways, and it's doubtful that will ever change. What's more concerning is the flagrant anti-theory attitude that, at least to me, seems to be on the rise. In discussions around Haskell or Agda it's sadly common to see "mathematical" bandied about as a synonym for "impractical".