As for the logical definition of a "spec," you are welcome to find any inconsistencies or imprecisions in the Nock axioms. Actually, someone reported a (cosmetic) bug in the Nock spec this morning, which is quite cool.
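To give a flavor of what those axioms look like in practice, here is a toy reducer covering just a few Nock-like rules (tree addressing, constant, cell test, increment), with cells as right-nested Python tuples and atoms as ints. This is a sketch of the style of the spec, not the full spec; the unimplemented opcodes simply crash.

```python
def slot(n, tree):
    # Tree addressing: 1 is the whole tree, 2n is the head of n, 2n+1 the tail.
    if n == 1:
        return tree
    parent = slot(n // 2, tree)
    return parent[n % 2]

def nock(subject, formula):
    op, arg = formula
    if isinstance(op, tuple):  # *[a [b c] d] -> [*[a b c] *[a d]]
        return (nock(subject, op), nock(subject, arg))
    if op == 0:                # slot: fetch a subtree of the subject
        return slot(arg, subject)
    if op == 1:                # constant: ignore the subject
        return arg
    if op == 3:                # cell test: 0 for a cell, 1 for an atom
        return 0 if isinstance(nock(subject, arg), tuple) else 1
    if op == 4:                # increment
        return nock(subject, arg) + 1
    raise ValueError("opcode not implemented in this sketch")

print(nock(41, (4, (0, 1))))           # increment the subject: 42
print(nock(5, ((4, (0, 1)), (0, 1))))  # distribution builds a cell: (6, 5)
```

The whole machine is in the same spirit: a handful of reduction rules over binary trees of unsigned integers, and nothing else.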
There's a very important point here: Nock specifies not what a computer computes, but what it has computed, if it terminates. In general, practical systems have to interrupt long-running computations, so they must always compute some subset of that definition. But it can be, and is, a strict subset. I find this a meaningful and accurate statement.
Sorry, I didn't mean "PL theory" as scare quotes - I just didn't want to exclude anyone for whom the term doesn't instantly bring to mind a body of work.
The truth, though, is that the "PL theory" we have is just one theory of programming. It's a convention, it's not a piece of reality like pi. Church-Turing equivalence tells us that we have infinitely many such representations.
E.g., the Turing machine is equivalent in power to the lambda calculus, but much less useful as a programming framework. My feeling is that, as a programming model, lambda suffers from the fact that it was originally designed, by mathematicians, to do mathematics.
Just from a UI standpoint, programmers are not mathematicians. The skills are very similar, but mathematicians work best in geometric and logical abstractions, whereas programmers are more comfortable with mechanical representations.
For instance, the general lambda approach is to represent data as code (Church numerals), a very sensible approach from the math side, whereas the Nock approach is to represent code as data - which feels more natural to the programmer.
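To make the "data as code" half of that contrast concrete: a Church numeral encodes the number n as the function that applies f to x exactly n times. A minimal sketch in Python:

```python
# Church numerals: the number n is represented as code, namely the
# function that applies f to its argument exactly n times.
zero = lambda f: lambda x: x

def succ(n):
    return lambda f: lambda x: f(n(f)(x))

def church(k):
    # Build the Church numeral for an ordinary integer k.
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

def to_int(n):
    # Recover an ordinary integer by applying "add one" n times to 0.
    return n(lambda k: k + 1)(0)

print(to_int(succ(succ(zero))))  # 2
```

Nock goes the other way: numbers are primitive atoms, and it is the program that is an ordinary noun you can inspect and take apart.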
So, nothing can tarnish the very powerful and elegant model of computing that is "PL theory." But it is not "the" theory of computing - just one of infinitely many possible theories.
"PL theory", the field of study, contains many, many theories of programming. There is not a lambda calculus, there are several (e.g. the untyped lambda calculus, the simply typed lambda calculus, System F, Martin-Löf's dependent type theory, among others) although not all of them are Turing-equivalent (the lack of which can be a useful feature), as well as other systems which are not based on lambda (e.g. concatenative calculi, which correspond roughly to things like Forth.) In fact, from a quick perusal, it seems that Nock is a minor variation on a graph rewriting system with the graphs restricted to DAGs, so... a studied and established theory of computation.
Similarly, there are techniques for representing code as data that are widely known and understood in programming language theory: Danvy's defunctionalization, the use of arrows to make the structure of a computation explicit, stage separation, and things like Lisp macros. These are not ignored in PL theory either.
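Defunctionalization, for instance, is exactly a systematic "code as data" move: each first-class function is replaced by a data tag (plus its captured arguments), and a single dispatcher interprets those tags. A miniature sketch, with tags invented for the example:

```python
def apply(fn, x):
    # The one dispatcher: interpret a predicate-as-data against x.
    tag = fn[0]
    if tag == "over":
        return x > fn[1]
    if tag == "even":
        return x % 2 == 0
    raise ValueError(tag)

def keep(fn, xs):
    # Like filter(), except the predicate is data, not code.
    return [x for x in xs if apply(fn, x)]

print(keep(("over", 2), [1, 2, 3, 4]))  # [3, 4]
print(keep(("even",), [1, 2, 3, 4]))    # [2, 4]
```

The predicates are now ordinary values you can serialize, compare, or pattern-match on, which is the same benefit claimed for Nock's nouns.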
Effectively, my criticism is that your practical understanding of "PL theory" amounts to a mostly-remembered reading of "Types and Programming Languages," and so you are accidentally reinventing things poorly and describing them in an ad-hoc way. Now, Nock and Hoon and the others are quite interesting and impressive, and I plan to keep following them in the future, but I still assert that, rather than standing on the shoulders of giants as you could be, you've dismissed them as windmills and are trying to climb your own way.
Arguably there are a lot of folks reinventing things poorly in ad-hoc ways; it's doubtful that will ever change. What's more concerning is the flagrant anti-theory attitude that, at least to me, seems to be on the rise. In discussions around Haskell or Agda it's sadly common to see "mathematical" bandied about as a synonym for "impractical".
As for the logical definition of a "spec," you are welcome to find any inconsistencies or imprecisions in the Nock axioms.
Not my point. The "spec" can be perfect, but you're not saying what a Nock implementation is free not to do. If I write a NockJIT that optimizes increment loops into O(1) addition, does that violate the spec? You might reply "no", but someone else who relied on such loops for hardware timing would obviously say "yes".
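To make the example concrete, here are two addition routines (names invented for illustration) that agree on every result but differ in exactly the way a semantics-only spec cannot see:

```python
def add_by_increment(a, b):
    # Addition by repeated increment: O(b) steps, like a naive Nock
    # reduction that only has the increment primitive.
    for _ in range(b):
        a += 1
    return a

def add_direct(a, b):
    # The "JIT-ed" version: one machine addition, O(1).
    return a + b

# A spec that fixes only the result cannot tell these apart;
# a user timing the loop certainly can.
assert all(add_by_increment(a, b) == add_direct(a, b)
           for a in range(10) for b in range(10))
```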
The spec defines the semantics, not the performance. I think that's a pretty concrete distinction.
But in practice, at a certain level some kind of informative or even quasi-normative convention will have to creep in if you want to answer the question "can computer X run program Y reasonably well?" It's a qualitatively different problem, but it remains a real problem. Not one we have in the early days of the system, though.
I am happy to solve most but not all of any problem...
The spec defines the semantics, not the performance. I think that's a pretty concrete distinction.
But it's a distinction you have to make. Concrete ≠ obvious.
Other things that are sometimes in specs and sometimes not:
Is the intermediate state of a running Hoon program specified by the code? (It matters for someone writing a debugger!)
Is how the compiled program handles data at runtime specified? (It matters for a cryptographer!)
Are the errors produced by the compiler specified? (It matters for an IDE!)
Which of the compiler's limitations may later be relaxed, and which guarantees must never be tightened? (It matters for anyone trying to make a faster compiler!)
I think the source of the confusion is that I'm carving off a stricter, and smaller, definition of the word "spec" than the usual one. I agree: the word is not quite right.
In general the answer to your questions is "yes and no." Well, really it's no - except that as a fairly common case, for example when we want to catch compiler errors without breaking out of the system, we virtualize Nock within itself. This is also serviceable when it comes to adding an extra operator, 11, that dereferences the global namespace. But the fact that "virtual Nock" is just a virtualization stack within one Nock interpreter is not semantically detectable.
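The shape of that virtualization trick can be shown in miniature: run the guest evaluator inside a wrapper so that a crash in the guest becomes an ordinary tagged value in the host, rather than halting the host. Here `step` is a stand-in reducer invented for this sketch, and the (0, result) / (1,) tags loosely mirror the success/crash convention:

```python
def step(formula, x):
    # Stand-in for a real reducer: one known formula, else crash.
    if formula == "inc":
        return x + 1
    raise ValueError("crash")

def virtualize(formula, x):
    # Run the guest inside the host; reify guest crashes as values.
    try:
        return (0, step(formula, x))  # success, tagged
    except ValueError:
        return (1,)                   # the crash, as an ordinary value

print(virtualize("inc", 41))   # (0, 42)
print(virtualize("bogus", 0))  # (1,)
```

The host never sees an exception, only a noun describing what happened, which is why the virtualization stack is not semantically detectable from outside.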