Getting symbol tables, functions, environments, free and bound variables, and so on out of the fundamental automaton frees you up to design them right at the higher layer where they (IMHO) belong.
This philosophical argument has serious practical ramifications, I think, because it leads directly to the Question of Why Lisp Failed. Why did Lisp fail? Many people say, because it couldn't be standardized properly.
Why couldn't it be standardized? Because the Lisp way is not to start with a simple core and build stuff on top of it, but to start with a simple core and grow hair on it. So you end up with a jungle of Lisps that are abstractly related, but not actually compatible in any meaningful sense. This is because the lambda calculus is an idea, not a layer.
Basically the point of Nock is to say: let's do axiomatic computing such that it's actually a layer in the OS sense. The way the JVM is a layer, but a lot simpler. Lambda isn't a layer in this sense, so it doesn't provide the useful abstraction control that a layer provides.
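To make "layer" concrete, here is a minimal sketch in Python of the first few Nock reduction rules (slot lookup, constant, eval, cell test, increment, equality, plus implicit cons), paraphrased from the published spec as I understand it; the function names are mine and rules 6-11 are omitted. The point is just that the whole "fundamental automaton" fits in a handful of cases over nouns (atoms and cells):

    # Partial sketch of Nock, paraphrased from the published spec.
    # Nouns are Python ints (atoms) or 2-tuples (cells); names are mine.

    def slot(axis, noun):
        """Tree addressing: /[axis noun]."""
        if axis == 1:
            return noun
        if axis in (2, 3):
            return noun[axis - 2]
        # descend to the parent node, then take its head (2) or tail (3)
        return slot(2 + (axis & 1), slot(axis >> 1, noun))

    def nock(subject, formula):
        """*[subject formula], rules 0-5 plus implicit cons only."""
        op, arg = formula
        if isinstance(op, tuple):            # [a [b c] d] -> [*[a b c] *[a d]]
            return (nock(subject, op), nock(subject, arg))
        if op == 0:                          # 0: fetch slot from the subject
            return slot(arg, subject)
        if op == 1:                          # 1: constant
            return arg
        if op == 2:                          # 2: evaluate new formula on new subject
            return nock(nock(subject, arg[0]), nock(subject, arg[1]))
        if op == 3:                          # 3: cell test (0 = yes, 1 = no)
            return 0 if isinstance(nock(subject, arg), tuple) else 1
        if op == 4:                          # 4: increment an atom
            return nock(subject, arg) + 1
        if op == 5:                          # 5: structural equality (0 = equal)
            return 0 if nock(subject, arg[0]) == nock(subject, arg[1]) else 1
        raise ValueError("rules 6-11 omitted from this sketch")

    # nock((42, 17), (0, 2))                 -> 42       fetch head of the subject
    # nock(41, (4, (0, 1)))                  -> 42       increment the subject
    # nock((42, 17), ((0, 3), (0, 2)))       -> (17, 42) implicit cons of two formulas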
Lisp isn't a failure. You're commenting on a server that is powered by a Lisp.
>Why did Lisp fail? Many people say, because it couldn't be standardized properly.
There was a very good idea about how to standardize Common Lisp back in 1982. It divided the documentation into 4 different parts, or 4 different "colored pages". It was eventually abandoned because of time constraints. Read the news.yc post about it by DLW (one of the 5 main Common Lisp authors):
Read more about it here:
Look at Haskell, Agda, and others, which are based on a slightly extended form of LC. I doubt anyone would claim that these extensions are "hairy".