With reserved words you are overloading two very different namespaces: the space of language primitives and the space of user functions/variables. Sure, you can get away with this. But do you want to? A language is a UI, and the potential for confusing the programmer is immense. If operators and functions are really different things - as they are in Hoon, anyway - it's very confusing to mush them together.
I don't know, I am similarly annoyed by mathematicians because they usually use single letters for variables. It's a little more excusable for them because math functions are usually very short and have few variables, but still I'd rather use words. But I digress.
Your brain is actually built to memorize symbols like this. (I feel I know the learning process pretty well because I have toddler-aged kids who are also learning Chinese.) Of course, kids are not grownups, but even for grownups the process of binding symbol->meaning is easier than it looks like it should be.
Also, when you have name->meaning, the name is inevitably chosen for orthographic reasons and supplies its own meaning, which can mislead as to the actual definition of the primitive. If you bind a symbol directly to a semantics, you lose this source of confusion. It is replaced (ideally) by an odd sense of "seeing the function," which I think is also present in experienced Perl ninjas.
But, one could argue, the "is it a function? or is it a macro?" confusion is a significant cognitive load on the Lisp programmer. These are really two different things, even though you can fit them into the same namespace.
Hoon has the different problem that you allude to - it is hard to extend the macro set at the user level. While this is fairly limiting, many FP languages, like Haskell, seem to do just fine without macro extensions. Perhaps typedness plays a role.
The bottom line on this problem is that a fair chunk of unused ASCII digraph space does remain, so there is probably a way to put that power in the programmer's hands. But user-level macros are an unsolved problem in this relatively young language, nor is it obvious that the problem needs to be solved. It would be nice if it did get solved, though.
You might get more cores at the same speed, but even that seems to be limited due to heat issues.
You can get more efficient CPUs (e.g., with SIMD instructions), but the compiler cannot optimize for them very well. Some people say implicit SIMD is a bad idea anyway.
So, for example, lambda in Hoon is not a primitive but a relatively high-level built-in macro. This means it is part of a larger family of lambda-like things, many of which are (IMHO) quite useful. On the other hand, all these things demand either reserved words, digraphs or Unicode glyphs.
Just 3 syntactic forms building up an expression tree:
Expr = Lam Name Expr | Var Name | Apply Expr Expr
reduce (Apply (Lam name body) arg) = subst name arg body
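The three forms and the reduction rule above can be sketched as a tiny interpreter. Here is one in Python (the names `reduce_step`, `subst`, etc. are mine, and the substitution is deliberately naive - it ignores capture-avoidance, which is fine for the closed terms shown here):

```python
# Expression tree for the three syntactic forms: Lam, Var, Apply.
# Each node is a tagged tuple.

def lam(name, body):   return ("lam", name, body)
def var(name):         return ("var", name)
def apply(f, x):       return ("apply", f, x)

def subst(name, arg, body):
    """Replace free occurrences of name in body with arg (naive, no capture-avoidance)."""
    tag = body[0]
    if tag == "var":
        return arg if body[1] == name else body
    if tag == "lam":
        # An inner binding of the same name shadows the outer one.
        if body[1] == name:
            return body
        return lam(body[1], subst(name, arg, body[2]))
    return apply(subst(name, arg, body[1]), subst(name, arg, body[2]))

def reduce_step(expr):
    """One beta-reduction at the root: Apply (Lam n b) a -> subst n a b."""
    if expr[0] == "apply" and expr[1][0] == "lam":
        _, (_, name, body), arg = expr
        return subst(name, arg, body)
    return expr

identity = lam("x", var("x"))
print(reduce_step(apply(identity, lam("y", var("y")))))  # ('lam', 'y', ('var', 'y'))
```

Note how little machinery this takes: no symbol tables, no environments, no closures in the automaton itself - which is exactly the point being made below about keeping those at a higher layer.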
Getting symbol tables, functions, environments, free and bound variables, etc., out of the fundamental automaton frees you up to design them right at the higher layer where they (IMHO) belong.
This philosophical argument has serious practical ramifications, I think, because it leads directly to the Question of Why Lisp Failed. Why did Lisp fail? Many people say, because it couldn't be standardized properly.
Why couldn't it be standardized? Because the Lisp way is not to start with a simple core and build stuff on top of it, but to start with a simple core and grow hair on it. So you end up with a jungle of Lisps that are abstractly related, but not actually compatible in any meaningful sense. This is because the lambda calculus is an idea, not a layer.
Basically the point of Nock is to say: let's do axiomatic computing such that it's actually a layer in the OS sense. The way the JVM is a layer, but a lot simpler. Lambda isn't a layer in this sense, so it doesn't provide the useful abstraction control that a layer provides.
Lisp isn't a failure. You're commenting on a server that is powered by a Lisp.
>Why did Lisp fail? Many people say, because it couldn't be standardized properly.
There was a very good idea about how to standardize Common Lisp back in 1982. It divided the documentation into 4 different parts, or 4 different "colored pages". It was eventually abandoned because of time constraints. Read the news.yc post about it by DLW (one of the 5 main Common Lisp authors):
Read more about it here:
Look at Haskell, Agda, and others, which are based on a slightly extended form of LC. I doubt anyone would claim that these extensions are "hairy".