
You're asking why reserved words are a bad idea? I think most defenders of the humble reserved word would consider it, at best, a hack. Maybe it's a necessary hack, maybe not.

With reserved words you are overloading two very different namespaces, the space of language primitives and the space of user functions/variables. Sure, you can get away with this. But do you want to? A language is a UI, and the potential for confusing the programmer is immense. If operators and functions are really different things - as they are in Hoon, anyway - it's very confusing to mush them together.




I'm not asking why reserved words are a bad idea, just opining that having 100 single- and double-character symbols might not be the best UI for a language. Why not have just names (neither reserved like keywords nor inscrutable like symbols)?

I don't know; I'm similarly annoyed by mathematicians, who usually use single letters for variables. It's a little more excusable for them, because math functions are usually very short and have few variables, but I'd still rather use words. But I digress.

-----


It's a different approach to learning that I think favors the actual learning curve, not the perceived learning curve.

Your brain is actually built to memorize symbols like this. (I feel I know the learning process pretty well because I have toddler-aged kids who are also learning Chinese.) Of course, kids are not grownups, but even for grownups the process of binding symbol->meaning is easier than it seems like it should be.

Also, when you have name->meaning, the name is inevitably chosen for orthographic reasons and supplies its own meaning, which can mislead you about the actual definition of the primitive. If you bind a symbol directly to a semantics, you eliminate this source of confusion. It is replaced (ideally) by an odd sense of "seeing the function," which I think is also present in experienced Perl ninjas.

-----


The alternative, as I see it, is to let people add new keywords to the language from inside the language, so there's feature parity again.

-----


Yes. And that's what many people find so elegant about Lisp.

But, one could argue, the "is it a function? or is it a macro?" confusion is a significant cognitive load on the Lisp programmer. These are really two different things, even though you can fit them into the same namespace.

Hoon has the different problem that you allude to - it is hard to extend the macro set at the user level. While this is fairly limiting, many FP languages, like Haskell, seem to do just fine without macro extensions. Perhaps typedness plays a role.

The bottom line on this problem is that a fair chunk of ASCII digraph space remains unused, so there is probably a way to put it in the programmer's power. But user-level macros are an unsolved problem in this relatively young language, and not obviously one that needs to be solved. It would be nice if it does get solved, though.

-----


In my view, CPU speed advances and a high-quality optimizing compiler mean there aren't many reasons to use macros in Haskell. People do, though; for examples of extensive Template Haskell use, see Yesod.
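
For flavor, here's a toy Template Haskell splice of my own (not anything from Yesod) that generates the i-th projection of an n-tuple at compile time, which is something ordinary functions can't abstract over:

  {-# LANGUAGE TemplateHaskell #-}
  module Proj where

  import Language.Haskell.TH

  -- Build the function \(x1, ..., xn) -> xi at compile time.
  proj :: Int -> Int -> Q Exp
  proj n i = do
    xs <- mapM (\k -> newName ("x" ++ show k)) [1 .. n]
    lamE [tupP (map varP xs)] (varE (xs !! (i - 1)))

Spliced from another module as $(proj 3 2) ('a', 'b', 'c'), it evaluates to 'b'.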

-----


We are pretty much done with CPU speed advances and compiler optimizations. If you use an Intel i7 with the Intel C compiler, there might be a few percent left to squeeze out, but not much. The free lunch has been over for a decade now.

You might get more cores at the same speed, but even that seems to be limited due to heat issues.

You can get more out of the CPU (e.g. with SIMD instructions), but compilers cannot auto-vectorize very well. Some people say implicit SIMD is a bad idea anyway.

-----


Lisp has no reserved words.

-----


Yes, but Hoon would need them. Lisp gets a lot of things for free because it's based on the lambda calculus, which some regard as trivial but I don't.

So, for example, lambda in Hoon is not a primitive but a relatively high-level built-in macro. This means it is part of a larger family of lambda-like things, many of which are (IMHO) quite useful. On the other hand, all these things demand reserved words, digraphs, or Unicode glyphs.

-----


The lambda calculus is so tiny; how can you view it as anything but trivial?

Just 3 syntactic forms building up an expression tree:

  Expr = Lam Name Expr | Var Name | Apply Expr Expr

And one reduction rule:

  reduce (Apply (Lam name body) arg) = subst name arg body

At least assuming unique names (no shadowing nonsense), you can't get much simpler than this...
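
Fleshed out into runnable Haskell (a sketch under the same unique-names assumption, so subst never has to rename anything):

  type Name = String

  data Expr = Lam Name Expr | Var Name | Apply Expr Expr
    deriving Show

  -- Replace free occurrences of n with a; only safe because
  -- names are assumed unique.
  subst :: Name -> Expr -> Expr -> Expr
  subst n a (Var m) = if n == m then a else Var m
  subst n a (Lam m b)
    | n == m    = Lam m b  -- n is shadowed (can't happen with unique names)
    | otherwise = Lam m (subst n a b)
  subst n a (Apply f x) = Apply (subst n a f) (subst n a x)

  -- The one reduction rule (beta), applied at the root.
  reduce :: Expr -> Expr
  reduce (Apply (Lam name body) arg) = subst name arg body
  reduce e = e

For instance, reduce (Apply (Lam "x" (Var "x")) (Var "y")) gives Var "y".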

-----


You can get rid of the whole name-reduction system, which is hardly trivial. If you assume it, though, it's true that everything else is trivial.

Getting symbol tables, functions, environments, free and bound variables, etc., out of the fundamental automaton frees you up to design them right, at the higher layer where they (IMHO) belong.
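
(Concretely, the textbook way to get names out of the automaton is de Bruijn indices, where a variable is just a count of intervening binders. This is a generic sketch of that technique, not Nock, which does something different:

  -- Nameless terms: Var 0 is the innermost enclosing binder, so
  -- symbol tables and alpha-renaming vanish from the core.
  data DB = Var Int | Lam DB | App DB DB
    deriving Show

  -- Shift free variables at or above cutoff c by d.
  shift :: Int -> Int -> DB -> DB
  shift d c (Var k) = Var (if k >= c then k + d else k)
  shift d c (Lam b) = Lam (shift d (c + 1) b)
  shift d c (App f a) = App (shift d c f) (shift d c a)

  -- Substitute s for variable j.
  subst :: Int -> DB -> DB -> DB
  subst j s (Var k) = if k == j then s else Var k
  subst j s (Lam b) = Lam (subst (j + 1) (shift 1 0 s) b)
  subst j s (App f a) = App (subst j s f) (subst j s a)

  -- One beta step.
  beta :: DB -> DB
  beta (App (Lam b) a) = shift (-1) 0 (subst 0 (shift 1 0 a) b)
  beta e = e

All the bookkeeping is arithmetic; nothing resembling an environment survives in the core.)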

This philosophical argument has serious practical ramifications, I think, because it leads directly to the Question of Why Lisp Failed. Why did Lisp fail? Many people say, because it couldn't be standardized properly.

Why couldn't it be standardized? Because the Lisp way is not to start with a simple core and build stuff on top of it, but to start with a simple core and grow hair on it. So you end up with a jungle of Lisps that are abstractly related, but not actually compatible in any meaningful sense. This is because the lambda calculus is an idea, not a layer.

Basically the point of Nock is to say: let's do axiomatic computing such that it's actually a layer in the OS sense. The way the JVM is a layer, but a lot simpler. Lambda isn't a layer in this sense, so it doesn't provide the useful abstraction control that a layer provides.

-----


>I think, because it leads directly to the Question of Why Lisp Failed.

Lisp isn't a failure. You're commenting on a server that is powered by a Lisp.

>Why did Lisp fail? Many people say, because it couldn't be standardized properly.

There was a very good idea about how to standardize Common Lisp back in 1982. It divided the documentation into four parts, or four different "colored pages". It was eventually abandoned because of time constraints. Read DLW's (one of the five main Common Lisp authors) news.yc post about it:

https://news.ycombinator.com/item?id=81258

Read more about it here:

http://www.saildart.org/ARPA.PRO%5BCOM,LSP%5D

http://xach.livejournal.com/319717.html

-----


Lisp isn't really the only attempt to work with the lambda calculus.

Look at Haskell, Agda, and others, which are based on a slightly extended form of LC. I doubt anyone would claim that these extensions are "hairy".

-----



