Hacker News

I think Lisp's philosophy, as explained by Paul Graham (perhaps you've heard of him :) ), is to enable the programmer to do anything they put their mind to. When choosing between the powerful thing and the simpler thing, Lisp chooses the one that gives the programmer more power, and points out (not entirely incorrectly) that you can wrap up simpler versions for the simpler cases with macros.

As a skilled developer, I certainly prefer this philosophy to the Java philosophy of "If the feature might be abused, leave it out." (Obviously I simplify, but there's a lot of truth there.) But I don't think it's the right way to think about languages going forward, and I don't think it scales.

Instead of seeing languages in terms of what they permit, I see them in terms of what they deny. In particular, what invariants do they maintain? That might seem an odd way of looking at it, but it sets up the next question: what do languages build on top of those invariants?

All the languages that I am currently interested in, and all the ones that I think provide the way forward, maintain invariants that Lisp does not let you maintain, precisely because it gives the Lisp programmer too much power. One obvious example: you can't macro your way from a mutable Lisp to a language whose values are immutable, always. If the language provides for mutation, then even if you program in a strictly immutable subset of it, your libraries, including the very base libraries that make up the "runtime", are likely to be mutation-based. (Getting around this might be possible, but hardly worth it.) Erlang builds a huge runtime around such pervasive immutability, and while mutable Lisps can borrow large swathes of Erlang's libraries and capabilities, they are incapable of providing a guarantee at the language level that no function you ever call will mutate a value out from underneath you. Erlang is a great example of building on quite a few invariants (read: "programmer limitations") and producing things that are hard or impossible without them.
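To make the "mutate a value out from underneath you" point concrete, here's a small sketch in Python (which, like a mutable Lisp, enforces neither style; the function names are made up for illustration). The immutable discipline is only a convention here, which is exactly the problem:

```python
def add_item_mutating(items, x):
    items.append(x)        # mutates the caller's list in place
    return items

def add_item_functional(items, x):
    return items + [x]     # returns a new list; the original is untouched

a = [1, 2]
b = add_item_functional(a, 3)
print(a)                   # [1, 2] -- unchanged, as the caller expects

add_item_mutating(a, 3)
print(a)                   # [1, 2, 3] -- mutated; nothing in the language stopped it
```

In Erlang the second function simply cannot be written, and every library you call inherits that guarantee.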

Another example: type systems. Lisp certainly lets you layer type systems on top of the language, but without enforcement by the language itself, there's a limit to how much you can take advantage of them. What the ML family does with types is not something you can add to a Lisp and end up with the same thing you get from native language support. You can come close, but there's a final level of integration, of loop-closing (I don't have a clean word for this concept), that you cannot get to.
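Here's a rough illustration of why a layered-on type system isn't the same thing, sketched in Python with a hypothetical `typed` decorator standing in for type-checking macros. The checks only fire on code paths that actually execute, at run time; ML or Haskell rejects the whole program before it runs anything:

```python
def typed(arg_type, ret_type):
    """Hypothetical decorator: runtime analogue of a type-checking macro."""
    def wrap(f):
        def checked(x):
            assert isinstance(x, arg_type), f"expected {arg_type.__name__}"
            result = f(x)
            assert isinstance(result, ret_type)
            return result
        return checked
    return wrap

@typed(int, int)
def double(x):
    return x * 2

print(double(21))   # 42
# double("oops") would only fail here, at run time, and only if this line
# is ever reached.  A static checker flags it without running the program.
```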

Smaller example: Haskell's laziness. You can add laziness constructs to a strict language (see also: Python generators), but you cannot macro your way to a fully lazy language starting from a strict one, at least not in any reasonable manner. (This feature excites me much less, but it's another example of something you have to start with.)
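The Python generator case shows both the appeal and the limit: you get lazy sequences, but every other expression in the language is still evaluated eagerly, which is a long way from Haskell's everything-is-lazy default.

```python
from itertools import islice

def naturals():
    """A conceptually infinite sequence; nothing runs until a value is asked for."""
    n = 0
    while True:
        yield n
        n += 1

first_five = list(islice(naturals(), 5))
print(first_five)   # [0, 1, 2, 3, 4]
```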

When the entire philosophy is about empowerment, you close the doors to a lot of things that can only be obtained by carefully selected disempowerment of the programmer. Most of these are fringe right now, but I think they will become more important in the future. Truly ultimate power comes only from assembler; languages give you their power by restricting the assembler they can generate.

In the single-programmer case, especially the mature single-programmer case, "more power" may be a great thing, but I think it breaks down pretty quickly as you get more people involved, especially less-than-expert people. I think a lot of Lisp's failure to take over the world comes from this basic issue, plus some second-order effects.

What I want to close with is something I already said: certainly, if I had to choose between a conventional Java-style B&D language and Lisp, I'd take Lisp. But the interesting action is occurring in fields dominated by constraining the programmer: parallel programming developments, STM, immutability and/or banning shared memory, and potentially other interesting things. All of these involve first taking things away from programmers, then building on the resulting invariants, invariants that languages like Lisp or Perl or Ruby make a big point of not constraining you with. (I'm interested in type-safe string manipulation that bans XSS at the type level, for instance.) There are Lisps doing those things, but notice that they have to be built in at the language level; you can't just macro your way to Clojure's STM. (And if that does happen to be how it is implemented, I'd run screaming; STM isn't very useful unless it is a very strong guarantee. Any interaction with other JVM libraries would be very hard, too.)
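To give a flavor of the "ban XSS at the type level" idea, here's a sketch in Python, with an invented `SafeHtml` wrapper class standing in for what a real static type system would enforce. In Python this is just a convention anyone can bypass; in a statically typed language, a render function whose signature only accepts `SafeHtml` closes the loophole:

```python
import html

class SafeHtml:
    """Only constructible through escaping; raw strings never qualify."""
    def __init__(self, escaped: str):
        self._s = escaped

    def __add__(self, other: "SafeHtml") -> "SafeHtml":
        if not isinstance(other, SafeHtml):
            raise TypeError("can only concatenate SafeHtml with SafeHtml")
        return SafeHtml(self._s + other._s)

    def __str__(self) -> str:
        return self._s

def escape(raw: str) -> SafeHtml:
    return SafeHtml(html.escape(raw))

page = escape("<b>user input</b>")
print(page)   # &lt;b&gt;user input&lt;/b&gt;
# page + "<script>..."  would raise TypeError: untrusted text can't sneak in.
```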




A lot of it has to do with the relative power of context in languages. If I write a statement, what other places in the code could the meaning of that statement be modified? Languages like Lisp let you modify the meaning of things in tons of ways. Languages like Java only let you change the meaning of things in certain carefully restricted ways, like object polymorphism.
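A small demonstration of this "power of context", here in Python rather than Lisp (the class and method names are made up): a distant piece of code can silently rebind what an existing call site means.

```python
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
print(g.greet())   # hello

# ...somewhere far away in the codebase, the method is rebound:
Greeter.greet = lambda self: "goodbye"
print(g.greet())   # goodbye -- same call site, new meaning
```

Java's polymorphism permits this kind of redirection only through declared interfaces and overrides, so the set of places that can change a statement's meaning is bounded.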

In order to understand what is actually happening in any one part of a Lisp program, I may have to understand most of the rest of the program, if it is written in a particularly complicated way. That is rarely, if ever, the case in Java.


Both static typing and immutability are trade-offs, and we need to know when it's appropriate to make them. I get concurrency for the price of immutability in Clojure, BUT I could still screw things up using Java, and I think that this is a good thing (but I'm a power-hungry Lisp programmer, obviously :D). It's my opinion that what you call the WRONG direction is actually the OTHER direction.


Paul Graham often talks about the partial ordering of language goodness, but says he doesn't know what beats Lisp. Possibly nothing still does, but this framing provides a way to give concrete examples: since no one language can choose all invariants, it is likely that over time certain invariants will come to be seen as better in some niches than in others, so there will indeed be several "winners" at the top of the ordering, each based on a different choice of invariants.

Lisp will still be one of them: the choice of no invariants that restrict the programmer. It's a viable niche. It just isn't the only niche. I think it's less viable than Lisp advocates believe, and that this is part of the reason Lisp has never taken off, but here we enter the realm of opinion, as there isn't anywhere near enough science to actually know. (Indeed, the whole idea of "science" sounds weird here because we are so far from having it at all.)

I also point to foldr's message in this thread. Strong typing has its pros and cons, even when done as nicely as Haskell's typing can be, and personally, I don't think that's ever going to change. Sometimes you're going to want it and sometimes not.




