
This post repeats two memes that float around the programming language space. One is: "it's so powerful that it's bad". The other is: "it's ok, but not for large projects". I don't think I've ever seen any evidence attached to either. (If the OP contains any, I missed it.) But they're the sort of things that sound plausible and have more gravitas than "Here are my current preferences", so they get repeated, and no doubt the more they get repeated, the more they get repeated.

When I say "evidence" I'm not asking for formal studies; that's too high a bar for our field. But one can at least ask to hear about specific real-world experience.

Of the two arguments, the "too-powerful" one has the disadvantage of being prima facie absurd, so I think the "large-project" one is more harmful. So, where are the large Clojure and Common Lisp projects that have been harmed by this alleged language weakness? Let's find some practitioners who actually ran into this.

For what it's worth, I haven't. Since the best thing for a large project is not to be so large in the first place, applying language constructs to make codebases smaller is a great strength when the language lets you do it—and Lisp lets you do it.

Edit: [deleted off-topic bit]



> it's so powerful that it's bad

I had the pleasure of using Cascalog, which is written in Clojure, in production at work. While it was in fact written by some very smart people, we had a very difficult time using some of its constructs, which were very cleverly abstracted away behind macros.

The problem was that it felt nearly impossible to debug issues because of the long, impenetrable stack traces. Further, trying to get another very smart programmer to understand why some functions behaved one way and others behaved very differently was hard to convey. I'll reiterate that I thought the other guy I was working with was really smart, and I'm at least not an idiot, and we both felt like we had a really hard time unwrapping what the code was doing.

On the flip side, if it were written in Java (I think some parts actually are, but more under the hood), you could point at the code and say "That's where the map function gets called on all the workers" (Cascalog is for Hadoop), or run the code and get some kind of stack trace where you could at least begin to figure out what was going on. We weren't even doing anything cutting edge.

For me, I love the academic/fun endeavor. I have wasted countless hours playing and learning. But if you asked me whether I would base any critical part of my production app on Clojure, especially with a team of more than a couple of people who weren't Lisp experts, I would have a really hard time justifying it after what I saw when I tried.


Finally, a comment explaining a specific problem involving macros.

First of all, Clojure has a problem with stack traces; other Lisps are much nicer in that respect (but they don't have to contend with the JVM).

Anyway, macros can be difficult to debug. I am sure you have heard of and used the tools that usually exist in Lisps, macroexpand etc. Nevertheless, macros are transformations of the AST and thus not as easily traceable as function calls.
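To make that concrete, here is a small, hypothetical Clojure example (the macro `unless*` is invented for illustration) of using macroexpand-1 to see the code a macro actually generates:

```clojure
;; A toy macro: run the body unless the test is truthy.
(defmacro unless* [test & body]
  `(if (not ~test) (do ~@body)))

;; Expanding one step shows the generated form, which is what
;; actually ends up in the stack trace when things go wrong:
(macroexpand-1 '(unless* done? (println "still working")))
;; => (if (clojure.core/not done?) (do (println "still working")))
```

This is exactly the traceability gap: the error you see at runtime comes from the expanded form, not from the call site you wrote.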

That said, when working with ClojureScript on a web app, I grew really fond of the possibilities macros offer, possibilities that are hardly achievable in JavaScript (HTML templating within ClojureScript code, etc.). http://blog.getprismatic.com/blog/2013/1/22/the-magic-of-mac...

I wrote some macros to help with HTML5 canvas contexts and these made my code a lot more reliable and readable.
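A sketch of the kind of canvas-context macro meant here (`with-saved-context` is a hypothetical name, not the commenter's actual code): it pairs every `.save` with a guaranteed `.restore`, so drawing code can't leak transform or style state.

```clojure
;; Hypothetical ClojureScript macro: save the 2D canvas context state,
;; run the drawing body, and always restore the state afterwards,
;; even if the body throws.
(defmacro with-saved-context [ctx & body]
  `(let [c# ~ctx]
     (.save c#)
     (try
       ~@body
       (finally (.restore c#)))))
```

This is the "reliable and readable" win: the invariant (balanced save/restore) lives in one place instead of being repeated by hand at every call site.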

The problem with keeping languages less powerful is that you often end up with something like Java: quite understandable when you look at a few lines of code, but in the end you need a whole lot of complicated patterns and best practices, and then you get hit by a boomerang at the back of your head.

Take-away message: macros should not be used on every occasion, but they are really helpful in central places.


That's why some people prefer to use Common Lisp: it has better debugging tools for that, and its stack traces are easier to use.

Still, debugging macro-using code IS harder. The first thing I need is full and partial macro expansion in the editor; that gets me a good bit further. When all else fails I use an interpreter (most Common Lisp implementations have both an interpreter and a compiler) to follow the expansion process in detail.

If a supplied macro produces errors which are hard to understand, then it is also possible to request better compile-time error reporting from its developers.


It seems to me that if you compare Cascalog with native Java or any other Hadoop query language, you will quickly see why macros are great.

Let's be honest: nobody wants to write Hadoop jobs directly in Java. With Cascalog, Clojure gets a query language, embedded in the language itself, that feels natural and has its full power.
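For flavor, this is roughly what the classic introductory Cascalog query looks like (assuming a generator `age` of [person, age] tuples, as in Cascalog's own examples):

```clojure
;; Cascalog query: find every person whose age is 25 and
;; print the results to standard output. The ?<- operator
;; defines and executes the query; ?person is a logic variable.
(?<- (stdout)
     [?person]
     (age ?person 25))
```

Compare that to the boilerplate of a hand-written Hadoop MapReduce job in Java, and the appeal of an embedded, macro-built query language is obvious.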

Other people build things like Hive or Pig, which come as completely different languages.


Racket is a lisp that has a specialized macro debugger to help in that very task. The point is that instead of throwing away a potentially good tool, there are people working on making that tool more reliable.


>This post repeats two memes that float around the programming language space. One is: "it's so powerful that it's bad".

Also known as "less is more", which is a well-established principle in programming, with a lot of historical examples to showcase it.

>The other is: "it's ok, but not for large projects". I don't think I've ever seen any evidence attached to either.

Well, where are the successful large projects written in Lisp (from either a number-of-happy-users or a monetary-success perspective)? How many are there compared to other languages?


I don't get how taking away macros is "less is more".

Since macros provide capabilities that no other feature does, it seems to me that taking them away is "less is less".

As I understand it, the intent behind "less is more" is to boil things down to a minimal number of 'things' (for lack of a better word) without sacrificing capabilities, which in practice involves getting rid of redundancies and overlap while coming up with orthogonal 'things'. Reducing the number of 'things' while also sacrificing capabilities seems to be throwing out the baby with the bathwater.

(Although to be honest, I've been unable to find a definition of "less is more" in the context of programming anywhere.)

Could you clarify what you mean by "less is more"?


>"less is more"

Less what? Do be more precise. I've heard that with regards to complexity – not so much with regards to power.


"Where are the successful large projects written in Lisp[...]? How many are there compared to other languages?"

In our field, tools get chosen not by merit but by what's the current fad. It's unfortunate, but this fact makes those two questions unhelpful in moving the discussion forward.


>In our field, tools get chosen not by merit but by what's the current fad. It's unfortunate, but this fact makes those two questions unhelpful in moving the discussion forward.

That's an idealistic and elitist response.

It's very removed from the empirical and scientific spirit, which would suggest that if people use other languages for large projects (say, C/C++), there are reasons for this beyond them being "fashion victims" who are "doing it wrong".

Some of those reasons would be the appropriateness of those languages for the computers of the 1970s to 1990s (at a time when Lisp machines were slow and resource hungry), the availability of tons of library code afterwards, the better control over memory layout needed for large-scale projects like a browser, an OS, Office or Photoshop, etc.

Notice how the response just moves the goalposts a little further without truly answering. Even if, for example, you are right and languages are chosen because they are fads, you have failed to answer why Lisp wasn't picked as a fad itself.


I would think that 'fad' plays a role sometimes and in some areas, but that it is neither sufficient nor necessary to explain language adoption.

But there are mechanisms which may look like 'fad'. For example in the academic community a lot of progress is only incremental and people need something new to publish incremental results.

Industry demands that universities teach the language du jour.

'Industry analysts' give technology guidance and tell companies what to use.

Often it seems 'modern' to reinvent everything. Look at Clojure, a Lisp dialect with essentially zero backwards compatibility. It allows people to reimplement the old stuff, sometimes in slightly different ways, and claim some achievement. You also don't have to deal with the old people who 'know it already', or with 'old' technology. The community self-selects for newcomers and those willing to reimplement old stuff and to invent newish stuff.

It is also about communicating ideas. If one uses a language few speak, one gets less attention, mindshare, etc. So use something on the rising slope of the hype cycle, where the attention of many is easier to get. If one wants to promote a new framework, better to build it on a popular language underneath; otherwise it could be nicely engineered, but few will hear of it, few will try it and few will use it.

Also, some technologies are popular, like the JVM, and this lets one leverage the engineering efforts of others. Popular technologies tend to be ported widely and to have more active maintenance.


The idea that the core language is not what drives adoption is clear. There are a ton of other factors that go into this kind of thing.

Tooling, libraries, schooling, the existing base of people who know the language, CPU architecture, memory constraints, and so on. And of course all the non-technical things like marketing.

So the fact that there is not as much Lisp as C++ code is not an argument that C++ is a better language.


That's true, but the idea that you can somehow separate the core language from the other factors ("tooling, libraries, schooling, existing base of people that know the language, CPU architecture, memory constraints") is a fallacy.

That might be possible for a totally academic or greenfield small-time project, but not at all if you do commercial, pragmatically oriented development, with teams, constraints, deliverables, etc.

So while I agree that "there is not as much Lisp as C++ code" is not an argument that C++ is a better (core) language, I also think this fact shows that C++ is a better language-plus-extras for more projects.


I would say it 'was', not 'is'. The amount of legacy Java code does not show that it is better right now, only that it was better (or was perceived to be better) in the past. The question of perception vs. actual 'goodness' is almost impossible to answer.

The interesting thing about Java is that it was clearly worse than something and only became better because people used it so much. Java was adopted and developed around the same time that Self was around. Now, Self at the time was owned by the same company; Self was just as small to send over the wire, Self was already very (very, very) performant (compared to Java, which was grindingly slow), and it had much better tooling.

The reason for all this seems to be that the people at Sun just did not understand the technology they had lying around in some research project.

Java was pushed and became what it is now; Self was not, and became what it is now, proving that even with everything speaking for you at a point in time, you might not win.

(P.S.: in the end it might have been a good thing that Java was picked over Self, since Self might actually have won in the web space, and we would all be using proprietary applets instead of the web we have today.)



