Hacker News
A startup's "Why we use Lisp" story (gmane.org)
113 points by zachbeane 2839 days ago | 90 comments



Lisp is a beautiful language, but I think the biggest problem with it is that its proponents fail to explain its merits. I'm sorry, but this post would probably have made a bit more sense 15 years ago - definitely not now.

> (a) Very fast development that is enabled by CL (e.g., everything from hash tables to string-operators to memory management is automatically included - there is nothing that is not included).

Name a modern mainstream language that doesn't have these things.

> (b) Excellent programming environments - e.g., parentheses-savvy editor.

You haven't seen XCode, Delphi, or MS Visual Studio, where, for example, you can jump to the definition of a symbol with "one click", or get interactive step-by-step debugging with variable watches, disassembly, stack traces, etc. - I shouldn't really have to name all the things that are possible in a typical modern IDE. And I don't know of any text editor that is not paren-savvy.

> (c) Excellent compiler, especially with declarations, enables very fast code.

A compiler which doesn't "enable very fast code" has no place under the sun nowadays.

> (d) Excellent system stability with no random crashes at all.

Very exciting, although GC-based languages (i.e., those without raw pointer manipulation) should not crash at all - and if they do crash, that's a shame. The stability and robustness of your compiler and runtime system shouldn't really be counted as a merit. If it doesn't meet stability standards, it shouldn't be released.

> (e) Macros and all that.

Finally getting to the point, and you say "and all that"? Btw, "all that" includes the unification of code and data - something no other language provides, let's say, idiomatically. This is an amazing feature, and in fact Lisp macros are Lisp macros thanks to just that: the unification of code and data and the symbolic nature of the language.
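To make "code as data" concrete for readers who haven't used Lisp: programs are nested lists, so other programs can inspect and rewrite them before evaluation - which is what macros do. A rough, loose illustration in Python (not Lisp, and all names here are invented for the sketch):

```python
def evaluate(expr):
    """Evaluate a tiny prefix-arithmetic 'program' stored as nested lists."""
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    vals = [evaluate(a) for a in args]
    if op == "+":
        return sum(vals)
    if op == "*":
        out = 1
        for v in vals:
            out *= v
        return out
    raise ValueError(f"unknown operator: {op}")

def expand_square(expr):
    """A 'macro': rewrite ["square", x] into ["*", x, x] before evaluation."""
    if isinstance(expr, list):
        if expr[0] == "square":
            inner = expand_square(expr[1])
            return ["*", inner, inner]
        return [expr[0]] + [expand_square(a) for a in expr[1:]]
    return expr

program = ["+", 1, ["square", 3]]        # like (+ 1 (square 3))
print(evaluate(expand_square(program)))  # 10
```

Because the "program" is an ordinary data structure, the macro is just a function over that structure - no separate parser or string manipulation needed. In real Lisp this is idiomatic rather than a trick.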

Memory footprint: megabytes do matter because of the CPU cache. A 30,000 LOC program should take a few megabytes at most and fit a modern CPU cache entirely. Compared to a 50MB program the performance gain can be enormous.


a) Java is such an example. It does not offer fast development. Contrast it with so-called RAD (rapid application development) tools or rapid prototyping environments, ... Java is mostly a batch language with a long edit/compile cycle.

b) Right. There are several of those.

c) For dynamic languages there are fewer excellent compilers. If you check the usual benchmarks, Lisp compilers are among the best in that category.

d) That's true. But the reality is different. GCs have bugs. More complex GCs have even more bugs. The FFI may introduce possibilities for errors. I have seen Java guys struggle with obscure heisenbugs. In theory everything is mature; in practice it took quite some time for Java to mature, and if you read the Sun notes for some of their garbage collectors (the concurrent one, for example), they tell you that they are not stable enough for production. For example, the release notes at some point said: "Garbage First (G1) garbage collector. Improved reliability and performance. Available as 'early access' software. Usage of G1 in production settings without a Java SE for Business support contract is not recommended."

e) The 50 MB program usually will not touch all the code all the time - for example LispWorks can include the function compiler in the delivered application - it is likely not called very often. The 50 MB can also consist of a lot of Lisp data, not only code.


a) The Java I'm familiar with includes hash tables, string-operators, and memory management.

Java may or may not enable fast development, but the reasons actually given for why Lisp development is fast certainly apply equally to Java.


Read again what he writes: 'everything ... is automatically included - there is nothing that is not included'.

LispWorks comes as a huge image that contains all the necessary features: graphics, networking, memory management, ... Actually, you have to REMOVE things to make it smaller (for example using a so-called tree-shaker during 'delivery'). In Java you need to declare which classes/etc. to use. In LispWorks you just call the functions and instantiate the classes - every language feature/library that he needed for his application is already included.

In LispWorks you start the image and type to the listener. For his task all the functionality was available then.


So Lisp's great advantage over Java is that you don't have to put "import ..." at the top of the file? Clearly it's more fundamental than that. Trying to eke out minor victories over other languages on every front just trivializes the areas where Lisp truly dominates.


You are thinking in files, no wonder.

Put such a Lisp image that includes all the tools and classes on an embedded device. Then connect over the network and develop/debug the functionality incrementally while it runs.

In Java: oh wait, I forgot class XYZ, let's push it onto the device and see if it loads. Maybe not, it needs some more class files, ... and then Java's capabilities for runtime changes are kind of limited - well, you can hack the class loader.

One area where Lisp has been used a lot is interactive development using so-called images. During development one works with 'images' that contain all the necessary software - that's part of the 'ball of mud' approach (Smalltalk environments like Squeak work similarly and have extensive tools to control that). LispWorks also allows you to save preconfigured images - with LispWorks 6 one can dump running programs (with GUI) and restart them later.


Back when I was writing a lot of Common Lisp (CMUCL), the image system didn't seem like a win over files, but a misfeature necessitated by the bug of taking 20 seconds to start a program. Images are inherently messy; after you've changed a running system two or three times, it's even odds whether it will actually be the same as a system started from scratch with ostensibly the same variables and functions, in my experience, and there's no good way to be certain, besides restarting. Since you're going to have files of source code anyway, using runtime images just creates the opportunity for your two official copies to get out of sync. This isn't worth the few times you could hotfix something instead of restarting the system.


Does not sound like development in Lisp (or Smalltalk). You were using Lisp as a batch language - there are possibly better languages for that.

I tend to run my Lisp images for days, weeks, months - as long as possible. Restarting is always an unwelcome interruption of my flow. Working with files is no problem: I have the code in the LispWorks editor in buffers/files and compile incrementally from there. Sometimes I recompile a whole buffer and sometimes only parts. A good Lisp environment will also keep track of changed code and have a command to compile only those changes. Alternatively, I compile a buffer and then walk through the compiler warnings, fixing the code until the warnings are gone - never leaving the image.


walk through the compiler warnings fixing the code until the warnings are gone - never leaving the image.

To me, this seems like a disaster waiting to happen if you're doing something complex. When there are bugs in my code, it usually means bad things are happening to my data, my data structures, and the state of my program in general. When I find the bug, it often seems far easier to fix the bug and start over than it would be to both fix the bug and track down all the consequences and fix those as well, perhaps writing some more code to help with that which might end up with bugs in it... my attention stack was never that large to begin with, and as it gets smaller I try harder and harder to avoid situations where I recursively just have to do this one more thing before I can do the thing I want to do.

I did run some of my images for weeks, and one webapp in particular for months. After the first few days, though, I was always apprehensive about making changes to it, since I couldn't be sure my files were in sync with what was in the running system -- what if I'd forgotten to write a change to a file -- and since my system was running, I couldn't just reload from the file to make sure, since if there was a difference I'd lose the working copy in favor of one I'd replaced. It was all very nerve-wracking. In contrast, the systems I had that were written in Python could be changed and then bounced, trading about 5 seconds for all that stress[1], and the systems I had that were written in PHP didn't even need that; changing the file on disk was enough.

[1] I do know that I could have done this with CMUCL, but I assumed that there would eventually be a point at which I realized it was all worth it...


It might be surprising, but people have been writing large and complex software systems with this approach. Several Lisp applications have definitely reached several hundred thousand lines of code.


Oh, I'm quite aware of it! I'm not saying that image-based development is unworkable; I'm saying that developing in a monolithic image seems more difficult to me than systems made up of many smaller pieces (files, in this case).

Nor am I knocking lisp. I think there are a lot of great things in various lisps: though I have specific nits to pick with each lisp I've actually used, some of the problems I encountered were due to attempting to solve problems I would have just lived with in other languages. I've said elsewhere that I think one of the reasons lisp languishes is that it's seductively powerful -- why spend time learning someone else's library when you can write the part you need in less time?

I agree with one of the g'parents, though, that lisps' strengths are in the macros and code-is-data areas, and passing over that in favor of things that most every competing language has in this era (built-in datatypes, nice IDEs, REPLs) seems severely underpromising.


You don't have to convince me that incremental development at the REPL is a good thing. But what you're talking about is package management, and Java is perfectly capable of creating an uberjar that contains all dependencies. Even if LispWorks does it better, we're talking about differences in convenience rather than differences in capabilities.

Lisp is adequate in many ways, and excellent in others. But a lot of descriptions of why it's excellent focus on what appear to be trivialities. Maybe there's some sort of gestalt at work here, where the sum of these minor differences is greater than its parts, but if that's the case you're doing a really poor job describing it.


Where is the überjar that Java programmers use?

LispWorks comes with the full thing, including graphics and IDE by default. You put it on the machine, run it and it comes with everything INCLUDING the incremental compiler.

Zero assembly needed.

Java is usually developed in a batch fashion with lots of files, classes/modules, jars, ... It needs an IDE like Eclipse that keeps track of all the components, runs a build process, assembles the components, loads them into some virtual machine, connects the external debugger, etc., etc.

Lisp applications (here with LispWorks) are often developed with a single image and incremental modification.

This is not a small thing. This is a huge difference in convenience. Something that seems to be important for HIM (and me).

This incremental development capability is one reason I prefer to use Lisp. For me, a piece of software is not a bunch of files that sit on disk, are compiled and linked, and then started. The mental model I like best is to see an application as a sea of running objects which are communicating (the dead code on disk is only necessary to jump-start and assemble these objects). Once the program is running in some primitive fashion, I tend to prefer to modify the running objects through a series of changes (the changes tend to be in files - sometimes code, sometimes data, often code that looks more like executable data). Not everybody uses the same mental models when developing, and I am spoiled by interactive systems like the Lisp Machine (which Jack also knows), where the philosophy is very similar: http://lispm.dyndns.org/genera-concepts/genera.html


I think you're missing the point here.

The Lisp/Smalltalk way: have all my tools within arm's reach so I don't have to reach for anything while I'm developing. When I'm finished, press a button and everything I didn't end up using gets put away.

Everyone else: have a very small set of tools so that I must constantly go back to the tool shed and pick up something I now need, often forgetting to put back things I'm not using anymore.

The second case is overstated, at least for MS Visual Studio with ReSharper, which will automatically suggest using directives when you use a class that isn't visible and tell you about any imports that aren't being used. But with Lisp/Smalltalk (provided they have "tree shakers") you don't have to think about this; the computer will just do it automatically.


> LispWorks comes as a huge image that ...

> In LispWorks you start the image and type to the listener.

I'm very new to Lisp (but not C, C++, Java, Python, and some other similar languages). Can you please tell me what an "image" is?


Often a Lisp program at runtime consists of a 'runtime environment' (memory management, ...) and the data in the heap: functions and all kinds of Lisp objects (characters, lists, hash tables, CLOS objects, ...). An 'image' is a saved dump of this memory.

Typically you can start Lisp, just reload the memory from the heap dump (the image) on disk, and do some automatic re-initialisations (for example reopen windows, reconnect to files and databases, restart processes, ...). All the code and data will be restored. You are back where you saved the image before.

Imagine you would dump the memory of your JVM to disk and restart it later.

The Lisp vendor will provide you with an 'image' that contains the base language, some extensions, graphics, networking, editor, debugger, inspector, gui designer, etc.. You can then write a program, load all the program files, load all the data and configuration files and save an image. When you restart the image later, all your application code and data is back. This can save a lot of time during development.
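A very loose analogy in Python, for readers coming from that world: serialize your program's state, then restore it later and pick up where you left off. (pickle can only save ordinary data, not a whole heap of compiled code, threads, and windows the way a Lisp image does, so this only hints at the idea; the data here is made up.)

```python
import pickle

# Some accumulated in-memory state of a long-running session.
state = {"sessions": {"alice": 3}, "config": {"port": 8080}}

blob = pickle.dumps(state)     # "save the image": dump the state to bytes
restored = pickle.loads(blob)  # "restart the image": everything comes back

print(restored["sessions"]["alice"])  # 3
```

The Lisp image extends this idea to the entire heap - functions, classes, the editor, the compiler - so "saving" and "restarting" means the whole live environment, not just a data snapshot.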


Interesting. Sounds like you run, step into, and then modify this big blob of binary data (from the inside!) until you've shaped it into what you want. Weird. :)


In both Lisp and Smalltalk you (can in Lisp, must in Smalltalk) develop from an "Image" based point of view instead of a file based one.

The simplest way to understand this is to realize that you are basically living "inside" the running exe that you're writing.

Contrast this with e.g. C++ where you write a bunch of files, fight with the compiler until they turn into some black box exe, then run said exe and try to guess what it's doing based on the blueprints of your source file. To actually see what's happening inside the exe you would have to run some kind of debugger, but you can't take action based on what you see. You're looking through a glass.

Not so with Lisp/Smalltalk. If you see something wrong you can just fix it on the spot and continue running.


> The simplest way to understand this is to realize that you are basically living "inside" the running exe that you're writing.

Ah. Given what I already know, it sounds a lot like opening up a Python REPL, importing all the code you need, and then starting the main loop of the program.

> You're looking through a glass.

Heh, through debugger-colored glasses. :)

I see what you mean though. With C++, if you find your problem, you stop the whole show, go back to your source, take a stab at a fix, recompile, and then run once again in the debugger. With Lisp, you're just there the whole time.
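The "you're just there the whole time" workflow can be sketched even in Python: patch a live object from the running session instead of restarting. A minimal, runnable sketch (all names hypothetical):

```python
class Server:
    def handle(self, request):
        return request.upper()   # the "buggy" behavior we want to fix

server = Server()                # the long-running object with live state
print(server.handle("ping"))     # PING  (oops, wrong case)

# Later, "from inside" the session: redefine the method and swap it in.
# The running instance picks up the fix - no restart, state preserved.
def fixed_handle(self, request):
    return request.lower()

Server.handle = fixed_handle     # patch the class in place
print(server.handle("PING"))     # ping
```

In Lisp or Smalltalk this is the normal, tool-supported way of working rather than a monkey-patching trick; the environment tracks definitions and recompiles them into the running program.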


Exactly. And as a consequence of this both Lisp and Smalltalk allow you to restart from exceptions. Lisp has an extremely sophisticated method (most powerful of any language imo) called "restarts". These make it a lot easier to build much more robust software than it is without them. http://www.gigamonkeys.com/book/beyond-exception-handling-co...
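The key idea behind restarts is that low-level code *offers* named recovery strategies while code higher up *chooses* one, without unwinding the stack first. A very loose Python sketch of that idea (real CL restarts are far more general; every name here is invented for illustration):

```python
# The recovery policy chosen by the caller; maps a bad entry to an action.
current_handler = None

def parse_entry(raw):
    try:
        return int(raw)
    except ValueError:
        # Offer two "restarts" and let the installed handler pick one.
        restarts = {
            "skip-entry": lambda: None,
            "use-value": lambda value: value,
        }
        name, args = current_handler(raw)
        return restarts[name](*args)

def parse_log(lines):
    return [v for v in (parse_entry(l) for l in lines) if v is not None]

# High-level policy: substitute 0 for garbage instead of aborting.
current_handler = lambda raw: ("use-value", (0,))
print(parse_log(["1", "oops", "3"]))   # [1, 0, 3]

# Different policy, same low-level code: just skip bad entries.
current_handler = lambda raw: ("skip-entry", ())
print(parse_log(["1", "oops", "3"]))   # [1, 3]
```

In Common Lisp this separation is built into the condition system (restart-case / handler-bind), and the debugger presents the available restarts interactively when no handler has chosen one.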


"You haven't seen XCode, Delphi or MS Visual Studio, where, for example, you can jump to the definition of a symbol with "one click""

That's an old feature. It used to be called "ctags" and even console-based text editors support it.

"Finally getting to the point and you say "and all that"? Btw, "all that" includes unification of code and data - something no other language provides, let's say, idiomatically. This is an amazing feature, and in fact Lisp macros are Lisp macros thanks to just that - unification of code and data and symbolic nature of the language."

Lisp's problem seems to be that, until you know how to use macros and code/data unification, you can't be easily convinced of their merits. It takes a considerable commitment to learn Lisp before you can reach that level, though.


Yeah, Slime does this, and without ctags - I believe most Lisps keep track of the location of definitions in source.

But what's a definition? Slime does this for defun/defvar etc., but not for things defined with, say, Hunchentoot's define-easy-handler. Can you tell your IDE to take you to that location in a single keystroke? Maybe Slime supports adding definition types, I have no idea. Or maybe it throws up its hands because of the potential naming conflicts.

All this makes you wonder whether Lisp is too powerful for any IDE to keep up.


LispWorks does that. You can add your own ways to record source locations of your definition forms.

http://www.lispworks.com/documentation/lw60/LW/html/lw-60.ht...


You can do that, it's not terribly difficult, just picky.


That's an old feature. It used to be called "ctags" and even console-based text editors support it.

Have you ever used IntelliSense?


Yes, and it's a very useful feature I haven't found an equivalent to yet. That's not quite what we were talking about, though - it's far beyond jumping to the definition of a symbol with a click.

IntelliSense doesn't always work, either, at least when I used Visual Studio a lot.


Which version of Visual Studio? Intellisense in VS, at least as of 2008, is pretty much rock solid. I can't think of a time when it has failed on me.


In '03 and '05 Intellisense would intermittently quit, but start working again after a build.


I'm often saddened that because CLOS doesn't use a message-passing paradigm (and because Lisp's syntax is prefix?), doing traditional Intellisense-style method listing and completion is nearly impossible.


It wouldn't be impossible; it just wouldn't work the way it does in a message-passing language. If you ask me, the trade-off wouldn't be remotely worth it.

You could, e.g., have a system where you write some type name (or a variable with a declared type), hit a key, and get a list of all the generic functions that apply.


It would be a small matter of programming to add a function to Slime that, upon typing a symbol, presents a list of every function that's reasonably likely to apply to it. I think the reason I haven't seen it is that it represents a very noun-oriented way of thinking about programming. Lispers are less likely to program that way than, say, Java programmers, for whom the language enforces the style.


ctags are a pain to set up and not nearly comparable to what exists today.


Jumping to definitions is nothing new. My Lisp Machine can edit/modify all the source code of the entire operating system while it is running. It knows the location of every single definition.


Screencasts?


LW (I have LW6) is a really great system for coding and experimenting, and the CAPI user interface is platform-independent (almost - there are some differences, but you can handle them).

I do use MSVC all the time at work, XCode occasionally at home, and have used Delphi quite a lot before my current work.

But I also bought Lispworks Mac/Win32 versions, and I feel much more productive there. Why? Because I have listener, I can quickly recompile the current buffer (Ctrl+Shift+B), or the current enclosing function (Ctrl+Shift+C), or if I'm at the end of expression evaluate it (Alt+X E).

I can keep recompiling the buffer, and all variables declared with defvar would retain their values, I can change object definitions on the fly.

It's just easier to experiment.

Weaknesses: I'm so used to MSVC debugging (I actually used to be a Turbo Debugger junkie) that I miss it - it's simply not easy to debug in LW (or Emacs + Slime + any other Lisp).

But do I need to debug that much more? No, I don't - but that could be because I'm still at the amateur level.


a) C and C++. Oh, but they're not modern, you say; balls, I say. Both standards were revised very recently and C++ is changing quite a bit.

b) SLIME and SBCL (to pick freebies) crush any of those under their collective boot. Eclipse tries real hard but is let down in a number of ways by Java (instability, the fact that the environment is opaque to the user instead of malleable, etc). LispWorks's GUI is very good, at least as good as Eclipse at the things Eclipse is good at.

c) You would be shocked at how bad things are. GCC, to pick on a weak and somewhat helpless example, tries really hard to optimize C code but is generally content to merely compile C++ code. XCode with its gcc backend does relatively little real optimization of Objective C, certainly nothing compared to what you expect from a modern JIT in Java. Not familiar enough with Delphi or MSVS to pick those apart but I doubt it's that much different.

d) Ditto. Eclipse crashes at least once a day for me during heavy use and suffers bizarre and inexplicable errors even more frequently (fun experiment: create a file Foo.java. Rename it to Bar.java. Try to create Foo.java again. The editor cannot be convinced to read the new Foo.java off disk without a restart). Errors in the native code are not infrequent (I have a bunch of logs here from older Javas where the GC (I think) got confused in the middle of GCing and dumped core), although they seem to be better with Java 6.

e) A comp.lang.lisp audience knows about macros and why they're nice. Other language zealots cannot have their minds made up by a simple reiteration of the benefits of code as data (which I agree are considerable). The other points are somewhat more relevant inasmuch as they challenge generally accepted but really wrong concepts.

And re: memory footprint: be wary of comparing apples to rutabagas. Remember that every C program benefits from libc and its ancillary stuff; on this here Ubuntu system the system libraries required to bring the system up from a dead stop (not any of this /usr/lib detritus) are already 2.5 MB on disk. Also remember that the 50 MB program footprint likely includes the runtime compiler system (so you can hot-swap code; very useful), which is something like 42 MB for gcc 4.4 alone (again, on disk; memory usage is enormous). I would expect that the amount of code that needs to stay in the CPU cache is equivalent.


a) C and C++. Oh, but they're not modern, you say; balls, I say. Both standards were revised very recently and C++ is changing quite a bit.

I develop quite quickly in C/C++.


To be fair, this is on a LispWorks mailing list discussing LispWorks in particular, which has its own IDE that provides all of these features.

'Go To Source', 'Edit Definitions', 'Edit Callers', 'Edit Callees', and 'Display Backtrace' are all a keystroke (or an extended command call) away, along with various tools that may or may not be present in others (I've never used MS Visual Studio, which seems to be a leader in this field).

One of the things I miss from it is that breakpoints are at the source-expression level, not the line level, which allowed incredibly fine control when tracking down bugs.


You haven't seen XCode, Delphi or MS Visual Studio, where, for example, you can jump to the definition of a symbol with "one click"

I have used XCode, Eclipse, and Visual Studio. Emacs' tag functionality is much easier to use than the equivalent functionality in those editors. ("C-u M-." and "M-star" are especially convenient. And I would have typed an asterisk there, but news' parser is busted.)

The parentheses awareness he is talking about is paredit-mode, which changes Emacs' usual keybindings from operations on lines to operations on S-expressions instead. I never got the hang of it, but Emacs' built-in word/sentence/paragraph movement/editing functionality is essential. I have not seen these features in other editors, including Vim.

The other feature that Emacs has over the "visual" editors that you mention is the ability to interactively change how the editor works. (This is especially nice if you're a lisp programmer, as there is nothing to be confused about; it's just Lisp.) I am not just talking about changing the colors or things like that, I am talking about writing new modes, changing the built-in behavior, and so on. I do this so often I don't even consider it unnatural.

(Something I fixed today; when loading a ".chs" file into the inferior GHCI buffer via C-c C-l, GHCI complains "this is not haskell", which is true. Then it changes the prompt from "My.Module>" to ">", which makes the next C-c C-l command lock Emacs. I debugged this by seeing which command C-c C-l ran by pressing "C-h k C-c C-l". I then pressed ENT to visit the source code for that function, and noticed that it is programmed to hang until it gets output from GHCI that matches a regex. I changed the regex to something more liberal, hit C-M-x to load my changed function, and my problem was solved. Those two minutes of distraction will save me much frustration over the course of the rest of my life. When was the last time you fixed a bug in Eclipse or Visual Studio in two minutes?)

But anyway, Emacs' functionality is not limited to Lisp. I mostly write Perl and Haskell, and Emacs excels at both tasks. It is also a good mail reader, web browser, and IRC client. (Why yes, I am composing this in Emacs.)

allow interactive step-by-step debugging with variable watch, disassembly, stack trace, etc - I shouldn't really name all the things that are possible in a typical modern IDE

But of course, no REPL. I still don't know what a C++ programmer does when he wants to figure out what regexp he needs to match something. Write a driver program around the library, compile it, debug it, recompile it, and finally run it and play with it? Sounds fun...
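For contrast, the REPL workflow in question: iterate on a regex against live sample data until it matches, with no compile step. (The log line and pattern below are made up for the example.)

```python
import re

sample = "ERROR 2024-01-02 disk full on /dev/sda1"

# First guess at the pattern; refine interactively until it matches.
m = re.search(r"ERROR (\S+) (.*) on (\S+)", sample)
print(m.groups())   # ('2024-01-02', 'disk full', '/dev/sda1')
```

Each refinement is one line typed at the prompt, with the result visible immediately - the driver-program/compile/debug loop disappears entirely.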

A compiler which doesn't "enable very fast code" has no place under the sun nowadays.

Apparently the real world is not under the sun.

Memory footprint: megabytes do matter because of the CPU cache. A 30,000 LOC program should take a few megabytes at most and fit a modern CPU cache entirely. Compared to a 50MB program the performance gain can be enormous.

Sure, but you have to realize that most applications that people build with big stacks like CL or Java or C# or whatever are not really CPU-constrained; the context switches kill them. (Wait for user to do something. Wait for database results. Wait to copy response buffer to web server.)

Nobody wants code that runs slowly, but the trade-offs to get something really fast (or to fit in the CPU cache, etc.) are not worth the effort for most people. "80% solution" and all that. And, the CPU cache is all about critical sections, anyway; it's usually your data that doesn't fit in cache and slows everything down.

So anyway, you are underestimating the capabilities of Emacs.


I think that a lot of people underestimate the capabilities of Emacs.

The problem is that, unlike a modern IDE, Emacs does nothing to make its capabilities evident. Perhaps it's good to have a system that requires a systematic course of study to fully understand. Or perhaps it's a symptom of a primitive UI bound for eventual obscurity.

How shall we decide??


Visual Studio and XCode don't either. You start them up and you get a gray screen and a few menu items like "New..." and "Build...".

At least Emacs starts up with a screen containing links to the documentation and an interactive tutorial.


[deleted]


The way I look at it is... if you refuse to learn something, you are only hurting yourself.


This is the key point:

I started programming in LISP way back in 1971 on a Univac 1108 mainframe and also implemented a 68000-based Lisp system (~50K lines of real-time assembly) for mobile-robotics use in 1983 - and so know my way around the language.

All Lisp (or Smalltalk, or...) success stories I've read hinge on someone with an enormous amount of experience with the language. I'd argue that someone with that much experience could get the job done in (almost) any language. I'm surprised that someone with that much experience would put it down to language choice rather than deep knowledge of the problem domain.


That is close to saying that all languages are equivalent given the same amount of experience. I don't buy that. For a specific task some languages may be better suited / more productive than other languages, even given equivalent levels of experience.

I don't believe in a global linear scale of language power ("the blub theory"), but I do believe that some language may be better than others for a specific task given specific constraints.

E.g., if I have equivalent levels of experience in C++ and Python, I'm pretty sure I can write a small webapp quicker in Python. OTOH, if the languages are very similar, like Python and Ruby, the level of experience is much more important than the relative strengths and weaknesses of the languages.

Of course, it is seldom that you get that kind of fair comparison between two languages - usually everyone has a favorite language they know better than any other.


That is close to saying that all languages are equivalent given the same amount of experience.

Not quite - there are a bunch of languages that are "up there" in terms of expressivity: Lisp, Smalltalk, Haskell, ML, etc. There's no data to say "learning Lisp will give you an enormous productivity boost". There is only data to say "people with 30 years experience in Lisp/Smalltalk/whatever tend to be very good programmers". And, well, duh.

Also, if he had to do it in C, say, could he really not? Even if he needed to invoke Greenspun's Law in the process...


Even if he needed to invoke Greenspun's Law in the process...

The question is whether that always happens for a complex system. Because if it does, then every sufficiently advanced programmer is a lisp programmer.


I am starting to arrive at my own conclusion about language power, which is that if you already know the "$1,000,000 concepts" embedded in the language, the only things the language itself offers above that are dev environment and ecosystem improvements: better debugging features and tools, more libraries, more solutions to corner-case problems, platform deployment options, etc.

So features that are mostly implementation-dependent - incremental development, static type checking, contracts, CPAN/gem/easy_install-type code repositories, reference books, mailing list discussions, etc. - are the things that give a language value. Exposing powerful key concepts (like common data structures, garbage collection, reflection, macro systems...) directly through the language is also a win, but not something impossible to work around.

To paraphrase an oft-abused quote, "every sufficiently complex program contains a Lisp." But that doesn't mean the conclusion is "start with Lisp, since you'll end up with one anyway." It may be that Ruby (for example) has a really cool library you want to use for a core part of your app, so you start writing the code in Ruby, and things are good and you make progress... and once you come across a problem requiring Lisp-type power, just by knowing how the solution would work in Lisp, you can usually devise a worse-is-better 80% solution that is right for your specific application. The final result may not be beautiful or pure, but it lets you do things incrementally without the up-front misgivings of "it would have saved so much time to start in Ruby..."

tl;dr: Learn languages to learn concepts, but build applications in an "environmentally-friendly" way. Perceived "potential power" is not a good reason to sweat and strain to build apps in your favorite language when you could go over to the current "industry standard" and save yourself months of effort.

On the other hand, nobody has an omniscient view of all languages and environments, hence people tend to stick with what they know....


Somewhere I read that differences between languages account for at most a 30% improvement in development speed. That is an epsilon compared to the differences between programmers.

Another issue is that if you are talking about small-to-medium-sized applications, then clearly there is a difference between languages. For example, it is pretty clear that writing a script is easier in perl than in C, or writing a medium-sized expert system is easier in LISP than in Pascal.

However, if you consider large-scale applications (100k+ LOC), then I don't believe there is any difference between writing in C, C++, Java, or Common LISP, as long as the programmer(s) have deep experience with the language used.

Just note that the language is not going to solve the large-scale problem by itself. As long as the language has tools for creating abstractions, the code will have about the same complexity no matter what. Whether that complexity is encapsulated in simple concepts (structs and functions) or higher-level ones (closures and continuations) depends on the taste of the developers and the language used.


30% is an understatement. Think of the productivity gain when you go from assembly to C. Now, consider the fact (I do mean fact) that the jump from assembly to C and the jump from C to LISP are comparable.

The choice of abstractions does matter. If you use weak ones, your productivity takes a serious hit: your program will be bigger, more complex, and have more errors (squared).

C++ abstractions, for instance, are incredibly weak. Take the function abstraction, which isn't even complete: you have no way to write anonymous functions the way you write literal integers[1]. Higher level concepts, as you call them, aren't more complicated than the "simple" ones. Often, they are just less familiar and more consistent.

[1]: Anonymous functions should actually be called "literal functions":

    (fun x -> 7 * x + 42) -- a literal function
    357                   -- a literal integer
    2 + 3                 -- an expression which yields an integer
    f . g                 -- an expression which yields a function
Nothing "higher-order" about that. This is just acknowledging that functions are mathematical objects like any other.


Funny, I went to a talk today on statistical methods for opinion analysis, and in the annotated corpus presented, the only opinion word that was used more often in a subjective frame than an objective frame is the word "fact".


I just don't find this to be the case.

There are two kinds of design patterns: architectural patterns (e.g. MVC) and language patterns (e.g. Iterator). In the language I use for work, C#, we have to use a lot of both kinds of patterns. Often the "all code must be in a class or a method" way the language works gets in the way (I can only imagine it's much worse in Java). In 100k LOC, I bet 25-30% of it (random "from the gut" guess) must be language patterns (e.g. Visitor).

When you realize that nearly none (if any at all) of the language patterns are needed in Lisp, you realize that in 100k LOC you only work with the problem. It seems to happen to me pretty often that I run into a situation where there are two possible representations of something, both having problems, and I realize that in Lisp I wouldn't have even noticed the situation at all because I could have used a more natural solution right from the start (CLOS' generic function approach makes all the difference in the world here).
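To illustrate the point, here's a rough sketch in Python rather than CLOS (and `functools.singledispatch` only captures single-argument dispatch, not CLOS's full multiple dispatch): with generic-function-style dispatch, the Visitor boilerplate simply disappears. The node classes are hypothetical, for illustration only.

```python
from functools import singledispatch

# Hypothetical expression-tree node classes, for illustration only.
class Num:
    def __init__(self, value):
        self.value = value

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right

# No Visitor interface, no accept() methods threaded through every
# node class -- just register one method per node type.
@singledispatch
def evaluate(node):
    raise TypeError(f"no evaluate method for {type(node).__name__}")

@evaluate.register
def _(node: Num):
    return node.value

@evaluate.register
def _(node: Add):
    return evaluate(node.left) + evaluate(node.right)
```

Adding a new node type is one class plus one `register` call, with no edits to the existing classes - which is the shape of the CLOS generic-function style.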

Keep in mind I've used C++ and its descendants longer than I've used lisp.


I've been feeling quite despondent about the general language/framework fanboyism on Hacker News. This comment is a breath of fresh air. Thank you.

I've worked on heavily used programs in several languages (including Lisp) and frankly in my experience the quality of the programmer is way more important than the framework they start from.

Can you provide a source for the 30% figure?


Do you have personal experience writing a 100k+ LOC application in two or more of the languages C, Java and Lisp?


I would say that he has an enormous amount of experience with several languages. He lists on his webpage: "Programming languages used over the past 39 years include C/C++, Common Lisp/CLOS/MOP (and other Lisp variants), ALGOL, FORTRAN, PL/1, SNOBOL, numerous assembly languages, and others."

So, he probably could have written it in C++ (or SNOBOL ;-) ), but he chose to do it in Lisp.


No, he could not have (short of bootstrapping a Lisp system from C++). I, for one, have much more C++ experience than Ocaml experience. I am far more efficient with Ocaml: typically, my Ocaml programs are 3 to 5 times shorter, take half the time to write, and almost no time to debug.


I have 10 years of C++ experience, and 1 year of part-time Python experience. I needed to create a simple graph algorithm to find maximum Hamiltonian paths. It took me 1hr to get it running in Python and 6 hours to translate it to C++.

It is amazing how much Python (and other high level languages such as lisp, Ocaml) allow you to focus on the problem you are solving not the irrelevant details of the solution.


Here's why I like Lisp:

I recently (~5 months ago) started using Clojure heavily and now find myself much more productive using it than using any of the languages that I have considerably more experience with (I used C++ almost exclusively for 4 years for hobby (mostly game related) projects and toy virtual machines/interpreters; Python for ~4 years for prototyping, web development and GUI development; Java for 2.5 years for server (non-web) development - total programming time including hobby, uni and professional = ~10 years).

As an example, I'm very much interested in compilers, virtual machines, programming language design and such, so have been playing around with these in various languages over the years. I wrote my first toy language in VB; wrote a few interpreters, virtual machines and simple parsers in C++ and some parsers, interpreters and assembly code generators in Python; for uni, I implemented a parser and code generator (instruction selection, using maximal munch) in JavaCC - recently I wrote an assembly code generator in Clojure. It took me a weekend and it surpassed the power of anything[1] I'd written in VB, C++, Python or Java. It's flexible and can easily be extended; it's pretty smart and can do some basic optimisation (caching data in registers, function inlining...). I tried writing something similar in Python before, but gave up.

My point is that I have much much deeper knowledge of C++, Python and Java, yet I was able to build something MORE complex and extensible in Clojure, in less time, even though I'd only been using it for a few months.

Yes, my previous experience played some part in this, but I attribute most of it to the flexibility and power Lisp provides me through easy to use and powerful abstractions, flexible and convenient syntax and interactive development.

[1] Of the same scale - any interpreter, code generator, compiler etc. I've written much more powerful systems, of course, but they are huge in comparison and took a lot lot longer than a weekend to write.


All things being equal, Lisp was in use much more back in the 70s and 80s than it was in the 90s or 00s. It makes sense for the majority of people who would think to use Lisp to be a bit grayer.


Fortunately, since 2000, a new generation of young hackers has shown increased passion for Lisp.


Enthusiasm, yes, but where is the output? Usually, when I read about a Lisp success story, there's a paragraph:

... and we could not have done this without A, B, C ... by Edi Weitz.

Lisp needs more people like him who actually release software instead of blogging about its virtues.


But have you seen the article that we are talking about? It is exactly such an output: the software is written in Lisp and the result is sold commercially.

People have asked Jack to explain why he used Lisp for an embedded application. You can read the result in the linked article.

The guy who posted the story has also written a bunch of Lisp-based internet applications. If you look among the hardcore Common Lisp users, quite a lot are writing software. The ones that are blogging superficial stuff, not so much. Read the stuff on planet.lisp.org - that's usually quite useful.


Why do you assume people don't release software in Common Lisp?

When you buy a piece of software, no one writes on the packaging "we did this in C++!" or "We did this in Java!" Why would you expect a big "Common Lisp!" sticker on the packaging? The deliverable is machine code, not source code. The customer doesn't care what the source is written in.


Because people don't write C++ or Java or Python "success stories". When someone ships an app in Lisp, they blog about it and it gets posted here. If Lisp were widely used, it would not be newsworthy.


I don't think you hear much about the 'real' Lisp apps. I have for example never seen anything about Amazon's use of Lisp. But they use it. Did you ever read something about the applications that schedule airport operations? Or did you ever hear about SISCOG's software ( http://www.siscog.com/ ) who develop most of their software in Lisp? Their products are used to schedule the crews of public transport systems (trains) in major cities and states.

Ever read blog posts about those?


>(d) Excellent system stability with no random crashes at all.

This is interesting, considering one of the main reasons Reddit switched from Lisp to Python was because it was crashing often.


That was another Lisp implementation.


I don't understand why a Lisp hacker wouldn't match parens properly when ending a parenthetical statement with a smiley:

  (commentless, of course :)
Hasn't he seen xkcd: http://xkcd.com/541/ ?


If you do it the double-chinned XKCD way, it messes up your auto-balancing text editor.


If you read XKCD and use an auto-balancing editor for plaintext then you should have no problems modding it to recognize emoticons.


Of course! I take it all back.


…which you obviously use to write your e-mails.


I use my text editor for typing everything. Scratch buffers are a wonderful thing.


Apparently you don't know anyone who spends 90% of their working time in emacs.


I do. :-(whoops, I don't : he uses the other editor).


Seems to me this discussion is missing one of the main points of the original post: a massive plug for an unfairly under-appreciated book:

  > "Let Over Lambda" 
  > (which is really quite scary to read - I can't say that I understand 
  > 100% of it - maybe 60% and I am very happy with that level of 
  > comprehension) -- you end up with an enormously powerful set of 
  > programming tools unlike anything else out there.
I really like this book too and recommend it.

http://letoverlambda.com/index.cl/toc


A friend of mine keeps asking me to dump Lisp and "Get modern" with C#, and I try to explain why I prefer Lisp, but he won't accept it.

It was Paul Graham's essay that encouraged me to try Lisp in 2005.


I'm just dreading the day that Microsoft decides that C# isn't modern enough and they want to sell a whole bunch of seats of Visual Studio (insert new name here)


They release new versions of Visual Studio every few years, and C# is updated almost as regularly. C# / .net 4.0 will be released this year (as will Visual Studio 2010).

The differences between C# 1.0 and 4.0 are enormous. Microsoft isn't shy about making changes to the language.


They have an ML dialect called F# as well.


"Excellent system stability with no random crashes at all"

This holds for pretty much any language you care to use.


People keep saying that, and I can understand where they are coming from, but this isn't the case. Just yesterday, I managed to segfault python 2.6.something. I have no idea why (I really should have figured it out) but my hypothesis is that I freaked out the parser. It was vanilla python code; it should never segfault. I have had the same experience with the JVM: I once segfaulted the parser by incorporating a static string that was hundreds of lines long. Curiously, Eclipse "compiled" it just fine.

Sometimes environments have bugs. If you need an underlying runtime that simply will not crash, using something tried and true may be at the top of your priority list. That might be overkill for most projects, but I can see a guy who wants something so robust he knows it's his fault when it breaks.


So based on your experience (as a comp sci student, I'm assuming, based on your other comment), Python and the JVM are both 'crashy', but you don't happen to remember exactly why?

Reminds me of a bunch of compiler bugs I found back in the day. Strangely the longer I program, the fewer of these compiler bugs I seem to be able to find.


I am going to defend myself. Then, I am going to blow my argument out of the water.

Backstory: I am a fifth year comp sci student. I have worked part-time as an SDE for most of my college career. The python crash I referenced earlier was in class, the java problem at work. I have written an optimizing compiler before (C subset to LLVM to SPARC MIPS), so I hope that you will at least agree that I am not a complete moron in this area.

Java story, more detail: To make a really long story short: I wanted to move a really really long stored procedure (~2k LOC) into a JDBC query. Don't ask why, it was a very scary legacy issue. So I copy and paste the stored procedure into one long const string in eclipse, and it compiles just fine. I run our ant script against it, and it explodes with a stack overflow (IIRC). The solution that I found was simply to replace newlines with \n" + (newline). No stack overflow on compilation anymore. From this I could only assume that I had fubared the compiler (which would have been a reasonable way to fubar, being as I was trying to create an absolutely massive const string directly).

Now we could quibble over whether this even counts as a crash in the sense we were talking about, but the underlying premise is: you never want to have to work around your tools. As computer scientists we do it a lot, but it is never fun, and it's worse when something goes down in production because the tool crapped out. Theoretically, a VM or OS should never fail and bring the entire system down. A compiler should never outright crash.

Now to debunk my own defense: I just sat down for about an hour and did everything I could to reproduce the bug in JDK 1.5.6 (the original JDK I broke it on). Well, go figure, I can't get it to break in the hypothetical way I wanted to. I might one day do exactly what I did with the stored procedure, but setting that particular environment up again would take quite some time.

In conclusion, you can assume I was an idiot because I can't show you the code. I would in your shoes. :)

P.S. This all assumes you are using tools given to you by the platform itself. Using JNI to dereference a NULL doesn't count. :D


That's not a 'random crash.' The compiler ran out of memory trying to compile your file. Stack space is not infinite, a typical Java VM comes with some default which can also be reconfigured. After running out of memory, the compiler told you what the error condition was and exited.


Why not extend the stack at runtime?

LispWorks:

    CL-USER 101 > (defun foo (n) (unless (zerop n) (cons n (foo (1- n)))))
    FOO

    CL-USER 102 > (foo 1000)

    Stack overflow (stack size 15998).
      1 (continue) Extend stack by 50%.
      2 Extend stack by 300%.
      3 (abort) Return to level 0.
      4 Return to top loop level 0.

    Type :b for backtrace or :c <option number> to proceed.
    Type :bug-form "<subject>" for a bug report template or :? for other options.

     CL-USER 103 : 1 > 
Now use restart 1 or 2.


Could you post the code with which you were able to segfault Python 2.6? I succeeded in that only by using ctypes and messing up parameter passing.
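E.g., something along these lines (a minimal sketch, not anyone's original code; it assumes CPython, and runs the crash in a child interpreter so the segfault can't take the parent process down):

```python
import subprocess
import sys

# ctypes happily reads from address 0 -- a null-pointer dereference
# that the interpreter cannot catch as a Python exception.
crash = "import ctypes; ctypes.string_at(0)"

result = subprocess.run([sys.executable, "-c", crash],
                        stderr=subprocess.DEVNULL)
# On POSIX the child is typically killed by SIGSEGV, which shows up
# as a negative return code; either way it does not exit cleanly.
```

Mis-declaring `argtypes`/`restype` on a real foreign function gets you to the same place, just less deterministically.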


I wish I could. I was in the middle of a lab when I did it, and as I fixed it I thought "I should really save this and figure out why it is segfaulting." Alas, the lazy part of me prevailed and I just fixed it so I could demo the assignment. Next time I run into one of those issues I am going to write about it as an example for the argument I just made.


Apparently iRobot also uses Lisp in an embedded context.


iRobot used to do embedded Lisp (back when they called themselves IS Robotics), but I haven't heard anything indicating that they still are. See http://lemonodor.com/archives/2004/08/l_mars_venus.html

They had L, their Lisp dialect, and Mars, the macro layer for doing robotics in L.

The paper "L - A Common Lisp for Embedded Systems" is available at http://www.cs.cmu.edu/~chuck/pubpg/luv95.pdf


The last I heard, Rod Brooks said all the PackBot software was written in Lisp.



