> (a) Very fast development that is enabled by CL (e.g., everything
from hash tables to string-operators to memory management is
automatically included - there is nothing that is not included).
Name a modern mainstream language that doesn't have these things.
> (b) Excellent programming environments - e.g., parentheses-savvy editor.
You haven't seen XCode, Delphi or MS Visual Studio, where, for example, you can jump to the definition of a symbol with one click, or debug interactively step by step with variable watches, disassembly, a stack trace, etc. - I shouldn't really have to name all the things that are possible in a typical modern IDE. And I don't know of any text editor that is not paren-savvy.
> (c) Excellent compiler, especially with declarations, enables very fast code.
A compiler which doesn't "enable very fast code" has no place under the sun nowadays.
> (d) Excellent system stability with no random crashes at all.
Very exciting, although GC-based languages (i.e. those usually lacking raw pointers) should not crash at all - and if they do crash, that's a shame. The stability and robustness of your compiler and runtime system shouldn't really be counted as a merit. If it doesn't meet stability standards, it shouldn't be released.
> (e) Macros and all that.
Finally getting to the point, and you say "and all that"? Btw, "all that" includes the unification of code and data - something no other language provides, let's say, idiomatically. This is an amazing feature, and in fact Lisp macros are Lisp macros thanks to just that - the unification of code and data and the symbolic nature of the language.
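For anyone outside comp.lang.lisp, a minimal sketch of what that unification looks like in Common Lisp (the macro name `unless*` is made up here to avoid shadowing the standard `unless`):

```lisp
;; Code is data: a program fragment is an ordinary list that can be
;; inspected and constructed with plain list operations.
(defvar *form* '(+ 1 2))
(first *form*)   ; => the symbol +
(eval *form*)    ; => 3

;; A macro is then just a function from code (lists) to code (lists).
(defmacro unless* (test &body body)
  `(if ,test nil (progn ,@body)))

(macroexpand-1 '(unless* done (launch)))
;; => (IF DONE NIL (PROGN (LAUNCH)))
```

Nothing comparable exists in languages whose source is opaque strings to the running program; that is what makes the macro system more than textual substitution.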
Memory footprint: megabytes do matter because of the CPU cache. A 30,000 LOC program should take a few megabytes at most and fit a modern CPU cache entirely. Compared to a 50MB program the performance gain can be enormous.
b) Right. There are several of those.
c) For dynamic languages there are fewer excellent compilers. If you check the usual benchmarks, Lisp compilers are among the best in that category.
d) That's true. But the reality is different. GCs have bugs. More complex GCs have even more bugs. The FFI may introduce possibilities for errors. I have seen Java guys struggle with obscure heisenbugs. In theory all is mature; in practice it took quite some time to get Java mature, and if you read the Sun notes for some of their garbage collectors (the concurrent one, for example), they tell you that they are not stable enough for production. For example, the release notes at some point said: "Garbage First (G1) garbage collector. Improved reliability and performance. Available as 'early access' software. Usage of G1 in production settings without a Java SE for Business support contract is not recommended."
e) The 50 MB program usually will not touch all the code all the time - for example LispWorks can include the function compiler in the delivered application - it is likely not called very often. The 50 MB can also consist of a lot of Lisp data, not only code.
Java may or may not enable fast development, but the reasons actually given for why Lisp development is fast certainly apply equally to Java.
LispWorks comes as one huge image that contains all the necessary features: graphics, networking, memory management, ...
Actually you have to REMOVE things to make it smaller (for example using a so-called tree-shaker during 'delivery').
In Java you need to tell it which classes/etc. to use. In LispWorks you just call the functions or instantiate the classes - every language feature/library that he needed for his application is already included.
In LispWorks you start the image and type to the listener. For his task all the functionality was available then.
Put such a Lisp image that includes all the tools and classes on an embedded device. Then connect over the network and develop/debug the functionality incrementally while it runs.
In Java: oh wait, I forgot class XYZ, let's push it onto the device and see if it loads. Maybe not, it needs some more class files, ... and then Java's capabilities for runtime changes are kind of limited - well, you can hack the class loader.
One area where Lisp has been used a lot is interactive development using so-called images. During development one works with 'images' that contain all the necessary software - that's part of the 'ball of mud' approach (Smalltalk environments like Squeak work similarly and have extensive tools to control that). LispWorks also allows you to save preconfigured images - with LispWorks 6 one can dump running programs (with GUI) and restart them later.
I tend to run my Lisp images for days, weeks, months. As long as possible. Restarting is always an unwelcome interruption of my flow. Working with files is no problem. I have the code in the LispWorks editor in buffers/files and compile incrementally from there. Sometimes I recompile a whole buffer and sometimes only parts. A good Lisp environment will also keep track of changed code and have a command to compile only those changes. Alternatively I compile a buffer and then walk through the compiler warnings, fixing the code until the warnings are gone - never leaving the image.
To me, this seems like a disaster waiting to happen if you're doing something complex. When there are bugs in my code, it usually means bad things are happening to my data, my data structures, and the state of my program in general. When I find the bug, it often seems far easier to fix the bug and start over than it would be to both fix the bug and track down all the consequences and fix those as well, perhaps writing some more code to help with that which might end up with bugs in it... my attention stack was never that large to begin with, and as it gets smaller I try harder and harder to avoid situations where I recursively just have to do this one more thing before I can do the thing I want to do.
I did run some of my images for weeks, and one webapp in particular for months. After the first few days, though, I was always apprehensive about making changes to it, since I couldn't be sure my files were in sync with what was in the running system -- what if I'd forgotten to write a change to a file -- and since my system was running, I couldn't just reload from the file to make sure, since if there was a difference I'd lose the working copy in favor of one I'd replaced. It was all very nerve-wracking. In contrast, the systems I had that were written in Python could be changed and then bounced, trading about 5 seconds for all that stress, and the systems I had that were written in PHP didn't even need that; changing the file on disk was enough.
 I do know that I could have done this with CMUCL, but I assumed that there would eventually be a point at which I realized it was all worth it...
Nor am I knocking lisp. I think there are a lot of great things in various lisps: though I have specific nits to pick with each lisp I've actually used, some of the problems I encountered were due to attempting to solve problems I would have just lived with in other languages. I've said elsewhere that I think one of the reasons lisp languishes is that it's seductively powerful -- why spend time learning someone else's library when you can write the part you need in less time?
I agree with one of the g'parents, though, that lisps' strengths are in the macros and code-is-data areas, and passing over that in favor of things that most every competing language has in this era (built-in datatypes, nice IDEs, REPLs) seems severely underpromising.
Lisp is adequate in many ways, and excellent in others. But a lot of descriptions of why it's excellent focus on what appear to be trivialities. Maybe there's some sort of gestalt at work here, where the sum of these minor differences is greater than its parts, but if that's the case you're doing a really poor job describing it.
LispWorks comes with the full thing, including graphics and IDE by default. You put it on the machine, run it and it comes with everything INCLUDING the incremental compiler.
Zero assembly needed.
Java is usually developed in a batch fashion with lots of files, classes/modules, jars, ... It needs an IDE like Eclipse that keeps track of all the components, runs a build process, assembles the components, loads them into some virtual machine, connects the external debugger, etc. etc.
Lisp applications (here with LispWorks) are often developed with a single image and incremental modification.
This is not a small thing. This is a huge difference in convenience. Something that seems to be important for HIM (and me).
This incremental development capability is one reason I prefer to use Lisp. For me a piece of software is not a bunch of files that sit on the disk, get compiled and linked, and are then started. The mental model I like best is to see an application as a sea of running objects which are communicating (the dead code on disk is only necessary to jump-start and assemble these objects). Once the program is running in some primitive fashion, I prefer to think about modifying the running objects with a bunch of changes (the changes tend to be in files - sometimes code, sometimes data, often code that looks more like executable data). Not everybody uses the same mental models when developing, and I am spoiled by interactive systems like the Lisp Machine (which Jack also knows), where the philosophy is very similar: http://lispm.dyndns.org/genera-concepts/genera.html
The lisp/smalltalk way: have all my tools within arms reach so I don't have to reach for anything while I'm developing. When I'm finished press a button to get everything I didn't end up using in the end put away.
Everyone else: have a very small set of tools so that I must constantly go back to the tool shed and pick up something I now need, often forgetting to put back things I'm not using anymore.
The second case is overstated in the case of at least MS Visual Studio with ReSharper, which will automatically suggest using statements when you use a class that isn't visible, and tell you about any includes that aren't being used. But with Lisp/Smalltalk (provided they have "tree shakers") you don't have to think about this; the computer will just do it automatically.
> In LispWorks you start the image and type to the listener.
I'm very new to Lisp (but not C, C++, Java, Python, and some other similar languages). Can you please tell me what an "image" is?
Typically you can start Lisp and just reload the memory from the heap dump (the image) on disk and do some automatic re-initialisations (for example reopen windows, reconnect to files and databases, restart processes, ...). All the code and data will be restored. You are back, where you saved the image before.
Imagine you would dump the memory of your JVM to disk and restart it later.
The Lisp vendor will provide you with an 'image' that contains the base language, some extensions, graphics, networking, editor, debugger, inspector, gui designer, etc.. You can then write a program, load all the program files, load all the data and configuration files and save an image. When you restart the image later, all your application code and data is back. This can save a lot of time during development.
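The same workflow exists outside commercial Lisps; a rough sketch using SBCL's `save-lisp-and-die` (the file name, package, and `main` function here are hypothetical, and LispWorks uses its own `save-image`/`deliver` functions with different options):

```lisp
;; Load the application and whatever state should survive the dump.
(load "my-app.lisp")              ; hypothetical application file
(defvar *config* '(:port 8080))   ; example state to preserve

;; Dump the whole heap to disk. Running the resulting executable
;; restores all code and data, then calls the toplevel function.
;; (SBCL-specific; other implementations spell this differently.)
(sb-ext:save-lisp-and-die "my-app"
                          :executable t
                          :toplevel (lambda () (my-app:main)))
```

Startup of such an image is essentially a memory map plus re-initialisation hooks, which is why it is so much faster than rebuilding the world from source.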
The simplest way to understand this is to realize that you are basically living "inside" the running exe that you're writing.
Contrast this with e.g. C++ where you write a bunch of files, fight with the compiler until they turn into some black box exe, then run said exe and try to guess what it's doing based on the blueprints of your source file. To actually see what's happening inside the exe you would have to run some kind of debugger, but you can't take action based on what you see. You're looking through a glass.
Not so with Lisp/Smalltalk. If you see something wrong you can just fix it on the spot and continue running.
Ah. Given what I already know, it sounds a lot like opening up a Python REPL, importing all the code you need, and then starting the main loop of the program.
> You're looking through a glass.
Heh, through debugger-colored glasses. :)
I see what you mean though. With C++, if you find your problem, you stop the whole show, go back to your source, take a stab at a fix, recompile, and then run once again in the debugger. With Lisp, you're just there the whole time.
That's an old feature. It used to be called "ctags" and even console-based text editors support it.
> Finally getting to the point and you say "and all that"? Btw, "all that" includes unification of code and data - something no other language provides, let's say, idiomatically. This is an amazing feature, and in fact Lisp macros are Lisp macros thanks to just that - unification of code and data and symbolic nature of the language.
Lisp's problem seems to be that, until you know how to use macros and code/data unification, you can't be easily convinced of their merits. It takes a considerable commitment to learn Lisp before you can reach that level, though.
But what's a definition? Slime does this for defun/defvar etc., but not for things defined with, say, Hunchentoot's define-easy-handler. Can you tell your IDE to take you to that location with a single keystroke? Maybe Slime supports adding definition types, I have no idea. Or maybe it throws up its hands because of the potential naming conflicts.
All this makes you wonder whether Lisp is too powerful for any IDE to keep up.
Have you ever used IntelliSense?
IntelliSense doesn't always work, either, at least when I used Visual Studio a lot.
You could e.g. have a system where you write some type name (or a variable with a declared type), hit a key, and get a list of all the generic methods that apply.
I do use MSVC all the time at work, XCode occasionally at home, and have used Delphi quite a lot before my current work.
But I also bought LispWorks Mac/Win32 versions, and I feel much more productive there. Why? Because I have a listener, and I can quickly recompile the current buffer (Ctrl+Shift+B) or the current enclosing function (Ctrl+Shift+C), or, if I'm at the end of an expression, evaluate it (Alt+X E).
I can keep recompiling the buffer, and all variables declared with defvar would retain their values, I can change object definitions on the fly.
It's just easier to experiment.
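The `defvar` behavior he's relying on is specified by the language: `defvar` assigns only when the variable is still unbound, so recompiling a buffer doesn't clobber live state, while `defparameter` reassigns every time. A small sketch:

```lisp
(defvar *sessions* (make-hash-table))    ; created on first compile
(setf (gethash :alice *sessions*) :logged-in)

;; "Recompiling the buffer" re-evaluates the defvar form, but the
;; variable is already bound, so the live data survives untouched:
(defvar *sessions* (make-hash-table))
(gethash :alice *sessions*)              ; => :LOGGED-IN

(defparameter *debug-level* 2)           ; this one WOULD be reset
```

This is why long-running images stay consistent across hundreds of incremental recompiles.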
Weaknesses - I'm so used to MSVC debugging (I actually used to be a Turbo Debugger junkie in the past) that I miss it - it's simply not easy to debug in LW (or Emacs + SLIME + any other Lisp).
But do I need to debug that much more? No, I don't - but that could be because I'm still at the amateur level.
b) SLIME and SBCL (to pick freebies) crush any of those under their collective boot. Eclipse tries real hard but is let down in a number of ways by Java (instability, the fact that the environment is opaque to the user instead of malleable, etc). LispWorks's GUI is very good, at least as good as Eclipse at the things Eclipse is good at.
c) You would be shocked at how bad things are. GCC, to pick on a weak and somewhat helpless example, tries really hard to optimize C code but is generally content to merely compile C++ code. XCode with its gcc backend does relatively little real optimization of Objective C, certainly nothing compared to what you expect from a modern JIT in Java. Not familiar enough with Delphi or MSVS to pick those apart but I doubt it's that much different.
d) Ditto. Eclipse crashes at least once a day for me during heavy use and suffers bizarre and inexplicable errors even more frequently (fun experiment: create a file Foo.java. Rename it to Bar.java. Try to create Foo.java again. The editor cannot be convinced to read the new Foo.java off disk without a restart). Errors in the native code are not infrequent (I have a bunch of logs here from older Javas where the GC (I think) got confused in the middle of GCing and dumped core), although they seem to be better with Java 6.
e) A comp.lang.lisp audience knows about macros and why they're nice. Other language zealots cannot have their minds made up by a simple reiteration of the benefits of code as data (which I agree are considerable). The other points are somewhat more relevant inasmuch as they challenge generally accepted but really wrong concepts.
And re: memory footprint: be wary of comparing apples to rutabagas. Remember that every C program benefits from libc and its ancillary stuff; on this here Ubuntu system the system libraries required to bring the system up from a dead stop (not any of this /usr/lib detritus) are already 2.5 MB on disk. Also remember that the 50 MB program footprint likely includes the runtime compiler system (so you can hot swap code; very useful), which is something like 42 MB for gcc 4.4 alone (again, disk; memory usage is enormous). I would expect that the amount of code that needs to stay in the CPU cache is equivalent.
I develop quite quickly in C/C++.
'Go To Source', 'Edit Definitions', 'Edit Callers', 'Edit Callees', 'Display Backtrace' are all a keystroke (or an extended command call) away, along with various tools that may or may not be present in others (I've never used MS Visual Studio, which seems to be a leader in this field).
One of the things I miss from it was that breakpoints were at the source-expression level, not the line level, which allowed incredibly fine control when tracking down bugs.
I have used XCode, Eclipse, and Visual Studio. Emacs' tag
functionality is much easier to use than the equivalent functionality
in those editors. ("C-u M-." and "M-star" are especially convenient. And I would have typed an asterisk there, but news' parser is busted.)
The parentheses awareness he is talking about is paredit-mode, which
changes Emacs' usual keybindings from operations on lines to
operations on S-expressions instead. I never got the hang of it, but
Emacs' built-in word/sentence/paragraph movement/editing functionality
is essential. I have not seen these features in other editors.
The other feature that Emacs has over the "visual" editors that you
mention is the ability to interactively change how the editor works.
(This is especially nice if you're a lisp programmer, as there is
nothing to be confused about; it's just Lisp.) I am not just talking
about changing the colors or things like that, I am talking about
writing new modes, changing the built-in behavior, and so on. I do
this so often I don't even consider it unnatural.
(Something I fixed today; when loading a ".chs" file into the inferior
GHCI buffer via C-c C-l, GHCI complains "this is not haskell", which
is true. Then it changes the prompt from "My.Module>" to ">", which
makes the next C-c C-l command lock Emacs. I debugged this by seeing
which command C-c C-l ran by pressing "C-h k C-c C-l". I then pressed
RET to visit the source code for that function, and noticed that it is
programmed to hang until it gets output from GHCI that matches a
regex. I changed the regex to something more liberal, hit C-M-x to
load my changed function, and my problem was solved. Those two
minutes of distraction will save me much frustration over the course
of the rest of my life. When was the last time you fixed a bug in
Eclipse or Visual Studio in two minutes?)
But anyway, Emacs' functionality is not limited to Lisp. I mostly
write Perl and Haskell, and Emacs excels at both tasks. It is also a
good mail reader, web browser, and IRC client. (Why yes, I am
composing this in Emacs.)
> allow interactive step-by-step debugging with variable watch, disassembly, stack trace, etc - I shouldn't really name all the things that are possible in a typical modern IDE
But of course, no REPL. I still don't know what a C++ programmer does
when he wants to figure out what regexp he needs to match something.
Write a driver program around the library, compile it, debug it,
recompile it, and finally run it and play with it? Sounds fun...
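For comparison, here is what that exploration looks like at a Lisp listener - a hypothetical session, assuming the third-party CL-PPCRE regex library is loaded (transcript style, so the prompts and printed results are annotations, not code to paste):

```lisp
CL-USER > (cl-ppcre:scan-to-strings "(\\d+)\\.(\\d+)" "pages 12-34")
NIL                          ; wrong guess - no match

CL-USER > (cl-ppcre:scan-to-strings "(\\d+)-(\\d+)" "pages 12-34")
"12-34"                      ; whole match
#("12" "34")                 ; the capture groups
;; edit the pattern, re-evaluate, repeat - no compile/link/run cycle
```

Each wrong guess costs a few seconds instead of a rebuild of a driver program.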
> A compiler which doesn't "enable very fast code" has no place under
> the sun nowadays.
Apparently the real world is not under the sun.
Sure, but you have to realize that most applications that people build
with big stacks like CL or Java or C# or whatever are not really
CPU-constrained; the context switches kill them. (Wait for user to do
something. Wait for database results. Wait to copy response buffer
to web server.)
Nobody wants code that runs slowly, but the trade-offs to get
something really fast (or to fit in the CPU cache, etc.) are not worth
the effort for most people. "80% solution" and all that. And, the
CPU cache is all about critical sections, anyway; it's usually your
data that doesn't fit in cache and slows everything down.
So anyway, you are underestimating the capabilities of Emacs.
The problem is that, unlike a modern IDE, Emacs does nothing to make its capabilities evident. Perhaps it's good to have a system that requires one to pursue a systematic course of study to fully understand it. Or perhaps it's a symptom of a primitive UI bound for eventual obscurity.
How shall we decide??
At least Emacs starts up with a screen containing links to the documentation and an interactive tutorial.
I started programming in LISP way back in 1971 on a Univac 1108
mainframe and also implemented a 68000-based Lisp system (~50K lines
of real-time assembly) for mobile-robotics use in 1983 - and so know
my way around the language.
All Lisp (or Smalltalk, or...) success stories I've read hinge on someone with an enormous amount of experience with the language. I'd argue that someone with that much experience could get the job done in (almost) any language. I'm surprised that someone with that much experience would put it down to language choice rather than deep knowledge of the problem domain.
I don't believe in a global linear scale of language power ("the blub theory"), but I do believe that some language may be better than others for a specific task given specific constraints.
E.g. if I have equivalent levels of experience in C++ and Python, I'm pretty sure I can write a small webapp quicker in Python. OTOH if the languages are very similar, like Python and Ruby, the level of experience is much more important than the relative strengths and weaknesses of the languages.
Of course, it is seldom that you get that kind of fair comparison between two languages - usually everyone has a favorite language they know better than any other.
Not quite - there are a bunch of languages that are "up there" in terms of expressivity: Lisp, Smalltalk, Haskell, ML, etc. There's no data to say "learning Lisp will give you an enormous productivity boost". There is only data to say "people with 30 years experience in Lisp/Smalltalk/whatever tend to be very good programmers". And, well, duh.
Also, if he had to do it in C, say, could he really not? Even if he needed to invoke Greenspun's Law in the process...
The question is whether that always happens for a complex system. Because if it does, then every sufficiently advanced programmer is a lisp programmer.
So features that are mostly implementation-dependent, like incremental development, static type checking, contracts, CPAN/gem/easy_install type code repositories, reference books and mailing list discussions, etc. are the things that give a language value. Exposing powerful key concepts(like common data structures, garbage collection, reflection, macro systems...) directly through the language is also a win but not something impossible to work around.
To paraphrase an oft-abused quote, "every sufficiently complex program contains a Lisp." But that doesn't mean the conclusion is "start with Lisp, since you'll end up with one anyway." It may be that Ruby (for example) has a really cool library you want to use for a core part of your app, so you start writing the code in Ruby, and things are good and you make progress... and once you come across a problem requiring Lisp-type power, just by knowing how the solution would work in Lisp, you can usually devise a worse-is-better 80% solution that is right for your specific application. The final result may not be beautiful or pure, but it lets you do things incrementally without the up-front misgivings of "it would have saved so much time to start in Ruby..."
tl;dr: Learn languages to learn concepts, but build applications in an "environmentally-friendly" way. Perceived "potential power" is not a good reason to sweat and strain to build apps in your favorite language when you could go over to the current "industry standard" and save yourself months of effort.
On the other hand, nobody has an omniscient view of all languages and environments, hence people tend to stick with what they know....
Another issue is that, if you are talking about small-to-medium sized applications then clearly there is a difference in languages. For example, it is pretty clear that writing a script is easier in perl than in C, or writing a medium sized expert system is easier in LISP than in Pascal.
However, if you consider large scale applications (100k+ LOC), then I don't believe there is any difference between writing in C, C++, Java, or Common LISP, as long as the programmer(s) have deep experience with the used language.
Just notice that the language is not going to solve the large-scale problem by itself. As long as the language has tools for creating abstractions, the code will have about the same complexity no matter what. Whether that complexity is encapsulated in simple concepts (structs and functions) or higher-level concepts (closures and continuations) depends on the taste of the developers and the language used.
The choice of abstraction does matter. If you use weak ones, your productivity takes a serious hit: your program will be bigger, more complex, and have more errors (squared).
C++ abstractions, for instance, are incredibly weak. Take the function abstraction, which isn't even complete: you have no way to write anonymous functions the way you write literal integers. Higher level concepts, as you call them, aren't more complicated than the "simple" ones. Often, they are just less familiar and more consistent.
Anonymous functions should actually be called "literal functions":
(fun x -> 7 * x + 42) -- a literal function
357 -- a literal integer
2 + 3 -- an expression which yields an integer
f . g -- an expression which yields a function
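In Common Lisp the same four lines read as follows (a sketch; `compose` is not a standard CL function, so it is defined inline):

```lisp
(lambda (x) (+ (* 7 x) 42))   ; a literal function
357                           ; a literal integer
(+ 2 3)                       ; an expression which yields an integer

;; compose (the "f . g" above) is not standard CL, so define it:
(defun compose (f g)
  (lambda (x) (funcall f (funcall g x))))

(compose #'sqrt #'abs)        ; an expression which yields a function
(funcall (lambda (x) (+ (* 7 x) 42)) 1)   ; => 49
```

The point stands: a function literal is as first-class as an integer literal, which C++ (pre-lambda) simply could not express.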
There are two kinds of design patterns: architectural patterns (e.g. MVC) and language patterns (e.g. Iterator). In the language I use for work, C#, we have to use a lot of both kinds of patterns. Often the "all code must be in a closure or a method" way the language works gets in the way (I can only imagine it's much worse in Java). In 100k LOC, I bet 25-30% of it (random "from the gut" guess) must be language patterns (e.g. visitor).
When you realize that nearly none (if any at all) of the language patterns are needed in Lisp, you realize that in 100k LOC you only work with the problem. It happens to me pretty often that I run into a situation where there are two possible representations of something, both having problems, and I realize that in Lisp I wouldn't have even noticed the situation at all, because I could have used a more natural solution right from the start (CLOS's generic-function approach makes all the difference in the world here).
Keep in mind I've used C++ and its descendants longer than I've used lisp.
I've worked on heavily used programs in several languages (including Lisp) and frankly in my experience the quality of the programmer is way more important than the framework they start from.
Can you provide a source for the 30% figure?
So, he probably could have written it in C++ (or SNOBOL ;-) ), but he chose to do it in Lisp.
It is amazing how much Python (and other high level languages such as lisp, Ocaml) allow you to focus on the problem you are solving not the irrelevant details of the solution.
I recently (~5 months ago) started using Clojure heavily and now find myself much more productive using it than using any of the languages that I have considerably more experience with (I used C++ almost exclusively for 4 years for hobby (mostly game related) projects and toy virtual machines/interpreters; Python for ~4 years for prototyping, web development and GUI development; Java for 2.5 years for server (non-web) development - total programming time including hobby, uni and professional = ~10 years).
As an example, I'm very much interested in compilers, virtual machines, programming language design and such, so I have been playing around with these in various languages over the years. I wrote my first toy language in VB; wrote a few interpreters, virtual machines and simple parsers in C++ and some parsers, interpreters and assembly code generators in Python; for uni, I implemented a parser and code generator (instruction selection, using maximal munch) in JavaCC - recently I wrote an assembly code generator in Clojure. It took me a weekend and it surpassed the power of anything I'd written in VB, C++, Python or Java. It's flexible and can easily be extended; it's pretty smart and can do some basic optimisation (caching data in registers, function inlining, ...). I tried writing something similar in Python before, but gave up.
My point is that I have much much deeper knowledge of C++, Python and Java, yet I was able to build something MORE complex and extensible in Clojure, in less time, even though I'd only been using it for a few months.
Yes, my previous experience played some part in this, but I attribute most of it to the flexibility and power Lisp provides me through easy to use and powerful abstractions, flexible and convenient syntax and interactive development.
Of the same scale: any interpreter, code generator, compiler, etc. I've written much more powerful systems, of course, but they are huge in comparison and took a lot longer than a weekend to write.
... and we could not have done this without A, B, C ... by Edi Weitz.
Lisp needs more people like him who actually release software instead of blogging about its virtues.
People have asked Jack to explain why he used Lisp for an embedded application. You can read the result in the linked article.
The guy who has posted the story has also written a bunch of Lisp-based internet applications. If you look among the hardcore Common Lisp users, quite a lot are writing software. The ones that are blogging superficial stuff, not so much. Read the stuff on planet.lisp.org - it's usually quite useful.
When you buy a piece of software, no one writes on the packaging "we did this in C++!" or "we did this in Java!" Why would you expect a big "Common Lisp!" sticker on the packaging? The deliverable is machine code, not source code. The customer doesn't care what the source is written in.
Ever read blog posts about those?
This is interesting, considering one of the main reasons Reddit switched from Lisp to Python was because it was crashing often.
(commentless, of course :)
> "Let Over Lambda"
> (which is really quite scary to read - I can't say that I understand
> 100% of it - maybe 60% and I am very happy with that level of
> comprehension) -- you end up with an enormously powerful set of
> programming tools unlike anything else out there.
It was Paul Graham's essay that encouraged me to try Lisp in 2005.
The differences between C# 1.0 and 4.0 are enormous. Microsoft isn't shy about making changes to the language.
This holds for pretty much any language you care to use.
Sometimes environments have bugs. If you need an underlying runtime that simply will not crash, using something tried and true may be at the top of your priority list. That might be overkill for most projects, but I can see a guy who wants something so robust that he knows it's his fault when it breaks.
Reminds me of a bunch of compiler bugs I found back in the day. Strangely the longer I program, the fewer of these compiler bugs I seem to be able to find.
Backstory: I am a fifth-year comp sci student. I have worked part-time as an SDE for most of my college career. The Python crash I referenced earlier was in class, the Java problem at work. I have written an optimizing compiler before (C subset to LLVM to SPARC MIPS), so I hope that you will at least agree that I am not a complete moron in this area.
Java story, more detail: To make a really long story short: I wanted to move a really, really long stored procedure (~2k LOC) into a JDBC query. Don't ask why, it was a very scary legacy issue. So I copy and paste the stored procedure into one long const string in Eclipse, and it compiles just fine. I run our Ant script against it, and it explodes with a stack overflow (IIRC). The solution I found was simply to replace newlines with \n" + (newline). No stack overflow on compilation anymore. From this I could only assume that I had fubared the compiler (which would have been a reasonable way to fubar, seeing as I was trying to create an absolutely massive const string directly).
Now we could quibble over whether this even counts as a crash in the sense we were talking about, but the underlying premise is: you never want to have to work around your tools. As computer scientists we do it a lot, but it is never fun, and it's worse when something goes down in production because the tool crapped out. Theoretically, a VM or OS should never fail and bring the entire system down. A compiler should never outright crash.
Now to debunk my own defense: I just sat down for about an hour and did everything I could to reproduce the bug in JDK 1.5.6 (the original JDK I broke it on). Well, go figure, I can't get it to break in the hypothetical way I wanted to. I might one day do exactly what I did with the stored procedure, but setting that particular environment up again would take quite some time.
In conclusion, you can assume I was an idiot because I can't show you the code. I would in your shoes. :)
P.S. This all assumes you are using tools given to you by the platform itself. Using JNI to dereference a NULL doesn't count. :D
CL-USER 101 > (defun foo (n) (unless (zerop n) (cons n (foo (1- n)))))
CL-USER 102 > (foo 1000)
Stack overflow (stack size 15998).
1 (continue) Extend stack by 50%.
2 Extend stack by 300%.
3 (abort) Return to level 0.
4 Return to top loop level 0.
Type :b for backtrace or :c <option number> to proceed.
Type :bug-form "<subject>" for a bug report template or :? for other options.
CL-USER 103 : 1 >
They had L, their Lisp dialect, and Mars, the macro layer for doing robotics in L.
The paper "L - A Common Lisp for Embedded Systems" is available at http://www.cs.cmu.edu/~chuck/pubpg/luv95.pdf