So it sounds like the JPL's reason (at least the reason they gave - whether you want to believe it was some anti-Lisp conspiracy or just bad office politics is your own business) was that Lisp was too big. I wonder if Scheme would have been a better fit? (It was certainly around in 1988; SICP was written in 1985.)
There's something about Lisp that gives it a reputation as inherently individualist (as opposed to the "workers are expendable cogs" project management model that the linked article associates with Java). Same with Forth. At this point, the Lisp community seems to encourage that idea, but what in the language itself supports this?
I hear arguments that it's a harder language, but let's not kid ourselves. I think programming well in, say, C++ or Haskell is much, much harder. (Should a language be hard to use well?) Or: Lisp code is hard to read. Potentially, sure -- its readability depends more than in most languages on the developers' ability to name things well. (Same with Forth.) You can write unmaintainable garbage in any language, though - just copy and paste things where they're used instead of naming and referencing them, give variables meaningless names, etc. Just add water and presto, big ball of mud. (http://www.laputan.org/mud/) Also, most programmers are probably used to good editors handling paren balancing in any language.
Besides being significantly different from the main languages people are exposed to, what? All it takes for a language to be different from e.g. C/Java is to not be C/Java. Is it because you can create your own idioms / extensions to the core language? I also think the syntax is something of a red herring - sure, you may dislike the syntax, but I can think of popular languages that have disastrous syntaxes. (I guess Dylan could be a control group here, but I don't know anybody who has real experience with it, let alone its reputation among management.)
No answers, but I'd really like to know. Working on my own language, and all that.
Also: I'm most definitely not looking for thinly-veiled bragging about how "not everyone can handle such a superior language" either, because... come on.
By "inherently individualist" do you mean that Lisp programmers tend not to work in teams? Or just that there aren't many of them, so they tend to be thinly scattered and thus isolated?
My current pet theory is that only a minority of people feel comfortable thinking in s-expressions. For the majority, trees with many levels of nesting aren't a natural medium for expressing their thoughts. (I don't know why this would be, but it's worth pointing out that human language has a very low tolerance for nesting (in fact you almost never see it go beyond level 1)).
I know two very good programmers who gave Lisp a good try and just don't feel comfortable in it. So I don't buy the inferiority argument either. Interestingly, both of those guys strongly prefer Smalltalk, which is arguably as close to Lisp-with-all-those-trees-unwound as anyone has got.
Edit: I'll add that if the nesting theory is correct, I feel lucky to be in the minority that finds it comfortable to build programs that way... what I'm able to do in Lisp far outstrips what I was able to do before.
only a minority of people feel comfortable thinking in s-expressions
I think that everyone coming to Lisp for a period of time experiences this discomfort. It's a completely new way to think. I don't believe, however, that only certain people are capable (with practice) of comfortably programming in S-expressions. I think it just comes down to who is willing to experience the discomfort for the length of time it will take until they have their first "aha!" moment.
For me, I had some minor "aha!" moments while programming in Lisp, but not enough to switch. It was only later, when I wanted more powerful compile-time capabilities in C, and when I joined python-ideas and started seeing the plethora of syntax proposals (often followed by lengthy debates which usually led to no changes), that I realized just what makes Lisp a more powerful language.
Ok, here I think you need to look at history rather than the languages themselves. Once Lisp (and Smalltalk, for different reasons) became branded as failures, that reputation was fixed in 90+% of people's minds, and it remains so. This is all the more entrenched because of the bias in favor of the newest things in computing. For most people, that a 40-year old failure might be better than something new and popular just isn't possible.
That view is remarkably well entrenched. On the last team I worked in, before coming to JTV, we were talking about the big Java project we were working on. I said if I had my way it would have been written in Common Lisp. Many on the team thought that was hilarious. One guy's response was "yeah, maybe if we had a supercomputer!"
For what it's worth, that's probably a pretty good sign that they don't really understand what makes languages slow. (Or that they're in "cheap shot" mode.)
Weird; as I was browsing the language shootout, I got the impression that SBCL produces damn fast code, in most cases comparable to the code emitted by GCC or Java 6.
There's something about Lisp that seems inherently individualist. At this point, the Lisp community seems to encourage that idea, but what in the language itself supports this?
I think the people that choose Lisp select it after a study of the various options. Choosing and evaluating programming languages is not a social activity -- you do it by yourself. Contrast this to Java programmers, who use Java because it's what they learned in school. They aren't individualistic, they just do what they are told.
(Java's popularity is helped by the "tools" are "easy to use". SLIME and Emacs are "hard to use", so you have to want to use them to be able to use them. I use quotes because I disagree with the sentiment. Eclipse is a nightmare to use. Emacs is a dream.)
As for Java's popularity, I think it comes down to social factors rather than differences in the syntax of the language. Java developers want less money than Lisp developers. It's also easier to hire Java developers, because you can just require a Sun certification. Easy!
I don't think that's true. Lisp jobs are few and far between; people who want them really want them. There are many jobs like that. Here in the UK, nurses and firemen are badly paid; that's because there's always someone who wants the job enough to do it for barely living wages.
The only way to be a well-paid Lisp programmer is to work somewhere where they wouldn't even dream of using Java.
So you'd say that more Lisp-users tend to be self-taught / come to the language themselves, rather than using $current_job_security_language?
This isn't specifically about Lisp vs. Java, though - I don't see people have this sort of kneejerk reaction about Python, for example. (Though Python seems to have been carefully designed to give a first impression of being clean and easy to use. I'd say it mostly is, but it definitely gives a good first impression.)
Lisp attracts the sort of people who want to bend the language to their will. As such, these same people bend systems, programs, and organizations to their will. Not particularly easy to manage.
I personally like Scheme, because it's beautifully weird. If something is weird, even if beautiful, the type of people who like it are not the best of team players. You don't write poetry in teams. I don't know how much of a team player I am, but from early childhood I got used to doing stuff mostly alone. I guess I could fit well in an extra-small team, where one has more autonomy.
That's strange, because I see a lot of job ads requiring creativity and passion. Unfortunately, big companies understand creativity as "you should be smart enough to understand your boss's vision" and passion as "you should just do your job and not whine about it".
They also seem to define teamwork as "doing what your boss tells you without pointing out ways to do it better" and leadership as "passing on your boss's vision to the poor souls working for you, while not passing back any complaints/problems to your boss".
No, Scheme would not be a better fit, speaking as someone who codes Scheme day and night. Yes, Scheme is smaller, but that's a fact about reality, which makes it totally unrelated to the reasons for rejecting CL.
I read this before, but still love this conversation with his Google manager:
Me: I'd like to talk to you about something...
Him: Let me guess - you want to use Smalltalk.
Me: Er, no...
Him: Lisp?
Me: Right.
Him: No way.
I just find it an amusing encapsulation of how so many of the high-achieving people that Google hires like to program in high-productivity languages like Lisp, Smalltalk, and Ruby (see Yegge), but Google keeps a tight rein on it by keeping the languages they will deploy on their servers down to four or so (Python, C++, Java, and JavaScript being the four I've heard).
I can see where they're coming from, though: If every time a new popular language came out (Ruby or Clojure or Blorf or Java+++ or ...) somebody wanted to also add that as a dependency, it would quickly become a maintenance nightmare.
At the same time, Smalltalk and Lisp were both languages with decades of history, so it was something other than managers trying to help focus programmers who (understandably) want to try out every new shiny thing. What, though? (See my other comment.)
Facebook use 20+ languages in production - that's what Thrift is about. It's certainly doable so long as certain processes are in place (e.g. if you use a language, you are responsible for its bindings to the common message bus).
I do recall Norvig saying that they use C++ for a lot of core code because, even though Lisp would be cheaper to develop in, they run so many servers that any decrease in efficiency would cost millions (can't find the quote, though).
If making lisp run faster would save someone millions of dollars, lisp would run faster. Sure, it's practical for Google to use C++ in this case [1], but I am imagining a fantasy world.
[1] Unless of course bugs that C++ ignores would be caught by Lisp. Perhaps there is a minor type error in Google's code that makes the search results not quite as good. If CL's type checker caught that, they would have gotten super-rich sooner and would own Microsoft right now.
But like I said, this is all fantasy. My real argument is "You are not Google."
Making Lisp run faster would almost certainly involve making it closer to SML / OCaml. Beyond algorithmic changes, speed optimization in Lisp usually involves type declarations, or sometimes inference, as in ML. (See: http://www.lrde.epita.fr/~didier/research/verna.06.imecs.pdf) Give Lisp a really powerful, inferred static type system and you're most of the way there. (Give it a better module system, while you're at it.) And if you want to talk about a type checker catching bugs...
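To sketch the inference route in OCaml (a toy function of my own, not from the paper):

let dot xs ys = List.fold_left2 (fun acc x y -> acc + x * y) 0 xs ys
(* inferred as val dot : int list -> int list -> int; no declarations anywhere,
   yet every value is statically known to be an int, so there's no runtime
   type dispatch in the arithmetic *)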
I wish OCaml had kept the s-expression syntax, but it's a very interesting (and powerful) take on improving Lisp. (I think it also helps that they don't call it Lisp. It flies under the radar as far as Lisp's reputation.)
Making Lisp run faster would almost certainly involve making it closer to SML / OCaml.
Indeed.
I like Haskell as a compromise between OCaml and Lisp. OCaml's syntax is a bit too weird for me, and its type system is needlessly irritating (a different operator for integer and floating point addition?).
You have given me a good idea, though -- compiling a Lisp into Haskell or OCaml. All the syntactic goodness of Lisp, all the strictness of the "host" language.
Using different operators for ints and floats in OCaml is like making people use prefix notation for arithmetic in Lisp - you get used to it, but it can really annoy people new to the language, and some people never get over it (or forgive the language designer).
(Messing with the syntax for basic arithmetic is all but trying to give a bad first impression, even though it may fit the overall language better. Smalltalk seems to have found a good middle path here.)
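For anyone who hasn't seen it, a minimal OCaml sketch of the arithmetic split (toy expressions, obviously):

1 + 2         (* int addition *)
1.0 +. 2.0    (* float addition has its own operator *)
1 + 2.0       (* rejected by the type checker: + wants two ints,
                 and there's no implicit int-to-float conversion *)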
I'm in the early stages of working on a synthesis between them, as well. Among many other things.
I understand the appreciation for Haskell, but it's just not my thing. I really strongly prefer multi-paradigm languages. Instead of forcing me to do everything in pure functional code, I'd rather it just be able to tell if it's pure, and if so, optimize accordingly. Sometimes bending the language's rules a bit is a better idea than the language realizes, and what constitutes "good" code can change over time. No one programming paradigm or technique is the best match for every problem domain. OCaml is far from ideal, but it suits that style of coding pretty well. (Also, I prefer OCaml's syntax to Haskell's, though I might be the only one.)
A language with OCaml's type and module systems, Haskell's typeclasses (which could be built with macros on the existing type system, I think), Lisp's s-exps and clear specification of whether code is executed at read, compile, or run time, and Scheme's first-class continuations could be a great thing. (While we're dreaming, let's give it a standard library like Python's, a conceptual core as clean as Scheme's, Smalltalk's, Forth's, or Lua's, and a runtime as small as Brainfuck's.)
> Using different operators for ints and floats in OCaml is like making people use prefix notation for arithmetic in Lisp - you get used to it
Until the day when you have a large code base and you decide that something you originally decided to make an int really ought to be a float instead. Then you'll get unused to it again.
If you think the numeric type you use everywhere may change down the road, you could split out the operations and e.g. define let add = (+), let sub = (-), etc., and use those in your code, so you only need to change them in one place should your numeric type change. Abstract out the operation. (Then you're back to using prefix notation.) Or, change it once, and let the compiler show you everywhere else that needs to be fixed.
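Something like this, as a rough sketch (the names are just for illustration):

let add = (+)   (* ints for now; swap in (+.) if the representation changes *)
let sub = (-)
let total xs = List.fold_left add 0 xs   (* the 0 literal would have to follow, too *)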
Haskell's type classes are a better overall solution for numeric types, of course. Not having a parallel to Haskell's deriving Show in the OCaml standard library is also annoying, but it's available elsewhere. (See: http://alan.petitepomme.net/cwn/2008.07.08.html#4)
Still, most (though not all) numeric operations fit under either exact (int, bignum, ratio) or inexact (floats) without much ambiguity.
> If you think your numeric type everywhere may change down the road
If programmers were always that foresightful, programming in general would be a whole lot easier.
> Or, change it once, and let the compiler show you everywhere else it needs to be fixed.
I find this downright perverse. If a compiler is smart enough to know what needs to be fixed then it should just go ahead and fix it instead of fobbing the job off onto the programmer.
> If programmers were always that foresightful, programming in general would be a whole lot easier.
Agreed. That's what encapsulation is for, though. When code isn't separated into sections with defined borders, it tends to grow entangled and hard to change.
For a real (albeit simple) example, something I'm working on has several instances of an ordered collection (with a few irrelevant quirks). I presently have the type declared (once) as
type 'a coll = 'a list ref (* pointer to a Lisp-style list *)
for the obligatory quick-but-inefficient scaffolding, and wrote a few one-or-two-line functions for inserting, checking membership, and the like. (They're just aliases to existing functions in the List module.) I expect to swap out the implementation later so I can compare the performance of e.g. a skip list, a red-black tree, a hash table, etc. Given the constraints on the problem, none seems obviously better (though I'm betting on skip lists here, for space reasons). 'a coll is used throughout the program, but changing that line and those few short functions will replace it everywhere (and the compiler will notify me if I miss something). 'a coll is also polymorphic on another type, because the whole module is a functor. (It's for a language-agnostic source code analysis tool.)
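Concretely, the wrappers are roughly this shape (hypothetical names; the real ones don't matter):

let empty () : 'a coll = ref []
let insert x (c : 'a coll) = c := x :: !c        (* quick scaffolding: just cons *)
let member x (c : 'a coll) = List.mem x !c
let to_list (c : 'a coll) = !c

Swap in a skip list or a red-black tree later and only these few definitions (plus the type alias) change; the rest of the program only ever sees 'a coll.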
This structuring is nothing specific to OCaml, though; data encapsulation is the main idea of chapter 2 in SICP. (http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-13.html...)
Likewise, in an OO language I could swap in different collections subclassed from Collection, or just ones with the relevant methods, depending on whether the OO implementation used class-based or structural ("duck typing") polymorphism. That's the same overall result as the above, though by slightly different means.
> If a compiler is smart enough to know what needs to be fixed then it should just go ahead and fix it instead of fobbing the job off onto the programmer.
OCaml's H-M type system doesn't try to guess whether you meant all references to x to be type A or type B, it just tells you that they aren't internally consistent. It would be guessing. (How would the compiler know if you're gradually changing from A to B or B to A from just a snapshot, anyway? That's analogous to the Heisenberg uncertainty principle.)
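A made-up two-liner of what that report looks like:

let x = 1 in
x +. 2.0   (* error: this expression has type int but an expression was
              expected of type float -- it flags the mismatch instead of
              guessing which type you meant *)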
> OCaml's H-M type system doesn't try to guess whether you meant all references to x to be type A or type B, it just tells you that they aren't internally consistent.
No, it doesn't tell you that. It tells you that they are internally inconsistent WITH RESPECT TO OCAML's SEMANTICS. They are not internally inconsistent in any absolute sense, as evidenced by the fact that I can write (+ x y) in Lisp and it will produce a sensible result whether x and y are integers, floats, rationals, complex numbers, or any combination of the above. This is an existence proof that compilers can figure these things out. Hence, any compiler that doesn't figure these things out and places this burden back on the programmer is deficient. OCAML's design requires its compiler to be deficient in this way, which makes OCAML IMHO badly broken. OCAML, in essence, requires the programmer to engage in premature optimization of arithmetic primitives. The result is, unsurprisingly, a lot of unnecessary work when certain kinds of changes need to be made. Maybe you can "get used to" this kind of pain, but to me that seems to be missing the point rather badly.
Yes, "...aren't internally consistent within OCaml's type system.".
What does your Lisp implementation (SBCL?) do with (+ x y) when x is a float and y is a bignum? Is there an implicit conversion at run-time or compile-time? (I don't have SBCL at the moment, it's not available for either of the platforms I develop on. Sigh.) If so, that's a fundamentally different result than OCaml, which is deliberately avoiding implicit conversions under all circumstances. I'm not saying that's an inherently better choice than Lisp's (it can be annoying), but this thread was initially about making Lisp faster, and that's part of how you do it.
Adding optional type inference, whether by H-M or something more flexible when mixed with non-statically-typed code (which seems like the real trick, but I'm still learning how type systems are implemented) to e.g. Scheme would be a great compromise. I believe MzScheme has had some research along these lines (MrSpidey: http://download.plt-scheme.org/doc/103p1/html/mrspidey/node2...), but it's not available for R5RS yet.
I like OCaml's strict type system more for how it helps with testing/debugging than the major optimizations it brings, really, but the speed is nice.
Also, OCaml is not an acronym (but I'm guessing that was from autocomplete in Emacs).
debugger invoked on a SIMPLE-TYPE-ERROR in thread #<THREAD "initial thread" {A7BD411}>:
Too large to be represented as a SINGLE-FLOAT:
129381723987498237492138461293238947239847293848923
This really shouldn't be an error -- the internal representation of the number shouldn't leak out like this.
The good news is that you rarely want to add a double and a bigint -- it usually doesn't make mathematical sense. But the compiler / runtime should try harder to do what you asked.
(In the case of something obviously wrong like (+ 42 "foobar"), I agree that it would be nice for the error to be caught at compile time. But I digress.)
Actually, that works even if you don't change *read-default-float-format*. I think + should coerce single-floats to double-floats when appropriate, though.
CL-USER> (+ 1203981239.123d0 129381723987498237492138461293238947239847293848923)
==> 1.2938172398749823d50
CL-USER> (+ 1203981239.123f0 129381723987498237492138461293238947239847293848923)
debugger invoked on a SIMPLE-TYPE-ERROR ...
(note ...f0 vs ...d0)
Admittedly, this is not a problem I'd expect to encounter in real life, so I'm not losing much sleep over it. CL is weird and I accept that ;)
Of course. Sooner or later you bump up against the fact that real machines are finite. That's not the issue. The issue is this: you ask two compilers to add 1 and 2.3. Compiler A says the answer is 3.3. Compiler B says it's an error. Which compiler would you rather use?
It's ultimately a matter of taste. You can push compiler cleverness too far. For example, 1+"2.3" probably should be an error (and if you don't think so, what should 1+"two point three" return?) But the rules for mixing integers, floats, rationals and complex numbers are pretty well worked out, and IMHO any language that forces the programmer to reinvent them is b0rken.
I agree that it makes mixing floats and doubles a little more work, but I'll trade that for the ability to have it automatically infer and verify whether what I'm doing makes sense when I have e.g. multiple layers of higher-order functions operating on a cyclical directed graph containing vertices indexed by a polymorphic type, and the edges are ...
Interactive testing via bottom-up design helps, but getting automatic feedback from the type system whether the code works as a whole, and which parts (if any) have been broken as I design and change it is tremendously useful. Having to write the code in a way that is decidable for the type system is a tradeoff (and the type system itself could be improved), but it's a tradeoff I'm willing to make on some projects.
> The issue is this: you ask two compilers to add 1 and 2.3. Compiler A says the answer is 3.3. Compiler B says it's an error.
In this case, Compiler B is fine with (float 1) +. 2.3 or 1 + (truncate 2.3), it just won't implicitly convert there because it goes against the core design of the language. This sort of thing is annoying at the very small level, but its utility with complicated data structures more than makes up for it.
> For example, 1+"2.3" probably should be an error (and if you don't think so, what should 1+"two point three" return?) [...]
Good example, by the way. Also, I think we're in agreement most of the way - I find Haskell's typeclasses a better solution to this specific problem (the operations +, -, etc. operate on any type with Number-like properties, and if it's not a combination that makes sense within the type system, ..., it's a compile-time error), though I prefer OCaml to Haskell overall.
One of the errors found with SPIN, a missing critical section around a conditional wait statement, was in fact reintroduced in a different subsystem that was not verified in this first preflight effort. This error caused a real deadlock in the RA during flight in space.
So the bug fixed with the repl was actually one that was uncovered by the SPIN verifier ( http://spinroot.com/spin/whatispin.html ) months prior to the launch, but wasn't fixed because they didn't verify the code they actually decided to use.
In the words of Elton John: It's sad. So sad. It's a sad, sad situation. My best hope at this point is that the dotcom crash will do to Java what AI winter did to Lisp, and we may eventually emerge from "dotcom winter" into a saner world. But I wouldn't bet on it.
Nope. I haven't been doing it long enough to have gotten any good stories out of it yet. I did write this a while back, which was a big hit in its day:
"The situation is particularly ironic because the argument that has been advanced for discarding Lisp in favor of C++ (and now for Java) is that JPL should use "industry best practice." The problem with this argument is twofold: first, we're confusing best practice with standard practice. The two are not the same."
To me, this is the crux. We have this ridiculous idea that "industry" drives technological advance.
Industry software "engineers" trained up in trade schools or worse, without any awareness that they are reimplementing solutions (poorly) to problems that were solved decades ago...and we get...horrible industrial monstrosities like Java (and Microsoft Windows) roaming the earth, while remote diamonds like Lisp and Scheme shine kindly down.
Industry software "engineers" trained up in trade schools or worse, without any awareness that they are reimplementing solutions (poorly) to problems that were solved decades ago...and we get...horrible industrial monstrosities like Java
Since Guy Steele coauthored both the Lambda Papers (late 1970's) and the first three editions of the Java specification, I don't think that you can successfully argue that awareness of anything solved by Lisp or Scheme was a problem in the development of Java.
To use a golf analogy, consider cavity back irons. The average golfer, who struggles to hit it straight, will likely play a lot better with cavity backs. But a really good golfer won't be able to shape shots easily and will feel hamstrung without his blades.
"Come on, dude. You can't argue with best practices. They're the best!"
Which is the problem, I think. Their name just screams, "Quit arguing, they know better." (for increasingly nebulous values of 'they'). Aside from trying to undermine the term as a brand ("Best Practices: Because Embracing Mediocrity Looks Good On Paper"), what can you do?
Java wasn't that crazy. The basic idea of using Java for the web seemed a sane one: the only other alternative was to do something like XWindows. And then what? Are people going to use web browsers like XWindows clients? That would never fly!