This really makes me excited. There is no reason why we cannot implement a thorough type system like Qi or Shen on top of Common Lisp for massive teams; the REPL experience is still unparalleled. I for one would love to see more libraries popping up, so that more people can make a business case for it. Machine learning in Python is nice, but I would definitely enjoy Common Lisp for it too!
Many CL implementations have a type inferencer (usually so that the compiler can optimize code without the developer having to specify types everywhere). What sets SBCL (and CMUCL and Scieneer CL) apart is that it does limited forms of compile-time type checking.
No. A type inferencer does not check types. It inferences types.
For example if you know that + is defined for numbers, then you can inference that a and b must be numbers: (+ a b)
If you know that a and b are numbers then you can inference that subexpressions need to deal with numbers only:
(let ((a c) (b d)) (declare (type number a b)) ....)
The type inferencer then will tell you what types the various expressions have. It will try to propagate known types as widely as possible in the code.
Allegro CL and LispWorks for example are doing this type inference. But their compiler will not tell you that type declarations are violated.
SBCL OTOH treats type declarations as type assertions which can be checked at compile time. Additionally it also does type inference, and it uses that information for compile-time type checks as well.
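For example (a minimal sketch; the exact wording of the warning varies by SBCL version), compiling this file under SBCL produces a warning at compile time, because the constant argument conflicts with the declared type:

    (defun add-one (x)
      (declare (type fixnum x))
      (1+ x))

    (defun caller ()
      ;; SBCL warns here at compile time: the constant "hello"
      ;; conflicts with the asserted type FIXNUM.
      (add-one "hello"))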
Basically CCL's compiled code is often as fast as code from SBCL, LispWorks, or Allegro CL, but one has to declare many more types of variables and functions explicitly with CCL. The code needs to be littered with type declarations.
A compiler with a (better) type inferencer can propagate the type information it already has and this is often enough for fast code. A good compiler can then also tell the developer where it lacks type information and then you can decide to declare types explicitly. SBCL/CMUCL does that in a very noisy way with lots of information generated by the compiler. ;-)
Performance. If you have (+ x y) and you can infer that x and y are fixnums, then you can use a couple of inline opcodes, rather than calling GENERIC-+.
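To make that concrete, a sketch (assuming SBCL; the exact disassembly is platform-dependent):

    ;; With declarations the compiler can emit inline machine arithmetic.
    ;; THE FIXNUM additionally promises that the result fits in a fixnum.
    (defun fast-add (x y)
      (declare (type fixnum x y)
               (optimize (speed 3) (safety 0)))
      (the fixnum (+ x y)))

    ;; Compare (disassemble #'fast-add) against the same function
    ;; without declarations: the latter calls the generic + routine.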
EDIT: lispm is right, too, of course. You can look at it either way. If your program required top performance and you were going to tag every type until it ran super fast, then inference buys you the convenience of not needing to do as much. If you weren't going to bother with that, then inference buys you extra performance without all the effort.
And a similar possibly-wrong usage is "conference" as a verb, since the verb "confer" already exists. But "conference" seems to have become accepted; I've heard it a bit. There's a trend (going back many years, not just recent) of making nouns into verbs, e.g. "Skype me". The "confer" form probably sounds archaic to many, since it's not commonly heard or read (AFAIK).
I don't think so. A type inferencer figures out types of expressions that don't have a type explicitly declared using information it has about other things that are declared.
A compile time type checker is expected to signal errors at compile time for programs whose types don't check out.
You can use a type inferencer without signaling any errors at compile time (for example, by deferring errors to runtime) and you can signal type errors at compile time even if you don't infer any types (like C compilers, that demand all variables have a declared type).
(I feel sheepish even saying this to you), but when people say "type inferencing" I think these days they tend to mean "inference of static types that are enforced at compile time within a rich static type system." SBCL infers static types within a (mostly) dynamic, permissive type system. It can enforce certain constraints at compile time, but it’s ad hoc (there is no well-defined model of what properties it can prove at compile time on any valid code).
Type inferencing has a long tradition inside the Lisp community - for a language which is not statically typed, but which might be able to use type hints and static type information about the base language. Many Lisp compilers use type inference for optimization purposes.
As someone new to Lisp, the following has always fascinated me: I've read some older posts on comp.lang.lisp where people insist on buying a 'high performant' Lisp. I think what they mean is Allegro CL or Franz Lisp.
But why is it 'high performant' compared to other implementations? Like, say, CMU CL (now SBCL)?
For example, in this case why would one use Clozure CL instead of SBCL?
Same thing. Franz is the company. Allegro Lisp is the product. Like "Apple Macintosh computer."
> But why is it 'high performant' compared to other implementations? Like, say, CMU CL (now SBCL)?
Somewhat better compiler, but mainly lots of extra features (better debugging, graphics, better libraries).
> why would one use Clozure CL instead of SBCL?
I use CCL because 1) I like the IDE and 2) its compiler is fast so you can always compile everything essentially for free. In other CLs you have to choose between running in compiled or interpreted mode, and that can sometimes cause problems. CCL doesn't have an interpreter at all. It always compiles everything and that makes it easier to use.
I'm pretty sure SBCL compiles everything too. From "Compiler-only Implementation," section 2.3.3 (page 14) of the user manual (http://sbcl.org/manual/sbcl.pdf):
SBCL is essentially a compiler-only implementation of Common Lisp. That is, for all but a few special cases,
eval creates a lambda expression, calls compile on the lambda expression to create a compiled function, and then calls funcall on the resulting function object. A more traditional interpreter is also available on default builds; it is usually only called internally. This is explicitly allowed by the ANSI standard, but leads to some oddities; e.g. at default settings, functionp and compiled-function-p are equivalent, and they collapse into the same function when SBCL is built without the interpreter.
... and M68k Sun workstations, and the CCI Tahoe, which was a late-80s Vax clone with about 6x the speed. (I did the port to the Tahoe as an undergraduate.)
Also, it may not be necessary to master the most complicated machinery: compiling function by function gives lots of hints, and the very nice [log4cl](https://github.com/sharplispers/log4cl/) is handy.
Hi kamaal, it’s nice to see you interested in CL. I don’t know if you remember this, but we sparred a couple of times on HN a few years ago. I have been on this path before and have happily settled on Clojure for a few years now. I don’t know if it applies to all CLs, but what nobody told me was how little I have to worry about when writing code in Clojure. I stopped paying attention to most of the language and instead spend time on solving the actual problem. It is very liberating. See the talk “Simple Made Easy” by Rich Hickey if you haven’t already.
Also, most psychological and process barriers (including debugging) come down when using Clojure.
>>I don’t know if you remember this but we sparred a couple of times on HN few years ago.
Hey, I apologize if that was something that hurt you in any way.
I discovered Lisp a few months back. I had of course heard of it long ago, but only really started using it months back. I don't code in Lisp at my day job or full time. But many of my problems related to understanding data structures and algorithms have vanished since I started thinking of them in Lisp.
Mostly because I started to think in terms of recursion well, and other list manipulation stuff feels like it just comes naturally. Not sure why this is, or whether it's unique to me. But for some reason I find it very easy to think and work with trees and graphs when I think of them as lists.
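A toy sketch of what I mean: a tree as nested lists, processed with plain recursion:

    (defun sum-tree (tree)
      (cond ((null tree) 0)
            ((atom tree) tree)              ; a leaf (just a number here)
            (t (+ (sum-tree (car tree))     ; first branch
                  (sum-tree (cdr tree)))))) ; rest of the tree

    ;; (sum-tree '(1 (2 3) (4 (5)))) => 15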
Yeah, most of the incidental complexity vanished once I started using Clojure. Lisp is pretty cool in that there is very little to learn in the way of syntax and ceremony.
If you are new to functional programming, I suggest understanding reduce well. It forms the basis of higher-order functions; also, half of clojure.core is implemented with reduce.
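To give a rough idea, here is map built on reduce (sketched in Common Lisp, since that's this thread's language; the Clojure version is analogous):

    (defun my-map (fn list)
      (nreverse
       (reduce (lambda (acc x) (cons (funcall fn x) acc))
               list :initial-value '())))

    ;; (my-map #'1+ '(1 2 3)) => (2 3 4)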
Also, I highly suggest watching the talk “Simple Made Easy”. One hour of time spent there pays off over many months/years.
If by 'older posts' you mean the early 2000s and prior, the open source Lisps weren't as numerous or as good as they are today. CCL is open source (back then it was closed source under the MCL banner), SBCL didn't exist in its current form (it was announced in 1999), etc. In general, for each decade back you go, think of whatever open source software even existed as being significantly less powerful, less performant, and buggier. Many open source projects (esp. the GUI applications) started life as bad imitations of not-much-better commercial software vs. what you're used to seeing today. And to top it off, you were running it on radically less powerful hardware (in the early 90's you could still watch windows being redrawn on the screen on 'high end' hardware), so any software performance issues you had were magnified by the systems you were running on.
Not just Allegro; from what I read, LispWorks is also noticeably faster than SBCL (you might be able to find some comparisons for a practical system by googling for elPrep, which is now written in Go but used to be Common Lisp and ran on both LispWorks and SBCL). Concerning the appeal of Clozure CL, I think people have traditionally used it for faster dev turnaround due to better compilation speed than SBCL (which in turn typically generated faster code).
Franz Lisp does not exist anymore. It was a dialect like Maclisp at that time. It was replaced three decades ago with Allegro CL, by Franz, Inc.
The 'fast' Lisps are often the native Lisps (SBCL, Clozure CL, Allegro CL, LispWorks, Lucid CL, ...). For selected applications some implementations were more specialized (for example GCL with fast numerics for Maxima)...
SBCL has the advantage of some more advanced type handling.
But fast isn't just low-level benchmarks. It might mean fast for a certain application. For example, on the Deep Space 1 spacecraft the application was deployed with LispWorks (and not OpenMCL/Clozure CL) because LispWorks had the best GC (soft realtime) and could run on a hardened PowerPC processor under a real-time OS.
And 'fast' in Lisp is also important for mostly unoptimized, but compiled, code with full runtime checks and lots of flexibility. Stuff which would be 'full debug' mode in C++. For example the typical Lisp Machine OS could run in a full development mode without much runtime speed penalty - just lots of space penalty. It still had to deal with the window system (written in Lisp), disk access (written in Lisp), interrupt handler and scheduler (written in Lisp) ... Is that 'fast' enough for the interactive user? Benchmark-wise the system might be slow, but it might perform well for applications running multiple threads and lots of interrupts.
Imagine your application is a CAD system written in Common Lisp with CLOS (there are and were a bunch of those). It might need a large address space and zillions of CLOS objects. Will it run fast enough for interactive 3d workloads? How fast is the GC? How fast is the CLOS implementation? How does it deal with FFI operations into OpenGL... Stuff like that had been developed with Lucid CL and Allegro CL. At a time when CMUCL still was printing GC messages to the terminal...
Nowadays a lot of machines are "fast enough" and have lots of memory. But when one needed a 64-bit SGI for a demanding application, Allegro CL was the thing to use. I think the IDE for Crash Bandicoot was Allegro CL on an SGI. Later, similar graphics applications in Lisp could also run on large Windows NT machines with faster graphics cards. Today Allegro CL is also the only CL I know of with a partly parallel GC; a demanding application like their database and knowledge-bases drives the need for faster runtimes. Lucid CL no longer exists (though LispWorks provided support for it), but it had a compilation mode for full-program compilation, which enabled good runtime performance. Many commercial applications had been deployed with it, until the company went away, after investing lots of money into an advanced C++ IDE which did not fly on the market. With something like LispWorks and Allegro CL one can develop normal-looking and fast applications for the desktop: stuff which feels much less sluggish than your typical Java desktop application and has useful interactive performance.
I have no idea, but it's 'small', though there are still two commercial vendors. I'd guess that Franz still sells into some corporate environments and into government/military/security/... domains people usually don't hear about, and end users might not even know that they are using Lisp. If a Lisp-based graph database powers some customer-relationship application at a telco, then nobody will know that it is Lisp. There is also the myth that some military commander won't let their Lisp Machine application go, because it's the only one that never crashed in their data center. Former Swissair had an expert system to improve seat allocation (IIRC), which they maintained for more than a decade; it might even still be running in Lisp, and nobody would know that it exists and how it works... There are also Lisp applications which monitor and control processes in chemical plants; for the end user it's not visible what this is written in.
A bit off topic, but I remember you mentioned you were working on a Lisp book and asked what we would wanna see in it.
Well for me, I'm super curious about compiler development, and I think, in the topic of this thread, maybe a chapter on writing a recursive descent parser for JS? Building a transpiler? Or using Lisp as an intermediary with LLVM IR? Somewhere where either DSLs or "code is data" and the true advantages of Lisp shine through, and I would love that from an experienced Lisper. Also, if you could go into some use cases for tunable compilers, which are seldom elaborated when that is mentioned as a huge Lisp advantage! Thanks in advance for giving the community your time!
Thank you, it does look good. I still want one with an animal on the cover!
Or, more seriously, I think an O'Reilly book on Common Lisp would go a long way toward promoting the language. As would one from the Pragmatic Programmers.
>Btw., for a decade or so O'Reilly actively refused to publish Lisp books - which did not make them very liked in the community.
Interesting, any idea why? I would have thought they would have been happy to publish Lisp books, since they have published books on so many other computer subjects. I used to be a huge buyer and reader of their books. Only reason I can think of for them not wanting to publish Lisp books is perceived small market size - do you think that was it? But on the other hand, they did publish books on topics like XML-RPC, which was probably not very much used even when they published a book on it. I had that book, it was about using XML-RPC from multiple languages [1].
Books that overlap too heavily with our existing books.
Books on proprietary technologies that don't have a huge user base.
Books on miniscule (i.e., personal or nascent) products, even if they are open source.
Books on topics that have dismal sales despite quality books being available. (If you're addressing a topic where good books have sold dismally in the past (for instance, LISP, LaTeX, or Web-based training), you have a much higher threshold to clear with your proposal. Convince us why there is a revival of interest in your topic, or why your approach to a deadly topic will provoke interest nonetheless.)
Books that have been rejected by other publishers, in most cases.
It’s interesting seeing lisp try and make a comeback, I wonder if it’ll work.
As one of the few programmers who has worked in CL commercially in this century, my general analysis is that the language has way too much rope. The code base I worked on wasn’t too clever by half, it was too clever by leaps and bounds. I’d genuinely never start a multi engineer project in Lisp ever.
As one of those few who also worked in CL commercially in this century, I disagree. The amount of rope is fine. There's just a big library and documentation problem. You can find a library for anything these days, but quite often it's only half-done, and you need to write the other half yourself.
The way I see it, all the power and flexibility of CL is what keeps it alive; any other language would have died off with so few people working on the libraries. I used to not mind this that much, but recently I also started working on Clojure code, and the difference is like night and day.
> There's just a big library and documentation problem. You can find a library for anything these days, but quite often it's only half-done, and you need to write the other half yourself.
I'm sad to see this is still the case. When I got into Lisp (Common Lisp) ten years ago, one of the first things I did was ask what logging library I should use by default. Almost all the answers I got were variations of, "In Lisp, it's so easy to write your own, you shouldn't bother looking for one by someone else." One kind person gave me an FTP URL to a tar file on a server at MIT and warned me that it might not work because he thought the person it belonged to might be retired or dead. And then he told me, "You'll see when you look at the code, it's so easy, you might as well write your own." Very frustrating. Every time I see an article like this, I hope things have changed, but at this point I wonder if Lisp is in a sense too good to ever be done right, like the kid who is so bright in school that he never learns how to do anything difficult. But now I'm getting flashbacks to all the speculation and armchair psychoanalysis we used to do about why the Common Lisp community was the way it was, and that never did any good.
Things are changing, but some problems remain the same.
Right now, you have a pretty solid logging library in CL that is pretty much the go-to for anything but the simplest of needs: log4cl[0]. It's feature-rich, performant, and even integrates with Emacs/SLIME. The README covers a lot of those things... briefly. Good luck figuring out how to e.g. use the (log:config) facility for anything more complicated than changing log levels. As you can see[1], it has a huge docstring describing a lot of things, but the description is shallow. It's like 20% of what is needed to figure out common things like just saving output to a fresh file (without automatic log rotation).
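The simple, documented cases really are simple (a sketch):

    (log:config :debug)              ; set the log level
    (log:warn "loaded ~a things" 42) ; log with format arguments

It's everything past that point where you end up reading the source.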
This is IMO a good example of the problem I see in those libraries that are actually solid and done. They lack detailed user-oriented documentation, forcing you to wade through code, which can get pretty complex[2].
[2] - Try to macroexpand a simple call to logger, like (log:warn "Foo" some-object). Performance and power can be nicely wrapped under a clean set of macros, but if you deviate from the intended use case, you'd better have a spare couple of hours for figuring out how things work.
Clojure has the advantage of Java interop. So even if you can't find native Clojure libraries, you can always find Java libraries. And you will find Java libraries for pretty much everything under the Sun. Sure, they had to sacrifice a lot for that, but ultimately it's worth it.
Any language that has to compete and be acceptable to the larger programming community today needs libraries. So it kind of becomes a Chicken/Egg problem.
I don't think anything today will ever reach the levels of Java usage. Partly because no company is likely to spend on a new language as much as Sun spent on Java.
So you are pretty much well off building on their ecosystem than fight it.
Having said that, TCO is the only thing I missed in Clojure compared to CL.
Sort of. Java interop is an extra win for Clojure, but I really meant just the Clojure libraries. Every time I need one, it seems much more feature-complete than CL alternative.
To take an example: in a project I'm working on, we're using Ring[0] at the backend plus a bunch of middleware, including Liberator[1]. Take a look at e.g. the Liberator docs[2] and tell me, where's a CL library for just that, with so much thought put into it? Or: in a CL project we had to implement CORS handling ourselves, whereas in Clojure we get it for free. Etc. This kind of story seems to repeat every time I need a library. In Common Lisp, you have to hope that Edi Weitz, Shinmera or Fukamachi wrote something, and that you won't have to patch much to make it work for you. In Clojure, everything I've seen so far[3] makes me expect to find solid and feature-complete libraries whenever I need something.
Now to be clear, I still love working in Common Lisp. It's just that my experience with Clojure made me jealous.
Note how Snooze uses Clack. Clack is almost a de-facto standard now.
To be honest, REST services can be done with a regular web development routing lib.
>Or, in a CL project we had to implement CORS handling ourselves, whereas in Clojure we get it for free
But it's only a matter of adding a few headers to your HTTP response... Patching the response handler to add those is an easy task in CL (Clack already has the facilities to let you add middlewares into the response handler); does one really need a library for that?
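A minimal sketch of what I mean (assuming the usual Clack convention that an app returns a (status headers body) list with headers as a plist; a real middleware would also handle preflight requests):

    (defun wrap-cors (app)
      (lambda (env)
        (destructuring-bind (status headers body) (funcall app env)
          (list status
                (append '(:access-control-allow-origin "*") headers)
                body))))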
I don't want to sound arrogant, but the complaints seem a bit exaggerated.
Interestingly enough, I actually worked in both CL and Clojure professionally.
Clojure for certain feels much more modern. They actually have modern build tools, logging systems, and testing frameworks, which is nice. The emphasis on more data types than just linked lists was amazing: sometimes you just need a hashmap, and Clojure's worked great. The rate of progress on the build tools was frankly spectacular, and some of my favorite libraries in the world are available in Clojure (core.logic, anyone?)
The Clojure community has an approach to libraries that is utterly unique in my experience, and I miss it in other languages. Clojure libraries are the most lightly coupled libraries I've ever seen, as they use data as the API (a very Clojure and very Rich Hickey thing to do), which makes tying libraries together a trivial and reliable thing to do. They also have a habit of finishing libraries and not arbitrarily creating competitors for whatever reason. The result is that there are a lot of Clojure libraries that are narrowly scoped, feature complete, and trivial to upgrade if necessary. I really miss that.
Clojure also had some downsides however, both for how we used it and in general. The team that I worked on with it is now mostly Java these days.
Domain-specific issues. If you have a bunch of objects that are similar syntactically but vary semantically, you do not want to use Clojure. Clojure has very poor structures for handling this kind of stuff, and it's very, very easy to accidentally go from your pretty record type back to a regular persistent hashmap. Basically, Clojure encourages (although does not strictly enforce) hashmap-oriented programming. There are some domains where this is fine, but ours was decidedly not one of them.
Guidance wise, interacting with Clojure maintainers was rough. It's their project and they can guide it how they want, and we just got the feeling that they couldn't give a rip about what anyone else wanted. That's their right, but it was super annoying as an end user of the products that they were pushing so hard. We never really had any luck getting any of the bugs or issues that we found dealt with by the core team. We got the distinct impression that they were running the code we submitted and then deciding post-facto that what the code did was exactly what they'd wanted in the first place, but proving that is of course impossible. It's been a few years since I last tried to talk to them, so it's entirely possible that this has turned around since then.
I quit that job before Spec was released, so I cannot really comment on how effective that is in bullet-proofing a Clojure code base. What I can say is that the core Clojure libraries are wildly type unsafe, to a degree that's really only beaten by PHP. There are a ton of core functions that'll change output types without warning, sometimes in several different ways. Nominally this is fine because all the collections are built on top of the seq abstraction, but in reality that abstraction is as leaky as they get. The result is that our code base was littered with "(map identity x)" and "(vec y)" and "(into {} z)" to handle all these little weird edge cases. My suspicion is that Spec is a tacit admission that these little type errors really get out of control in a larger Clojure code base, and they're trying to fix it without having to revamp the entire core library.
That being said, between the two I would start any modern project in Clojure rather than CL. Access to Java, modern libraries for web development, and a much much larger pool of developer talent to access are absolutely killer features for that language over CL.
>That being said, between the two I would start any modern project in Clojure rather than CL. Access to Java, modern libraries for web development, and a much much larger pool of developer talent to access are absolutely killer features for that language over CL.
While I agree with the "larger pool of developer talent", I am already working with ABCL which has pretty easy Java interop.
As for developer talent, what's difficult is finding developers who really grok macros and the "code is data" feature.
When you find one, he/she will find it very easy to jump between CL <--> Scheme <--> Clojure, so I'm not too worried about that.
Thanks for this experience report. If you did it again from scratch in Clojure, would you switch from records to using regular maps with a :type key, or your domain's analogue? You can even do dispatch on those with defmulti/defmethod.
I have also worked with CL commercially in this century, and the only thing I missed was a convenient portable GUI library for OSS Lisp. My hope is McCLIM (https://common-lisp.net/project/mcclim/1.html).
I would recommend adding links to websites like planet.lisp.org to make common-lisp.net look active on a regular basis. Another interesting style is used by haskellnews.org/grouped.
I don't use Emacs regularly any longer, but it used to lack the ability to embed widgets or graphics in its REPL, one of the reasons why I used to be an XEmacs user instead.
Back to the Lisp Machine: you have to imagine that the whole OS is written in a mix of assembly and Lisp, nothing else.
Then the whole OS was exposed to the developer, as these were single-user workstations.
Imagine the ability to access any running application from the REPL and interact with it in some way: for example, reformatting the selected image in the word processing application, as a very basic example.
Something that on modern systems is only kind of replicated with COM/.NET alongside PowerShell, and it falls short of what was possible.
When an application trapped at any level, the debugger pane got invoked and you were able to fix the error, then redo the action that triggered it.
Also the OO system was much more powerful than what most OO programming languages are capable of. You had multiple inheritance, traits (aka protocols), aspects, contracts, multiple dispatch available.
Something that on modern systems is only kind of replicated with COM/.NET alongside PowerShell, and it falls short of what was possible.
Something I have always felt was a killer-feature of the Windows and Office platforms, and something the industry is moving away from. I never saw what you describe, but I'm sad to increasingly lose what interop we've had since the 1990s and 2000s in exchange for cross platform support, security of one kind or another, and silo'd cloud services.
But then again, what you describe sounds worse not better. Like half "wow, malware playground", and half "I just wanted a nutcracker but all I have available is this hammer factory and the machine shop which built it". Would I see the past differently if I started there, then moved to this present? Would I see COM interop differently if I had started in a world of Apps with constrained interop based on a "share this content with a link" popups?
Along with "if everything's top priority, nothing is", and "one size fits nobody very well", if a system is configurable for any task, does that mean it fits no task very well? A set of magnetic letters on a fridge door lets you rearrange the words to say whatever you want, but that doesn't make it "better than all novels".
All these concerns are understandable, but after reading a lot of Lisp and non-mainstream literature (JITted OSes, the OCaml compiler, Lisp machines), I'd give lots of trust points to the thing described above.
But I'd expect a lot of work on security and encryption, since those times had almost no troubles in that regard.
Given that GNU Emacs does not implement a file system, has no process scheduler, has no driver for Ethernet cards, doesn't have a driver for a framebuffer, ... the screen would be dark and there would be no I/O; whereas the Lisp Machine is a computer with a stack-oriented CPU running a relatively capable OS written in Lisp only (incl. disk driver, file system, NFS, DNS, SMTP, FTP, TELNET, graphics, processes, PostScript printer support, users/groups/networks/sites, editor, graphics editor, file browser, process browser, printer dialog, screen shots, mouse handling, tape backups, cdrom reader, ...).
Have you used Pharo? I quite ~loved the experience, and I wonder if Lisp machines were similar in the feeling that you really can mold your system in real time. Or if it was different (more possibilities, different idioms that changed the perspective).
I used Smalltalk/V at the university, a couple of years before Java sprung into existence.
Pharo just to occasionally check how much it has changed since those days.
Yes, Smalltalk does provide a similar experience to Lisp Machines.
Also, if you read the Xerox papers about Mesa XDE and Mesa/Cedar, one of the goals was to provide a developer experience similar to the Lisp and Smalltalk workstations in the context of a strongly typed systems programming language.
As a fan of Lisp and Smalltalk myself, and as "one of the few programmers who has worked in CL commercially in this century", I take my Emacs hacking sessions as the most similar experience to it I can have on a daily basis, but in the link I posted, Kent Pitman says:
"I don't even find Emacs itself to be equivalently self-documenting and
easy to get around in, much less the many systems that Emacs touches."
Indeed Smalltalks (Squeak, Pharo, Amber) have so many "Aha!" moments waiting inside I just open them and start fiddling with the Class Browser (until I break the whole thing).
Only to those who aren't inquisitive. There are several old promotional videos that have been put up on YouTube, old screenshots and brochures have been put up on web pages, there were the recent 'bringing back to life' videos and talks/demos by many of the people involved at the Computer History Museum, etc. The information is there, you just need to look for it. Search on 'Lisp' along with 'Lisp Machine', 'LMI' or 'Symbolics'.
Any language has sufficient rope if the programmer is sufficiently clever.
Not hanging yourself is a convention and most languages that are more mainstream have bigger communities and more conventions (and frankly, more adults encouraging them).
I have a hard time blaming the language for what are essentially organizational problems.
Too many lisp people start projects doing everything with a single implementor.
Therefore the convention starts out as “whatever this guy wants to do that works.” Anyone working in a bubble like that is bound to eventually create something that makes sense to the implementor, but not necessarily outside people.
Which is why you need peer review when you’re writing your code. If you don’t get it, you’re gonna have problems. Maybe there is something about lisp that makes it difficult to review, but I think more likely there is something about lispers that makes them unwilling to do that particular set of work.
That's the so-called Lisp Curse, though, which states: "Lisp is so powerful that problems which are technical issues in other programming languages are social issues in Lisp".
The main topic, adding OO to Lisp, is a good counterexample.
> Now make this thought experiment interesting: Imagine adding object orientation to the C and Scheme programming languages.
How was it done in Lisp?
1) various experiments and getting real work experience with 1st and 2nd generation OOP in Lisp (Flavors, Loops, CommonObjects, Object Lisp, ...)
2) various proposals for a standard OO system
3) a small group (with people from Xerox PARC, Symbolics and Lucid) working on an OO system with large amounts of user feedback -> developing a full-blown portable public domain CLOS implementation and using public mailing lists for feedback. The resulting implementation PCL (Portable Common LOOPS) was ported to more than a dozen Lisp implementations. A SPEC was written. Books were written.
The result was CLOS. THE object system for Lisp. Built with the input of hundreds of developers/users.
It worked great. The fear of group work in and for Lisp is massively overblown.
> Why don't they make a free development system that calls to mind some of the lost glories of the LispM, even if they can't reproduce another LispM? The reason why this doesn't happen is because of the Lisp Curse.
The reason is that the first systems were developed in a totally different environment: large research organizations like PARC and the MIT AI Lab, swimming in DARPA money, employing the brightest minds of their generation. A genius like Richard Feynman wanted to work there: '...many a visitor at Thinking Machines was shocked to see that we had a Nobel Laureate soldering circuit boards or painting walls...' http://longnow.org/essays/richard-feynman-connection-machine...
The hacker community can't replicate that in their spare time.
Similarly there is lots of source code that we'll never see, some of it Lisp even. What beauties might they contain, especially if someone wrote a Fabien-style black book to bring forth the interesting bits?
The solution is to be sad, in the same way we can be sad about the loss of the library of Alexandria, and move on, creating new works of beauty. As brought up above though, it's very hard to do this in your spare time... Few are lucky enough to hack with CL professionally, and I would wager even fewer still of those are any less free than other companies' employees of business needs coming first and forcing some terrible choices in the development cycle. Still, there can be beauty in the system, and CL is very much a systems-oriented language... (https://www.dreamsongs.com/Files/Incommensurability.pdf describes it as more of a programming system than a programming language, even.)
From what I understand, having never experienced those machines first-hand, that's just half of the equation. The tooling & larger ecosystem is missing.
Not every language gives you the ability to modify the syntax of the language at run-time[0]. I am sympathetic to the argument that one can write both crappy and quality code in any language, but I am also convinced that certain language structures increase or decrease the odds of a code base being maintainable, all else being held equal.
Lisp brings a lot of power to the table along with relatively weak code organization constructs. A great example of this is the concept of package privacy, something that can be ignored in CL by using a double colon instead of a single colon. Sure, ideally experienced engineers would not abuse this, but that sure is a dangerous tool to leave around when deadlines are looming.
My personal favorite WTF from CL was the :around methods from CLOS. These allowed you to wrap a function with other functions, possibly from a completely different place in your code base, and handle any setup/teardown as needed. We typically used them for database transactions, along with the ability to catch exceptions and trigger a DB rollback. The catch was that every single around method had to call the magic (call-next-method) function, otherwise the main function would not be called. Wondering why in the hell the function you just called isn't actually executing is quite a brainfuck, especially when it was caused by a bug in a completely different package.
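Roughly like this (a reconstructed sketch, not our actual code; RECORD and WITH-TRANSACTION are hypothetical stand-ins):

    (defmethod save-record :around ((r record))
      (with-transaction ()
        (call-next-method)))  ; forget this call and the primary
                              ; SAVE-RECORD method silently never runs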
CL + Emacs was truly great though, best editor to language integration I've ever seen, period.
I also like the idea that languages impact code being maintainable. It's part of why I want to work in more dynamic languages again. My day job is mostly Java. Apart from your macro complaints (though some older devs have expressed similar complaints to me when faced with excited younger devs being a bit irresponsible with the new syntax of Java 8), I see the same things you're criticizing in Java.
Java has public/private/protected/package-scope, but you can get around it trivially with reflection. (Java 9 I think "fixed" this, but you can override it at startup time.) Personally I'm in favor of Python's "we're all adults here" philosophy.
Proxy patterns are common in large Java code bases, and various other things inspired by aspect-oriented programming. I've never thought to blame the language though because someone forgot to call (or decided not to call, e.g. because it got too smart with a cache) child.method() on the real object it is wrapping.
For me a far bigger irritation in Java is Design Pattern abuse. It makes code far more unreadable than any macro code I've seen so far.
It's a bit like what Larry Wall says about code complexity (https://en.wikipedia.org/wiki/Waterbed_theory). Basically, if a language X provides a built-in feature to do complex task Y, that feature will be a little hard to understand. In another language Z which lacks the feature, the code one has to write to achieve Y will be equally complicated and hard, if not more so.
>My personal favorite WTF from CL was the :around methods from CLOS. These allowed you to wrap a function with other functions, possibly from a completely different place in your code base, and handle any setup/teardown as needed. We typically used them for database transactions, along with the ability to catch exceptions and trigger a DB rollback.
Sounds like a not-so-appropriate way of doing it.
I do this by wrapping my transactions... in a (with-transaction) expression! This is easily done using CLSQL (a Common Lisp lib), and it does exactly what you want: executes the rollback, etc. Note that I speak from personal experience: this has worked perfectly on a commercial project.
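Something like this (a sketch; assumes a connected CLSQL database):

    (clsql:with-transaction ()
      (clsql:execute-command
       "UPDATE accounts SET balance = balance - 100 WHERE id = 1")
      (clsql:execute-command
       "UPDATE accounts SET balance = balance + 100 WHERE id = 2"))
    ;; a non-local exit out of the body triggers the rollback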
>Lisp brings a lot of power to the table along with relatively weak code organization constructs. A great example of this is the concept of package privacy, something that can be ignored in CL by using a double colon instead of a single colon. Sure, ideally experienced engineers would not abuse this, but that sure is a dangerous tool to leave around when deadlines are looming.
CL is, mostly, for ideally experienced engineers. And I'm not being smug or elitist here. Ideally experienced engineers are able to leverage the full power of CL.
>when deadlines are looming.
And the reason to use CL is precisely to enable greater speed of development for complex stuff, so deadlines don't scare you so much (compared to using a lesser-featured language like Java for example)
The code base you worked on has nothing to do with how good or bad Lisp is. It's only an indication of how good or bad the programmers who wrote that code base are.
I think the macros make life easier, and if well written they make the code more maintainable by simplifying the code you're encountering most of the time.
for me the "rope" was the ridiculous complexity of things like the "loop" macro. Coming from a newb (to CL) perspective and wondering "how do i use this thing" ... ugh you could probably write a small book on all the different ways to use that thing. This type of stuff was littered through the language. Instead of small focused functions that were really self explanatory, but limited in capability, you have things that are way too ... i dunno "configurable"? "modifiable"? There were also thinks akin to "method overloading" but done by a drunk person seeing how many "useful" ways they could call this one function.
The core language documentation is extraordinary. Probably better than any language out there.... but I moved on to Chicken Scheme for my personal work. Life was much better afterwards.
>for me the "rope" was the ridiculous complexity of things like the "loop" macro.
You are not forced to use it. I like the LOOP macro, but there are many alternatives, which you can easily load and use.
>Instead of small focused functions that were really self explanatory, but limited in capability, you have things that are way too ... i dunno "configurable"?
So, for example, it looks like you would have liked to use the ITERATE lib instead of the LOOP macro.
Macros are indeed quite the bit of rope, but they're actually not that hard to tame. The problem is that when you forget to use GENSYM (to create unique symbols at macro expansion time), the macro will often work, but then suddenly do really weird things in the wrong context. My main beef with CL macros is that they're quite cumbersome to write, since there are no shortcuts for things like GENSYM out of the box. This is something that later lisps like Clojure fixed (e.g. with auto-gensym), although they had to provide these fixes by default, since they lack some of the more powerful and dangerous features listed below.
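The classic illustration (a sketch):

    ;; Buggy: captures any caller variable named TMP.
    (defmacro swap-bad (a b)
      `(let ((tmp ,a))
         (setf ,a ,b
               ,b tmp)))

    ;; Fixed: GENSYM produces a fresh symbol that cannot collide.
    (defmacro swap-good (a b)
      (let ((tmp (gensym)))
        `(let ((,tmp ,a))
           (setf ,a ,b
                 ,b ,tmp))))

(swap-bad tmp x) expands into a LET that shadows the caller's own TMP and quietly does the wrong thing; swap-good is immune.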
The worst bit of rope I ever personally saw was reader macros. These allow you to essentially modify the grammar of the language on the fly, inserting functions into the reader to handle chosen characters at read time. This is exactly how the quote operator, which allows you to insert literal code without evaluating it, works. Since this happens at run time, any reader macro you register affects all files that are read later, and is thus build-order dependent. If you want to use your reader macro in every file, you had better load it first and hope it doesn't break any of your libraries.
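For the curious, this is all it takes (a sketch; it mutates the current readtable, which is exactly the danger):

    ;; After this is loaded, [a b c] reads as (list a b c)
    ;; in every file read afterwards.
    (set-macro-character #\[
      (lambda (stream char)
        (declare (ignore char))
        (cons 'list (read-delimited-list #\] stream t))))
    (set-macro-character #\] (get-macro-character #\)))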
CLOS, the standard OOP system of CL, was also full of footguns. It's super impressive that they managed to add an OOP system to an existing language without breaking everything, but the design was... dangerous. CLOS has some pretty weird rules around method routing, with methods existing outside objects and routing being handled by argument specificity. The moment you find yourself extending classes just to win the specificity race with another method, you have gone too far and it's time to turn around and go back down whatever road you came up.
For the most part CLOS was fine, but you could design yourself into some really weird corners. The worst part about CLOS was the before, after, and around methods. These allowed you to add functions that wrapped your regular functions, possibly multiple times, with order of execution being handled by similar specificity rules. The kicker was that every one of these methods was responsible for calling the magic "call-next-method" function, which continued the call chain down the stack. If an around method did not do this, the main method would never execute. It is super confusing to manually call a method, only for it to not execute or to trigger your debugger, especially since you can register around methods on any method from any package.
Also, CL has a very weak definition of package privacy. Any public symbol, function or otherwise, is accessed externally with the form "package:symbol". But private symbols, those not explicitly exported, can be accessed with "package::symbol". Much like ruby's much maligned "send" method, there's nothing preventing sane programmers from staying way the hell away from private methods. But just like ruby's send method, that is a mighty tempting tool to leave laying around when deadlines loom and pressure is mounting.
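Concretely (a sketch):

    (defpackage :demo
      (:use :cl)
      (:export :public-fn))

    (in-package :demo)
    (defun public-fn () :ok)
    (defun internal-fn () :secret)

    (in-package :cl-user)
    (demo:public-fn)     ; fine, it's exported
    (demo::internal-fn)  ; also "works": :: walks right past the export list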
Before I finish, a sibling comment mentioned the loop macro. This thing is absolutely insane. I personally stayed way away from it because I never really mastered it, and syntactically it didn't match anything else from CL. Here's a valid example from the "LOOP for Black Belts"[0] page.
    (loop for i in *random*
          counting (evenp i) into evens
          counting (oddp i) into odds
          summing i into total
          maximizing i into max
          minimizing i into min
          finally (return (list min max total evens odds)))
While it sure is pretty, it's really jarring when you're working in CL. Where are the parens? How the heck does this parse? What is the runtime complexity of this thing? How do I figure out what magic keywords are available within this loop macro? It's maddening!
And before you ask: no, symbols like "counting" are not already functions in the Lisp standard. That would be far too easy. God help you if you have a loop call that needs to be modified just enough to make using loop unfeasible.
> Since this happened at run time, any reader macro you registered affected all files that were read later
Common Lisp has readtables, which you can create and change. So you can read certain files with a different readtable, leaving the default intact.
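A sketch (MY-BRACKET-READER stands for whatever reader function you've written):

    ;; Install the syntax in a copy, not in the default readtable.
    (defvar *brackets* (copy-readtable nil))
    (set-macro-character #\[ 'my-bracket-reader nil *brackets*)

    ;; Bind *READTABLE* only around the files that want the syntax.
    (let ((*readtable* *brackets*))
      (load "dsl-file.lisp"))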
> It is super confusing to manually call a method
This is just the usual super method call in most object-oriented programming languages. If you have a method and you want to call the method from the superclass, you have to call it explicitly.
> loop macro
The LOOP macro is not CL specific. CL has inherited it from Lisp Machine Lisp, which got it from Maclisp. At MIT it was even a single source file for several different Lisp dialects. The origin of this macro is actually Xerox' Interlisp - where it was called FOR.
It's not that bad once one masters the basics and the documentation is in the ANSI CL standard.
The upgrade from LOOP is the ITERATE macro, which is even more powerful, but with more parentheses.
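For instance, the LOOP example upthread, re-sketched with ITERATE (a sketch from memory, untested):

    (iterate (for i in *random*)
             (counting (evenp i) into evens)
             (counting (oddp i) into odds)
             (summing i into total)
             (maximizing i into max)
             (minimizing i into min)
             (finally (return (list min max total evens odds))))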
> Common Lisp has readtables, which you can create and change. So you can read certain files with a different readtable, leaving the default intact.
That a footgun can occasionally be handled carefully does not erase the fact that it's a footgun. The ability to modify the readtable on the fly is for certain super interesting, but IMHO very dangerous. I am glad that the languages I use on a daily basis do not have this capability, for certain.
> This is just the usual super method call in most object-oriented programming languages. If you have method and you want to call the method from the superclass, you have to explicitly call it.
Not a good comparison. While there is some similarity to super in languages like Java, :around and call-next-method in CLOS are far more dangerous. In Java, super calls are inside the method in question; any stupid decisions around super are right in front of you. In CLOS, they can be in any file in your code base. This makes the source of the problem far less clear, and expands the possible scope of WTFs by a lot.
> The LOOP macro is not CL specific. CL has inherited it from Lisp Machine Lisp, which got it from Maclisp. At MIT it was even a single source file for several different Lisp dialects. The origin of this macro is actually Xerox' Interlisp - where it was called FOR.
An interesting history, for certain, but still jarring as hell, in my opinion. Macros like these are un-lispy, and really never should have been included. I always avoided them in favor of the more primitive looping constructs, but which ones exactly now slip my mind.
> In Java, super calls are inside the method in question, any stupid decisions around super are right in front of you
The problem you mentioned was when the super call was missing. You have to find out which method gets called for the runtime object and then see that the super call is missing.
In CLOS I'd list the applicable methods and edit the first :around method.
True, CLOS is the much more complex system, with a meta-object protocol, generic functions, multi-methods, multiple-inheritance, meta-classes, method combinations, ...
Java was designed for the simple case and then promoted for the 'industrial programmer' - which explains part of its success - but it got complicated soon and people added for example Aspect Oriented Programming to it - which has similar features like CLOS :around methods.
> Macros like these are un-lispy
Not really. What you think of as un-lispy is perfectly fine for many. There is a tendency to narrow down the language to simplify it, like Dylan and many others did. Lisp is always more about freedom of design, experimentation, flexibility. Otherwise we wouldn't have widely different extension approaches in one language: format strings with loops, a whole complex language for printing code, macros like LOOP, ...
Sure, you can just use macros to reorder arguments and make code more convenient, but at the same time you can use macros to add radically different sublanguages; that's one of the reasons I use Lisp. It allows me to do that, and many have implemented very interesting language extensions on top of Lisp. Personally I think that with some practice LOOP macros are relatively easy to write, and I even prefer them over tail-recursive loops in Scheme, which are IMHO much harder to follow.
>That a footgun can be occasionally be handled carefully does not erase the fact that it's a footgun.
You know, there are languages that don't allow you to shoot yourself in the foot, because they don't allow any guns.
I prefer languages with power.
>The ability to modify the read table on the fly is for certain super interesting, but IMHO very dangerous.
> I am glad that the languages I use on a daily basis do not have this capability, for certain.
Reader macros are there to be used when they are necessary. And in those cases, they make ALL the difference in the world. In such cases, the "languages that do not have this capability" will require a lot of code, often hard to maintain, to achieve the same result.
So, I prefer languages with guns.
>Macros like these are un-lispy, and really never should have been included.
LOOP wasn't included on a whim. Read about the creation of the Common Lisp standard; it was based on at least a decade of experience writing Lisp for "serious" stuff.
>While it sure is pretty, it's really jarring when you're working in CL. Where are the parens? How the heck does this parse?
Note that the compiler will often tell you if the LOOP is wrongly written, at compile-time.
It is ugly to some. I love it. In imperative programming, a big part of the work is iteration. LOOP makes iterative code read more easily and naturally, as your sample code snippet shows. LOOP looks like that because it's a DSL; it is a DSL tailored for doing iteration.
>How do I figure out what magic keywords are available within this loop macro? It's maddening!
Yeah, but then it becomes second nature.
Again, you're not forced to use LOOP, there are alternatives like ITERATE.
You have T and NIL, and you can deftype a "boolean" for yourself as a choice between T and NIL. The so-called "generalized booleans" are a design choice, and not a bad choice IMO.
There's a lot of approaches that might work; using something other than a list for your collection is just one (although it is often appropriate).
If you just want option types that you can use with lists, the most direct way to do that is by encoding None as NIL and Some x as (list x), which still lets "no value" be false, makes "some value" true even for the empty list, and pairs nicely with pattern matching libraries like Optima (which people yearning for option types might also be interested in).
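A sketch (assumes the Optima system is loaded; MAYBE-VALUE is a hypothetical stand-in for whatever you're inspecting):

    (optima:match maybe-value
      ((list x) (format t "got ~a~%" x))  ; Some x
      (nil      (format t "nothing~%")))  ; None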
That seems cleaner to me, too: if you want to use the empty list for "no value", then why wouldn't you use a list containing the value you want for "some value"? This also scales to more than 1 optional value, and doesn't require any special casing for lists, if you're thinking of them as values like any other; the NIL thing doesn't even come into it. Maybe this sort of confusion comes from how null works in languages like Java?
It might also make sense to represent the absence of a value by just not assigning a value. For example, CLOS instances won't bind their slots to anything by default, so trying to read them will signal UNBOUND-SLOT. You can check if it's bound beforehand with SLOT-BOUNDP, and CLOS does differentiate between a slot not being bound and a slot not existing at all. If you're working on something interactively and UNBOUND-SLOT gets signalled, by default execution will pause, and Lisp will offer strategies for resolving the issue, like entering in a value to use instead for just this call, entering in a value to store in the slot for later use, just aborting the computation, etc. That's a pretty nice experience (and just the tip of the iceberg) that people coming from more static languages might not even think about.
If you're returning a value to someone, maybe it would be best to work like GETHASH does and return two return values, the first either a value pulled from the table or a default, and the second a boolean indicating whether the key was present or whether you got the default. Because Lisp has multiple return values (not like Python or Ruby, where you're really just returning a sequence and then destructuring it), callers that are fine with the default and don't care if it was actually present can just use the primary return value, and since the default is provided when you call GETHASH, you can also do nice things like (incf (gethash key hash 0)).
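In sketch form:

    ;; GETHASH returns (VALUES value present-p); TABLE is any hash table.
    (multiple-value-bind (value presentp)
        (gethash :name table "anonymous")
      (if presentp
          (format t "found ~a~%" value)
          (format t "absent, defaulted to ~a~%" value)))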
There's other ways too, of course, but it's impossible to say what would be the most appropriate solution without knowing more about the problem. (eq nil '()) returning T is not a bug, so the issue must be higher up the call stack. If that seems like a dealbreaker to someone, they're probably just looking at it in isolation and not thinking in terms of Lisp as a whole.
>The one that kills me, every time I look at CL, is the lack of a real Boolean type. Maybe I'm nuts, but that really isn't acceptable in this century.
There is a boolean type in CL. You can declare variables as boolean, and implementations can then check that they hold only the boolean values T and NIL.
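A sketch:

    ;; BOOLEAN is a standard type, equivalent to (MEMBER T NIL); how
    ;; strictly the declaration is checked depends on the implementation
    ;; and the SAFETY setting.
    (defun set-flag (x)
      (declare (type boolean x))
      (if x "on" "off"))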
>Maybe I'm nuts, but that really isn't acceptable in this century.
Perhaps you haven't realized this: booleans are all over the place in CL; they are first class. Any value is true in CL except the NIL value, which is synonymous with the empty list (). This sounds strange, but makes perfect sense in practice and makes writing code really easy.
Really wish SBCL ran on iOS. :( It's the only one with the perf I want. But compilation at runtime on iOS is a no-go, so... yeah. LuaJIT in interpreter mode is pretty fast though.
Actually the performance is pretty good. My iPad Pro is basically running LispWorks as fast as my 2015 Macbook. It's faster than other ARM systems I've seen, too.
Interesting. I might give this a shot. Do you know if CFFI stuff works, so that I could use e.g. cl-opengl or cl-sdl2, if you somehow statically link stuff together? That's what I do with LuaJIT, for example. Or maybe LispWorks has its own OpenGL bindings, not sure.
I like Lisp (I used to love it, but I'm more in the static typing camp nowadays; Typed Racket is good though), but Common Lisp honestly is an abomination. I can't really think of a situation in which CL is a good choice for a new code base.
Can you please keep programming language flamewars off HN? This comment is a classic flamewar-starter, and we're hoping for higher-quality discussion here.