Common Lisp homepage (lisp-lang.org)
292 points by t-sin 6 months ago | 300 comments



This is a very nice web site describing a programming language with unparalleled expressiveness, power and permanence.

I am heavily invested in Common Lisp. We are developing a programming environment for designing new materials and molecules called Cando (https://github.com/drmeister/cando) using Common Lisp as a scripting language. Cando is running on Clasp (https://github.com/clasp-developers/clasp), a new Common Lisp implementation that interoperates with C++ and is based on LLVM.

What attracted me to the language, after 35 years programming in almost everything else, was how organically it lets one write software and how I don't have to worry about it fading like the next programming fad.


Since you're already using LLVM, I'm wondering if you've thought about adding something similar to the includec function in Terra [1], which uses Clang to parse C headers. I used to think that Lisp's CFFI was the best FFI out there until I saw that it was possible to literally parse a C header file and have it "just work". Since you're already building against LLVM, bundling Clang with your project too might not be that big a jump for you.

[1]: http://terralang.org/api.html#using-c-inside-terra . The same idea also occurs in e.g. Python DragonFFI.


Thank you - I'll check it out.

We exposed the AST and ASTMatcher libraries of Clang inside of Clasp and we use it to analyze all of the Clasp C++ code to build an interface to the Memory Pool System memory manager.

I want to have something that can include C and C++ header files and automatically expose them to Common Lisp - that would be neat! Maybe we can steal some ideas from the includec function of Terra.


Cando and Clasp are really inspiring. Is it easy to get academic funding for such a non-incremental idea?


No, none, zero, zip, nada. What you do is you hook it to another really important idea, and then you work until 3:00am every night for months and then years writing code. You take all those voices in your head that say you should not be doing this and squeeze them hard until they... go to sleep and stop bothering you. For me it started paying off in the last year, and it is starting to look like it was a really good idea.


It's sad agencies are not funding this kind of stuff, although I think they are getting slightly less conservative.

In any case, congratulations. It's an amazing piece of work.


Hi, great fan of your presentations on clasp.

Have you had an opportunity to look at Julia? It seems that there is some overlap in what can be achieved - with some obvious differences: as I understand it, you had a lot of C++ - that could probably (today) be linked from Julia, but not "integrated". Starting from a green field with Julia, ideally one could do most things in Julia - maybe with some help from Rust. And Julia is not a Common Lisp, obviously.

But in merging a high-level language with low-level performance, it seems Julia has been successful in getting a lot of real-world, "all Julia" libraries with "sufficient" performance.

If you were starting over today - would you still prefer to build a common lisp with close integration to c++?


Thank you.

I did choose Common Lisp over Julia - I started learning Common Lisp in 2013 and Julia started becoming something around 2012.

I did so because of the permanence and the demonstrated expressiveness and power of Common Lisp. Common Lisp has been around for almost five times longer than Julia, and Common Lisp has demonstrated again and again that it is capable of solving hard, poorly understood, real world problems in a lot of domains.

Fun fact: Julia bootstraps off of an implementation of Lisp. Furthermore, most compiled languages are translated into an abstract syntax tree (AST) as part of compilation. Lisp S-expressions are a text-based AST. So I can make the argument that Lisp is as close to the "One True Programming Language" as we have. :-)

I'm doing chemistry - that's fundamental and timeless - I find it maps beautifully to a fundamental and timeless language like Lisp.


>I am heavily invested in Common Lisp.

And you're my hero. For having the courage to write a completely new CL implementation, and attempting what hadn't been attempted before: LLVM output and great C++ interoperability. Even more, for leveraging the latest state-of-the-art compiler modules (Cleavir, etc.)

Keep up the good work, Dr. Schafmeister!!


I'm waiting to pounce on that. Oh, the testing I will do.


> how I don't have to worry about it fading like the next programming fad.

I'm not sure I understand this bit. What other programming language has "faded", and how has it "faded"? At least since the more mature age of software, which I'd say started around '95 or so (so 20+ years ago).


I say this with a perspective of watching it happen over forty years. I'll get all kinds of heck if I start saying this or that language has faded. But many languages have changed over time, sometimes in breaking ways, as their developers work to make them more expressive and add features. Code that is written one year often doesn't work a few years later - that's a kind of fading as specific reference implementations fade away and code written in them needs to be updated or it rots.

Common Lisp doesn't have this problem - and it has many features that other languages have added over time, often in less comprehensive ways. For example, C++11 lambdas are not as expressive and powerful as Common Lisp first-class functions, and I joke that C++ template programming is to Common Lisp macros what IRS tax forms are to poetry. (Note: Clasp includes a lot of C++ template programming and C++ lambda code that I enjoyed writing, and I am so grateful that C++11 added variadic templates.)

New language features can be added to Common Lisp by the programmer with macros, and, for the really adventurous, they can be made efficient by augmenting the Common Lisp compiler. Common Lisp compilers are almost always written in Common Lisp, and available and accessible. This is why Clasp uses a new and understandable Common Lisp compiler, "Cleavir" (developed by Robert Strandh), and LLVM, which is a great C++ library for generating native code on a variety of processors. Cleavir will be an excellent platform for efficiently implementing new dynamic programming language features.


Thing is, except for CS theory evolving, the programming language environment from 30 years ago is in my opinion irrelevant from a maturity perspective. The number of practitioners was probably 1 million or so worldwide, now there’s probably 20-40+ million programmers worldwide. We didn’t have widespread internet access, barely a handful of Open Source communities, compilers were generally commercial and extremely expensive, companies behind technologies were extremely volatile, etc.

While that made for very nice war stories, it also meant that you’d have the “ship” sinking under you completely. Now a tech still stays around to some degree.

A C developer from 1990 could still be a C dev in 2018. A Java dev from 1998 could still be a Java dev in 2018. And a programmer in your average mainstream language in 2018 could reasonably expect to be working in the same language in 2018. Especially if that language has enterprise traction (Cobol, Java, C#, Javascript, SQL, etc.).

Also, languages are a lot more similar to each other than they used to be, in terms of expressiveness and power. Lisp was a “lightsaber” to the “stones” of the 80’s, now it’s at most a slightly more powerful “machine-gun”, at best.

And regarding language evolution itself, everything evolves. Or did Common Lisp programmers generally write unit tests, have integration test harnesses, use package repos, etc. back in 1995? I somehow doubt that :)


Common Lisp has: generic functions, homoiconic macros written in Common Lisp, reflection, multiple dispatch, closures, optional typing, a correct implementation of lexical scope, dynamically scoped variables, a metaobject protocol, conditions & restarts, and a compiler that is available for customization. One or a few of these have made it into other languages in some form or another - but not all together.
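As a sketch of what multiple dispatch plus generic functions look like in practice (the class and method names here are invented for illustration):

```lisp
;; CLOS multiple dispatch: the applicable method is chosen by the
;; classes of ALL arguments, not just the receiver.
(defclass ship () ())
(defclass asteroid () ())

(defgeneric collide (a b))

(defmethod collide ((a ship) (b asteroid))
  "ship takes hull damage")

(defmethod collide ((a asteroid) (b asteroid))
  "asteroids shatter")

;; (collide (make-instance 'ship) (make-instance 'asteroid))
;; => "ship takes hull damage"
```

In a single-dispatch language this pairing logic typically ends up as a Visitor pattern; here it is just one DEFMETHOD per case.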

Common Lisp also has an excellent package manager (Quicklisp/ASDF), a programmer's IDE (Emacs/SLIME, Vim/Vlime), interoperation with C libraries (CFFI), and lots and lots of libraries that don't need to be constantly updated to keep them working because the underlying language changes.

I wouldn't call it just a slightly more powerful weapon.

Just the macros - my goodness - in Common Lisp you can write programs that write programs! It's easy and organic and elegant because they are written in the same language as everything else. Macros let you write code that customizes itself and optimizes itself at compile time for specific use cases. You can escape the drudgery of writing similar code over and over again. You kind of need s-expression syntax for macros to be easy to write - so no other language with complex syntax will ever have them like Common Lisp does.
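A minimal sketch of what "programs that write programs" means here (WITH-TIMING is an invented macro, not a standard one):

```lisp
;; The macro body is ordinary Lisp building an ordinary list; the
;; compiler then compiles the list it returns as if you had typed it.
(defmacro with-timing (label &body body)
  `(let ((start (get-internal-real-time)))
     (prog1 (progn ,@body)
       (format t "~a took ~a ticks~%"
               ,label (- (get-internal-real-time) start)))))

;; (with-timing "sum" (reduce #'+ '(1 2 3)))
;; expands at compile time into the LET/PROG1 form above.
```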

Languages are all Turing complete - we can do everything in every language - but for the kind of hard problems that I want to solve, only Common Lisp has helped me out of the Turing tarpit. Also, C++ is a great domain specific language for creating tables of cache friendly compact data structures and it has great compilers.


>Just the macros - my goodness - in Common Lisp you can write programs that write programs!!

... and at runtime OR compile-time (your choice!)

... and by using plain, simple Common Lisp; no special, cumbersome AST-transformation libraries needed!


At least optional typing and macros are not considered to be universal benefits for programming.

Having a ton of features has been proven in many occasions to be counterproductive. Paradox of choice, higher learning curve, etc.


As a Professor of Chemistry - I'm all about higher learning and I love curves (non-linear functions that is).

I don't agree that Common Lisp has a higher learning curve than other languages. To become operational - Common Lisp is no more difficult to learn than other languages. Bind variables, conditionals, loops and a little arithmetic and you can start doing some damage. You can write procedural code, functional code, object oriented code - whatever your problem needs.

It's that Common Lisp has a LOT more "up" than other languages. It keeps going up and up as you learn about macros and CLOS and multiple dispatch and restarts and upwards. It's really neat discovering all the things that you realize you were missing from other languages - some things that they add over time with ever more complex and arbitrary syntax.

When you get tired of writing "for" loops and agonizing how to manage memory for the next fancy data structure you want to implement - dive into it - it's great. Common Lisp will still be there.


DrMeister for President!! (of the Land of Lisp -- est. 1959)


>>At least optional typing and macros are not considered to be universal benefits for programming.

They are, for those who use them enough.

Somebody mentioned in this post that the number of programmers has gone up from probably a million in the 1980s to 20 million or so.

What happened is that we had to lower the bar to entry, such that we now have many novices and a few experts. Most people just want to come in, coast 9-to-5, and go home, so you have very few people who want to push the limits of their tools.


>At least optional typing and macros are not considered to be universal benefits for programming.

That's just your opinion.

>Having a ton of features has been proven in many occasions to be counterproductive. Paradox of choice, higher learning curve, etc.

Ok, let's see you need to get the numerical value of several big integrals. You have two choices:

a. Pen and paper plus calculator.

b. Computer with sophisticated computer algebra system

Learning curve in (b) is way higher. Alternative (b) is also way more productive.

Helicopters also have a steeper learning curve than cars.

>At least optional typing and macros are not considered to be universal benefits for programming.

So perhaps you don't know the benefits. Lisp Metaprogramming benefits become obvious after taking the time to learn them.

Optional typing dramatically increases execution speed, something that is very relevant and important in 2018.
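As a concrete, hypothetical example of what optional typing buys: with declarations like these, compilers such as SBCL can emit unboxed machine-float arithmetic instead of generic numeric dispatch.

```lisp
;; Without the declarations this works on any numbers, paying for
;; genericity; with them, the compiler can specialize the same
;; algorithm down to raw double-float operations.
(defun dot (xs ys)
  (declare (type (simple-array double-float (*)) xs ys)
           (optimize (speed 3)))
  (let ((sum 0d0))
    (declare (type double-float sum))
    (dotimes (i (length xs) sum)
      (incf sum (* (aref xs i) (aref ys i))))))
```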


You seem to be a bit too... enthusiastic about this conversation, for my tastes :)

I won't go into everything cause it would take way too much time, but I have to point one thing out:

> Optional typing dramatically increases execution speed, something that is very relevant and important in 2018.

As compared to what? I'm guessing your assumption was that I was thinking about purely dynamic languages? Not really, I was thinking about using static languages with type inference.


I don't understand what point is. Do you agree that Lisp metaprogramming has benefits or do you disagree with that?

If you agree, then what is your point? That we should not use a powerful language because it is not popular? Or that features in popular languages are the only features worth having?

If you disagree, can you elaborate what exactly about Lisp metaprogramming do you not find powerful? That would make your arguments more concrete. Otherwise you are just going around in circles stating the same opinion over and over again which is really only your opinion that many of us don't seem to agree with.


If you're choosing a powerful but "unpopular" language, perhaps a more interesting comparison would be between lisp and, say, haskell.

Sure, you're sacrificing homoiconity and macros, but you get strong static typing and purity etc.


True, but they support entirely different philosophies to software development: the "build a cathedral" (Haskell) versus "create a living organism" (Lisp), to paraphrase an Alan Perlis quote.


> use package repos ...?

If one used a Lisp system at MIT in the 80s, probably.

The typical Lisp Machine at MIT was networked and shared file servers (also with other types of machines). When one used a Lisp Machine on the network, one used a name server (not DNS or NIS, which did not exist at that time) which defined one or more sites. The site information in the name server provided information about machines, services, users, printers, etc. In the 80s the MIT network was already vast, with hundreds of machines. It was basically similar to what Unix systems did with mounted NFS volumes, but here built into the Lisp system. This was before TCP/IP, when Lisp Machines used the CHAOS protocol for networking.

Now, when one wanted to load some software into a Lisp Machine, one used a command like 'Load System FOO', where FOO is the name of the system. The Lisp Machine shared with other machines a central registry of system names, which pointed to locations in the file system. So this would find the system definition, load it, and then load the system components. One could also ask it to load a specific version or a specific patch level - or just the newest version. A system could also have other files, like documentation, C files, etc. There was also a way to write those to tape or an archive format and transfer them that way.

Thus that was a networked repository for libraries and applications. Some of those libraries were shared across different machine types and Lisp dialects. For example, the LOOP feature was a library sharing source code with Maclisp, Zetalisp, NIL, etc., so you could load it from a central location into different Lisp systems - not just Lisp Machines.


"Or did Common Lisp programmers generally write unit tests, have integration test harnesses, use package repos, etc. back in 1995? I somehow doubt that :)"

Richard Waters wrote and made available a nice pair of utilities that I've long used.

http://www.cs.cmu.edu/Groups/AI/util/lang/lisp/code/testing/...

http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/lang/l...

The first is the RT package, which allows definition and execution of unit tests. Many elaborations of this have been written over the years.

The second, COVER, is more impressive: it uses Common Lisp macrology to implement a code coverage tool. Combined with RT you can not only unit test your code, you can confirm the level of branch coverage your unit tests achieve.

(I've since hacked up COVER to enable various extensions, including saving of coverage information for aggregation from different executions, and rollback to enable automated search for minimal coverage-improving inputs. Not available publicly right now, unfortunately.)
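For readers who haven't seen RT, tests look roughly like this (a sketch; the package nickname varies between RT and REGRESSION-TEST across distributions):

```lisp
;; DEFTEST records a named form and its expected value;
;; DO-TESTS runs everything recorded so far and reports failures.
(rt:deftest addition-works
  (+ 1 2)
  3)

;; (rt:do-tests)
```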


>Also, languages are a lot more similar to each other than they used to be, in terms of expressiveness and power. Lisp was a “lightsaber” to the “stones” of the 80’s, now it’s at most a slightly more powerful “machine-gun”, at best.

Please elaborate more and explain.


How many features coming from Lisp, ML, Haskell and academic languages have made it into mainstream languages now? Many. How many were in mainstream languages in the 80’s? A few.

Memory management, generics, etc.


Alright, let's take some list of what can be understood as "mainstream" languages (because, according to other criteria, Lisp is mainstream too).

TIOBE index top 10:

Java, C, C++, Python, C#, Visual Basic.NET, PHP, Javascript, SQL, Ruby.

let's compare to Lisp and Haskell:

- NONE of those langs have the Hindley-Milner type system, type inference, and typeclasses of Haskell.

- even Common Lisp's type system is more sophisticated than almost all langs on the list, allowing for union types.

- all of those langs only allow you to write code that will execute at runtime. CL lets you specify whether code is to be executed at read time, compile time, or run time.

- NONE has an object-oriented system with multiple dispatch or a meta-object system. Unlike Common Lisp.

- None have metaprogramming based on homoiconicity (like Common Lisp), so the only way to get macros is to use a cumbersome AST->AST library, and even then it can only be done at compile time, not runtime.

- Only a very few allow changing the definition of a class or function at runtime, without stopping the running program, and even then it's often a difficult proposition because the language wasn't designed to do this in the first place. Unlike in Common Lisp.

- For the garbage collected languages there, NONE allows you to circumvent or disable the garbage collector, unlike CL.

- For the garbage collected languages there, NONE is faster than CL, and only CL allows the programmer to write Lisp code that reaches C speed, if they wish.

- only a few have built-in support for all of these: integers, arbitrary-precision numbers, rationals, IEEE-compliant floats, complex numbers, and bit vectors. All of which CL brings you by default.
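Two of those bullets in miniature, as a hedged sketch (ID is an invented type name):

```lisp
;; Union type via DEFTYPE: an ID is either an integer or a string.
(deftype id () '(or integer string))

;; (typep 42 'id)     => T
;; (typep "x42" 'id)  => T
;; (typep 4.2 'id)    => NIL

;; Staged execution via EVAL-WHEN: this form runs when the file is
;; compiled, when the compiled file is loaded, and when evaluated.
(eval-when (:compile-toplevel :load-toplevel :execute)
  (defparameter *build-stamp* (get-universal-time)))
```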

Now, you can suggest more languages and I can go on.

There are very few languages that can compete with Coq, Agda, Haskell, Ada/Spark, Common Lisp, Julia and Racket today.

No, there are many features missing. Features that are very relevant. Do you know that many of the design patterns often used in Java are unnecessary in Common Lisp, simply because they are workarounds for things that the Java OOP system lacks but Lisp's OOP system implements?


I do like Lisp, but you should learn the languages you criticize better.

C++ allows for compile time execution, improving since C++11. A feature also shared by Nim, D and Jai.

C# does offer control over how the GC operates, and allows for manual memory management if really required. A feature also supported by D, Modula-3, and Active Oberon.

.NET Expression trees can be manipulated at runtime.

There are GC enabled system programming languages that are as fast as Lisp, like D, Modula-3 or even the new .NET Native toolchain.


Thanks Pascal, i got stuck in C++ before the C++11 improvements.

As for the other languages (D, Oberon), my comparison was to the "popular, mainstream" languages the guy was rooting for.

>There are GC enabled system programming languages that are as fast as Lisp, like D, Modula-3 or even the new .NET Native toolchain.

I wrote "none faster than Lisp", which means "equal to or slower than Lisp".


Indeed, Nim also offers a lot of control over its GC. You can disable it or control precisely when it runs and for how long.


Did you mention conditions and restarts? That's a big deal too, missing in many languages.


No, I didn't. You are right.


"And a programmer in your average mainstream language in 2018 could reasonably expect to be working in the same language in 2018."

Pretty radical claim there, are you sure about that? :)


It’s simple math, really. Let’s say we had 1 million programmers in the 80’s, writing enterprise software costing 1m * 8h * 365d * 10y = too lazy to do the math, let’s say 30 billion dollars. A lot of that software died with the companies using it, so maybe 10 billions’ worth survived, needing today N C programmers, P Cobol programmers, etc. for maintenance.

How do you think the numbers and the timespans look from a time when we have 20-40+ million programmers, working with better tools on systems which are even more tightly coupled with core business functions?

Java will outlive both Cobol and cockroaches :p


"2018...2018"

Obviously you made a typo there. I was just poking fun and wondering what time frame you really had in mind.


2018 - 2038 :)


I didn't want to call out an obvious typo - but then I thought "maybe they are talking about Javascript frameworks" and then I thought "yeah - one can always hope". :-)


"C++ template programming is to Common Lisp macros what IRS tax forms are to poetry."

I will definitely be stealing this quote in the future! Well put.


Other examples...

C++ template programming necessarily followed a very different style before variadic templates were introduced with C++11 than it does now. There were books written before C++11, much of whose content was obsoleted by variadic templates. You do not want to let students get hold of those books and think the old way was the way things have to be done. Compare how boost::python was (is? I haven't dug into it recently) written to pybind11 - night-and-day differences in readability, thanks to variadic templates. That's not a dig at boost::python - it was written with the C++ they had back then, and it had to use Lisp-like list comprehension with tedious, replicated boilerplate code to unpack argument lists.

Then there are raw pointers in C++ - we aren't supposed to use raw pointers anymore. We are supposed to use references and std::unique_ptr<x> and std::shared_ptr<y> and this thing called "move semantics". That stuff sucks for directed graphs and so I switched to compacting, tracing garbage collection for Cando.

C++ is being developed in a way that it retains backward compatibility with existing code - thank goodness for that! But there are styles of programming even for something as staid and venerable as C++ that, while supported, should pass silently into the long, dark night.

Then you have a language like Python, which made sensible but breaking changes between versions 2.x and 3.x. Wow - look at the problems that is still causing. I love Python, but I won't do anything with it that I couldn't rip out root and branch with a few weeks' work. I say that even though Clasp's build system is completely implemented using the very excellent Python-based build system 'waf'.

Then you have Javascript - it seems to change every time I blink. That doesn't mean I will stop using it for our web based Jupyter notebook user interface. Javascript is the only game in that town. I just close my eyes and hold on tight and try not to scream.

Common Lisp isn't like any of those; it grew out of a lot of very careful thinking and was specified around 1984 darn near perfectly. There are some warts (logical pathnames - I'm looking at you), but even its warts are better thought out than some other languages' features. Better implementations and more and better libraries come along - but the language has the same expressiveness, power and permanence it has always had.


Pascal, perl, ASP, every ML, Fortran, Cobol, Eiffel, Modula, all "4GL" languages.


None of them have faded.

Cobol? Seriously, there are probably more Cobol programmers than there are Python programmers.

And C? Most of the world runs on C, and it's an irreplaceable language on non-desktop computers.

And you will be surprised how much Perl code is written everyday.

The root comment is talking about the fad of the year, the kind of language web devs use as the flavor of the year. The current ones are Javascript frameworks. In this case the 'fade' is just the web framework, which probably loses mind share and open-source dev contributions.


Yeah, when I wrote the comment, I kinda guessed that people would point out the zombie-like state of living for these languages. That's only barely being alive, though, and very much faded.

(I did have C in there but edited it out. I don't think that's a dead language yet.)


But isn't Lisp almost as zombie-like as Perl or COBOL? (4GL I'll probably give you...)


There are several implementations of Common Lisp that are still being actively developed.

Because Common Lisp is so easily extended, the ossification of the standard is actually a good thing. The stability of the standard means that code written 30 years ago will still work today. Feature updates aren't required, because any desired feature can be implemented in macros and imported as a library.


Well, yes :) And 4GL languages are actively being used to create new business, so who knows what's dead and what's not.

Perhaps the whole programming field is just a bunch of undead things. Sometimes a bunch of living things stumble upon the field, but they're soon infected...


If by ASP you mean classic Visual Basic, I'll grant you that. And Pascal.

But ML, Eiffel and Modula were never mainstream.

And Perl, Fortran, Cobol probably still have more programmers out there than Common Lisp has :)


And the fewer CL programmers will run circles around P, F, C programmers :-)


So, why don't they? ;)

Why are Viaweb and ITA trotted out decade after decade? If CL is so awesome, why isn't the world filled with awesome stuff implemented in CL?

(To be clear, IMHO it's a huge shame that more powerful languages like CL or Haskell aren't more popular)


Because they won't put in the effort to just pull up and post the customer lists of companies selling LISP commercially. ;)

http://www.lispworks.com/success-stories/index.html

https://franz.com/success/

The companies that use LISP don't talk about it much. I'm not saying secret weapon or anything either. I'm saying they seem to be among the set that just identifies something good for the business, buys/builds it, and solves business problems. They don't write articles about their programming language. What I will say is that the small number of case studies from those two companies describe much more interesting software than what's advertised for many stacks. The kinds of things people use it for corroborate the LISP advocates' claim that it's a favorite for the hardest, constantly-changing problems. It's more remarkable with Allegro, since they charge royalties on top of the licensing and customers still buy their stuff.


Interesting points.

>It's more remarkable with Allegro since they charge royalties on top of the licensing with customers still buying their stuff.

Do you mean that (apart from the licensing) they also charge a percentage of the revenue or profits that their customers make from products they develop using Allegro's products?


It looks a bit different than the last description I saw. Here's the current offering:

https://franz.com/products/licensing/commercial.lhtml

They ain't cheap or simple with the licensing. People are still buying it, though.


Good info, thanks.


Modula-2 had a good developer community in Amiga and MS-DOS systems, at least across a few European countries.


Fortran is still going strong with Fortran 2008. And even better, it's still backwards compatible with Fortran 77.


Critically, though, it's not the same Fortran.

Fortran 2008 is VERY different from FORTRAN 77.


Very good point! I find the structure of modern Fortran very clear (although somewhat verbose compared to, say, Python).


Check out the impressive new built-in parallel features in Fortran 2018 here: https://goo.gl/ZbH4t7 GFortran 8.1 has already implemented the majority of them, and Intel is adding the new features to Intel Fortran 2019. I don't know of any other mainstream language at the moment that has native shared- and distributed-memory parallelism capabilities based on one-sided communications, without recourse to complex, ugly out-of-language MPI library calls.


Perl, ML, Fortran, and Cobol have not faded. You might not like them, you might not use them. But there are tons of them out there, being used in real-world production systems.


I think it's fair to say those languages have "faded".

I would also say Python 2 has "faded" or started to.

Faded doesn't mean it isn't used anymore. It means it has retreated into a niche, is no longer on the short list for new development, and is no longer interesting to most developers.


Your average Python 2 programmer “becomes” a Python 3 programmer after 8 hours of practice. I find your argument invalid :)

Also, Python 2 sees and will probably see more production use than Lisp for at least a few more years :)


Adding a few more to the list, Algol, PL/I, PL/S, NEWP, BLISS, Mesa, PL/8, xBASE, RPG, PL/M, NewtonScript, Modula-2+, Modula-3, Component Pascal, Oberon, Oberon-2, Active Oberon, Zonnon, StarLisp, StarC, Concurrent Pascal, Concurrent C, Forth


Have you tried other Lisps like Racket? If so, what's your take on them and how do you compare them with CL?


Lisp is quite popular at my current workplace. A few popular open source projects published by our organization have been written in Clojure (a dialect of Lisp that runs on JVM and CLR). A few domain specific languages used internally in our organization are also inspired by Lisp.

On a more personal front, I find Lisp to be simple, elegant, and expressive. I use Common Lisp (SBCL) for personal use. Working with Lisp induces a sense of wonder: how simple concepts can be composed elegantly to build complex functions and complex software. It's the same kind of wonder one feels on learning how the simple concepts of Euclid's postulates can be used to prove complex results in geometry, or how Newton's laws of motion can be used to derive intricate and complex concepts in classical physics.

I sometimes wonder why Lisp has not been more popular in the technology industry. Is it the lack of sufficient marketing? Is it the lack of an extensive library ecosystem? I hope Quicklisp will address the concern about library ecosystem for Common Lisp.

I encourage everyone to learn Lisp and, if feasible, write a rudimentary interpreter or compiler for Lisp s-expressions. It's one of those things that can broaden one's horizons in the field of computing.
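That exercise can be surprisingly small. Here is a toy evaluator for arithmetic s-expressions, itself written in Common Lisp (a sketch covering only numbers and a few operators):

```lisp
(defun toy-eval (form)
  "Evaluate a tiny arithmetic s-expression language."
  (cond ((numberp form) form)             ; numbers are self-evaluating
        ((consp form)                     ; (op arg...) applies op
         (let ((args (mapcar #'toy-eval (rest form))))
           (case (first form)
             (+ (apply #'+ args))
             (- (apply #'- args))
             (* (apply #'* args))
             (t (error "unknown operator: ~a" (first form))))))
        (t (error "cannot evaluate: ~a" form))))

;; (toy-eval '(+ 1 (* 2 3)))  => 7
```

Extending it with variables, LET, and LAMBDA is the natural next step, and is where the horizon-broadening really kicks in.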


I'd say several reasons:

1.) A lot of folks have trouble with the abstractness.

2.) A lot of folks think C syntax is how all languages should be.

3.) The Lisp ecosystem is fractured into too many Lisps - SBCL, Clojure, Racket, Allegro, Franz, Picolisp, ABCL, Shen, etc. - so there is some confusion amongst those that are new.

4.) Poor Windows support for SBCL... it literally tells you it is experimental, if I recall correctly. Setup wasn't straightforward.

5.) Tooling is complex, and Emacs is recommended for both SBCL and Clojure, which is arguably more difficult than hitting Run in most modern IDEs.

6.) Multi-core seems to only be in Clojure.

7.) Lack of decent libraries. There is always someone quick to say that this is false and that they have everything you need, but coming from Python and Perl's CPAN, I couldn't disagree more.

8.) Learning resources are very hit or miss. I found it hard to pay attention to many of the top Lisp books, because a lot of the examples, such as building an mp3 database, would be one or two lines of code in Python. Of course they're just trying to show concepts, and terse expression isn't where Lisp shines, but the effect is underwhelming.

9.) Your fellow developers won't use it, and a large company will probably have an IT department that won't let you run anything in production without major pushback unless it's far better known: Java, C#, Python, R, C++, Perl, Ruby, JavaScript, etc. Notice how many people equate Lisp with personal projects?


1: I really don't see how CL or Clojure are any more complicated than, say, Rust or C++. If anything, CL and Clojure are simpler, just different.

3: Most useful libraries are portable between different CL implementations. The choice is really "CL or Clojure or which Scheme implementation?" Not to discredit Picolisp or Shen, but those languages (esp. Shen) feel very much like research languages. Picolisp has more of a convincing productivity story, but doesn't seem like it has the momentum of existing competitors, and I fail to see an advantage in using it. Depending on your goals, the choice between Lisps can be pretty simple.

5: Portacle is helping in this area by making it much easier for a beginner to learn CL without fussing with setting up SLIME and Emacs. Clojure also has an IntelliJ plugin, Cursive. For Scheme, Racket seems like a painless way to start learning.

6: You can easily go multi-core in CL. It isn't as easy as Clojure, but it's definitely easier than C++ or Java (the language).
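To make that concrete, here's a minimal sketch using the de facto standard bordeaux-threads portability library (not part of ANSI CL; assumed available via Quicklisp):

```lisp
;; Sketch: portable threading via bordeaux-threads (Quicklisp library).
(ql:quickload :bordeaux-threads)

(defun parallel-sum (a b)
  ;; Spawn one worker thread to sum A while the main thread sums B,
  ;; then join the worker and combine the results.
  (let ((worker (bt:make-thread (lambda () (reduce #'+ a)))))
    (+ (reduce #'+ b)
       (bt:join-thread worker)))) ; join-thread returns the worker's value

;; (parallel-sum '(1 2 3) '(4 5 6)) => 21
```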

7: Clojure has access to all of Java's libraries. CL does struggle compared to Python in this area.

8: Lisp related books are among the best in CS. SICP, LoL, etc. are timeless.

The Lisp family of languages definitely have room to improve, but the ecosystem is healthier today than 20 years ago, for sure.


>Lisp related books are among the best in CS. SICP, LoL, etc. are timeless.

Lisp books also tend to have the characteristic where you have to find the "right" compiler for the examples to work. I've yet to pick up a book where all the examples "just work".


Land of Lisp worked well for me. Can't remember any major issues. (That said, I think I also used racket for most of the work, so I had to change some things consciously.)

Though, this is a bit of a hard problem for authors. Most code that can fit as snippets or short examples does not have what any code in real projects would have. To that end, the code is literally made mainly for you to try and simulate in your head. Running on a computer is a convenience and should be used, but the goal is to internalize it all. Right?


I wonder if LoL was not referring instead to Let Over Lambda which I'd call more timeless... For Land of Lisp there is some implementation specific code the author uses, but he points that out, points out that there are libraries that make a de facto standard for the behavior, but didn't want to introduce quicklisp. (It's a fun book but not the best if your goal is getting productive in a work environment fast, in which case learning about asdf and quicklisp early on is crucial...)

Lisp books in my experience have been the most reliable for running old code. I can take Lisp code from 1960 and almost entirely unmodified run it in a modern Common Lisp implementation because there's a direct line of heritage. If a book from the 90s says it supports Common Lisp, well hey, barring errata the code still works every time...


That form of stability is one that I really wish modern developers would aspire to. I love that I can still run code in some of the old books. Seems the newer my book, the less likely I am to be able to run its code. :(


"Lisp books also tend to have the characteristic where you have to find the "right" compiler for the examples to work."

Probably true of every programming book more than a couple of years old.


> building an mp3 database would be one line of code or two in python

You mean because of the availability of libraries, so that the code would be something like

  import some_database_library
  print("Look, I have a database:", some_database_library.connect())
?

Otherwise, how do you build a database (or anything involving MP3 files) in a few lines of Python, and how is it so much worse in Lisp?


I mean the half-page of code was basically a built-in dictionary in Python... I felt it was a step backwards. And this is from someone who was very open to Lisp... I still think it has value... just a steep learning curve.


You mean this exercise:

http://www.gigamonkeys.com/book/practical-an-mp3-database.ht...

This is building a rudimentary in-memory database supporting SQL like queries. I didn't realize Python dictionaries support schema definitions and "select" queries?
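For readers who haven't seen the chapter, here's a simplified sketch in the spirit of what it builds (not the book's actual code): records are plists, and `where` builds a predicate for `select`:

```lisp
;; Simplified sketch in the spirit of PCL's mp3-database chapter:
;; rows are plists; WHERE constructs a predicate for SELECT.
(defvar *db* '((:title "Home"  :artist "Dixie Chicks" :rating 9)
               (:title "Roses" :artist "Kathy Mattea" :rating 7)))

(defun select (pred)
  (remove-if-not pred *db*))

(defun where (&key title artist)
  (lambda (row)
    (and (or (null title)  (equal (getf row :title) title))
         (or (null artist) (equal (getf row :artist) artist)))))

;; (select (where :artist "Dixie Chicks"))
;; => ((:TITLE "Home" :ARTIST "Dixie Chicks" :RATING 9))
```

A plain dictionary gives you key lookup; the point of the exercise is composable queries over multiple fields.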


>3.) the lisp ecosystem is fractured into too many lisps like SBCL, Clojure, Racket, Allegro, Franz, Picolisp, ABCL, Shen...etc, so some confusion amongst those that are new

You are mixing things into that list. Common Lisp, Racket, Clojure, and Shen are different languages, serving different purposes, etc.

"Franz", "Allegro", "ABCL", "SBCL" are implementations of Common Lisp, exactly the same language.

ALGOL-family languages are also "fractured" into Algol-68, Pascal, Go, Ada and others, that's not a problem, since they are really different languages that give different benefits/etc.

>"4.) poor windows support for SBCL"

SBCL works just fine on Windows. But if you don't like it, just use CCL (Clozure CL). I'm currently on a Lisp project which runs equally well in SBCL and CCL. CCL is professional-quality software.

>6.) multi-core seems to only be in Clojure

False. There are many libraries to do all sort of concurrency and multiprocessing on Common Lisp.

>8.) learning resources are very hit or miss

At least for Common Lisp, there are books that are arguably some of the best books on programming ever written, like "Practical Common Lisp", or PAIP. For Scheme, "Structure and Interpretation of Computer Programs" is already a high-rated classic.

>notice how many people equate lisp with personal projects?

Except when it's used by Rigetti for quantum computing, or Grammarly for the core server of their core business.


You quoted two companies using Lisp when there are millions of Java devs. That's like a boxer with a record of 1-500. I love Lisp, but c'mon...I guarantee it has far more hobby users than professional users.

ABCL, FRANZ, ALLEGRO, SBCL, CLISP, GNU CL, Clozure might all be common lisp, but they are different implementations with different tools that will confuse a new person and several cost money.

Practical Common Lisp isn't very good in my opinion; that was the example I actually used. SICP is great for a mind-opening experience, but not super useful for learning how to write modern software, which is why the creators dropped it at MIT.

For multi-core I'd rather it be core than a library, but perhaps that is petty of me.

I think Lisp is great, but why it isn't popular shouldn't be a mystery to anyone.


"they are different implementations with different tools that will confuse a new person and several cost money."

Right, just like all of these stop developers from learning Java:

https://en.wikibooks.org/wiki/Java_Programming/Java_IDEs

"SICP is great for a mind-opening experience, but not super useful on how to write modern software and is why the creators dropped it at MIT."

Bullshit.

Anyone who can understand and apply the lessons in SICP will be far more productive developers than someone who just memorized some git commands and knows how to bang their head against Spring Boot until something almost useful eventually comes out.

Outside of those two points, I mostly agree with your general thesis.


How many times have you written a metacircular evaluator in production? How many people code just fine without knowing Maxwell's Laws of Software as one popular blog post describes? Peter Norvig's review on Amazon agrees with you, I just think it is a bit overrated with infinite respect to Norvig as he could probably force choke me with his mind from 5 miles away :)


>You quoted two companies using Lisp when there are millions of Java devs.

So what? I was a Java dev myself. What does this mean?

There are even more millions of Javascript users. What does this mean, really?

>ABCL, FRANZ, ALLEGRO, SBCL, CLISP, GNU CL, Clozure might all be common lisp, but they are different implementations with different tools that will confuse a new person and several cost money.

Fear, Uncertainty and Doubt.

They all implement ANSI Common Lisp.

Right now I'm doing a project (not personal, it is for a company) and it runs on SBCL and CCL without any change. It would probably also run on ABCL straight away. It uses software transactional memory, bridges to C libraries and other stuff that isn't in the ANSI standard, yet it can also be written, very easily, in a portable way.

So, what you write is FUD.

>and several cost money.

More FUD. From your list, only Allegro CL (= Franz) costs money.

>Practical Common Lisp isn't very good in my opinion, that was the example i actually used. SICP is great for a mind-opening experience, but not super useful on how to write modern software and is why the creators dropped it at MIT.

SICP is not for writing modern software; it's for understanding fundamentals. If you develop without having a solid grasp of the fundamentals, you become a "CODE MONKEY" (caps intentional), and this is what MIT is in danger of producing now (I'm not the only one to criticize MIT for dropping SICP...)


My point is simply that niche languages are harder to hire for and find a job in. Most people aren't okay with that. There is a feedback loop here as less people use it and less libraries are written. That doesn't mean the language is inferior to Java in any way, but a new developer has to stay relevant by learning languages and frameworks that people use in-house. This is a major reason it isn't popular. I have 300 people in my IT department. They've all heard of Java & C#, but I've never run into anyone who has even heard of Lisp (maybe that is marketing).


You don't have to hire for Lisp, or any language for that matter. Picking up a language is way easier than fundamental skills (statistics for AI, for example). IIRC Dan Weinreb of ITA fame said they gave their new hires a copy of Practical Common Lisp and two weeks. You know, the book that you dissed earlier in the thread w/o an argument.

> less libraries are written.

Given that Sturgeon was an optimist[0], that is hardly a problem. The only reason most OSS libraries are used is that people don't do the due diligence of vetting their dependencies. There are more than enough libraries of adequate quality for common tasks.

[0]: “And folks, let’s be honest. Sturgeon was an optimist. Way more than 90% of code is crap.” – Al Viro


Once again, you quote isolated incidents when I'm talking about the entire programming industry. I never said it wouldn't work for some people.

It might have been Tarver talking about the Lisp Curse. Lisp is so powerful that everyone kind of does their own thing, while devs of C++ and Java have to band together and make massive libraries. This is a different argument, but I think you're discounting the usefulness of some of the gargantuan libraries and frameworks out there. I'm not saying fundamentals aren't good, but a data scientist doesn't need to know about homoiconicity... they need NumPy/SciPy.


> ABCL, FRANZ, ALLEGRO, SBCL, CLISP, GNU CL, Clozure might all be common lisp, but they are different implementations with different tools

That's better than having different languages with different tools: Python, Ruby, Perl, ... share nothing - though they are in a similar coding domain.


Fair enough argument here, although you could probably still argue that it is easier to set up most of those languages than to use Emacs. That's a non-starter for a lot of devs who aren't open to the long-term benefits.


True. You probably mean GNU Emacs and (not so popular right now) XEmacs.

Even I usually prefer to use other Emacs variants for Lisp programming: Fred, Zmacs, LispWorks, CCL, ... I prefer embedded Emacs variants which are written in Common Lisp.

With Lisps without embedded Emacs variants, usually GNU Emacs / SLIME is the best option. I can understand that this is for people used to other editors (like Eclipse, Intellij, Atom, ...) not very tempting.

GNU Emacs is especially used because that's the most widespread programmable editor, which can be extended in Lisp itself. For a non-lisp programmer that's initially not very important.


Yea, I was talking about GNU Emacs + Slime. I'm with y'all that Lisp is an extremely powerful and frankly astonishing piece of engineering. It just seems to cater more to those on the fringes and those willing to put in the work to achieve enlightenment.

I've read a few Lisp books and have written a bit of Lisp code and understand code is data is code, but haven't put in enough time to really reap the rewards. I think my above posts were taken as an insult to the language rather than some observations over several years of playing in the community. One day I hope to join your ranks, but most of my current needs involve needing lots of built in language constructs for data analysis and scientific needs.


No, I think such observations are valid. I mainly tried to add that even for me, a long time Lisp user, GNU Emacs / SLIME is not the primary tool. There are a few groups and a large and visible group is using SBCL + GNU Emacs + SLIME. But there are other groups.


Ah thanks! I recall you work for one of the commercial lisp vendors. I assume you use their IDE environment that is similar to emacs? How do you describe using it?


I don't work for a vendor, but I use commercial and non-commercial Lisps with other editors.

Typically they are simpler to use, better integrated into the platform, have less features (no games, no latex modes, no org mode, ...), have simpler key commands, are multi-threaded, are directly integrated into the running Common Lisp, don't implement all tools as editor buffers, ...


Thanks for the synopsis as I don't see a lot of information on the Lispworks and Allegro sites.


They have tryout versions.

Editor User Guide of LispWorks:

http://www.lispworks.com/documentation/lw71/EDUG-M/html/edus...

IDE User Guide for LispWorks:

http://www.lispworks.com/documentation/lw71/IDE-M/html/ide-m...


None of them will confuse an old person armed with Vim and a command line. :)

Just they have different IDE's and REPLs.

That's like saying Visual C will confuse someone used to Turbo C. So C is bad.



lparallel[0] is a higher-level abstraction built on threads. I use it for spawning threads which execute OS commands and wait for the results, without blocking the main thread. Works very well.

And anyway, if you want concurrency and/or paralellism, just go for OTP (Erlang, Elixir, LFE).

[0]: https://lparallel.org/overview/
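For reference, the kind of use described above looks roughly like this (a sketch; assumes lparallel is loaded via Quicklisp and a kernel has been set up):

```lisp
;; Sketch: lparallel task submission without blocking the main thread.
(ql:quickload :lparallel)
(setf lparallel:*kernel* (lparallel:make-kernel 4)) ; 4 worker threads

(let ((channel (lparallel:make-channel)))
  ;; Hand the work to a worker thread...
  (lparallel:submit-task channel (lambda () (expt 2 100)))
  ;; ... main thread is free to do other things here ...
  ;; ... then block only when the result is actually needed.
  (lparallel:receive-result channel))

;; Higher-level: parallel map over a sequence.
(lparallel:pmap 'vector (lambda (x) (* x x)) #(1 2 3 4)) ; => #(1 4 9 16)
```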


... or STMX, which provides software transactional memory on Common Lisp in an ultra-easy-to-use way.


I'll definitely take a look at that, thank you!


These things all exist, but a lot of them seem like a library from one author as opposed to what is built-in to Java or .NET and had lots of people and money thrown at it. I'm sure the author is a wizard, but how good can the quality be? I don't have any proof here, but my suspicion is bugs and sparse documentation.


> It's the same kind of sense of wonder one feels when one learns how the simple concepts of Euclid's postulates can be used to prove complex concepts in Geometry or how the Newton's laws of motion can be used to derive intricate and complex concepts in classical physics.

Well, I think you've got your answer right there. Out of 100 people learning about Newton's laws of motion at university, how many are witnessing their intrinsic beauty? Maybe one or two? And I feel like I'm being fairly generous; most just find it boring and just want to pass the test. They never feel that sense of wonder.

I certainly never felt it for Newtonian physics either, but I'll admit that the relationship between linear algebra and matrices made my heart pinch a bit at the time.

Likewise, most people (doing a programming job) are utterly bored by "how simple concepts can be composed elegantly"; they just want to make the damn thing work, close their tickets and go back home. They don't read HN or Lambda the Ultimate any more than your average construction worker reads construction forums, or your average accountant reads accounting forums.


To me it's because what Lisp offers is too conceptual, in a way. Lisp's AST-like faux syntax, freedom of idioms, macros... to most it's a problem more than a feature. I believe a huge number of people prefer the comfort and "magic" of strong grammatical separation. They like treating a language as a set of tools to learn rather than a set of concepts for crafting whatever is possible. It also leverages social structures more: you're the coder, not the language designer, not the compiler writer, etc. People like that sort of division of labor, even though it means you're dependent on these groups, which can slow you down.

For complex problems without a clear solution, Lisp is still used (from what I've read) because the people on these tasks are actually looking to find the right concepts and need a metamaterial rather than a fixed set of bricks.


> I sometimes wonder why Lisp has not been more popular in the technology industry.

I think one of the problems is one of marketing.

The hackers of the 60s wouldn't have cared but there must be a number of people and businesses since who have been put off, consciously or unconsciously, by the language having the same name as a speech impediment.

I'm being entirely serious; I think the language would have been more popular with a cooler name.

Another problem is a perception of Lisp as being an old language. When I tell work colleagues about how great Lisp is, they're put off by it having been around a long time. I tell them that's ridiculous, the /computer/ has been around a long time, should we abandon that? But it is an uphill struggle.


> I'm being entirely serious; I think the language would have been more popular with a cooler name.

How do you explain the lack of popularity of Lisp dialects that do not use "Lisp" in their name, for example, Arc, Racket and Clojure. I know that HN was written in Arc, Arc was written in Racket and Clojure is used in a handful of companies but a far greater number of companies and projects use Python.


Probably because Arc is an incomplete hobby project of Paul Graham that will never be finished now and has only a tiny following of a few people. Yes it runs this site, but not much else. Clojure isn't very easy to learn, and I found the Racket ecosystem too minimal. The name sure doesn't help.


Cool names might be neccessary, but not sufficient, for popularity?


I liked Clojure when I dived into it, but the lack of static typing is a pain, especially once you've got used to the wonderful refactoring and code-intelligence abilities you get by adopting the tools of statically typed programming languages like Java/Go/C#/F#. Also performance!


From the dynamic languages the SBCL compiler is something you might want to try.

The compiler tells you the usual (?) stuff like missing args, wrong named arguments, missing functions, undefined variables, syntax errors, etc.

But Common Lisp also has a (relatively primitive, compared to something like Haskell) type system, and the SBCL compiler can make use of type declarations (for compile-time type checking and for runtime type errors) and also does some form of type inference.

With SBCL you get all the interactive dynamic features and a compiler which you can use to check your code and make it faster. The compiler can for example tell you which operations it could not optimize and why. Then you can decide if it is worth putting some effort into improving the code - for example by adding better type declarations. Common Lisp also lets you inline code or have some data be stack allocated...
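A minimal sketch of what that looks like (the declarations are standard CL; the compile-time warnings and notes are SBCL behavior):

```lisp
;; Declaring types lets SBCL open-code the float arithmetic; with
;; (speed 3) it also emits notes about anything it couldn't optimize.
(defun dot2 (x1 y1 x2 y2)
  (declare (double-float x1 y1 x2 y2)
           (optimize (speed 3)))
  (+ (* x1 x2) (* y1 y2)))

;; (dot2 1.0d0 2.0d0 3.0d0 4.0d0) => 11.0d0
;; At default safety SBCL turns the declarations into runtime checks,
;; so calling (dot2 "a" 1 2 3) signals a TYPE-ERROR rather than
;; silently misbehaving.
```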

Common Lisp also has a sophisticated error system, which allows you to get lots of excellent error reporting and handling.
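For example, the condition system separates detecting an error from deciding how to recover: code lower in the stack can offer restarts, and a handler higher up picks one without unwinding. A minimal sketch:

```lisp
;; PARSE-NUM offers a USE-ZERO restart; the caller decides whether
;; a malformed entry aborts the computation or is treated as 0.
(defun parse-num (s)
  (restart-case (parse-integer s)
    (use-zero () 0)))

(defun sum-line (strings)
  (handler-bind ((error (lambda (c)
                          (declare (ignore c))
                          (invoke-restart 'use-zero))))
    (reduce #'+ (mapcar #'parse-num strings))))

;; (sum-line '("1" "oops" "3")) => 4
```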


Type declarations in Common Lisp are unsafe. Violating a declaration at runtime is UB.

What SBCL is doing to Common Lisp is similar to what C compiler vendors did to C. In order to win at benchmarks, C compiler vendors started to use UB as a license to miscompile optimized code, which got them better benchmarks.

Common Lisp has very little UB, so the C strategy is harder to execute.

SBCL basically first needs to sucker their users into overusing one corner of the language that does have UB, type declarations.

They get their benchmarks, at the cost of saturating the ecosystem with code that overuses UB, basically poisoning the ecosystem.


> poisoning the ecosystem

Please don't take HN threads into programming language flamewar; we've all seen what this leads to elsewhere. It's both possible and much preferable to simply make your points flame-free.

https://news.ycombinator.com/newsguidelines.html


You are right, I misjudged the inflammatory power of my choice of phrasing.


That's not at all what SBCL is doing. Each type declaration is treated as an assertion, and only then it performs optimization. And there is no undefined behavior at all.

And even without type declarations SBCL is actively deriving types and optimizing things that are proved to have a restricted type.

Spreading misinformation like this is what's poisoning the ecosystem.


> Spreading misinformation like this is what's poisoning the ecosystem.

Please don't respond to flamebait by upping the ante. We've all seen what programming language flamewars lead to. Your comment would be much better without the last sentence.

https://news.ycombinator.com/newsguidelines.html


> And there is no undefined behavior at all.

Well, there is. Violating type declarations is UB in Common Lisp, the only thing the standard says about it is "the consequences are undefined". In SBCL, an error will be signalled, but there's no guarantees about what other compilers might do. This could show up if a Lisp programmer who primarily used SBCL was relying on the type system to do some of their input validation for them when portable code would have to actually check it (ie, in their head they might think their program will raise an error if you pass it garbage, when really it just invokes UB if you pass it garbage). I still don't think it's that big of an issue, and mentioning it alongside benchmarks certainly indicates some misunderstanding, but it's not completely baseless.
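Concretely, the portable way to get a guaranteed runtime check is to ask for one explicitly, e.g. with check-type, rather than relying on any implementation's treatment of declarations:

```lisp
;; Portable: CHECK-TYPE is required by the standard to signal a
;; correctable TYPE-ERROR, on every implementation, at any settings.
(defun safe-double (x)
  (check-type x integer "an integer")
  (* 2 x))

;; Unportable: if X is not an integer, this is UB per the standard.
;; SBCL happens to signal an error at default safety; another
;; implementation is free to do something else entirely.
(defun unsafe-double (x)
  (declare (integer x))
  (* 2 x))
```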


It's only undefined in the standard. Most implementations document their behavior somehow.

Generally I run my code with safety 3 and only selected portions might have unsafe code - but with argument checks beforehand in the surrounding code.


That is theoretically true, but what implementation really does that with SAFETY being 3? Claiming that SBCL following the standard is poisoning anything is completely baseless, though.


> what implementation really does that with SAFETY being 3?

LispWorks:

    CL-USER 1 > (declaim (optimize safety))
    NIL
    
    CL-USER 2 > (defun f (x) (declare (fixnum x)) x)
    F
    
    CL-USER 3 > (f "foo")
    "foo"


It needs (optimize safety debug) and the function needs to be compiled (interpreted code doesn't perform any optimizations based on the types).


Sure, but the point is that that exact same code will throw a type error in SBCL. It at least complicates what you need to do before you can say "what implementation doesn't behave like that?".


But what is the point, really? SBCL will disable type checks with safety=0, LW will enable with safety=3, debug=3. Is the point "read your documentation before relying on things"?

Or is the OP's point that SBCL shouldn't exist because it's "poisoning" and "a bad thing"?

That's the problem with such FUD, can't ignore it or somebody might actually believe it, can't just say "nonsense" since that's not convincing.


> What SBCL is doing to Common Lisp is similar to what C compiler vendors did to C. In order to win at benchmarks, C compiler vendors started to use UB as a license to miscompile optimized code, which got them better benchmarks.

I sort of see what you're getting at, but I think what SBCL does isn't really comparable to the C situation. The complaint people have with UB in C is compiler-writers treating it as a licence to do very unintuitive optimisations; it uses it as a justification for being extremely hostile to the programmer and then the defence is "well the standard said the behaviour was undefined, why would you think there's "obvious" code for the compiler to emit here?". In the SBCL situation, the standard doesn't guarantee any behaviour around that feature, and SBCL is using it as a licence to be more helpful to the programmer, so I think most of the issues people might have with the C situation don't really transfer.

For the great majority of programs, the difference doesn't matter either: breaking type declarations being an error (as in SBCL) vs being undefined (as in portable CL) aren't incompatible, as long as your program doesn't rely on a condition being signalled when you violate a type declaration.

I can see how relying on SBCL's interpretation of the standard would cause problems, but when the difference is that SBCL just changed "the behaviour is undefined" to "it is an error", I don't see that introducing a wave of unportable code.

> They get their benchmarks

Type declarations that can't be statically verified transparently degrade to runtime assertions; I don't think that choice was made to do better on benchmarks.

> at the cost of saturating the ecosystem with code that overuses UB, basically poisoning the ecosystem.

I don't think there's that much code out there that relies on errors being signalled when it violates type declarations as a part of normal execution (quick, link a few projects that make use of this), certainly not enough to say that trend poisoned the ecosystem.


You are missing the point of Lisp's type system.

> There has been some confusion about the difference between type checking for the purposes of compiling traditional languages, and type checking for the purposes of ensuring a program's correctness. [0]

Lisp's type system is mainly designed for the former. Abstract data types are useful for the latter.

[0]: http://home.pipeline.com/~hbaker1/TInference.html


Type declarations in Common Lisp support (safety <n>) and (speed <n>) syntax for configuring, you guessed it, safety and speed.

The undefined behavior kicks in when you explicitly defeat safety. (safety 0) (speed 3) means "throw out the life jackets and assume that these declarations are true, making the code as fast as possible".

If you don't have the confidence for this over some piece of code, then just ... don't do that.

Safety is on by default. If some implementation changes that, that is bad.

The C language lacks any means to specify these preferences over a specific scope of code. Some compilers have certain #pragma directives, and beyond that, there are only module-level compiler flags. Those don't specifically give you safety. Usually there is just an optimization level. If optimization is omitted or perhaps specified as -O0 or whatever, that makes code sort of behave more safely in some handwaving manner in that certain "confusing, optimizy things" don't happen.
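Unlike C's translation-unit-level optimization flags, these preferences scope to a single form. A sketch:

```lisp
;; Only the LOCALLY form runs with checks removed; the rest of the
;; program keeps whatever safety settings are in effect globally.
(defun sum-doubles (v)
  (declare (type (simple-array double-float (*)) v))
  (let ((acc 0d0))
    (locally (declare (optimize (speed 3) (safety 0)))
      (dotimes (i (length v) acc)
        (incf acc (aref v i))))))
```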

Also note that Lisp has a defined order of evaluation. A function call like (f (inc i) (inc i)) is not comparable to the C f(++i, ++i), because it performs the left increment, obtains its value, then the right increment, and obtains its value.

Also note that C requires declarations everywhere. Lisp declarations are optional. Before Lisp declarations can be used in a wrong way that causes a malfunction, they have to be used, period.

We can use Lisp declarations to cause an integer overflow, like in C. If we don't use them, the integer type automatically prevents them by switching to bignums.
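Both points are easy to demonstrate at the REPL:

```lisp
;; Defined left-to-right evaluation order: no C-style UB here.
(let ((i 0))
  (list (incf i) (incf i)))  ; => (1 2), on every implementation

;; No silent wraparound without declarations: fixnum arithmetic
;; promotes to bignums and stays exact.
(* most-positive-fixnum 2)   ; => a bignum, the exact product
```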

In C, even the function which is called once, main, is stuffed with declarations and compiled for speed, often with optimization on in actual real-world projects.

The SBCL people would have a long way to go before they screw up anything to the level of C.


> Type declarations in Common Lisp support (safety <n>) and (speed <n>) syntax

To be pedantic, those aren't type declarations. More importantly, the (un)safety of a type declaration isn't predicated on any specific safety/speed settings. The mere use of a type declaration implies that the programmer certifies their accuracy inside its scope; an additional optimize declaration is not required.

It bears repeating: while Common Lisp has "type declarations", they don't behave as they do in statically typed languages. They are certificates, assurances by the programmer to be exploited at will by the implementation.


> To be pedantic, those aren't type declarations.

They are values of optimization qualities and guide the compiler strategy.

> More importantly, the (un)safety of a type declaration isn't predicated on any specific safety/speed settings. The mere use of a type declaration implies that the programmer certifies their accuracy inside its scope;

With an optimize quality 3 for SAFETY a compiler will not reduce safety, even though there are type declarations.

The Common Lisp standard REQUIRES that safety at 3 causes 'safe code':

http://www.lispworks.com/documentation/HyperSpec/Body/03_eaa...

Which usually means full runtime checks in interpreted and compiled code, calls, system calls, ...

> an additional optimize declaration is not required.

Sure you need to have the right compiler qualities set to see those effects of your type declarations.

LispWorks:

    CL-USER 4 > (proclaim '(optimize (safety 3)))
    NIL

    CL-USER 5 > (defun foo (x)
                  (declare (string x))
                  (+ x 2))
    FOO

    CL-USER 6 > (compile 'foo)
    FOO
    NIL
    NIL

    CL-USER 7 > (foo 'bar)

    Error: In + of (BAR 2) arguments should be of type NUMBER.
      1 (continue) Return a value to use.
      2 Supply a new first argument.
      3 (abort) Return to top loop level 0.

    Type :b for backtrace or :c <option number> to proceed.
    Type :bug-form "<subject>" for a bug report template or :? for other options.

    CL-USER 8 : 1 > 
Oh, the LispWorks compiler has ignored the type declaration at safety 3. The runtime checks are still there. The code has the same safety as without a type declaration.


Violation of a type declaration is UB in safe code. SBCL obviously makes use of that fact when inserting assertions.


Undefined behavior gives the implementation the freedom to insert any behavior whatsoever (like the proverbial demons flying out of the nose). An assertion isn't any behavior, it is a form of run-time error checking.

Run-time error checking is what the "safety" optimization option means!

From CLHS:

  Name               Meaning                            
  compilation-speed  speed of the compilation process   
  debug              ease of debugging                  
  safety             run-time error checking            
  space              both code size and run-time space  
  speed              speed of the object code  
(optimize (safety 3)) means "greatest possible run-time error checking". Why wouldn't the implementation insert error checking based on the user's type declaration, when asked to provide the greatest possible run-time error checking?


> An assertion isn't any behavior

It is. For example, a failed assertion inhibits effects that should have occurred before the "actual" type error happens.

The only reason SBCL is able to insert assertions into safe code without giving up conforming implementation status is that, as you say, UB allows anything, and violating a declaration is UB even in safe code.


There isn't necessarily an actual type error! Remember, the declaration provided by the user can be an inaccurate estimate of what the real type constraint is at that program node. The program might in fact require a string there, but the programmer's declaration says integer. So the assertion goes off when a string value occurs, when that would in fact be correct in the absence of the declaration.

This alteration of behavior by the assertion is not an arbitrary choice of behavior; it is a diagnostic behavior, in line with the diagnostic meaning of safety.

ANSI CL does literally say under "Declarations" that "The consequences are undefined if a program violates a declaration or a proclamation". Yet, it's not reasonable to allow any behavior whatsoever if a type declaration is violated by a run-time situation, in safe code. It just makes no sense at all to allow a crash, or random bits being silently flipped, etc. Given that safe code, in the absence of declarations, catches type errors, why would declarations regress from that, in safe code?


If you re-read everything in this subthread, you'll find a lot of argument from seasoned lispers that I can and should ignore the fact that type declarations are unsafe even at the highest safety settings because of "what implementations do."

If this discussion had taken place before SBCL, the logical conclusion from what these seasoned lispers say would have been that I can and should use type declarations in safe code for documentation purposes (say), because implementations ignore them at high safety, e.g. from lispm: "Oh, the LispWorks compiler has ignored the type declaration at safety 3. ... The code has the same safety as without a type declaration."

If I had followed such advice from seasoned lispers, liberally putting type declarations into safe code for documentation (say), SBCL's new behavior would have burned me, causing aborts in perfectly working code just because I got my "documentation" wrong. Some observations:

(1) SBCL would be allowed to burn me because I invoked UB. Nothing else gives them license to break working code in the presence of a wrong declaration, but they have that license, and I can't sue.

(2) I shouldn't have listened to lispm. Risking UB in safe code because most existing implementations wouldn't have burned me (before SBCL) would have been bad advice. Peppering safe code with declarations does nothing to improve safety; risking UB can only ever detract from safety.

(3) As a consequence of (2), encouraging the use of declarations everywhere is wrong. To do so, in the words of PuercoPop, is "missing the point of the Lisp's type system." SBCL is wrong to do it.


> If you re-read everything in this subthread, you'll find a lot of argument from seasoned lispers that I can and should ignore the fact that type declarations are unsafe even at the highest safety settings because of "what implementations do."

The word "safe" in Lisp is tied to run-time checks.

"Less safe" means fewer run-time checks.

If a declaration causes a check to be inserted which signals a condition in code that otherwise would not, that isn't "unsafe"; it is something else. "Unsafe" in Common Lisp specifically means that a check didn't take place which normally would take place.

A monotonic increase in the variety of checks performed at run-time cannot be called "unsafe"; that amounts to an incorrect use of the ANSI document's defined term.

You can say that adding declarations to a program is "risky": it creates the risk that conditions will be signaled.

Those extra behaviors are in fact allowed because the program has violated a declaration, which is undefined behavior. That is a very general statement in the standard which is not expected to have malicious interpretations. Declarations are part of the program and can change its behavior; it is not spelled out in detail how, but the optimization parameters like safety and speed have obvious interpretations.

If a declaration is processed for safe code, and a behavior is added other than a diagnostic behavior, that could be characterized as malicious. For instance, suppose the program abruptly stops, without any diagnostic, so that it is not known why. That would not be acceptable. It would not be acceptable for a variable to be mysteriously altered, and for computation to continue, so that a problem occurs later which is nigh impossible to trace back to the violated declaration.

If you think the implementors are malicious, use something else. Malice could be perpetrated in almost any area of an implementation.

Think about it. Suppose you're hoping to use declarations to speed up code. You add the declarations and keep the code safe at first. If declarations are completely ignored, you're not getting any help! The code is working exactly as before and then when you drop safety and add speed, it fails. It fails not in nice ways, but catastrophically, due to reasons that could have been caught had the declarations been processed in safe mode.


> type declarations are unsafe even at the highest safety settings because of "what implementations do."

Type declarations are not unsafe at highest safety settings in most Common Lisp implementations.

Personally I write code for real actual implementations and their documented behavior - and not just for a spec. They implement the pragmatic part of a programming language and extend it with various features. The CL spec for example says nothing about GC - which would make memory allocation 'unsafe' if we follow your argument - fortunately implementations provide GCs. Similarly, implementations provide a mode where, at safety = 3, full runtime type checks are enabled.

I for example sometimes exploit the fact that parts of an implementation use CLOS for built-in functionality where the spec does not require the use of CLOS (streams are an example) - knowing that this code will not run in some - often also uninteresting for me - implementations.

> SBCL's new behavior would have burned me, causing aborts in perfectly working code just because I got my "documentation" wrong.

That's true, SBCL tells you that your documentation is wrong. Which is a good thing. Somebody would read your code and would be confused by your wrong type declarations.

Actually: wrong type declarations for optimization purposes are the real danger. SBCL helps to find those problems.

> encouraging the use of declarations everywhere is wrong. To do so, in the words of PuercoPop, is "missing the point of the Lisp's type system." SBCL is wrong to do it.

That's not what SBCL does. SBCL allows you to add declarations. But also SBCL does type inference, so types get propagated without the need to type everything. Most better Lisp compilers do forms of type inference. I also gave you an example where SBCL takes advantage of method argument lists - thus you don't need to add type declarations, since the method already tells which class the argument is of.

Advice: follow the spec, but understand the broader Common Lisp tradition codified in its implementations.


Exactly. Most implementations will ignore type declarations in safe code. Adding or removing them has no effect.

SBCL honors type declarations. Thus it is defined behavior. SBCL defines it.

I propose to give up the irrational fear of added services provided by SBCL and actually use it as an additional tool.


SBCL checks type declarations at compile time and at runtime.

The default fully safe code in SBCL is already fast enough for many cases.

You need to check the manual of SBCL sometime.

http://sbcl.org/manual/index.html#Handling-of-Types


Thanks. Seems my assessment and SBCL's self-description match perfectly, as they explicitly aim to "reward the use of type declarations throughout development" and exhort users to "always declare the types of function arguments and structure slots as precisely as possible".

They do want their users to use one of Common Lisp's least safe features pervasively.

If the point was to mine assertions for type information, they could have encouraged the use of assertions. But they don't. They encourage the use of declarations, which is entirely not the same thing, and it's a bad thing for the Common Lisp source code ecosystem, even if not for the SBCL ecosystem. Hurray for SBCL.


> They do want their users to use one of Common Lisp's least safe features pervasively

It's not an unsafe Common Lisp feature. The standard says nothing about its unsafeness. Only implementations may be unsafe. The Common Lisp standard says nothing about how implementations deal with type declarations.

There is a wide range of behavior among Common Lisp implementations dealing with type declarations. Some will do nothing with type declarations - thus it is not more or less safe if one declares types.

Some will add more runtime checks when types are declared. Thus the code is MORE safe at runtime.

Some compilers may add less runtime checks and will create specialized code -> less safe.

SBCL provides several ways to deal with that depending on compiler settings.

That SBCL's advanced features are a disadvantage is nonsense.


You skipped the important part:

> If the point was to mine assertions for type information, they could have encouraged the use of assertions.

With that in mind:

> Some will add more runtime checks when types are declared. Thus the code is MORE safe at runtime.

Users can portably and safely use assertions if they want that behavior. Encouraging the use of declarations instead is neither portable nor safe since, as you said:

> Some compilers may add less runtime checks and will create specialized code -> less safe.
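
To make the contrast concrete (a sketch; the function here is a hypothetical example):

    ;; Portable and safe: CHECK-TYPE signals a correctable TYPE-ERROR
    ;; on mismatch in every conforming implementation, at any safety level.
    (defun foo-checked (s)
      (check-type s string)
      (concatenate 'string s "bar"))

    ;; Implementation-dependent: violating this declaration is UB;
    ;; a compiler may check it, ignore it, or specialize code around it.
    (defun foo-declared (s)
      (declare (type string s))
      (concatenate 'string s "bar"))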


There are several misconceptions about how people use Common Lisp.

> Users can portably and safely use assertions if they want that behavior

The point is that SBCL can check both at compile-time and at runtime - which the others (with the exceptions of CMUCL and Scieneer CL) can't.

    * (defun foo (a) (declare (string a)) (+ a 10))
    ; in: DEFUN FOO
    ;     (+ A 10)
    ; 
    ; caught WARNING:
    ;   Derived type of A is
    ;     (VALUES STRING &OPTIONAL),
    ;   conflicting with its asserted type
    ;     NUMBER.
    ;   See also:
    ;     The SBCL Manual, Node "Handling of Types"
    ; 
    ; compilation unit finished
    ;   caught 1 WARNING condition

    FOO
That means SBCL tells me already at compile-time that there is a problem in my code. That's something which a development environment can exploit. I get a list of compiler warnings and can then fix my code - without the need to run it and to go into a break - and without the need to have a test case.

Now if I fix that error and run that code in my other implementation - that problem is also fixed there.

Now you assume that adding an assertion would have had a benefit for the other implementation, since it would then create a runtime assertion violation if something other than a string is provided.

The thing is: this is usually not done. Nobody writes code with assertions everywhere in Common Lisp.

What a Lisp developer really does: if we need to provide a runtime check, then we write a CLOS method.

Thus by default we encourage developers to use CLOS:

    CL-USER 23 > (defmethod foo ((s string))
                    (concatenate 'string s "bar")) 
    #<STANDARD-METHOD FOO NIL (STRING) 4020259F8B>

    CL-USER 24 > (foo 3)

    Error: No applicable methods for #<STANDARD-GENERIC-FUNCTION FOO 406000173C> with args (3)
      1 (continue) Call #<STANDARD-GENERIC-FUNCTION FOO 406000173C> again
      2 (abort) Return to top loop level 0.

    Type :b for backtrace or :c <option number> to proceed.
    Type :bug-form "<subject>" for a bug report template or :? for other options.
> Some compilers may add less runtime checks and will create specialized code -> less safe.

There is another misconception. Nobody forces me to use those compilers or use them in an unsafe mode.

The choice of compilers in Common Lisp is to give the user different tools for different situations. It does not force me to run my code containing type declarations in an unsafe mode - which usually is controlled by compiler switches: OPTIMIZE qualities for SAFETY, DEBUG and SPEED.

If I set safety to 3, most compilers will not create unsafe code - even though there are type declarations.

If we have a type violation in a CLOS method, SBCL even catches that:

    * (defmethod baz ((s string)) (+ s 10))
    ; in: DEFMETHOD BAZ (STRING)
    ;     (+ S 10)
    ; 
    ; caught WARNING:
    ;   Derived type of S is
    ;     (VALUES STRING &OPTIONAL),
    ;   conflicting with its asserted type
    ;     NUMBER.
    ;   See also:
    ;     The SBCL Manual, Node "Handling of Types"
    ; 
    ; compilation unit finished
    ;   caught 1 WARNING condition
    WARNING: Implicitly creating new generic function COMMON-LISP-USER::BAZ.

    #<STANDARD-METHOD COMMON-LISP-USER::BAZ (STRING) {5132DEA1}>
The developer then will fix that code. When it then runs on another Lisp -> benefit.

So my advice usually is: even if one develops for another Lisp implementation, additionally use SBCL to check your code.


>Type declarations in Common Lisp are unsafe. Violating a declaration at runtime is UB. What SBCL is doing to Common Lisp is similar to what C compiler vendors did to C. In order to win at benchmarks, C compiler vendors started to use UB as a license to miscompile optimized code, which got them better benchmarks.

Fear, uncertainty and doubt.

Type declarations are part of the ANSI Common Lisp standard.

Lisp implementations can decide what to do with type mismatches (thus the UB), but how can this be a problem? Code will still be standards-compliant.


The lack of static typing does not explain why there are so many companies using Python, Ruby and Perl but very few using Lisp. If the lack of static typing was the real cause of the lack of popularity of Lisp then Python, Ruby and Perl also would have been unpopular.


Because each of these dynamic languages offered one feature which was/is the best in the world at the time.

Python today has the best numeric computing libraries (numpy and sisters) which make it the most widely adopted language for scientific computing, ML, etc.

Ruby had Rails - the web framework that trail-blazed convention over configuration and rapid productivity and was considered leading just a few years ago. Ruby's popularity has waned since then as performance and scalability have become far more critical than plain productivity alone.

Perl had the best regex features and reporting facilities fifteen-plus years ago. It has lost its crown and has been declining rapidly since Perl 6 took forever and is basically a completely different language.


Python enjoyed a whole lot of popularity before the ML trend took off. If that's the best thing going for it, I don't think it really answers the question.


Perl also has CPAN. It had mind-share, as Python seems to have now.


Clojure has Java's static typing and you can also typehint things explicitly.


Clojure 1.9 also introduced spec which allows for more aggressive type restrictions among other things.


There is TypedClojure[1] for static typing in Clojure, but I am not quite sure how good the tooling is, e.g. refactoring, code-intelligence abilities, etc...

[1]http://typedclojure.org


> I sometimes wonder why Lisp has not been more popular in the technology industry.

One major reason was hardware.

Lisp might have started on IBM mainframes, but the development environments from Xerox and the Lisp Machines that followed were quite expensive for single developers, back when the industry was heavily focused on bringing down the costs of time-sharing services, which were owned mostly by mainframes, VAX and UNIX systems.

So when the workstation market started, Lisp machines had to fight against UNIX workstations, which, while quite expensive themselves, were still cheaper than the Lisp ones.

So the price, coupled with misunderstanding of Lisp by more regular developers busy with Algol, Fortran, PL/I and such, and lots of management mistakes, doomed it.

Also quite important: 8 and 16 bit home computers were not yet able to deal with the hardware requirements of Lisp environments.

Note that this also affected C++. Lucid pivoted from Lisp Machines to C++, offering a Lisp like development environment that only now we are getting back thanks to LLVM and efforts from the major IDE vendors.

Nowadays we have pocket computers more powerful than SGIs.


>Also quite important, 8 and 16 bit home computers were not yet able to deal with the hardware requirements of Lisp enviroments.

There was a Lisp called XLisp for PCs early on, but it was buggy, IIRC. Had tried it out a bit. Just googled, here is a link about it from later on: http://www.xlisp.org/


Lisp ran on the first UNIX workstations too, Franz Lisp ran fine on even the earliest Sun machines.

Lucid pivoted from Lisp on UNIX to C++ on UNIX, they didn't have anything to do with Lisp Machines.


Correct, but it wasn't part of the platform tooling, requiring an additional expense - with a price tag which, in the 80s computing world, very few companies were willing to shell out.

Yep, I always get that part of Lucid's history wrong.


> wasn't part of the platform tooling

I think Lisp did (and to a lesser extent still does) suffer from the "whole system syndrome" that also affected Smalltalk; I'm still not sure that I'd find it comfortable to marry small Lisp programs with a shell pipeline, or convenient to work with sockets, files or even databases in a familiar way from within Common Lisp. Nor would I know how to easily share such code (although Quicklisp has made strides here).

That is to say, you could probably write ansible/puppet in common lisp - but I'm not sure if you should.

Certain special programs, like Unison (in OCaml)[1], combine breaking new ground with doing something a little complicated - and justify the use of a powerful language. Other examples might be Pandoc (Haskell) or GNU Guix (Scheme software/OS manager).

Perhaps it's wrong to throw Scheme under the bus for Common Lisp being too full of features.

But my (admittedly superficial) impression is that even "pragmatic" Schemes tend to be difficult to integrate as system glue; that means both on a single platform (use for scripting, piping files, writing config files, starting/stopping services - running OS commands) and cross-platform (either distribution of artifacts for different systems, like an exe for Windows and ELF binaries for Linux and the BSDs - or readily available runtimes that work well on multiple platforms).

It may well be that many of these issues are fixed today - but the impression remains that SBCL is a poor fit as a systems language for writing parts that are easy to integrate well. Both if you want a deb/msi package of a GUI app, and if you want to drop some binaries in /usr/local/bin to call snippets of code that work well with the shell.

[ed: What is funny/sad is that java kept almost everything that was bad with lisp/smalltalk, and very few of the really good parts. But they introduced a more familiar api for files, streams etc - and managed to push for the jvm to be installed everywhere.]

[1] http://www.cis.upenn.edu/~bcpierce/unison/index.html


> I think lisp did (to a lesser extent still do) suffer from the "whole system syndrome"

There are a bunch of problems with Lisp for systems/application programming:

  * dynamic typing and dealing with runtime errors
  * garbage collection vs. real-time response
  * amount of memory used is usually higher for GCed systems
  * interoperability with C code and code in other languages
  * abstraction mismatch between C code and Lisp code.
  * the need for a foreign function interface to inter-operate with C code
  * the integration with C++
  * the minimum size of a program and how many programs can be started
  * sharing much of the runtime and library
When people started to write lots of Lisp code (in the 80s), the question of delivery came up. There were a bunch of responses to that:

  * don't run systems and applications in Lisp,
    for example rewrite the Lisp applications into C++
    (smaller footprint, possibly faster execution, ...)
  * develop new Lisp-like programming languages,
    but which are more efficient to implement.
    Example: Apple's Dylan or see Dynace ( https://github.com/blakemcbride/Dynace )
  * develop special Lisp implementations which address
    the above issues and are specialized in application
    delivery
Let's look at the latter approach.

Germany had for example the Apply project, which created the CLICC compiler ( https://www.informatik.uni-kiel.de/~wg/clicc.html ). CLICC is a whole-program compiler for a restricted subset of Common Lisp. mocl is based on CLICC and was/is a commercial CL compiler for Mac OS X, Android and iOS.

There are a bunch of similar compilers (ThinLisp is one: https://www.cliki.net/ThinLisp ). Many years ago Oracle also bought a Lisp compiler (which was very expensive for users, IIRC in the range of $100k), which actually generated 'maintainable' C code.

Another interesting system is WCL: https://github.com/wadehennessey/wcl WCL allows one to use Common Lisp as a shared library (mostly). Each Lisp application starts with tiny RAM usage and tiny application size. This allows one to run possibly hundreds of smallish Lisp applications at the same time.

Eclipse was a Common Lisp which had direct integration with C and compilation to C, which made something like a foreign function interface unnecessary: https://github.com/blakemcbride/eclipse-lisp

Generally these tools are often 'old', not widely used and not many applications have been written with them.


I was rather excited a while back when I discovered ISO Lisp / Standard Lisp / EuLisp - a few standards newer than Common Lisp that tried to extract a sane subset of CLOS and some other CL features (while trying to avoid cruft and overlapping functionality). Unfortunately, there didn't appear to be many viable implementations; one French one that was commercially licensed, and not much else that I found at the time.

But it looked a lot like the good parts of cl (pragmatic standard) with the good parts of scheme (small, but not minimal).

At least on paper. I guess the fact that there are a lot of viable CL systems and not really any ISLISP systems is telling...

https://en.m.wikipedia.org/wiki/ISLISP


ISLISP was defined/implemented mostly outside of the US (exception: Kent Pitman, IIRC) and mostly by work from Europe and Japan. But that funding dried up long ago. One of its purposes was to have a Lisp standard outside of the USA, not controlled by US standards bodies and US vendors/users.

It's too much like Common Lisp and the standard itself is not bringing exciting stuff. It's nice in itself, but not really needed.

Can't remember anything in the language which really addresses the delivery topic - though I've never read the spec in depth.


Actually Lisp systems were no more expensive than other advanced language implementations. But one needed a bunch of expensive RAM.

At that time there were already free implementations of Common Lisp for Unix: AKCL (later GCL), CMUCL, CLISP, XLISP, ... The IDE for those was usually GNU Emacs or XEmacs with ILISP.

Windows had less choice for free implementations, but one used the commercial implementations anyway and some were not that expensive.

On the Mac everyone used Macintosh Common Lisp - which at some point was owned and sold by Apple itself - not very expensive.


Franz Lisp was free.


> I sometimes wonder why Lisp has not been more popular in the technology industry.

Larry Wall said that a language should make the easy things easy, and the hard things possible. I think the problem is that Lisp doesn't make the easy things easy.

Before you tar and feather me, hear me out. There are two different ways in which Lisp doesn't make the easy things easy.

First, syntax. "But it's easy! In fact, it's the easiest!" you reply. But for the vast majority of programmers, Lisp syntax is not easy. You could argue that they just need to be trained. But the syntax is different enough that they are reluctant to try.

Second, installation and package management. This is part of what is now expected to be easy. If Lisp doesn't have an install and package management that is as easy as, say, Perl, that's a problem that will turn people off to Lisp.


>First, syntax. "But it's easy! In fact, it's the easiest!" you reply. But for the vast majority of programmers, Lisp syntax is not easy. You could argue that they just need to be trained. But the syntax is different enough that they are reluctant to try.

Could be more of NIH syndrome, since most of them would have learned another language first, which had an Algol-family or other-than-Lisp syntax (IMO). And regarding "You could argue that they just need to be trained", practice is also needed in order to feel comfortable with it.


Don't have the link handy, but I think I read somewhere that someone counted the characters or punctuation (things like parens vs. braces, etc.) for some code in Lisp vs. C-style languages, and showed that (at least for some examples) the Lisp code actually took fewer characters (or less punctuation) in total.


Another factor could be familiarity: by now people are so used to seeing C-like syntax that anything else automatically feels difficult, demanding a huge investment of time to learn.

Also if most developers have to learn lisp, there have to be enough jobs.


Right, good point. Sort of a chicken-or-egg situation.

Also, this reminds me of Paul Graham's famous essay, Beating the Averages [1], about why and how he and his co-founders used Lisp at their startup Viaweb, which was later acquired by Yahoo! for ~40 million USD, IIRC.

[1] http://paulgraham.com/avg.html

I think it was this essay, along with his other one about the first summer school for founders (which later became Y Combinator), that attracted the attention of a lot of developers (some of whom later became successful YC applicants) and helped both YC and HN take off.


>for the vast majority of programmers, Lisp syntax is not easy

I suggest that learning Lisp's syntax is easier than learning C-like syntax. There is learning involved, however, and that undoubtedly keeps people from lisp. But s-expressions are the secret sauce; there is no modern lisp without them.

>installation and package management

This is basically a solved problem at this point. Clojure has both Lein and Boot to handle dependencies and building. CL has quicklisp (as easy as `(ql:quickload :my-system)` for installation) and asdf for "building" and system definition.

When it comes down to solving problems, I feel that interactive development (with SLIME), Quicklisp, and ASDF make me extremely productive, and the easy things are easy.
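
For instance, a minimal ASDF system definition looks like this (the system and file names here are hypothetical):

    ;; my-system.asd
    (asdf:defsystem "my-system"
      :depends-on ("alexandria")       ; dependencies fetched via Quicklisp
      :components ((:file "main")))

    ;; then, at the REPL:
    ;; (ql:quickload :my-system)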


Lisp's syntax is indeed easier to learn, but the vast majority of programmers have made it clear that they DO NOT want to READ code in S-expression format. And you typically have to read more code than you write.

I created 3 layered notations to keep Lisp's capabilities (including macros and homoiconicity) while making it easier for normal programmers to read. Instead of "getting rid" of S-expressions, I think adding just a few standard abbreviations to the Lisp reader makes Lisp more acceptable to more developers. See: https://sourceforge.net/projects/readable/


>the vast majority of programmers have made it clear that they DO NOT want to READ code in S-expression format

I don't think this is true. I don't know of many people (who actually use Lisp -- meaning they took the time to read a book and build a smallish, mostly complete project) who are actually bothered by reading s-expressions. They do take some getting used to, but I really don't see the problem. s-expressions scare newcomers, but eventually it isn't an issue. It's just different.


The vast majority of programmers make it clear that they don't want to use anything but one of the several popular languages du jour, regardless of what that language looks like, and of what any other language looks like.

There is hardly anything familiar to complete neophytes in any programming language. Beyond simple arithmetic expressions like a * b + c (where the use of asterisk for multiplication is already unfamiliar) and function notation f(x, y), everything else is new.

Some languages with awful syntax have enjoyed popularity.

If you clone the salient features of a popular language to make a new language, you will not automatically end up with a popular language. "Off brand" versions of popular languages largely languish in unpopularity.


Lisp developers haven't adopted those syntax variants. There are some Lisp derived languages with non s-expression syntax (Logo, Dylan, RLISP, SKILL, ...) but the adoption is limited among Lisp programmers.


> for the vast majority of programmers ...

[me and a few of my friends] ...


>I sometimes wonder why Lisp has not been more popular in the technology industry. Is it the lack of sufficient marketing?

In the late 70s, cheap computers (IMSAI, Altair) were almost unable to run Lisp for useful purposes. Too little memory.

Minicomputers (DEC PDPs, etc.) were able to run full Lisp implementations, but it worked slower than using other languages.

Lisp Machines (MIT CONS, etc) were able to run Lisp fast but those were dedicated, specialized hardware. And expensive.

Enter the late 80s: Lisp ran great on personal computers, but implementations AFAIK were mostly commercial, so reserved for big budgets. I'd say Common Lisp did have success in industry, used for many things (3D, CAD/CAM, simulation, etc).

Meanwhile the rest of the world was on Pascal, C, and C++.

Nowadays things are different, there are many Common Lisp implementations that are free and are good (SBCL and CCL, for example, are lightning fast).

But additionally, it is difficult to understand what advantages Lisp would bring. For this, the developer would have to grok (completely understand) the enormous value of metaprogramming, and to understand as well how flexible CLOS is.
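
As a small taste of the metaprogramming in question: a new control construct is a few lines of macro. WHILE is not part of standard CL, but a user can add it as if it were:

    (defmacro while (test &body body)
      `(loop (unless ,test (return))
             ,@body))

    ;; (let ((n 0)) (while (< n 3) (print n) (incf n)))  ; prints 0, 1, 2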


> but it worked slower than using other languages

minus Maclisp

> Enter the late 80s; Lisp ran great on personal computers but implementations AFAIK were mostly commercial, so reserved to big budgets.

CLISP, CMUCL, AKCL, XLisp, ... all were no cost.

> SBCL and CCL

SBCL is a fork of CMUCL, which was started in 1981 as Spice Lisp. That was always no cost, and several other projects took code and documentation from CMUCL.

CCL is a free version of MCL - which was an affordable commercial Lisp for Macs in the late 80s.


Thanks for the corrections, R.


"I sometimes wonder why Lisp has not been more popular in the technology industry."

One historical reason:

https://en.wikipedia.org/wiki/AI_winter#The_collapse_of_the_...


> I sometimes wonder why Lisp has not been more popular in the technology industry. Is it the lack of sufficient marketing? Is it the lack of an extensive library ecosystem?

Data-point of one here, but I was instantly turned off by the conflation of the empty-list, nil, and boolean-false. I could tell instantly that I'd have a hard time interfacing with data from other real-world systems, so I decided to skip CL and stick to Clojure and various Schemes for my lispy kicks.


Your guess was wrong; there is no such hard time. There is no difficulty in precisely mapping data from the "real world" onto Lisp data structures, in both directions.

If it doesn't have nil that is also false and the empty list, it is not a Lisp.

The conflation is a stroke of genius. I was immediately attracted to Lisp because of its clever handling of lists and booleans. Also how nil is a symbol, but () means the same thing as an alternative spelling.

People who take these things out are imbeciles who are mutilating their language to the point it shouldn't be called Lisp.

How nil works in Lisp has a tremendous virtue: it blows away a lot of useless verbiage from code that manipulates code.

Ancient Lisp in the 1960's didn't allow car and cdr on the empty list nil. The Interlisp people realized that it was an inconvenience to always have to test consp before using car or cdr so they turned the functions polymorphic, allowing (car nil) -> nil, and (cdr nil) -> nil. According to the HOPL History of Lisp paper, the MacLisp and Interlisp people held a conference. They didn't agree on some things, but MacLisp adopted (car nil) -> nil and (cdr nil) -> nil from Interlisp. It's a compelling idea; taking it out of a supposed Lisp dialect is a counterproductive regression.
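
The behavior in question, at any Common Lisp REPL:

    (car nil)          ; => NIL
    (cdr nil)          ; => NIL
    (eq nil '())       ; => T   - the symbol NIL and () are the same object
    (if '() :yes :no)  ; => :NO - the empty list is false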


> If it doesn't have nil that is also false and the empty list, it is not a Lisp.

Scheme is generally considered a Lisp.


[by whom?]

Not by me. I'm a skilled Lisp programmer, yet Scheme isn't usable to me at the rudimentary coding level using the core language. There are stumbling blocks at every turn against straightforward Lisp coding. How can it be a Lisp?

I can't even rely on Scheme to evaluate the function arguments left to right. Many forms have an "undefined" return value; in Lisp, everything returns a defined value, as a fundamental tenet! Remember Alan Perlis: "Lisp programmers know the value of everything, but the cost of nothing." Well, Scheme programmers do not in fact know the value of everything; it is often undefined! Might Scheme not be a dialect of C, in fact? The expression f(i++, i++) is directly translatable from C to Scheme, undefinedness of behavior intact!

"Lisp" has to do with the little design details, not superficial resemblance.

Calling every language with parentheses a Lisp is like calling every web browser "Netscape". Chrome is Google Netscape, Firefox is Mozilla Netscape; Explorer is Microsoft Netscape. How can they all not be "Netscape"? They run Javascript, process HTML and CSS, fetch URLs and render content.

Also, MS-DOS and Windows are Unix! Proof: their filesystem design has a directory structure in which "." means this directory and ".." means parent. There is a "cd" command in the command interpreter, and slash is accepted as a path separator such that the leading slash indicates an absolute path. Piping between processes is done with |. Gee, how can DOS and Windows not be Unix?

Superficial resemblances mean that we can dilute any name we want, as thinly as we want, to the point that it means nothing.


Me (GP): Scheme is generally considered a Lisp.

Me (now, after reading P and other responses): I was wrong (or at least half-wrong) about this being generally accepted. I was not aware that I've been in a "Scheme is a Lisp" bubble.

EDIT: When I was working in Lisp (early to mid nineties), ANSI Common Lisp was relatively recent, and many of my co-workers[1] — a couple of whom in fact were members of the X3J13 committee — had come from working in and implementing a variety of earlier dialects. Maybe that has something to do with it.

[2] looks like a good summary of the current divide. [3] and [4] represent only one side of it.

[1] http://amigos.rdsathene.org/other/prefix-dylan/book.annotate...

[2] http://wiki.c2.com/?IsSchemeLisp

[3] https://en.wikipedia.org/wiki/Lisp_(programming_language)?wp...

[4] https://en.wikipedia.org/wiki/Scheme_(programming_language)?...


Regarding that C2 page.

My view isn't that Lisp is a coveted brand name, but that it should have a reasonably precise meaning. Abusing "Lisp" has created confusion.

Here is one problem: people come across some language which are called some kind of Lisp. They have some sort of experience that didn't sit well with them. Then, because of a human reasoning process called "guilt by association", they attribute the bad experience to everything that is called "Lisp".

It must all be the same, right? It's called Lisp!

The result is that if I make a language that is some kind of Lisp, I can't just present it to the users to enjoy; I have to work at dislodging their preconceived notions about what that language is, based on what they believe "Lisp" to be from some prior experiences.

Please leave the "Lisp" word for those of us who earnestly implement something that has the important attributes of the original, so that over time, the meaning of the word will be narrowed and sharpened, regaining some of its discerning power.

It should be the case that if someone uses a language that is some kind of Lisp, they should be able to program in it straight away, if they have experience with any other language that is some kind of Lisp.

They should be able to define an elementary function, such as a recursive Fibonacci or whatever, without having to study the reference manual of the different Lisp.

If someone comes to me saying they learned some kind of Lisp, but then they are confused by nil in my Lisp, that is unfortunate. I put a bona fide classic Lisp feature into my Lisp, which baffles someone who thinks they had previously studied some kind of "Lisp"! How stupid is that?

Unhelpful, to say the least, and confusing.

What is the take-away message for that person? How can it possibly be anything other than: "Lisp is fragmented! If I learn something completely rudimentary under one Lisp, like how Boolean conditions work, I have to un-learn and re-learn it differently under something else, also calling itself Lisp! This is like Linux distros, only 100X worse! Phooey!"


> What is the take-away message for that person? How can it possibly be anything other than: "Lisp is fragmented! […]”

When I last worked in Lisp (in the 90s), "Lisp" wasn't just fragmented. The word described a family of languages that weren't even unified around Lisp-1 versus Lisp-2, let alone Flavors vs CLOS, etc. If you wanted to be clear about which one you meant, you'd say MacLISP, or InterLISP, or Common Lisp, or Emacs Lisp, or something.

Hence my question in a sibling comment: does Lisp now mean, in general or in your community, Common Lisp?


Look, you need a CS degree or two to even understand what "family of languages" means, first of all. It's not helpful when facing toward non-users of Lisp, who are relatively new programmers.

Less "family" nonsense, and more solid definitions. Problem with family is that drunken Uncle Al who comes over and sleeps on your sofa for three weeks is also "family". Al doesn't even understand that empty list is false, like everyone in this household was raised to; kick the lamer out!

But if Lisp means only "Common Lisp", that is very unhelpful and stifling.

That would be like "C" only meaning "ISO C 2011", excluding concepts such as "GNU C" or "Microsoft Visual C", or "C90" (a superseded ISO language, no longer C now).

There has to be some flexibility: a sweet spot between useless rigidity and scatter-brained dilution of meaning.


"Part of the culture of Schemers is to broadly interpret the term "Lisp." It's part of the culture of Common Lispers to wonder what is going on in the culture of Schemers. :-)"


That's right. Since I'm not in the culture of Schemers, I don't care what they consider Lisp. Every Tom, Dick and Harry clamors for his demented, non-Lisp programming language to be considered Lisp in order to create the appearance of a relationship to the associated allure and mystique. Ruby? Why, that's "MatzLisp".


> "Lisp" has to do with the little design details, not superficial resemblance.

Would you consider LISP 1.5, MacLISP, InterLISP, etc. to be Lisps? They differ on different design details than Scheme does, but they differ significantly.

Related: does Lisp these days or in your community just mean Common Lisp?


Common Lisp mostly superseded Lisp 1.5 and Maclisp. Interlisp went away - Xerox included Common Lisp into Interlisp-D and later sold the product to a tiny vendor, renaming the product to Medley. Most Interlisp users were switching to Common Lisp - including Xerox. Xerox even was a driving force behind the Common Lisp Object System (CLOS) and its Meta-Object Protocol. Xerox paid for much of the research that went into it. Parts of CLOS were based on ideas from LOOPS for Interlisp-D. The first actual CLOS implementation was called Portable Common Loops (PCL) and was actually developed largely by/at Xerox PARC. Xerox PARC also developed a bunch of research software in Common Lisp.

I would count Emacs Lisp, ISLisp, and some similar languages as mainline Lisps. Scheme is now called Scheme. Racket was renamed from DrScheme (IIRC), to avoid being directly associated with Scheme - since it is now its own language.


Racket was PLT Scheme. DrScheme was its IDE, now called DrRacket.


> Racket was renamed from DrScheme (IIRC), to avoid being directly associated with Scheme.

See: a laudably reasonable behavior.



> Data-point of one here, but I was instantly turned off by the conflation of the empty-list, nil, and boolean-false.

Oddly enough, that’s almost always something I consider a feature, even when I'm interfacing with the outside world. If I’m expecting a list, then NIL is just an empty list; if I'm expecting a value, then distinguishing NIL & false hasn’t caused me trouble. Worst-case, it’s been something I take care of at the mapping level, and don’t worry about anywhere else.

In return, actually using Lisp is really pleasant. Using Scheme tends to be a lot more verbose, because it makes a distinction where it doesn’t really have to.


You can easily change that behavior if you really want to.

(in-package :my-package)

(shadow 'if)

(defmacro if (condition then &optional else) `(cl:if (true? ,condition) ,then ,else))

(defgeneric true? (thing))

Defining appropriate defmethods to make things work the way you want is left as an exercise.
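For instance, a minimal sketch of such methods; this pair merely reproduces the standard behavior, but the point is that each truth rule is now one method you can override:

```lisp
;; Default: any object counts as true.
(defmethod true? ((thing t)) t)
;; NIL (the only instance of class NULL) stays false.
(defmethod true? ((thing null)) nil)
```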


However, wanting to is rather a problem for the psychiatrist's couch.


Why? I actually agree with the OP that conflating the empty list and the boolean false value is a mistake. (I don't think it's enough of a big deal for me to want to change it, but in a perfect world I do think it is better to distinguish them.)


Isn't that true in Python too?


Not really, the empty list, None, and False are distinct values in Python.

    In [1]: [] is None
    Out[1]: False

    In [2]: [] is False
    Out[2]: False

    In [3]: [] == False
    Out[3]: False

    In [4]: [] == None
    Out[4]: False

    In [5]: False is None
    Out[5]: False

    In [6]: False == None
    Out[6]: False
This kind of relatively fine-grained distinction between different types of data is essential for operation in a world where other systems presume that you can handle these distinctions.


However:

    if not []:
        print("looks boolean to me...")


Python code rarely looks like this

For things like optional arguments you usually use None, with an "if l is None" check.

Meanwhile you can't iterate over None, so [x for x in None] blows up

Clojure (to my understanding) allows a null value in the place of any empty seq. This seems like a major bug swallower to me.


> Python code rarely looks like this

Using implicit truthiness to test for empty sequences is idiomatic Python, for better or worse.


Sorry, you're actually very right. I think I misread this as the default-argument case (which, at least in the codebases I work with, is diligently handled with None tests), but if you know that the value is set to a sequence then the boolean test is common.


Can't win em all I suppose :P

My point is that, for me, Common Lisp is particularly deficient in this regard, in a way that means I can't really consider using Common Lisp for development work today.


Different programming languages play different games with booleans, empties, undefinedness and such. In all of them, people are able to Get Stuff Done, correctly handling inputs from the world.

In the POSIX shell, a failed termination status of a command is a false condition. Storing false as a datum is usually represented by an empty or unset variable.

Yet, people are able to write robust shell scripts to boot systems, build programs and so on.

Everything you wrote is based on a guess; not even anecdotal experience. That is below the expected level of discussion on HN.

The nil/empty/false thing is an excellent tradeoff in code that manipulates lists, rendering it succinct. For instance an expression like (when attributes (do-this ...)) means, (do-this ...) when attributes are present; i.e. the list of attributes is not empty. All the default nil return values that occur are automatically empty lists, which is very convenient. For instance a two-clause if (if condition then) returns nil if condition is false. If the caller expects a list, then that is perfect; it's the empty list. All these little conveniences dove-tail together to clarify the code.

The empty/false conflation doesn't do anything in code that doesn't manipulate lists. For instance, if you're manipulating numbers, or strings, there is no ambiguity: nil isn't a number and it isn't a string. It is distinct from 0, 0.0, and "".

You cannot have a meaningful opinion about this that anyone should take seriously unless you have experience with it: having experienced both the convenience in writing list manipulation code (particularly code-writing code) and the situations in which the ambiguity between nil (the atom) and nil (the empty list) was actually a problem.
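A small sketch of how those conveniences dovetail in practice (standard Common Lisp; the function name here is just illustrative):

```lisp
;; REMOVE-IF-NOT returns NIL when nothing matches, which is at once
;; the empty list and boolean false, so it feeds WHEN directly:
(defun evens-report (numbers)
  (let ((evens (remove-if-not #'evenp numbers)))
    (when evens
      (format nil "~D even number~:P" (length evens)))))

;; (evens-report '(1 3 5)) returns NIL: false, and also a perfectly
;; good empty result for a caller expecting a list or string-or-nil.
```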


(Nice to see the Lisp Defence Force out in strength.)


I highly recommend Eitaro Fukamachi's caveman2 for very quickly setting up a very fast lisp web server including very easy to configure ssl. Fukamachi's Dexador library is also fantastic for very quick and easy ssl http functionality. https://github.com/fukamachi/caveman https://github.com/fukamachi/dexador


What exactly is meant by the line "Design patterns disappear as you adapt the language to your problem domain."

The word "disappear" is a hyperlink to a pdf, which refers to domain-specific design-patterns. I don't quite seem to grasp what is being implied on the home-page.

Is it something like Lisp is very customizable and allows you to easily overload operators? (I am primarily a Java programmer and have never used Lisp).

EDIT: I mean, e.g. this page asks "Are Design Patterns Missing Language Features" http://wiki.c2.com/?AreDesignPatternsMissingLanguageFeatures

Is this what's being referred to, and if so, what makes Lisp especially good for adding language features?


The short version is that many "design patterns" are de facto macroexpansions. So if you write the corresponding macro, you don't need the pattern any more.


Yes, basically Lisp gives you the tools to automate the process of implementing patterns. You implement the pattern once and then you just reuse the implementation the way you would ordinary logic in normal programming languages.


Lisp macros are what make Lisp extensible. They make it extensible in a way that is indistinguishable from built-in language features.

The best simple explanation I found for macros is this one [0].

In rough summary: you can add to Lisp any "missing" functionality (present in another language) with just macros, without altering the compiler/interpreter, and this functionality works as if it had been in the compiler/interpreter from the beginning.

[0] http://www.gigamonkeys.com/book/macros-defining-your-own.htm...
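For example, Common Lisp has no WHILE loop in the standard, but the classic three-line macro adds one that is indistinguishable from a built-in:

```lisp
(defmacro while (test &body body)
  "Repeat BODY as long as TEST evaluates to true."
  `(do ()
       ((not ,test))
     ,@body))

;; Used exactly like a native construct:
(let ((i 0))
  (while (< i 3)
    (print i)
    (incf i)))
```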


Remember that Common Lisp, at least, has many different types of macros. The most common are ordinary s-expression macros (macros that look like s-expressions and output an s-expression at compile time).

However, there are also symbol macros and reader macros as well, each of which are used to make Common Lisp massively extensible and an amazingly expressive language.


Indeed, Lisp's claim to fame is how extensible the language itself is. Something along the lines of "you rewrite the language / expand it" for whatever project you're working on. Writing your own Lisp interpreter is another popular kind of project, given how simple Lisp's syntax is to parse and follow.

If you are interested in Lisp on the JVM check out Clojure.


For Lisp on the JVM there is also ABCL, which is a full implementation of Common Lisp: https://abcl.org


And you can change the Lisp language on the fly. Imagine being able to change "if" in Java.


If it is indeed true that you are primarily a Java dev who has never used Lisp, I am impressed by how well you managed to ask a question in an area where you must have very little familiarity.


Some of the problems in imperative languages are solved with design patterns. You don't have these problems if you use Lisp.


What's with the full-page splash that just says "Common Lisp" with no indication that you need to scroll to get to the content. This is worse than the 1990's "Click here to enter the site" silliness.


I think this trend really took off with Medium, which encourages large (and useless) banner images at the top of articles. It's an unfortunate trend. It probably also increases bounce rate if I had to guess.


I agree. A small change to show something peeking from the bottom to indicate there's more would go a long way. I liked Windows Phone's way of having stuff from the next "page" peek from the side/bottom to hint that there's more to scroll to.


If you fancy giving Lisp a try, look into Portacle [0]. I've not used this exactly but I did install all the components myself over time and they play together nicely.

[0]: https://portacle.github.io/


Nice web site! I have been using Common Lisp since 1983 when I updated my Xerox 1108 Lisp Machine.

In the last 35 years I have probably only averaged using Common Lisp for about 10% of my development but still love the language.

One problem the Lisp world has is too many fine implementations of dialects of different Lisp languages. I find it impossible to not experiment with most of them.


What insights do you gain by experimenting with different implementations of Common Lisp? Can you share some of those insights or things you learnt by experimenting with different implementations of Common Lisp?


I don't have much to say about different versions of Common Lisp. I have mostly been using SBCL for years, except sometimes switching to Clozure for faster compilation.

I experiment more with different versions of Scheme: Gambit, Chez, and Guile. I really like the Racket ecosystem - a lot of good work is done around Racket. I don't much use Clojure (except for my hobby site cookingspace.com) unless that is what a customer uses.

Church, for probabilistic programming, is interesting but it has apparently been supplanted by WebPPL (which is not a Lisp).


Any advantage to learning Lisp instead of something like Elixir today? I find Elixir to be modern, with a large number of libraries, a larger and more active community, the BEAM VM, and it has macros too... Can't think of a reason to invest time in Lisp at the cost of mastering something like Elixir, but maybe I am wrong.



I'm in the same boat as you.


You can have your cake and eat it too

http://joxa.org/

http://lfe.io/


Often those who are curious to try Lisp are faced with a number of choices: Which dialect to choose? Which implementation to choose? Which book or tutorial should one follow? Is it necessary to use Emacs? SLIME?

Here are my recommendations:

- Choose Common Lisp because it has been the most popular dialect of Lisp in the overall history of Lisp. It is more convenient than Scheme if one decides to develop serious software in Lisp. Clojure appears to be more popular in the technology industry than Common Lisp among organizations (my workplace included) that use Lisp. I still recommend Common Lisp because I believe that it is more likely that one would work on an independent open source or hobby Lisp project than one would encounter one of the rare organizations who use Clojure at work.

- Choose SBCL (Steel Bank Common Lisp) as the implementation.[1][2] It is the most popular Common Lisp implementation and is recommended in many online discussions. CCL (Clozure CL, not to be confused with Clojure, which is a separate dialect) is another very good implementation. I still recommend SBCL because, as a result of its greater popularity, it is readily available via package managers such as brew and apt-get, as well as IDE packages such as Portacle, Lisp in a Box, etc. CCL is currently missing from both brew and apt-get.

- Work through this book: Practical Common Lisp: http://www.gigamonkeys.com/book/ (available in print too if you search online). Skip the sections about Emacs and SLIME if you don't use Emacs.

- There is no need to use Emacs if you are not an Emacs user. Any good editor would do.

- A Vim user may consider installing Slimv[3][4]. Superior Lisp Interaction Mode for Vim ("SLIME for Vim") or Slimv is similar to Emacs/SLIME, displays the REPL in a Vim buffer, and comes with Paredit mode that makes typing and executing Lisp code quite convenient.

- Emacs with SLIME or Vim with Slimv are quite useful but not necessary. To get started quickly without being bogged down by the details of an editor, just execute the Lisp source code file on shell.[5]

- Optionally, keep another implementation of Common Lisp. Common Lisp is a standard that is implemented by various implementations. Each implementation may have its own extensions or implementation-specific behaviour regarding error handling, command line options, FASL format, unspecified behaviour, etc. Experimenting with concepts with another implementation of Lisp occasionally may offer some perspective about how some things could be different in different implementations. I keep CLISP around for this purpose.[6][7][8]

[1]: Install SBCL on macOS: brew install sbcl

[2]: Install SBCL on Debian-based distro: apt-get install sbcl

[3]: Slimv in a ZIP file: https://www.vim.org/scripts/script.php?script_id=2531

[4]: Slimv as a Git repo: https://github.com/kovisoft/slimv/

[5]: Load (execute) code in a file and exit: sbcl --script foo.lisp

[6]: Install CLISP on macOS: brew install clisp

[7]: Install CLISP on Debian-based distro: apt-get install clisp

[8]: Unfortunately CLISP is missing from Debian's stretch (stable) repository but it is available in its buster (testing) and sid (unstable) repositories. Hopefully this will be addressed when buster becomes stable. CLISP is available on Ubuntu.


What confused me a lot is that nobody seems to give an example of how to build a binary out of a Lisp program / make it runnable from the command line.

Also most tutorials/books I found don't guide you on how to build an application / structure your code - which is rather confusing for a beginner. You have to spend a lot of time and trial and error to get things working using Quicklisp. I often got the impression that, since Lisp is so old, everyone using it knows how to do things and forgot to document that knowledge for newcomers.


See this book:

http://weitz.de/cl-recipes/

Common Lisp Recipes by Edi Weitz.

That's from 2016 and it covers a LOT of practical Lisp lore. Basically Edi wrote down much of his knowledge how to write Lisp code. Not only from hobby projects - Edi also has extensive knowledge from commercial projects.

The commercial, and expensive, Lisps also have extensive facilities and documentation on how to deliver applications. LispWorks for example supports delivery as standalone programs, applications with GUI, shared libraries, and delivery of iOS and Android apps. Edi Weitz for example wrote a bunch of applications for Windows with LispWorks.

http://www.lispworks.com/documentation/lw71/DV/html/delivery...


The reason nobody can give you a single example of how to make a binary is that there are many, many different ways of doing it. To name just a few:

* in ABCL you would generate a jar file, just like with java or clojure

* in SBCL you could dump a core file, there are some tools that can package that up in a command line binary

* if you use a bytecode compiler (like CLISP), you'd use it the same way as Python or Ruby: put your script in a text file with a #! line at the top and run it like any other shell script.

* if you use a Lisp->C compiler, you'd generate C code and then compile that with GCC or Visual Studio or whatever

* if you use image based programming, you'd just load your code in the lisp image and just use lisp itself as your command line, your "binary" would then be just a normal lisp function.

* If you're deploying a service, you might want to package it in a docker image or even a VM image or have some build and deploy script depending on your environment or needs.

I'm probably missing some, but that's the basics.


As to your second point, in Lisp (well, Scheme) I know nothing better than HtDP [1] for that purpose.

http://www.htdp.org/2018-01-06/Book/index.html


Since Common Lisp is a language standard (not an implementation) it is hard to provide a single set of instructions or guidelines that would work for all implementations. There are various implementations of Common Lisp that target native machine code, C code, bytecode, JVM, etc. So the build instructions, project structure, etc. depend on the target.

Here is a minimal example that builds a Lisp program into a binary executable with SBCL:

    (defun main () (format t "hello, world~%"))
    (sb-ext:save-lisp-and-die "hello" :executable t :toplevel 'main)
The SBCL-specific `save-lisp-and-die` function saves the Lisp process as a core image. The `:executable` argument includes the SBCL runtime in the image to ensure that the image is a standalone executable. This is why the executable for even a simple hello-world program tends to be quite large (30 MB to 50 MB)! The `:toplevel` argument specifies which function to run when the core file is run.

Here are some example commands to get you started:

    $ cat hello.lisp
    (defun main () (format t "hello, world~%"))
    (sb-ext:save-lisp-and-die "hello" :executable t :toplevel 'main)
    $ sbcl --script hello.lisp
    $ ./hello
    hello, world
If you would rather not have SBCL specific code in the Lisp source code file, then you could move the `sb-ext:save-lisp-and-die` call out of your source file to the SBCL command invocation. The source code now looks like this:

    (defun main () (format t "hello, world~%"))
The shell commands now look like this:

    $ cat hello.lisp 
    (defun main () (format t "hello, world~%"))
    $ sbcl --load hello.lisp --eval "(sb-ext:save-lisp-and-die \"hello\" :executable t :toplevel 'main)"
    $ ./hello 
    hello, world
By the way, there is also Buildapp[1] that provides a layer of abstraction for building executables from Lisp programs. It works with SBCL and CCL. It requires the toplevel function to be called with an argument though. Therefore the source code needs to be modified to:

    (defun main (argv) (declare (ignore argv)) (format t "hello, world~%"))
Then Buildapp can be invoked like this:

    $ cat hello.lisp
    (defun main (argv) (declare (ignore argv)) (format t "hello, world~%"))
    $ buildapp --load hello.lisp --entry main --output hello
    ;; loading file #P"/Users/susam/hello.lisp"
    $ ./hello 
    hello, world
[1]: https://www.xach.com/lisp/buildapp/


>What confused me a lot is that nobody seems to give an example on how to build a binary out of a Lisp program/make it runnable from command-line.

This is covered on the documentation of CCL and CLISP.

You only need one (1) statement to generate a binary. It's easy.


Portacle makes it all even easier; it's a self-contained package of Emacs/SBCL/SLIME, etc. with sane defaults.

https://portacle.github.io/


Can Portacle be used without Emacs? Does it integrate with Sublime Text, Atom or Vim?


You are doing yourself a huge disservice developing in lisp without slime. I've been using a vi-like for 25 years and vim for 20 years and still use emacs/slime for my lisp development environment.


Portacle is an emacs packaged to be easy for beginners, so no :)


>Can Portacle be used without Emacs? Does it integrate with Sublime Text, Atom or Vim?

For Lisp development, Emacs is superior.

There's also SLIMV for VIM, giving the same features.


I think that Roswell helps newcomers. Roswell is a CL implementation manager and a program launcher.

https://github.com/roswell/roswell


I'd recommend ClozureCL instead of SBCL for a newcomer, because CCL does not do some optimizations and it is easier to debug. For example, it will show you local variables created by `let`, but SBCL does not.


Running

    (declaim (optimize (debug 3)))
in your repl will turn these optimizations off and give you roughly the same debugging experience.


- Choose SBCL (Steel Bank Common Lisp) as the implementation.[1][2] It is the most popular Common Lisp implementation and is recommended in many online discussions.

And note that if you find its interactive interface very spartan, you're not doing anything wrong. It is very spartan. It's not really meant for direct human use; you're supposed to throw expressions at it from your editor. (Do one thing and do it well and all that.)


Saying SLIME (or Slimv) are useful is an understatement. Having a quick interface to the REPL is not an optional part of Lisp programming for me.


CCL is available from homebrew. You can also simply download the compiler as well as the cocoa based IDE from the App Store.


I can't find CCL in Homebrew.

    $ brew search ccl
    ==> Searching local taps...
    cclive
    ==> Searching taps on GitHub...
    ==> Searching blacklisted, migrated and deleted formulae...
    $ brew search clozure
    ==> Searching local taps...
    ==> Searching taps on GitHub...
    ==> Searching blacklisted, migrated and deleted formulae...
    No formula found for "clozure".
    Open pull requests:
    clozure-cl 1.11.5 (restoration of a deleted formula) (https://github.com/Homebrew/homebrew-core/pull/25768)
Looks like the Homebrew formula for CCL was removed.


Oh, you're right. That's a bummer.


The only CL implementations worth using are those with roots in the real CL, imho.


The giant header that fills the first page, forcing a scroll into content, is right up there with unskippable flash intro videos from 10 years ago.


I'm not sure why this is even a thing, even on news articles with huge images, it's just POINTLESS to me. "Oh I guess you don't want me to read this article" click


Fluff.:O


I know the article is about Common Lisp, but I have a question about Racket, Typed Racket specifically. Can anyone say if types and Lisp play well together? Are there any success stories?


Types have been supported since the '80s, with (the) and optional declarations. Almost nobody uses them. Well, CL says: types are always carried around in every value, so we do have types, and we are always type-safe. Which is a proper point.

Then SBCL has superior internal type support in its compiler (named Python, unrelated to the language), leading to many optimizations. It creates specialized copies of typed methods, and has a nice optimizer framework to deal with that.

Felleisen (Typed Racket) seems to hate types; he summarizes it as: types make Racket slower, not faster. But he is still developing the only serious typed Scheme effort. I forgot about Stalin, but if I remember correctly it was similar to the CMUCL/Python type optimizer.

In my dynamic language I've implemented gradual typing with great success (cperl): more precise, producing better specialized code, faster, detecting more errors at compile time, and giving better documentation, so I'm sceptical why Felleisen has so many problems. But I implemented it with performance in mind (premature optimization and such), not completeness. Typed PHP seems also to go well, as do the various JavaScript variants. Just Scheme, Python and Ruby, not so much.
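For concreteness, this is the kind of declaration-driven code SBCL's type machinery rewards; the declarations are standard Common Lisp, while the actual speedup is implementation-dependent:

```lisp
;; With the array and float types pinned down, SBCL can compile
;; an unboxed, specialized loop instead of generic arithmetic:
(defun dot (a b)
  (declare (type (simple-array double-float (*)) a b)
           (optimize (speed 3) (safety 1)))
  (let ((sum 0d0))
    (declare (type double-float sum))
    (dotimes (i (length a) sum)
      (incf sum (* (aref a i) (aref b i))))))
```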


In my experience Typed Racket programs are never slower than their dynamically typed Racket equivalents. Sometimes they are faster (such as when the program does a lot of numeric computation).

What _can_ be slower is a _mix_ of TR and R. Because TR protects the boundary between the two with runtime contracts. Because it wants safety/soundness.

I'm not any kind of gradual typing expert, but, it seems like you could calibrate speed vs. soundness for boundaries in different ways. TR has prioritized soundness but it seems like you prefer speed.


Well, I do have the same problems between unchecked and checked expressions. You get the speed only with series of checked expressions, but you have to cast to any/dynamic/cons boxed type on each boundary. This is the major slowdown. When the series of expressions is too short, no transformation to native should be done, because the casting and conversion back and forth is more expensive than the win by using the specialized ops.

Similarly, SBCL/CMUCL has a related problem of polymorphic explosion: generating too many specialized methods which are rarely used. JavaScript V8 and friends solved that better.

My speed comes from avoiding consing; my native types are bitfields, whereas Lisp types usually mean a class pointer for every cons cell. Certain casts are permitted for me which are not permitted in Typed Racket. So yes, I have to permit traditional code that worked before, especially with types inferred from literals. Racket has the advantage of defining stricter language subsets, which we only have with Perl 6; there you can even override language and type semantics.

Chez has a nice tagging scheme, which should speed up Racket a lot.

Several gradually typed languages don't infer from literals, but that's one of the best speed gains. E.g. the literal 1000 is the union of int32, int64, uint32 and uint64: all the integer types that can hold that value. A negative number cannot be unsigned, and a too-large number cannot be 32-bit. The problem is that people rarely add types; you need to infer ~90% of them.
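Incidentally, CL's type system can express exactly these value-dependent integer types, so the claims about which machine types a literal fits are directly checkable:

    ;; The literal 1000 fits every common machine integer type:
    (typep 1000 '(signed-byte 32))        ; => T
    (typep 1000 '(unsigned-byte 32))      ; => T
    ;; A negative number cannot be unsigned:
    (typep -5 '(unsigned-byte 32))        ; => NIL
    ;; And a too-large number cannot be 32-bit:
    (typep (expt 2 40) '(signed-byte 32)) ; => NIL

An inferring compiler would narrow the union further as it sees how the value is used.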


Typed Racket is more ambitious than other attempts at adding types to an underlying untyped language. Namely, Typed Racket guarantees that typed code is never to blame for certain contract violations, and, if any such contract violation happens, it will be properly traced back to an offending piece of untyped code. This is what makes gradual types gradual (as opposed to merely optional), alas, it is also what has been found to have unacceptable overhead.

Relevant paper and talk:

http://www.ccis.northeastern.edu/home/types/publications/gra...

https://www.youtube.com/watch?v=1u1JGwmW0IQ


I think this is the correct video link.

https://www.youtube.com/watch?v=5DlEj6daNEo


Oops, sorry, yes.


It might surprise you, but Lisp is actually mixed typed. Here 'object' is of the type integer:

    (defmethod description ((object integer))
      (format nil "The integer ~D" object))
http://lisp-lang.org/learn/clos
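And for comparison, adding a method on another class to the same generic function shows the dispatch happening on the runtime type of the argument (continuing the example from the linked tutorial):

    (defmethod description ((object string))
      (format nil "The string ~S" object))

    (description 42)      ; => "The integer 42"
    (description "hello") ; => "The string \"hello\""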


You might like to look at Shen: http://www.shenlanguage.org/


I remember seeing that a while back, but at the time it had a weird bespoke license and the main developer seemed to have a... strong personality. It might be worth checking out again to see what's new.


You might also want to have a look at Julia. It's not really a Lisp, but it's built on top of a Scheme (its parser is written in femtolisp), and it has a rather interesting, pragmatic type system:

"Julia: to lisp or not to lisp?": https://www.youtube.com/watch?v=dK3zRXhrFZY

https://julialang.org/

Maybe compare and contrast with Maxima?:

http://maxima.sourceforge.net/
