How to write Common Lisp in 2017 – an initiation manual (articulate-lisp.com)
476 points by macco on March 28, 2017 | 254 comments



I started a big project at work using Common Lisp in 2017 and could not be happier. Sure, most nice features have trickled down to other languages, but they are rarely as nicely integrated. And Lisp still has many advantages that are not found elsewhere: unmatched stability, on-demand performance, a tunable compiler, CLOS, the condition system, and macros, to name a few. It has its warts too, but which language does not?

I found the lack of high-quality library documentation a bit annoying, but ultimately a non-issue: there were tests and/or examples included in practically all of the libraries I have used so far.

Lastly, this rarely gets brought up, but I think Common Lisp has some of the best books available of any programming language. The fact that it is so stable means that most of the material, and code, from the end of the '80s and the '90s is quite relevant today, and new material is still being written.

The biggest downside is that it makes JavaScript and Python revolting to work with. But I can still enjoy SML for example.


Curious why CL vs. Clojure? Any comments?


I have used both in a production setting and can say that the tooling for most CL implementations is just plain light-years ahead of Clojure's, and there is no sign of it really improving.

For Clojure the interactive debugging experience is just plain dreadful, and for a dynamic language this is pants-on-head crazy imo. For me a dynamic language has to have a good interactive debugging experience, because you have foregone the support of static compilation and instead wish to reason about your program at runtime. But with the JVM stack traces and the lack of an interactive debugger, Clojure just does not support you in this regard. And as a side note, the lack of an identity print is annoying as well, e.g. (+ (print 4) (print 4)) should print 4 twice and return 8 (yes, it's easy to add...)

Clojure's decision to use persistent data structures is good but it's really the only standout thing for me other than some syntax sugar like hash maps.


The JVM indeed quickly got me disinterested in Clojure. However, I have similar issues with other (free) Lisps, with probably only Emacs Lisp being the exception.

As a Smalltalker ("the other heroin of the programming world") I'm used to a highly integrated and responsive development environment; from what I heard from other Lispers, the quick feedback and gradual building up of your program is a shared aspect, but the thing that Smalltalkers always bring up and Lispers less so is also that your program and your IDE are basically indistinguishable, which is quite powerful because it makes it trivial to adapt your IDE to your project as you go. In the Lisp environments I've tried - latest being Emacs+SBLC - the split between editor and REPL seems unnatural to me, more so because in the standard case you work with two different dialects of the language. It seems that only ACL and LW have more unification.

Am I wrong and are the $$$$ versions not better in that respect and should I just drop my Smalltalk habits/arguments when learning CL?


> also that your program and your IDE are basically indistinguishable

Lisp provides that, too. But you can also deliver programs with the IDE and much of the development tools removed.

Examples for integrated IDEs:

Allegro CL on Windows/Unix+GTK, Clozure CL (free) on Macs, LispWorks on Windows/Macs/Unix+GTK/Unix+Motif. There the IDE and the user code run in one Lisp. Other examples: Lisp Machines, CMUCL on X11, ... There is also the McCLIM project, where some ideas from the Symbolics GUI are used.

There are also countless other implementations from the past, which are now mostly forgotten, which had an integrated IDE (Golden Common Lisp for Windows, Corman Lisp for Windows, Macintosh Common Lisp, Medley, Open Genera for X11, ...).


You should really write an analysis contrasting the functionality of these IDEs, especially those 'mostly forgotten' ones so they aren't lost into the abyss.

I've used Squeak Smalltalk and DrRacket in the past. As someone who finds slime/swank/sbcl in Emacs a pleasure, I can only imagine how great some of those other environments can be.


Hi,

I read this article on your site:

http://lispm.de/why-lisp-is-different

Quote from it:

[ When in doubt design software to be introspective.

when in doubt design software to be reflective. ]

What is the difference between introspective and reflective? I've used introspection in Python and know about and used (a bit) reflection in Java earlier, but thought they were roughly the same sort of thing. Please explain the difference if possible.


Introspection means that the program's structures and procedures are discoverable and we can query them. Typical questions we might want answered:

* find me the class, typically by name or by some relationship

* what are the fields of that class?

* find me a function

* what are the arguments of the function? does it have documentation? Who wrote it? Where is its source? What values does it return?

* where is the function used? what functions does it call?

* what are the values of a field of some object? What class does it have?

All this, for example, enables you to inspect a program at runtime and find out what it does, how it does it, and what its state currently is.
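A few of those questions map directly onto forms you can type at a REPL. A rough sketch (the slot query uses the closer-mop library as the usual MOP portability layer, and YOUR-CLASS is a made-up name):

  (find-class 'standard-object)           ; find a class by name
  (class-of "hello")                      ; what class does this object have?
  (documentation 'mapcar 'function)       ; docstring, if the implementation keeps it
  (describe #'mapcar)                     ; human-readable description of a function
  (function-lambda-expression #'mapcar)   ; source, when available (often NIL for compiled code)

  ;; slot names of a class go through the MOP (the class must be finalized,
  ;; e.g. after the first MAKE-INSTANCE):
  (mapcar #'closer-mop:slot-definition-name
          (closer-mop:class-slots (find-class 'your-class)))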

Reflection means that elements of the programming language are themselves exposed and we can change/extend them.

Two examples from the Lisp world. Very unusual is the reflective tower, where a Lisp program is run by some kind of machine, which for example is a Lisp interpreter. This Lisp interpreter is itself a Lisp program. Thus you can not only write the program, but you can also change/extend the interpreter running the program. A certain Lisp dialect also allowed you to look at the interpreter running the interpreter running the interpreter running the interpreter ...

In Common Lisp a typical form of reflection is the Meta-Object Protocol of CLOS. It allows you to program the object system to implement new variants: persistent objects, transactions over objects, different inheritance mechanisms, classes which record their instances, ... Thus the MOP exposes classes, methods, generic functions, slot descriptors, ... as CLOS classes and methods - and protocols about them. Thus CLOS can be programmed in itself. Even at runtime.

More primitive reflection would allow you to create/change/remove things like user classes and user methods. Task: create a new subclass of an existing class at runtime.
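A minimal sketch of that task, assuming the closer-mop library (which re-exports the MOP's ENSURE-CLASS) and made-up class names:

  (defclass base () ())

  ;; programmatically create a subclass at runtime, without writing a DEFCLASS form:
  (closer-mop:ensure-class 'runtime-sub
                           :direct-superclasses (list (find-class 'base)))

  (make-instance 'runtime-sub)    ; behaves like any other class
  (subtypep 'runtime-sub 'base)   ; => T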


Interesting, and thanks for the answer. It was a while ago, but when I used reflection in Java (in v1.4, IIRC), what it did seemed like some of the capabilities you describe under introspection above. Maybe they just used a different term for it than you do. Example: finding the methods of a class and calling them dynamically at runtime, finding the number and types of the arguments of a method, etc. These capabilities were part of the java.lang.reflect package, IIRC.


https://en.wikipedia.org/wiki/Type_introspection

> Introspection should not be confused with reflection, which goes a step further and is the ability for a program to manipulate the values, meta-data, properties and/or functions of an object at runtime.


Thanks.


You might find MIT Scheme interesting, because it has a built-in Emacs clone. Just call (edit) and you're in.

I think what it really comes down to is that writing and maintaining an editor (especially a good one) is a very large task. A language that lets you use an editor you already like thus has a double advantage, with the possible downside that the integration won't be as good as possible. SLIME is pretty nice though.


For debugging try https://cursive-ide.com it uses intellij and has improved nicely in the past year.


Have you tried out recent versions of CIDER? I find it works well for what I need. But I have to say that the big problem I have is Emacs and its inability to handle long strings.


I may be missing some massive leap, but not until it does this: http://malisper.me/2015/07/07/debugging-lisp-part-1-recompil...


AHHH, not quite the same, nor as good, but sayid (https://github.com/bpiel/sayid) is a step in the right direction.


CIDER has had debugging abilities like this for a year; there's a sexp-based interactive debugger that allows injection, evaluation, stepping, etc.

Here's a 39 second video about it https://www.youtube.com/watch?v=A3JAlWM8qRM


The debugger needs to 'instrument' the code with breakpoints...?

That's something different.

In a Lisp system I would have an arbitrary amount of code and can halt and inspect/debug any code without any prior need to 'instrument' code.


To debug a function, yes, you need to eval the form using instrumentation. Then the debugger starts when this form is evaluated. This is the only major difference from what SLIME provides. Adding breakpoints is somewhat easier with CIDER: you don't need to call break, you can just add a breakpoint to any sexp by pressing b.


> Then the debugger starts when this form is evaluated.

You mean the stepper using break-instrumented code? That's just a part of what one would call 'debugger' in Lisp.

What CIDER calls 'debugger' would be mostly called 'stepper' in Lisp.

http://franz.com/products/allegro-common-lisp/stepper_dialog...

http://www.lispworks.com/documentation/lw70/IDE-M/html/ide-m...

> Adding breakpoints is somewhat easier with cider, you don't need to call break but you can just add a breakpoint to any sexp by pressing b.

LispWorks does that, too. Usually via a break icon in the window toolbar or the context menu.


Toggling pretty printing via `cider-repl-toggle-pretty-printing` may provide a workaround there. Courtesy of: https://github.com/clojure-emacs/cider/issues/1115


I'm not OP, but there are a few; I could see some mentioned in passing in the OP's post.

* The common lisp condition system [3, 0] is a way better way to do errors.

* Tune-able compilers [1] allow you to make better trade-offs than even gcc allows for C, let alone Clojure's very opaque compiler.

* CLOS [2] is a very powerful object system which Clojure does not replicate (they have multimethods and Java classes, but not a metaclass system; think Python's metaclass features but better (because of macros and compiler access) and with multimethods designed in).

* Nice native interoperability (Clojure generally being on the JVM, and Common Lisp being capable of direct calls and memory manipulation).

That being said. Clojure's unparalleled concurrency features are amazing. I would compare them as: if Clojure is the Java of lisps, then Common Lisp is the C++ of lisps.

[0] http://www.gigamonkeys.com/book/beyond-exception-handling-co...

[1] http://franz.com/support/documentation/current/doc/compiling...

[2] http://www.laputan.org/pub/papers/MOP.pdf

[3] http://axisofeval.blogspot.com/2011/04/whats-condition-syste...


Not speaking for hydandata, a few key points of CL as opposed to Clojure (which might be advantages or disadvantages depending on context and preference) include:

- Defined by a standard (the text of the standard is available for free online as the Common Lisp HyperSpec [1]), not by a reference implementation led by a BDFL

- Multiple mature implementations, both free and proprietary

- Wholeheartedly multi-paradigm; has FP features, but also provides plenty of tools for imperative (notably including a powerful macro for loops) and (meta)object-oriented approaches

- "Lisp-2" rather than "Lisp-1"; functions and variables are in different namespaces, which reduces name clashes but requires disambiguation in some places (a small example follows below the links)

- Designed for implementation in a wide variety of environments, including being embedded as an extension/scripting language in C or C++ applications (see ECL [2])

[1] http://www.lispworks.com/documentation/HyperSpec/Front/index...

[2] https://common-lisp.net/project/ecl/
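As a small illustration of the Lisp-2 point above: a variable and a function can share a name without clashing, at the cost of #'/FUNCALL when functions are treated as values.

  ;; LIST the variable and LIST the function live in separate namespaces
  (let ((list '(1 2 3)))
    (list list list))        ; => ((1 2 3) (1 2 3))

  ;; passing a function as a value needs #', calling it needs FUNCALL
  (let ((f #'+))
    (funcall f 1 2))         ; => 3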


> Defined by a standard

How does this help, compared to Clojure?

> Multiple mature implementations

Why wouldn't one free mature implementation be good enough?

> Wholeheartedly multi-paradigm

Exactly like Clojure.

> "Lisp-2" rather than "Lisp-1"

I can't remember which one Clojure is :) But I haven't had any problems with namespaces and the like.

> Designed for implementation in a wide variety of environments

So is Clojure, right? There's the CLR version, and ClojureScript too.


Clojure variants are incompatible in various basic ways, since they use the host language for various things. The underlying platform also restricts what the implementation provides. No TCO in the JVM -> no TCO in Clojure. Numbers are internally floats in JavaScript -> numbers are internally floats in ClojureScript and use JavaScript semantics, not Clojure semantics.

Clojure:

  Clojure 1.8.0
  user=> (/ 3 4)
  3/4
ClojureScript

  cljs.user=> (/ 3 4)
  0.75
Looks like these are different languages...

Common Lisp implementations OTOH implement most of the standard. There is also more choice in implementations:

* native implementations AND hosted implementations

* actual Lisp interpreters or a mix of interpreters and compilers, interactive compilers, batch compiler

* compilers written in Lisp itself with good error messages, forms of compile-time type checking/inference, and advanced error handling (like SBCL, CMUCL)

* compilation to C, full embedding into C programs (ECL and others)

* full embedding in C++ (CLASP)

* compilation to shared libraries, embeddable into other applications

* whole program compilers for delivery of compact applications (like mocl)

The main Clojure compiler is written in Java and it shows...

https://github.com/clojure/clojure/blob/master/src/jvm/cloju...

If one targets a certain host environment (JVM, Javascript, ...) then this range of options and choice might not matter, even hinder - besides getting a poorer version of interactivity.

If we see a Lisp as a language on its own, then it matters a lot.


> > Wholeheartedly multi-paradigm

> Exactly like Clojure.

Clojure is definitely not multi-paradigm. The only paradigm blessed by the developers/community is functional.


Obviously highly opinionated and personal statement coming, but Common Lisp is to Clojure what Meccano is to LEGO, and I have never been a LEGO person.

If I work on a project where functional programming with an emphasis on purity is required, I would much rather go to the ML family, which I think offers a more natural way to express ideas in the functional paradigm, with a beauty and integrity not even Lisp can match.


Why Clojure vs CL? The parent already mentioned some features that Clojure lacks, so if you don't care about features, what do you care about?


SBCL compiles to native code.


Clojure is tied to the Java environment. Someone who has no use for or interest in that environment has no use for Clojure.


> Clojure is tied to the Java environment.

I guess you're not familiar with ClojureScript and/or Clojure CLR?


Last time I tried to use Clojure CLR, all the useful libraries were JVM-only because they called out to JVM libs underneath.


That's sort of like the following:

- I don't want a door here.

- Ok, you don't like the wooden door here. You can have a plastic one or a steel door.

With CL you're not tied to _any_ platform.


"Clojure" is being used as a family name for similar, but incompatible dialects. If I were Hickey, I'd trademark it and enforce against these projects.

Look at lispm's comment: https://news.ycombinator.com/item?id=13984011

It's not just the platform libs, but the semantics of things that really should be portable.


Clojure makes certain language sacrifices to ensure great integration with its host platforms.

Easy access to the tens of thousands of person years worth of effort put into the Java/JavaScript ecosystems in return for dealing with a bit of ugliness is a win in my books.


It's a loss in my books. I like Clojure but pretty much the only thing that's been putting me off is how the JVM "shines" through in lots of aspects.


I'm also interested in an insightful answer.


Which Lisp implementation are you using? I had the pleasure of using Allegro CL at a gig I did recently. The environment has a bit of a learning curve, but once everything clicks it is like heroin.

I ended that job a couple of months ago, but I still have wet dreams about it.


Which Lisp interpreter did you use?


No interpreters used, mostly SBCL and CCL. Folks over at Grammarly have a blog post which mentions a few reasons for the combo: http://tech.grammarly.com/blog/posts/Running-Lisp-in-Product...

I write portable code and use libraries which abstract away implementation specific stuff so it will be easy to run on any other conforming implementation should the need arise.

The community is quite responsible when it comes to portability and the utility libraries which help with it, but on top of that, the Quicklisp maintainer tests library compilation on most major implementations and reports bugs.


It baffles me that this notion of Lisp as an interpreted language still persists. Common Lisp is a compiled language in all non-toy implementations, and in general Lisps have had compilers for decades.


It baffles me that people refer to languages as interpreted or compiled. That is a property of the implementation, not the language.


Having a good REPL is important, though.


Having a REPL does not necessitate interpreting http://www.sbcl.org/manual/#Compiler_002donly-Implementation


REPL and interpretation are orthogonal concepts.

SBCL (the most common CL implementation) doesn't have an interpreter at all. It compiles into machine code before execution, even in the REPL.
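A quick way to see that under SBCL's default settings:

  (defun add1 (x) (+ x 1))        ; defined interactively at the REPL
  (compiled-function-p #'add1)    ; => T, it was native-compiled on definition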


SBCL actually does have an interpreter, but doesn't use it by default. You can set

    sb-ext:*evaluator-mode*
to :INTERPRET to enable it.

http://www.sbcl.org/manual/#Interpreter


Though, to be fair, a 'real' Lisp interpreter often provides more interactive features. The Lisp interpreter sees the code as Lisp data and interprets that. This enables a few things.


You don't need an interpreter for that. SBCL (as does other compilers) always compiles an expression prior to executing it, even when it's code as data passed to EVAL.


> You don't need an interpreter for that.

For what?

> You don't need an interpreter for that. SBCL (as does other compilers) always compiles an expression prior to executing it, even when it's code as data passed to EVAL.

True, but that's not what a Lisp interpreter does and provides.

I was talking about the difference between an interpreter and a (possibly interactive) compiler. The interpreter works over source code as Lisp data. The compiler does not - it does not matter that the compiler compiles individual forms. At runtime the code is machine code. With an interpreter the code is Lisp data.

Think about it: What difference could that make, if the code gets actually executed by a real Lisp interpreter?


Common Lisp would imply SBCL, that's my guess at least.


There are half a dozen CL implementations, for various purposes:

sbcl and ccl are open source implementations with built-in compilers.

ecl is similar, but designed to be easily embeddable in C applications

clasp is a newer one that's built on top of llvm (still unstable, as far as I can tell, but actively developed)

abcl targets the JVM platform and, consequently, can give you access to all the libraries in the Java ecosystem.

And then there are commercial implementations like LispWorks and Allegro that have their own benefits: LispWorks has, I hear, a very nice cross-platform GUI library, and, although I don't know much about it, I suspect Allegro has its own perks.

And, because of the community's emphasis on avoiding implementation-specific behaviors, which implementation you choose isn't that big a deal: I regularly develop a project under multiple implementations, and seldom have major issues doing so.


Can you elaborate on your use case?


How do you sneak something like lisp in to work?


You build trust with stakeholders, usually by solving important problems until you reach a point at which having you solve a new problem is more important to them than what you solve it with. It helps if whatever tech you are introducing actually helps you in the task of solving more problems, increasing quality of solutions or reducing cost, in that particular order.


Good advice, and similar to the strategy I use at work to use Common Lisp and Parenscript. They are yet to be completely accepted, but I'm working on it.


I'd replace the first few steps with "Install Roswell" -

https://github.com/roswell/roswell

Roswell will install a Lisp and Quicklisp for you, and give you a single point of entry to install libraries, create and run code, and launch an editor (Emacs with SLIME, of course).

I can't recommend it highly enough (I'm nothing to do with the project, just a very happy user).


Neat. Does Roswell do all this without modifying your current lisp/emacs/slime/quicklisp if you already have them set up the way you like but want to play around with it?

And is there something like Roswell for Clojure?


I don't know about emacs/slime, but it can manage local quicklisp and local implementations (and even have e.g. multiple sbcl versions installed simultaneously).


This sounds a lot like Pyenv and Virtualenv for Python. Neat!


I don't like Clojure (any more), but one nice thing about it is that it includes the Clojure implementation itself as a regular versioned dependency on a project-by-project basis (thanks to the transparent usage of Maven by the Leiningen build tool). Once you've tried this, everything else feels like a kludge; there's simply no reason the language version should be managed by some obscure mechanism instead of being a regular dependency!


I'm used to languages like Python, that have a number of files that are modules, and to start a program you run one of them as an entry point.

C programs consist of a lot of files that are compiled and linked into a binary executable.

Whenever I've tried to learn CL, I couldn't really wrap my head around what the eventual program would be. You build an in-memory state by adding things to it, later dump it to a binary. How do you get an overview of what there is?

I'm just too used to my files, perhaps. Or I'm missing something.


I mentioned elsewhere I'm learning Common Lisp. I'm also learning Python by translating some Common Lisp code into Python and part of that is to make sure I understand what the Common Lisp is doing. As an understatement, that's meant building some very unPythonic abstractions (yay, me).

Anyway, I think there is a fundamental design difference between Common Lisp and other 'first class' programming languages: Common Lisp was designed as a way to use computers because a computer user would sit down at a Lisp Machine (which was the future when Common Lisp was designed) and use Common Lisp to do ordinary computer stuff like store their recipes and manage appointments and copy files between directories. It's reflected in the :cl-user package not being called :cl-programmer and the logic of typing (in-package :tps-report) on Saturday at the usual time.

Most other languages don't have this idea...In Unix users don't fool around with stdio.h. In Python, there's no clean way of switching between applications at the REPL -- or rather interpreter -- because of how Python handles the hard job of naming things (like most languages, there's a bit of punting on third down with the catchall epithet 'unpythonic').

This makes it hard to get one's head around Common Lisp, but it explains why a person might leave a REPL running for a week and make better progress because of it.


> Lisp Machine (which was the future when Common Lisp was designed)

Common Lisp was actually designed so that you don't need a Lisp Machine. The people working on it were from CMU (Unix workstations), Lucid (Unix workstations), Franz (Unix/Windows), Apple (Mac), Symbolics (Lispm, PC), Xerox (Lispm, Unix), and many others.

Users used an editor-based IDE (like Franz ELI with GNU Emacs, ILISP with Gnu Emacs), a special IDE usually written in Lisp (Franz, LispWorks, Macintosh Common Lisp, Golden CL on Windows, ...) or even a Lisp Machine which combines the IDE with the operating system.

> copy files between directories

The Symbolics Lisp Listener (REPL + commands) has a command language/interface with a lot of comfort - but with its own usability problems.

You would often type

   Copy File *.lisp.newest >subdir>
instead of using the Lisp function COPY-FILE.

Still today I tend to keep the Lisp IDE running for days/weeks/months... as long as possible. Sometimes Linux tells me I have to restart the machine after some software update...


Sorry for not being clear. I was not stating that Lisp Machines were an intended requirement, my intent was to point out that Lisp machines were the zeitgeist during the period when Common Lisp was incubated and developed and hypothesize that this is reflected in the design of the language.

As a point of contrast, Smalltalk was developed in part with the idea of Dynabooks in the hands of children. Hence its stereotypical use cases presume a substantial difference between the cognitive capabilities of the system's users and the cognitive capabilities of system's programmers.


> hypothesize that this is reflected in the design of the language.

Common Lisp was originally mostly a simplified and modernised version of Lisp Machine Lisp. It was supposed to be 'cheaper' for users (industry, military, ...), when delivering software. Without the 'hardware dongle' of a Lisp Machine, which would cost a lot - both hardware and software.

One of the main purposes of its existence was to be a standard Lisp able to work on a wide variety of hardware - hardware which was less powerful. Thus the design in the early days was also driven by taking features away from what a Lisp Machine would provide. Less features, easier to implement, able to integrate into different environments.

For example, Common Lisp provides type declarations. These were added for non-Lisp-Machines. The Lisp Machine compiler ignored type declarations; it did not use them. The CPU does runtime type checking/dispatching on a Lisp Machine. Always.

Everything fancy, which was difficult to implement on small machines, was removed. The result was CLtL1.

The assumption was that you could develop a Lisp based application (say, an expert system helping with jet turbine maintenance) on some platform of choice and then deliver it to the Airforce on some rugged PC, where it would be used in some airbase. People then thought, why not develop on the PC? Thus various implementations for PCs came up.

NASA was putting it on some embedded computer running million miles away from earth, controlling a spacecraft. There was really no limit to where one would have wanted it to be deployed... thus it ended up controlling your cleaning robot (Roomba)...


I have a couple REPLs for specific projects that I routinely keep running for months at a time.

The idea that user = programmer was part of the MIT AI Lab culture before there were Lisp Machines. For example, the top level of ITS, the PDP-10 OS they used, was the debugger. Imagine if the default Linux shell was GDB!


That's kind of how my old MSX was, essentially a MS Basic shell that you could program on. If you wanted to run a compiled program (a game, who are we kidding?), you used Basic to bootstrap the load and replace itself with whatever the tape had. Good times.


That was pretty much how most 8-bit computers worked -- the user interface upon bootup was the Basic interpreter.


Some vestiges of that persisted for much longer. For example, QBasic, and even VB for DOS, still had the FILES command. Which, as you'd expect, produced the list of files in the current directory - except it didn't return it, but simply printed it out to the console. So it was something rather useless in an application, but handy in a shell.

Less obvious things were having a variety of filesystem-related functionality as built-in statements with special syntax, rather than functions. For example, renaming a file: NAME "foo" AS "bar".


It's a little like that in TempleOS. The shell feeds into a HolyC compiler.


The idea that it is an artifact of AI Lab culture makes sense. One of the analogies I saw is to Emacs Lisp as a language for editing text (and Stallman is listed in the acknowledgements of Steele's book).


As other people have said, you can use Lisp the same way you describe using Python or the same way you describe using C.

You also have a third option: you can use Lisp as a sort of interactive command-line calculator-plus-kitchen-sink. Leave it running and teach it to do odd jobs for you. If you accumulate your odd jobs into a file then you'll have them for later. You could of course save the state of the running Lisp, but a source file is better in that it keeps a nice tidy record of the source code of your hacks.

Serious programs written in Lisp are mostly organized pretty much like serious programs written in other languages: the program is factored into a collection of source files that, in the best case, reflects a logical decomposition of the functionality. There's some form of system loader that compiles and loads the sources in the right order and, if desired, dumps the result into an executable program.

Nowadays most people use ASDF for system loading and dumping, I think, but it's not hard to write your own system loader, and I still know people who prefer to do it that way. Just as an example, CCL still uses its own homegrown loader to build.
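For concreteness, a minimal ASDF system definition might look like this (the system, component and dependency names are made up):

  ;; my-app.asd
  (asdf:defsystem "my-app"
    :depends-on ("alexandria")       ; third-party libraries, e.g. fetched via Quicklisp
    :serial t                        ; compile and load the components in order
    :components ((:file "package")
                 (:file "utils")
                 (:file "main")))

  ;; then (asdf:load-system "my-app") compiles and loads everything.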

If your program is like a Python script then it's a Lisp source file that you pass to the Lisp kernel, in just the same way you would do it in Python.

If it's a compiled executable, then your sources were compiled into the Lisp and dumped to that executable, and it works pretty much like a C program. The main difference is that there is no distinguished main(); instead, Lisps generally treat the Lisp's own repl as the default main function, and the image-dumping tools offer you the option to substitute the main function of your choice.

Things are a little different, but only a little, and it's pretty easy to learn how they work.


In a compiled system like SBCL, things work much as they do in C, except the process is more "programmable."

To recap: when you load a (dynamically-linked) C program in Linux or OS X, the OS will create a new process, and map the dynamic linker (ld-linux.so or dyld) into that process's memory. It then transfers control to the dynamic linker. The dynamic linker then loads the rest of the program into memory (from ".so" or ".dylib" or ".dll" files) and links everything together (i.e. resolves symbol addresses) before passing control to the application entry point.

A compiled Lisp system like SBCL works much the same way, except it uses its own linking/compilation machinery. A new process is bootstrapped by running a C program, which loads the Lisp image into memory. The image is compiled code/data containing the Lisp runtime, compiler, and standard library. After the Lisp image is loaded, you've got two things to work with: COMPILE-FILE and LOAD. COMPILE-FILE takes Lisp source code and compiles it into a FASL (compiled code and data, equivalent to a ".o" file). LOAD functions much like dyld or ld-linux, and loads compiled code and data into the running process. Indeed, in some implementations like ECL, LOAD is just built on dlopen(). The "eventual program" is ultimately built by LOAD-ing the FASL files comprising the program.

In practice, you don't do this manually. Instead, you use something like ASDF: https://common-lisp.net/project/asdf/asdf.html#Defining-syst.... ASDF functions much like 'make' in that you've got a system definition listing the files belonging to your program. Whenever you call ASDF from the REPL, it'll look at what source files need to be (re)-built, compile them to FASLs, and (re)-load them into the running process. In contrast with C (but similarly to Python), Lisp allows code to be loaded into a running image. In the Lisp workflow, you don't restart the program you're writing each time you recompile (although you can, if you want). Instead, you keep it running, and load or re-load code into it as you work on it.
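For reference, the manual version of that compile-and-load step is just (file name hypothetical):

  (load (compile-file "utils.lisp"))   ; COMPILE-FILE emits a FASL, LOAD links it into the running image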


Common Lisp implementations do whatever you want.

> I'm used to languages like Python, that have a number of files that are modules, and to start a program you run one of them as an entry point.

You can do that: start a Lisp with a file to load at startup; it can then load the dependencies.

> C programs consist of a lot of files that are compiled and linked into a binary executable.

You can do that, too. That's typical when you create a 'system' declaration which describes your software. ASDF would be a tool for that. You then compile the software, load it and dump an image. Some implementations have a more elaborate way to create applications or can create loadable libraries, which you can integrate into other applications.

> You build an in-memory state by adding things to it, later dump it to a binary. How do you get an overview of what there is?

Typically you would work with files. Write the code in files, evaluate the code from there and build the software from time to time as a whole. If you use SBCL, then you get tons of compile time information, type checks, efficiency hints, warnings, ....


You can build your Common Lisp programs from a set of files like in any other programming language. ASDF is the most common system for loading complex Lisp systems. I actually use a makefile to build my Common Lisp executables from my set of files; no need to build up an in-memory state.

The difference with Common Lisp is that you are not limited to this build model: while your whole code is loaded, you can keep redefining functions for development. This creates a minimal cycle for editing and testing any single function in your program. This is a big asset in development, but in no way means you have to build your systems that way. On the contrary, I can only recommend restarting your Lisp image frequently to ensure the system loads and builds from files rather than depending on the image state.
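To illustrate the redefinition cycle mentioned above, a trivial made-up example:

  (defun greet () "hello")
  (greet)                          ; => "hello"

  ;; edit and re-evaluate the DEFUN in the running image, no restart needed:
  (defun greet () "hello, world")
  (greet)                          ; => "hello, world"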


I don't know where you got that impression, but CL development is no different than Python development.

Code goes in .lisp files, and then you either load it into the REPL or pass it to the compiler and tell it what the entry point is.

I suppose it's technically possible to develop an entire application at the REPL and then dump to a binary, but I don't think anybody does that any more than they do it in Python.


Lisp offers a lot of ways to check what there is. You can inspect, search, look at source code, docs, assembly code. All of these things are dynamically inspectable. Tools like Emacs and SLIME make it easier.

You can still look at the files that get loaded. Lisp organizes itself around systems (libraries) and packages (namespaces). It's good to check what packages a system has, then you can check out the symbols provided by a package.

Lisp isn't totally wild-west in the concept of the Lisp image. There is organization to good code.


In Common Lisp, you write source code files and then use ASDF/Quicklisp to compile/load that project. If you feel the need to create a standalone executable, you can dump the image with an entry function specified. It's essentially the same as Python, although standalone executables are less prominent than in-image programming.


I've been working on a CL project for a couple of years. It was my first big stab at using CL for something other than a toy. SBCL is a nice choice, but far from the only option; each implementation has many trade-offs.

CL is not without its frustrations: documentation that has not aged well, a community that can be less than welcoming (in contrast to, say, the Racket community), and inconsistencies, e.g. first, nth, elt, getf, aref... However, portability appears to be a strong point compared to the Scheme scene, and single-binary compilation on SBCL/LW/ACL/CCL is great.

I found SBCL's GC to be lacking under high garbage rates. It tended to promote garbage to a tenure that prevented it from being removed, and would max out 32GB of memory even with explicit GCs enabled between files, whereas the other implementations would stay below 4GB.

So ymmv.

Performance benchmarks using cl-bench really highlighted some strong points: http://zeniv.linux.org.uk/~ober/clb . The project is an AWS CloudTrail parser: https://github.com/kunabi/kunabi


This talks about GC settings that worked for them in SBCL for production, not sure if you already tried the same kind of stuff but may be an interesting read http://tech.grammarly.com/blog/posts/Running-Lisp-in-Product....

If you can make an easy to run example of the garbage collector not working correctly, the folks in #sbcl on freenode are very responsive and have commit access to fix it.


Good to hear someone had good interactions in #sbcl. I still have a bitter taste from attempting to report the issue there.


Yeah, i've had good and bad interactions.

They seem to be cranky due to getting a lot of idiots (including, sometimes, me) coming in thinking something is an SBCL bug when it was actually the programmer's fault.

Generally, if I think something is an SBCL issue now, I put together example code that clearly shows the problem, and that seems to work well.


Hi, HN! I made this! Ask me any questions you like. I'll try to respond as the workday progresses and I wait for deploys to complete!

paul@nathan.house if you want to email me instead. (or @p_nathan on Twitter, if that's your thing).


You might want to mention https://github.com/fare/asdf/tree/master/uiop which is part of ASDF and bundled by default with most implementations.


Also, the ASDF information is somewhat dated. It will work, but the central-registry has been deprecated for years. I'm not even sure how it might work on windows.

If you're recommending quicklisp, there's the local-projects approach, otherwise there's $HOME/common-lisp/


Yeah, I need to examine a bit more of the situation with local-projects. I reject ~/common-lisp because I have my own directory structure, thanks. If I can symlink farm under ~/common-lisp, then what's the difference? :)

(and it's not even a hidden directory, so it clutters up ~).


> (and it's not even a hidden directory, so it clutters up ~).

Well in that case you have two options still:

    ~/.local/share/common-lisp/source/
or put config file(s) in:

    ~/.config/common-lisp/source-registry.conf.d/
And keep your lisp source in any arbitrary number of directories.

Of course if the symlink farm is working for you, that's fine; it just has issues (some historic with some implementations doing odd things with TRUENAME, and of course the big one is "windows").

[edit] forgot some other options:

* setting the CL_SOURCE_REGISTRY environment variable

* using one of the newer functions for configuration, particularly with the :tree directive which in many cases removes the need for symlink farms.
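For example, a one-line configuration file is enough to have ASDF scan a whole tree (the path and file name are just placeholders):

  ;; ~/.config/common-lisp/source-registry.conf.d/my-projects.conf
  (:tree "/home/me/src/lisp/")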


I do need to study the matter. I've found the ASDF docs to be unusually opaque (last time I read them), and my current solution to be Very Simple (and a little Stupid), so I've been content not changing the setup.

Thank you for your kind information.


Indeed, I think nearly every section of the chapter on configuration requires you to know the contents of all the other sections for it to make sense. I have probably spent a few hours going over that one chapter at this point...


IMO the "Getting Started" section presents way too many choices. Perhaps instead just link to Portacle (or some other lisp quickstart) with a mention that there are other options?


Some notes based on my (brief) experience toying with Common Lisp:

* Why hasn't anyone made a more eye-friendly version of the Common Lisp HyperSpec? Having good, easily browsable documentation is a core problem.

* The relations between the various native data types were quite unclear to me.

* Dealing with the external world was quite a mess. Many projects/libraries implemented only half of something and then got abandoned.

* some libraries had a compatibility matrix... with common lisp implementations. that seemed weird to me.


> Why hasn't anyone made a more eye-friendly version of the Common Lisp HyperSpec? Having good, easily browsable documentation is a core problem.

A friend of mine is working on that as we speak. The CLUS, or Common Lisp UltraSpec:

https://phoe.tymoon.eu/clus/doku.php

That said, CLHS isn't bad - it's lightweight, available off-line, and on-line you can pretty much always find what you're looking for by searching for "clhs [term of interest]".

> some libraries had a compatibility matrix... with common lisp implementations. that seemed weird to me.

Common Lisp has a pretty good standard, but it's a bit old, so it didn't predict some things we're currently using, and it also left some other things to the implementers. So some features are only available via vendor-specific extensions, which makes it necessary for some projects - especially compatibility libraries - to be tested across a plethora of CL implementations.

The flip side is that you have a few commercial and open source implementations to choose from, which is valuable if your project can exploit strengths of a particular one.


Ah, this is something to follow. Sadly it doesn't seem to have a permuted symbol index, which is what I really love about the CLHS. Still, I suppose they have to start somewhere.


> Why hasn't anyone made a more eye-friendly version of the Common Lisp HyperSpec? Having good, easily browsable documentation is a core problem.

The HyperSpec is only distributed under a restrictive license. The TeX sources of the actual last draft of the ANSI specification are public domain, though. This means that someone would need to retranslate from the TeX sources rather than modifying the HyperSpec.


> Why hasn't anyone made a more eye-friendly version of the Common Lisp HyperSpec

I mostly access the HyperSpec using Erik Naggum's hyperspec.el while programming in Emacs, configured to bring up Lynx. A lot of other people probably do something similar, so there has not been much interest in adding CSS.


> The relations between the various native data types were quite unclear to me.

I'm not sure what was unclear. Perhaps you could give a specific example of something that confused you?


> * The relations between the various native data types were quite unclear to me.

http://sellout.github.io/media/CL-type-hierarchy.png


> http://sellout.github.io/media/CL-type-hierarchy.png

Indeed.

Take for example base-char, which is a type.

base-char derives (derives? the straight solid black line is not represented in the legend) from character, which is a built-in class (according to legend).

So is character a class too? does base-char encapsulate a character or something like it?

Before anyone comments, please keep in mind that this comment isn't meant to start a flame war or be critical in any way.


Types in Common Lisp may or may not be represented by classes (all classes are types, not all types are classes). In this case, character is indeed a class, while base-char is not.

It's definitely true to say that base-char is a subtype of character, in that some characters are also base-chars, while others are not. (The spec defines that there are at least 96 standard characters, all of which are of type standard-char, and thus also of type base-char. Implementations may add others. Interestingly, the spec doesn't define how those characters are encoded; of course most modern implementations use some variant of Unicode.)

Some examples:

    (typep #\A 'base-char) ⇒ true
    (typep #\A 'standard-char) ⇒ true
    (typep #\A 'extended-char) ⇒ false
    (typep #\A 'character) ⇒ true
    (typep #\𝀽 'base-char) ⇒ false
    (typep #\𝀽 'extended-char) ⇒ true
etc.


https://pbs.twimg.com/media/C56V5kyWMAAi0DD.jpg

Blue are CLOS classes. Brown are structure classes. Red are built-in classes. Black are types without any classes.

This is the hierarchy for types in LispWorks, packages CLOS and CL.


Can someone point me at an argument for why I'd want to write CL in 2017, given all the great alternatives available now?


It's a seamless dynamic programming language, which can, with care, be given excellent performance and a high level of abstraction. In my opinion, it's miles better than the other dynamic languages out there, by nearly every factor. It rewards investment and development very well; it's a tool for mastery, not for quick and easy starting.

If you're looking for statically typed languages, it's not going to win there. But my experience writing a lot of Perl and Python, along with some experience helping out with Clojure and Ruby, strongly indicates that Common Lisp is very, very competitive with them outside of the 'library' front.


The Common Lisp library ecosystem isn't as polished as it is in Python from what I know. Nor is the documentation for many libraries up to scratch, even the stellar ones. Some libraries merely document the functions provided and don't go into how to use the library. Even stellar libraries such as Hunchentoot do this, which I find rather annoying after using Python libraries which seem to make it an important point to tell you how to use things.

I love CL and the idea of Lisp in general, but its modern unpopularity makes it hard to ask questions (because chances are you won't get a response any time soon) and the library situation is a real downer.


The CL community tends to expect you to read the source. You're not just a user, you're a programmer too - and quite possibly of the library you want to use.

It's an interesting dynamic, very alien to modern computing modalities.


Sometimes I get the feeling that the lack of CL libraries is somewhat self-imposed. "We don't need no stinking libraries. Since CL is so AWESOME, a competent CL programmer can reimplement whatever he needs in a fraction of the time that programmers using a lesser language would need even with the help of ready-made libraries". See "smug Lisp weenie".

Not saying that the above is actually factually true; certainly quicklisp has done a lot to make 3rd party libraries a lot more approachable.


I'm not an experienced web/http developer, but it [1] seems pretty understandable and even contains a "Your own webserver (the easy teen-age New York version)" section for the don't-care, just-give-them-teh-routez crowd. define-easy-handler serves exactly that purpose.

From what I'm able to recall from my childhood, all docs were like that.

  [1] http://weitz.de/hunchentoot/


Do you prefer it over Clojure? If yes why? I'm a bit versed with Clojure but sometimes I feel that it is not a true lisp.


> sometimes I feel that it is not a true lisp.

That feeling may be the result of some questionable syntactic design decisions in Clojure.

Hickey went over Lisp syntax and tried to remove parentheses wherever possible. (It was the hip thing, PG's Arc did it, too, which may be where Hickey got it from.) As a result, you get a sub-par editing experience —generic sexp-based structure editing doesn't get you as far as with a Lisp— and diminished readability once you get above three omitted pairs of parentheses in a row.

On the other hand, Clojure arbitrarily mandates a secondary kind of list literal in some places of the syntax, again making Clojure harder to write and edit. I don't even see a readability benefit, but of course YMMV.

TL;DR: Clojure syntax is a "complected" derivative of Lisp :-)


> you get a sub-par editing experience [in Clojure]

Paredit and vim-sexp have no problem doing structural editing. I don't use it myself, but Parinfer was created with Clojure in mind and people seem incredibly happy with it.

> diminished readability [in Clojure]

If you watched the talk that you lifted "complected" from, you'd know that this is subjective. Is German unreadable because I don't know German?

> Clojure syntax is a "complected" derivative of Lisp :-)

Complected in some ways sure, but far less so than many lisps, and certainly less so than Common Lisp. It seems your familiarity is complected with your reasoning.


So you've seen that video? Great talk, though.


Yes.

Clojure often feels like a skin over Java; an ergonomic library. Most visibly this is manifested in stack traces.

Clojure also wants to compile files, rather than read them.

There are good ideas in Clojure, but they lie around the parallelism abstractions, not around the implementation.

Socially, of course, the JVM has a huge mindshare.


Much as I love Clojure I find the list vs seq abstraction goes against the lisp tradition.


Common Lisp has a sequence type that includes lists and vectors (and because strings are vectors, strings). Ordinary functions like sort, remove, find, reverse, and length operate on them.

Clojure's seq extends the abstraction to include maps and sets and other types implementing the interface.


There are some other practical answers here so I'll take a different angle.

Fun! CL is a language to play in; after a day of wrangling Java & ObjC issues I love settling down to just play in an environment that lets me blast some code out and play with ideas. Of course this applies to other languages too, and this is dependent on your interests, so the case I want to put out there is:

Even if a language isn't suitable for your current business needs, see if it gives you joy. Languages have trade-offs to meet their goals; evaluate languages for pleasure too.

Also come visit #lispgames on freenode sometime..most of us are procrastinating making engines but it's always nice to have new folks around.


I'll take a stab at this glib comment:

1. You've inherited an application written in Common Lisp

2. Common Lisp has features you find desirable that aren't available in another system

3. You like Common Lisp

4. Common Lisp helps you get the thing you're trying to do, done


You can replace "Common Lisp" with any other language and these points still stand (at least in a tautological, nonsensical way).


Those are the kind of answers "glib" questions tend to get in response.


You're right that these arguments apply to many languages, but they're not tautological or nonsensical and the choice of Lisp (like many other choices) can be defended.


1. Because I was forced to? That's not why I'd want to.

2. Examples?

3. Fair enough. Though AlexCoventry's question probably shows that he does not (currently) like Common Lisp, so this answer doesn't give him a reason.

4. Sure, that's true for every language and tool. Choose it when it helps you, don't when it doesn't. But why would CL help me more than another language?


To 2 and 4.

Better exception handling (conditions and restarts), advanced OOP capabilities, proper macros which are convenient to use (which pretty much requires a language to be homoiconic), and image-based development with the ability to hot-swap any code in a running program, including but not limited to full class redefinitions without losing data. All that with the resulting code able to achieve close-to-C++ performance thanks to very good (commercial and open-source) compilers. Not to mention the stability of the language thanks to the ANSI standard.


> 1. Because I was forced to? That's not why I'd want to.

You might find it pleasant. Who knows? Maybe you're a professional and you'd want to understand CL so that you can embrace and extend the system you've inherited.

> Examples?

Conditions and restarts. Incremental compilation. CLOS. Numerics.

> 4. Sure, that's true for every language and tool. Choose it when it helps you, don't when it doesn't. But why would CL help me more than another language?

CL is more like a system than a compile-and-run language. The compiler, debugger, standard library, and platform libraries all live in the image. You build the program as you go, incrementally, and can inspect it while it is running.

An error doesn't halt the entire process... instead condition handlers can choose a restart (including asking the user what to do) and execution continues. That means you can attach to a remote image running on a server, notice that there's an error with a particular request, inspect the entire stack, fix the problem, and continue the request. No need to crash or anything like that. If a more hands-off approach is necessary, then a handler can be written to choose an appropriate restart.
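Here is a minimal sketch of that hands-off style, with made-up function and restart names:

  (defun parse-entry (line)
    (restart-case (parse-integer line)
      (use-value (v)
        :report "Supply a value to use instead."
        v)
      (skip-entry ()
        :report "Skip this entry."
        nil)))

  ;; a caller decides the policy by picking a restart instead of crashing:
  (handler-bind ((error (lambda (c)
                          (declare (ignore c))
                          (invoke-restart 'skip-entry))))
    (mapcar #'parse-entry '("1" "oops" "3")))    ; => (1 NIL 3)

Without the HANDLER-BIND, the same error would land you in the interactive debugger with both restarts on offer.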

CLOS is great. You can change the class definition in a running image and the instances will incrementally update without recompiling and restarting the program (and rebuilding all of that state).
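A tiny, hypothetical example you can run at a REPL:

  (defclass point () ((x :initarg :x :accessor x)))
  (defparameter *p* (make-instance 'point :x 1))

  ;; later, in the same running image, redefine the class with an extra slot:
  (defclass point () ((x :initarg :x :accessor x)
                      (y :initform 0 :accessor y)))

  (y *p*)    ; => 0, the existing instance picked up the new slot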

You can develop the language you need with these tools instead of using the language you've chosen (or inherited). Need an interactive prover? Try ACL2. Experimenting with sequent calculus? Want a language built around recursive fractals?


>1. Because I was forced to? That's not why I'd want to.

Better than rewriting the whole thing in your pet language.


This is a question that has been answered many times over the years. Search for "why common lisp" and you'll get a long list of them.

The answers now, IMO, are not substantially different than they were, say, ten years ago. The big differences, I think, are the new competitors in the LISP world: the rise of Clojure, and the rebranding and further development of PLT Scheme as Racket.


I did a presentation about this some time ago where I try to explain how I develop Lisp software. Perhaps someone will find it informative:

https://engineers.sg/video/web-development-in-emacs-common-l...


Well then you are giving us a fantasy scenario. Because in reality, the alternatives are not great.


If you're so inclined I'd make it a "living document" that gets updated as the state-of-the-art evolves. Writing CL in 2017 is not likely to change rapidly in the next decade but even compared to what writing CL was like 8 years ago it has changed enough.

Nice job.



Yes, send in PRs if things significantly change! It's a stable ecosystem, but things do improve over time. Things like examples would be quite useful.


While I remember - we need a refreshed SOTU for 2017. The 2015 one[0] seems to still be mostly correct, but the CL scene is pretty active.

[0] - http://eudoxia.me/article/common-lisp-sotu-2015


For people using macs, it's probably worthwhile to mention CCL's IDE, which you can easily build from within the CCL sources using (require :cocoa-application), or which you can get for free from the Mac App Store (it's called "Clozure CL").

It's a little bit bare-bones and a little bit perpetually unfinished, but it works, and it gives you a Lisp-aware environment for editing and running Lisp code, and it even has a few handy tools.


Is there something like paredit available for CCL's IDE?


>Repository for local libraries with the ASD files symlinked in

This method is so old, I can't believe people are still doing this. You can easily set up ASDF to look in a subtree of a directory and never care about it finding your libraries again.

[1] https://common-lisp.net/project/asdf/asdf.html#Configuring-A...


How does Common Lisp compare to Racket nowadays? I've seen a lot of activity but I can't decide which one to try out. I only have time for one of them ATM.


1. Racket is a multi-paradigm programming language. It has a Java-style class/object system, a CLOS-like object system (Swindle), and a prototype object system (like Self and JavaScript).
2. The macro system is arguably the most sophisticated available.
3. Functional programming! Including '(purely) Functional Data Structures'.
4. Parallelism (futures).
5. Concurrency.
6. Contracts.
7. Typed Racket.
8. Pattern matching.
9. Modules.
10. Units.
12. Pattern matching.
13. Exceptions, continuations.
14. Reflection.
15. Arguably the most sophisticated tools for creating full languages and DSLs.
16. Datalog language (like in Datomic).
17. An amazing IDE - it has a tool to debug macros! (But you can also use Emacs.)
18. Typed Racket.
19. Functional pictures (pict).
20. OpenGL.
21. Cross-platform (Windows, Mac, Unix, Linux); it will even run on a Raspberry Pi.
22. Amazing community.
23. Math library (and plot library).
24. Scribble/@-expressions (like Markdown but much more powerful).
26. Quite a few libraries.


The question was how they compare. 90% of your list applies to CL as well. (CL has Sheeple as its prototype-based object system.)

The statement about macros is more nuanced than you let on: language decisions like being a Lisp-1 without a symbol namespace (CL packages) make hygiene a necessity, which in turn complicates writing macros. One thing I do like about Racket is that it comes with support for writing pattern-directed macros, which, although possible to write in CL, have no built-in tools for them. Another point for Racket is its support for macros with better error reporting (e.g. the ellipsis).

15 is really the main selling point for Racket. It is a language-creating toolbox (I forget the term they use for it). Its module system stands out in that regard compared to CL, which has readtables as a way to customize the reader; Racket's solution is more general and extensible.

17 is not true when compared to CL: Racket's IDE is way behind SLIME/Sly, though it doesn't require setup.

The other big difference is that CL is image-based and suited for interactive development, while Racket is batch-oriented, with a 'Python-style' of interactivity. If you come from Python you will think Racket is interactive, but if you come from Smalltalk you'll know what you are missing.

As Aiden said, both are great languages, flip a coin.


> Racket is a multi-paradigm programming language

and its only implementation.

1. I use LispWorks, a multi-paradigm programming language (and its implementation) based on Common Lisp. It has CLOS and can use multiple other object systems.
2. Uses procedural macros
3. Procedural functions
3. Concurrency
6. Assertions
7. Type declarations
8. Unification
9. Systems
10. Conditions
11. Reflection
12. Arguably excellent tools for creating languages and DSLs
13. Prolog
14. Rule system
15. Database interface
16. OpenGL
17. Cross platform and will run on the Raspberry Pi
18. Nice community
19. Interface builder
20. Delivery as shared libraries and applications, ...

and a long list of other features...

It's commercial and closed software, though.


> 17.an amazing IDE - it has a tool to debug macros!

This may be true for an experienced user, but I've mainly used VS Code/Atom/Sublime etc, and I did not enjoy DrRacket -- I'm assuming that's what you're talking about. Some of the dropdown menus didn't render for me, and while this may sound stupid, I had the damnedest time figuring out that half my screen was a REPL and the other half a file.


How long ago did you try it / what platform are you running on? It was a few years ago now, but they rewrote the GUI libs to be native rather than whatever x-platform lib they used. DrRacket seemed to get... better then.

I mostly use emacs, but was playing with paredit for DrRacket with emacs keybindings the other day and decided that I could get to like some of the other creature comforts of DrRacket.


I think you will be happy with either one. It's targeted mainly toward people familiar with them, but here's a recent comparison:

http://fare.livejournal.com/188429.html

Note that a lot of what Faré likes about racket is somewhat intrinsic in there being one implementation, and a big part of why racket branded itself away from scheme (it was formerly PLT-Scheme).

In lisp there are still a lot of very different implementations in use, so if you want to "grow down" you either have to be non-portable or do a lot more work.

I love Common Lisp, so I will say "learn Common Lisp" but you'll probably be just as happy if you flip a coin (and I would recommend doing so rather than debating much as learning the "wrong" one now is probably better than learning the "right" one in the future).


My personal, subjective, feeling is that the Racket offers an easier entry path (docs, libraries and library discovery, relative non-cruftiness, package management via raco, IDE, etc.). I also like that Racket is a little more biased toward FP than CL. I'm told the macro system is ahead of CL's, though I can't say I deeply grok macros.

The Racket community always feels friendly and welcoming to beginners, which is something the CL community hasn't always been.

I do like that CL has a standard with multiple implementations. That said, the standard feels old, and you can quickly run into libraries that were built with less-than-universal compatibility. Things like tail-call optimization are commonly available in implementations, but not in the spec.

I've tried to carve out a bit of hobby time for lisps over the past few years. I started with CL and fought with the tooling and I could see the power, but I never felt great about my abilities with it. I tried Clojure and periodically use it as a stand-in for Java, and in that sense it is good. But these days, I've been playing in Racket, and it feels like the lisp I wish I'd started with.


One other item: the "enlightenment factor"—I'm really far from enlightenment, but I can see the crazy stuff people are doing in Racket with related, compatible languages, like Typed Racket and have a deep suspicion that Racket might carry me further on the path of enlightenment. That's down-the-road stuff for me though. Right now I'm enjoying small-time Racketeering.


If you have a Mac you should try Clozure Common Lisp (http://ccl.clozure.com). It has an integrated IDE so you don't have to futz with emacs and slime.

Also, this library smooths over some of CL's rough edges:

https://github.com/rongarret/ergolib


If this could have a "start" page and a "next" button that will take me from topic to topic in order I'd enjoy that.


Ah yes this is exactly what I needed. I was recently trying to start a CL project but I had trouble wading through all the outdated material, especially with regards to including external packages. Thanks for putting this together!


I'm going through a similar process. My caution is that 'package' has a very technical meaning in Common Lisp that is at odds with how 'package' is used in other languages (and a bit at odds with how the author uses it in their tutorial).

A package in Common Lisp is a set of interned symbols. In Common Lisp, systems are more in keeping with an ordinary understanding of packages...but combined with the idea of a build system just for fun.

ASDF is a way for managing systems (but it is worth keeping in mind that Common Lisp does not have any 'official' understanding of systems). ASDF is pretty much a de facto standard by consensus.

Quicklisp is a 'package manager' in the sense that it will go out and fetch a dependency from a repository. But what it fetches is a system: it is usually not a package in Common Lisp's technical sense.

From the Quicklisp FAQ:

How is Quicklisp related to ASDF?

Quicklisp has an archive of project files and metadata about project relationships. It can download a project and its dependencies. ASDF is used to actually compile and load the project and its dependencies.

ASDF is a little like make and Quicklisp is a little like a Linux package manager.
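A minimal sketch of the distinction, with made-up names: the .asd file defines the *system* that ASDF builds and Quicklisp fetches dependencies for, while the *package* is just a namespace of symbols defined inside the code itself.

    ;; my-app.asd -- the ASDF system definition
    (asdf:defsystem "my-app"
      :depends-on ("alexandria")      ; something Quicklisp knows how to fetch
      :components ((:file "main")))

    ;; main.lisp -- the package (a set of interned symbols)
    (defpackage #:my-app
      (:use #:cl)
      (:export #:run))
    (in-package #:my-app)

    (defun run ()
      (format t "Hello from package MY-APP, loaded via system my-app~%"))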

On the other hand, Common Lisp is very stable around ASDF and SLIME and Quicklisp. ASDF was started in 2002, SLIME in 2003, and Quicklisp in 2010.


Hmmm. I tried to convey the idea, without getting too bogged down in the details that don't per se matter on Day 1.

I would be interested in any PRs you have to clarify the matter adequately for people just starting out.


A few remarks:

0. It's a hard problem. I have a deep respect for your trying to tackle it.

1. The hard part is that it all matters on day one and everyone will get a lot of stuff wrong for a long while and some stuff wrong always when attempting non-trivial projects. This is the rule whether it's Python or Common Lisp or Racket.

2. I'm a fan of polyglotting languages in general and Lisps in particular. For a person really just starting out, I'd point them at the Racket ecosystem, because it is designed to be newbie friendly with student languages, while Common Lisp is designed for production programming on hard problems.

3. For people with some programming experience and simple curiosity, I'd just advise them to install SBCL and play around in the REPL, with an exercise of adding a script to wrap it in readline, and some exercises using the text editor of their choice to load and read and write files and such.

4. I don't think that there's a way to add training wheels to SLIME and Quicklisp and ASDF as the development environment. It just won't ever be DrRacket. At best it produces something like Aphyr's Clojure from the Ground Up...which uses Emacs, is more like a book, and is definitely a labor of love. It also dives into the details at day one.

None of which means I might not submit a pull request. But I take what Norvig says seriously -- http://norvig.com/21-days.html

Again, kudos for taking it on.


The niche of "welcome to Lisp, here's how to code" has been super well filled over the years. And regularly people write intros to SBCL, CCL, etc. articulate-lisp isn't intended to do that - there are a few notes in that regard, just to whet your thoughts (and because I was bored) - but that's not really what it's about.

But what is typically lacking is how you go from just a Lisp environment to a development environment that lets you operate at a professional level. Articulate-lisp is intended to deliver the thumbnail of how to get that put together, along with assorted references to further study.

At the time I created it, there was nothing really suitable for pro development out there. I guess roswell and portacle are things now.

Training wheels aren't really my bag of things. I'm notorious for preferring to read the O.G. paper on subjects rather than work through tutorials and simplified whatsits. But giving all the data at once doesn't provide the map of the territory that newbies crave.

There's no Royal Road to Lisp... or geometry. But a map to get you to where you're going does exist for geometry, and Lisp, I think deserves one too.

If that-all makes sense.


What you say makes sense to me.

I think a resource for "leveling up" is probably better if it is opinionated. I mean, there are reasons a person might not want to use Emacs for Common Lisp development (aside from using a product with a built in IDE), but there's no reason to handle edge cases which have little to do with "leveling up" on Common Lisp...there may be a SLIME mode for Atom, but someone who chooses it is swimming upstream in terms of Common Lisp.

The situation is similar in regard to Lisp installations. There are good reasons not to use SBCL, but they probably don't have that much to do with "leveling up" (again outside the commercial IDE world) and trying to cater to those non-leveling up reasons is a distraction.

To put it another way, a person who is just starting out is not in a position to make decisions based on experience. A year later, they may have the experience to make informed decisions because they have learned what matters and what does not.

I'm hopeful for Roswell and Portacle, but not terribly optimistic, in ways similar to when I hear about a new Linux distro. The hard work is not the exciting honeymoon period. It's grinding out maintenance over the years without getting paid. It's designing good features for other people without getting paid. Most projects cannot do it.

Part of the problem is that leveling up on Common Lisp is mostly a matter of will to RTFM. Sure a site can have a great article explaining Common Lisp packages, but to understand packages, readers will need to understand symbols and so the options are:

1. Expect the reader to already understand symbols.

2. Describe all of Common Lisp.

3. Accept that the reader will still have a lot of work to do after reading the article.

1 and 3 collapse into similar requirements for an author. 2 works if the author is writing a book and really knows their stuff.

Not sure how any of that is applicable here.


I'd have to review it, but I found Common Lisp Recipes to be excellent at explaining, well... everything. Useful inspiration may be found there.


Wondering if anyone has any experience using Lisp for machine learning? I'm aware of mgl[0], but it seems to be abandoned. The lack of any wrappers for TensorFlow or Caffe is also a bit surprising to me. The cliki page [1] is also unhelpful and out of date. Is machine learning on Lisp dead, or are there projects out there that I'm just not aware of?

[0] https://github.com/melisgl/mgl

[1] http://www.cliki.net/machine%20learning


Not directly, but I have been following various projects over the years.

My take on this is that the people using CL for machine learning have been doing it for some time, and so have their own toolsets; TensorFlow is relatively new in that regard, and w/r/t Lisp it would entail low-level binary interfacing (and therefore mostly non-portable between implementations) to hook CL code up to TensorFlow kernels (definitely not an expert here on either, however).
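For flavor, such low-level bindings in CL usually go through CFFI; a minimal sketch, against a hypothetical C library that exports double demo_add(double, double):

    (ql:quickload "cffi")

    (cffi:define-foreign-library libdemo
      (t (:default "libdemo")))          ; hypothetical shared library name
    (cffi:use-foreign-library libdemo)

    (cffi:defcfun ("demo_add" demo-add) :double
      (a :double)
      (b :double))

    ;; (demo-add 1.0d0 2.0d0) => 3.0d0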

Also, what is popularly referred to as 'machine learning' is in my opinion mostly one aspect of the field - e.g. classification neural networks. While Lisp can definitely do this, Lisp AI programming (in my amateur opinion) shines more in the realm of machine reasoning/inference, due to the symbolic/dynamic nature of the environment - e.g. constructing a set of reasoning primitives (functions, facts, decision trees) and a meta-interpreter to reason/infer about external data and walk around a problem space. Also, owing to the dynamic and rapid-development nature of the language, many people are likely working with their own prototype/core frameworks, possibly cobbled together from various small bits and pieces of 3rd-party code. And neural networks have been around for quite a while - what these new frameworks bring to the table is not so much new core algorithms, but the ability to quickly assemble them in a more popular/user-friendly way and take advantage of fast hardware (e.g. GPUs).

As for projects - in the general sense, Lisp has been a latecomer to the 'languages with a CPAN-style trove of public add-on modules' crowd, owing in my opinion to the need to support multiple implementations in order for such a project to take hold - so older but still quite functional libraries might be around in various hodge-podge repositories which are not standardised, but which old-timers have already included in their own local systems, etc. (see also the CMU AI repository).

In the last few years, much has been done in the module space - I would definitely consider Quicklisp to be roughly the de facto definitive list of current modules, especially those under active development, since the active community is basically converging on it as a module/distribution platform and many (most?) active community projects are available as Quicklisp modules - so it would probably be one of the first places to check for available libraries on any topic.

Also, the best way to 'explore' Quicklisp is to install it, then install various packages and muck around with/explore the source code they download into your environment - the documentation tends to be much less 'external' (e.g. websites) and much more 'internal' (e.g. READMEs, in-tree code examples or unit tests).

https://www.quicklisp.org/beta/releases.html
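For instance (assuming a stock Quicklisp install under ~/quicklisp/), the standard entry points are:

    (ql:system-apropos "learn")   ; search the current dist by system name
    (ql:quickload "mgl")          ; fetch, compile and load a system plus its dependencies
    ;; downloaded sources end up under ~/quicklisp/dists/quicklisp/software/,
    ;; which is where the READMEs and in-tree examples live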


I wrote in Common Lisp the star map generation software at the core of my startup, http://greaterskies.com, and could not be happier. But now that it's getting off the ground I wonder whether it may adversely impact my chances of being acquired. Are there any known examples of recent CL-based startups?


I don't know about being acquired, but I believe there is a Canadian travel-oriented startup using CL for an algorithmically important part of their stack. I forget what it was called.


Not a startup per se, but Grammarly is a well-known company using Common Lisp for their core product.


If emacs is an obstacle to Common Lisp in 2017, maybe what's needed is a Lisp-interaction plugin for vi(m) (or whatever it is that vim uses in lieu of emacs modes). I don't get the hype for modal editing but you can't argue with the data clearly showing emacs users are in the minority.


slimv is a swank client for using Common Lisp inside vim. http://www.vim.org/scripts/script.php?script_id=2531


I used to have a problem with emacs. But just a few days ago I finally sat down to learn it and now, 2 days later, I can no longer imagine life without it.


I've never used Emacs, so I can't really say how it compares to Slime, but I've thoroughly enjoyed using Vim-Slime[0] and Tmux[1] for development in lots of languages (CL, Racket, Clojure, Ruby, JS, SQL, Haskell, Bash, etc.).

- https://github.com/jpalardy/vim-slime

- https://tmux.github.io/


Quoting this dead (and useful) comment for others who don't browse showdead:

```

l04m33 3 hours ago [dead] [-]

I wrote a new CL plugin for Vim, which only depends on +channel and supports most SLIME features: https://github.com/l04m33/vlime It's quite new, and I'd be grateful if you could try it out and provide some feedback.

```


Thank you.

A new user introducing his own work in the first comment did seem suspicious, but I tried to make it relevant, and didn't mean to spam around.


I wrote a new CL plugin for Vim, which only depends on +channel and supports most SLIME features: https://github.com/l04m33/vlime

It's quite new, and I'd be grateful if you could try it out and provide some feedback.


Honestly I don't think Emacs is an obstacle to people who use Vim. It's an obstacle to people who use IDEs


IMO, It's an obstacle only to people who don't use IDEs.

Someone who uses e.g. visual studio for C++ isn't going to balk at the suggestion of installing IntelliJ for Java, and I doubt that they would balk at the suggestion of installing SLIME for lisp.

In my experience it's people who live in their editor (sublime, vim, &c.) that balk at the idea of installing an IDE as the first step for using a new language.


I use(d) vim and see Emacs as annoying.


I use vi and emacs and see vim as some odd vi that is trying to be emacs but ends up being neither.


It might be annoying but it probably wouldn't be an "obstacle", especially considering things like evil-mode exist.

That's how it went for me, anyways. From vim user, to trying Emacs because it had some feature I needed, to becoming a full-time Emacs user with evil-mode.


If somebody is not comfortable using Emacs (I am), there is an Atom plugin for use with CL: https://atom.io/packages/atom-slime

It doesn't replace Emacs, but it works as a first Lisp IDE.


Really! Do you use it? Would you feel comfortable submitting a PR explaining how to set it up? I HATE recommending emacs as the IDE for Common Lisp if it's not something people are comfortable with already.


It's very promising but still very young. I manage to break it very quickly each time I have tried using it.

Eventually though this will be a huge boon.


Hmm, this just gave me an idea: a Visual Studio Code Common Lisp plugin might also be a good way to expose the language to people unfamiliar with it.


Every time I tried to get into CL, having to use Emacs/SLIME (which, let's face it, is the development environment for it for all practical purposes) has been a huge turn-off.

So, yes. Please?


I had (and still have) a similar aversion to emacs, so I used to use Sublime with the SublimeREPL plugin, which allowed me to run Common Lisp and Clojure (or any other language with a REPL) inside the editor. Pretty much a makeshift IDE, with an interface you're already used to. I've since moved on to VSCode, and I bet there's already a way to replicate this in it; I just haven't needed to work with Lisp since, so I haven't explored it.


It would be a very good idea. Be sure to make it talk to swank (the server side of the current editor tooling) so you can benefit from all the work that has been done there


While people are directing their attention here:

Last year I looked into Common Lisp for a while, but got turned off when I found that there's no distinction between the empty list and boolean false (or nil, in CL-speak).

I found this kinda weird and vaguely off-putting. I don't want to write code to handle the difference between, say, an empty array and false or null in deserialized JSON data.

Can anyone comment on whether this comes up as an actual issue in practice?


It comes up in practice but is easily avoided with a mapping of:

    null  -> :null
    []    -> #()
    false -> nil
Mapping JSON arrays to lists instead of Lisp arrays seems wrong to me, but it's what most JSON libraries do by default. Fortunately most of them allow you to change that.
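A minimal sketch of what that looks like on the Lisp side (the mapping above is assumed; it is not any particular JSON library's default):

    (defun json-null-p (value)
      "True only for JSON null under the null -> :null mapping."
      (eq value :null))

    ;; (json-null-p :null) => T    JSON null
    ;; (json-null-p nil)   => NIL  JSON false
    ;; (json-null-p #())   => NIL  JSON []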


Would that mean I'd need to carry the `:null` symbol all throughout the codebase, wherever that data-structure might be accessed?


I'm not sure I understand the question. If you need to check if a value is null, you compare it with :null. It does mean that values you are going to serialize out to JSON should use :null rather than the more lispy nil, but JSON interfaces where you are required to send null values are quite rare (in fact I don't recall ever encountering one), so it hasn't been a problem for me in practice.


Symbols are interned, and become a simple integer comparison once compiled. It's no different than comparing against 0, except that :null will have some other non-zero value.


So, I really dislike truthiness and falsiness in other languages but somehow, in Lisp, it just seems to work. One thing that helps is that, in places where the distinction between null and false matters, other features of Lisp help keep them distinct: for example, getting a value from a hash table returns nil if the value is not found and, consequently, you can't distinguish a missing key from a stored null. GETHASH solves this by returning multiple values: the first is the value stored (if there is one) while the second indicates whether or not the key was found in the hash table.

Similarly, optional arguments [declared like (defun foo (&optional bar))] default to nil but, if you want to know whether a value was actually passed, you can change the argument declaration to account for this: (defun foo (&optional (bar nil bar-p))). In this case, when bar isn't passed, bar-p will be nil, but when bar is passed, it will be t.
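A short sketch of both idioms described above:

    (let ((table (make-hash-table)))
      (setf (gethash :present table) nil)
      (gethash :present table)    ; => NIL, T   (value is NIL, key was found)
      (gethash :missing table))   ; => NIL, NIL (key not found)

    (defun foo (&optional (bar nil bar-p))
      (if bar-p
          (format t "BAR passed: ~s~%" bar)
          (format t "BAR not passed~%")))

    ;; (foo)     prints "BAR not passed"
    ;; (foo nil) prints "BAR passed: NIL"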


Probably the worst implementation of the 5-puzzle problem you can write.

Artificial Intelligence Assignment 1

Problem 09: Write GET-NEW-STATES to implement all possible movements of the empty tile for a given state.

CG-USER(151):

    (defun get-new-states (state)
      (setf new-states '())
      (cond ((= 0 (first state))
             (setf new-states
                   (list (list (second state) 0 (third state) (fourth state) (fifth state) (sixth state))
                         (list (fourth state) (second state) (third state) 0 (fifth state) (sixth state)))))
            ((= 0 (second state))
             (setf new-states
                   (list (list (first state) (fifth state) (third state) (fourth state) 0 (sixth state))
                         (list 0 (first state) (third state) (fourth state) (fifth state) (sixth state))
                         (list (first state) (third state) 0 (fourth state) (fifth state) (sixth state)))))
            ((= 0 (third state))
             (setf new-states
                   (list (list (first state) 0 (second state) (fourth state) (fifth state) (sixth state))
                         (list (first state) (second state) (sixth state) (fourth state) (fifth state) 0))))
            ((= 0 (fourth state))
             (setf new-states
                   (list (list 0 (second state) (third state) (first state) (fifth state) (sixth state))
                         (list (first state) (second state) (third state) (fifth state) 0 (sixth state)))))
            ((= 0 (fifth state))
             (setf new-states
                   (list (list (first state) 0 (third state) (fourth state) (second state) (sixth state))
                         (list (first state) (second state) (third state) 0 (fourth state) (sixth state))
                         (list (first state) (second state) (third state) (fourth state) (sixth state) 0))))
            ((= 0 (sixth state))
             (setf new-states
                   (list (list (first state) (second state) 0 (fourth state) (fifth state) (third state))
                         (list (first state) (second state) (third state) (fourth state) 0 (fifth state)))))))

GET-NEW-STATES

CG-USER(152): (get-new-states '(1 2 3 4 5 0))

((1 2 0 4 5 3) (1 2 3 4 0 5))

CG-USER(153): (get-new-states '(1 2 3 4 0 5))

((1 0 3 4 2 5) (1 2 3 0 4 5) (1 2 3 4 5 0))

CG-USER(154): (get-new-states '(1 2 3 0 4 5))

((0 2 3 1 4 5) (1 2 3 4 0 5))

CG-USER(155): (get-new-states '(1 2 0 3 4 5))

((1 0 2 3 4 5) (1 2 5 3 4 0))

CG-USER(156): (get-new-states '(1 0 2 3 4 5))

((1 4 2 3 0 5) (0 1 2 3 4 5) (1 2 0 3 4 5))

CG-USER(157): (get-new-states '(0 1 2 3 4 5))

((1 0 2 3 4 5) (3 1 2 0 4 5))


"Dear windows user, tell us how this is done for SBCL"

Watch the YouTube video from Baggers. It's a lot more complicated than your average Windows user will want to go through. Then you have to set up Emacs, Quicklisp, etc. I never really knew what Quicklisp was doing and it made me nervous (I trust VS NuGet).


Everything should Just Work (TM) on Windows these days. SBCL provides reasonably fresh binaries to download. Quicklisp can be installed by entering the necessary commands into the SBCL console. Emacs has a Windows installer. SLIME can be installed from Quicklisp. And so on. I develop in Lisp on both Windows and Linux machines, and it works exactly the same (which cannot be said of some other languages).
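For reference, the whole Quicklisp install is a handful of forms typed into SBCL (quicklisp.lisp downloaded from the Quicklisp site; the SLIME helper at the end is optional):

    ;; after downloading https://beta.quicklisp.org/quicklisp.lisp
    (load "quicklisp.lisp")
    (quicklisp-quickstart:install)
    (ql:add-to-init-file)                     ; load Quicklisp automatically on startup
    (ql:quickload "quicklisp-slime-helper")   ; optional: sets up SLIME for Emacs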


Portacle[1] and lispstick[2] both provide portable sbcl-based development environments on windows.

It's really simple to track what Quicklisp is doing, and trivial to manage multiple installs of Quicklisp simultaneously (though you can only use one install at a time in a given Lisp image). Everything goes in <install-directory>/dists/<distribution-name>.

1: https://shinmera.github.io/portacle/

2: https://github.com/jasom/lispstick-automate/releases/tag/1.0


I've been telling people for years to just use Clozure CL on windows. It might be a bit slower, but it's still blazing fast and you don't have to deal with compatibility issues.


The video in question: https://www.youtube.com/watch?v=VnWVu8VVDbI&t=744s

As aidenn0 mentioned portacle will hopefully be a good option for some when finished as it gives you a complete environment in one shot.

For installing CL I may update my video to use Roswell at some point, I've been having good success with it when using TravisCI.


A newbie question, please: how do you deploy CL to production? I mean for long-running programs.


There are a few ways of doing it, all with different tradeoffs:

If very quick startup times are a necessity, then you just build a standalone image. This is very much like deploying a program in e.g. C. You build it with asdf, and then install it (and any dynamic libraries) into your deploy environment.

For "long running" programs, this may not be as necessary, and since the lisp runtime includes a full lisp compiler, it's not uncommon to just have a script that launches your lisp executable, uses ASDF to load the system, and calls the entry point.

For either one, it can also be useful to expose a swank server, though that does have security implications (anybody who can connect to the swank server (either localhost or unix domain sockets) can run arbitrary code).
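A minimal sketch of both approaches, assuming SBCL and a hypothetical system "my-app" with entry point MY-APP:MAIN:

    ;; Option 1: build a standalone executable image.
    ;; build.lisp -- run with: sbcl --non-interactive --load build.lisp
    (ql:quickload "my-app")
    (sb-ext:save-lisp-and-die "my-app"
                              :toplevel #'my-app:main
                              :executable t)

    ;; Option 2: compile/load at startup and call the entry point directly:
    ;;   sbcl --non-interactive \
    ;;        --eval '(asdf:load-system "my-app")' \
    ;;        --eval '(my-app:main)'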


The biggest problem with lisp adoption imo is that the first step of every path begins with emacs.

Emacs needs to die for lisp to flourish in a more modern editor.

Light Table was a good start, but we need some power behind similar projects.

I always thought guilemacs was the obvious successor, but it still hasn't happened.


Since you mentioned Light Table, I'm going to guess that you're open to other lisps. I think this is one thing that the Racket folks get right. My wife's first foray into programming was a coursera course starting with a simplified variant of Racket, and the dev environment could not have been less of an issue.

EDIT: Also, emacs doesn't have to die. I love emacs! But DrRacket is a nice alternative for beginners (and I'm told) advanced users alike.


Yes, but racket is a terrible language to work with. I've mentioned for years that drracket needs a general sexpr mode and integrated terminal for something like geiser. They don't listen.

I am working on a general purpose solution, but I'm just one person. And I don't leverage public dollars to write bad tools from the safety of university, so that slows it down a bit.


> racket is a terrible language to work with.

> ...

> I don't leverage public dollars to write bad tools

That's, like, your opinion, man!

Further, if your habit is to make feature requests in combination with this kind of subjective disparagement, I can see why the Racket community didn't jump to implement your request.

Lighten up. Everyone here is on the same team.


[flagged]


This comment seriously violates the guidelines around civility and name calling, and is just nasty besides. If you'd like to continue to post on Hacker News you can't do this again.


I prefer to be ruthlessly censored and have my narrative controlled by dang. Is he available?


Lisp and Emacs have decades of shared history, it's unlikely someone will really break this bond.

It would be nice to have alternatives, of course, but there is a reason certain people like emacs. (To be fair, I was an emacs user before I came across Lisp, but I can imagine how annoying/intimidating/frustrating it can be to somebody who has never used emacs before.)



Atom is somehow even heavier than emacs.

Because EVERY editor should definitely include the bloat of a web browser.


I do not know how to message "Scott", the author of the page, so I am putting this here.

in the "LispWorks CL" page, under "Implementations", the "Notes" section elicidates a mystery about the Personal Edition not recognizing the lisp init files. This is actually a limitation in LispWorks Personal Edition which is described on the link provided to retrieve said edition.


Ah, interesting! I don't remember reading that when I wrote it.

I'll go back and correct it after my current project wraps up (but PRs are also accepted!).


The Personal Edition has several limitations:

does not load init files

supposed to be used with the IDE

IDE quits after some hours

Memory is limited

can't create applications

---

Actually the Personal Edition is mostly for people to play around a bit or students doing some homework.

Other than that, the typical user will buy a version, which won't have the above limitations. Users may also test a time-limited full version before buying. People use either LispWorks or Allegro CL, then.


Yeah, I recall using it a bit more now.

The personal edition is really a trial edition, it's not a 'low budget low features' edition, which is what I was thinking when I originally was fooling with it back then.

(These days I use SBCL nearly exclusively for my CL implementation)


> 'low budget low features' edition

In some school/university course it would be like that. The students get a simple installer, get the IDE up and running after a few clicks and thus they don't have to learn GNU Emacs + installing Lisp infrastructure for some basic homework.


NICE!


Short answer: don't do that, use Clojure instead. It doesn't have any of the listed problems.


I am sure there are plenty of people who have a preferred Lisp. It does drive me nuts how, if there is a post on R, the top comments are people touting Python as the bigger player in statistics and data science (which it isn't), or a ton of other languages.

PS I prefer Racket :)


Came here to share my love for Racket. Parameterization, custodians, threading, channels, message inboxes, and the packaging tool is also great. I just recently published my first application to the Racket package repo, and I found it to be the least painful experience so far.


R is pretty pointless though. Unless you want to rewrite everything between prototyping in R and real deployment with a real language, you are better off writing in Python.


That right there is what I am talking about. I make a point and people have to attack it by calling the language pointless, when it has been the number one language in that domain for the last two years.

Why would you ever have to rewrite the code? Python isn't faster and has less functionality than R, and if you want you can just drop a few lines of Python in a cell of a notebook. I like Python and it's a good choice, but R is a great language - in fact, Python's pandas library is trying to be an R equivalent.


R is really only more useful in pure analysis - interfacing to other systems and dealing with application/control logic is not its strong point, so unless you are embedding R into a larger project (e.g. an R batch scheduler), you'll likely need to rewrite if you need those features.


>I make a point and people have to attack that that the language is pointless when it is the number one language in that domain, which just happened the last two years.

That's not true though, R doesn't have anywhere near the ecosystem that Python does for Natural Language Processing, Web Frameworks, Machine Learning, Computer Algebra and Symbolic Reasoning, Systems Programming, Image Processing, Document Processing, and other things I don't know about but if I needed something else then I can use Python confidently that there will be good packages for them with a community around it.

The entirety of R's unique mindshare is that it has a million variations on linear regression and contingency table tests. And frankly they're all so simple to implement that if you can't be bothered to learn how to implement it in 2 lines of Python then you probably don't know what your program is actually doing.

R is something that caught on because it made its statistics package top-level, saving keystrokes for statistics and bio-statistics professors who never needed anything else and didn't know how to otherwise program. Its unique syntax has led their poor students to have to learn C-family syntax many years after they could have been working and being productive with it.

Now some companies are accommodating R for their entry-level data scientist positions in order to hire cheaper help that can't find better options, but their skills are limited by being disconnected from the rest of the programming world in both packages and syntax.


> That's not true though, R doesn't have ...

You're comparing a general-purpose language and a domain-specific language.

The domain-specific one just talks statistics and data science. I don't need a web framework; I need to make my report and have the charts work. I also push out my reports in Word and PowerPoint, and I can't do that in Python but I can in R with the ReportR library (the whole reason why I left Python and pandas and came to R).

I say if you want to do more than the domain-specific stuff then sure, Python is an awesome language and you can use your Python skills for more, but if you want the extras that the domain-specific language gives you, come on board.

To say R is a "pointless language" is very troll-like, especially with companies spending millions on investment and infrastructure for R in the last 24 months.

I could also say what I always say about Python: it is the world's greatest second-best language. It doesn't do anything "best", but it is an awesome second best, and if that is fine with you, have fun. I love Python but I also see the limits of the language. I thought 15 years ago Python would revolutionize everything and everything would be written in it. We now have Lua that took over for game scripting. We have C++ still ruling the day for applications. Web development has been dominated by JavaScript, and as for mobile development, Python is, well ... So the two largest platforms aren't really impacted by Python.

> Now some companies are accommodating R for their entry-level data scientist positions in order to hire cheaper help

http://www.datasciencecentral.com/forum/topics/what-pays-mos...

http://www.kdnuggets.com/2015/05/r-vs-python-data-science.ht...

It isn't a clear picture, but in NO way is Python dominating in pay or work. R is a GREAT language and so is Python, but for some weird reason the Python community hates on R, while the developers of R and Python respect and work with each other and help both sides. I am shocked you feel Python is so overpoweringly awesome.


Nope, it has new problems that are (IMO) worse.


I have dabbled in Common Lisp over the years and am quite comfortable with it. I know nothing about Clojure. What are the problems with Clojure? I am just curious. Thanks!


Usually, people tend to see the following problems:

* no native support, hosted language (js java)

* unreadable (java) tracebacks

* no tail call optimization

These are not my problems with Clojure, but they are things I often read. I am interested in Common Lisp because of the native support.


I prefer loop/recur to TCO. You know exactly when you are, and are not, in tail position (which can be a bit obscure) and the compiler won't let you recur from non-tail.


loop/recur handles only a tiny part of TCO.

TCO means all calls in tail position are optimised, not just the self-recursive ones.

Personally I find loop/recur ugly. It's a partial hack around the lack of TCO.

Though I prefer looping statements like Common Lisp's ITERATE, whose design I like a lot...

https://common-lisp.net/project/iterate/doc/index.html
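For a taste (assuming (ql:quickload "iterate") and a package that uses it):

    (iter (for x in '(1 2 3 4 5))
          (when (oddp x)
            (collect (* x x))))
    ;; => (1 9 25)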


In addition to the other problems described, Clojure just breaks the value of Lisp's syntax. Common Lisp does have its irregularities but code littered with Java imports and square brackets might as well just use C formatting and be done with it.


The primary value, to me, of Lisp syntax is that code is represented as data structure literals which can be manipulated as easily as any other data structure prior to execution. Clojure has that; it just comes with literals for a few more data structures and uses two of them, vectors and maps, in the syntax of built-in forms.


Why would arguments in Lisp (= "list processor") form a vector?

Is it so that the syntax of vectors can be used? Then it has technical consequences.

Is it because of technical issues (arguments are vectors internally) and thus it can be exposed in the syntax, too?

Or both?


I think the motivation for making the sequence of arguments in function/macro definitions vectors instead of lists was primarily syntactic - to make them look different from the body. They're definitely vectors internally though.


How do square brackets break the syntax? Aren't they just syntactic sugar for (vector ...), exactly the same as ' is syntactic sugar for (quote ...)?

And imports, obviously, are just special forms. It's not like Lisp doesn't have others. It's still an S-expression, so why is it problematic?


To add to what @macco said: no reader macros. Apparently this extraordinarily powerful tool violates a religious principle of the inventor, Rich Hickey.
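For those unfamiliar, a CL reader macro lets you extend the reader itself; a classic textbook-style sketch that makes [1 2 (+ 1 2)] read as (list 1 2 (+ 1 2)):

    (defun bracket-reader (stream char)
      (declare (ignore char))
      (cons 'list (read-delimited-list #\] stream t)))

    (set-macro-character #\[ #'bracket-reader)
    (set-macro-character #\] (get-macro-character #\) nil))

    ;; [1 2 (+ 1 2)] now evaluates to (1 2 3)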


If it was powerful enough to warrant use, someone would hack it onto tools.reader. It's obvious where it would go. For some reason, nobody has bothered with this. Likely because the reader already has programmable data (via tagged literals) and anything beyond that results in maintaining your own special reader.

I haven't yet encountered somewhere that I'd have used reader macros for that wasn't better solved by using data literals or tagged literals. There might be some random place where I need syntax beyond Clojure's data literals, but I'm far more likely to use a combination of data literals + tags. If absolutely required (it hasn't been yet in 5 years of daily production usage), I could simulate custom grammar using clj-antlr and a macro. If, for some unknown reason, I needed a custom non-sexp grammar embedded in my Lisp that I wanted to serialize exactly and then read back in the same syntax (probably because I hate myself and/or my team), I could hack it onto tools.reader.

tldr; you have reader macros if you want them. Nobody wants them who uses Clojure day to day. If they did want them, they could trivially extend tools.reader themselves.


Virtually nothing requires reader macros. That doesn't mean they aren't useful.

Personally if I wanted reader macros in clojure I wouldn't bother implementing them because it wouldn't be worth the pain to me. However, if clojure had them I/libraries would likely make good use of them.


It has a reader as a library. It's entirely possible for you to change its lookup map to a defmethod and go to town altering s-expressions for arbitrary syntax if you wish. You're not hacking a new reader here; you'd be refactoring something that probably should have been a generic method to begin with into one. Nobody has bothered with this because it's not useful to anyone, as far as I can tell. People who want non-data syntax write it with clj-antlr/instaparse (which are very easy to use) and use a macro when they want to interleave it into their code. These universally end up with a data-based AST that is readable and printable.


If you know Common Lisp reasonably, you'll probably have some insight into Clojure, e.g. Leiningen is a system definition facility; seq is an extension of sequences; and multimethods are generics much like methods. I'd say Clojure stands on the shoulders of giants.

There are some Lispy things that Clojure does out of the box that are more Lispy than what Common Lisp does out of the box, e.g. lists and other seq's as functions in the function position of an x-expression. There are things Clojure will not do, such as reader macros. Clojure's syntax is a bit different, but generally reads cleaner in the same way one might say a font reads cleaner. YMMV.


> lists and other seq's as functions in the function position of an x-expression.

Makes code more difficult to read. I consider this a language design error. I had that on the Lisp Machine 30 years ago (for example callable arrays, ... - maybe even Maclisp in the 70s had it); few people used it and it did not make it into Common Lisp.

Common Lisp was designed such that for the programmer and for the compiler the first element of a Lisp form is easily recognisable as a function: it has to be a symbol naming a function or an actual lambda expression.

Pays back in code maintenance over time...

> but generally reads cleaner in the same way one might say a font reads cleaner.

I think it reads cleaner in the way Perl code reads cleaner.


Personally, I find code easier or harder to read in the same ways I find prose easier or harder to read -- it depends on the author's ability to tell a story and the story the author is trying to tell and my interest in hearing the telling.

Seq's as functions are sometimes more readable to me for the same reasons code that uses reader macros may be more readable to me (even though reader macros may do away with normal s-expression syntax entirely). But again, mileage varies.


Arrays are functions, by definition. They relate inputs to outputs.


That does not mean I want them in a programming language to be functions.


I guess I just don't see a disadvantage. An array is a pure function, no side effects, so it should always be safe to treat it as one.


One disadvantage: you can't inline the code for such a vector reference, unless the compiler knows at compile time that this is a vector reference.


But this is true for anything that's in the callee position, right? Either it can be determined (possibly via static flow analysis) that it's a particular function, in which case it is inlined; or else it is an unknown function, in which case it won't be. It would seem that statically verifying that something is a vector reference isn't really harder than doing the same for a function, all else being equal.


if a Common Lisp compiler sees a call to cl:aref then it can inline it, without using static flow analysis or similar...


I was under the impression that CL compilers already had to do a lot of fairly complicated static analysis to achieve acceptable performance, because of dynamic typing.


Depends. Since the developer can use low-level functions and provide type declarations, a compiler can generate usefully fast code without too much work. More advanced compilers need fewer declarations, since they do some amount of type inference. One of the costs: the compiler then usually is quite a bit slower.


Sorry, there is a temporary ban on speaking about clojure in OP's thread. This discussion is about Common Lisp.


But it has many other problems.


I wish an experienced Lisper would explain why one should use Common Lisp over a language like Golang. Golang now has https://github.com/glycerine/zygomys for scripting. For that matter, why would one choose Common Lisp over GNU Guile? (Guile now supports fibers.) What does Common Lisp offer the working programmer that is an advantage over other languages?


"I wish an experienced LISPer would explain why should one use Common Lisp over a language like Golang."

Those are two very different languages. Very few people would find themselves ever staring at a choice of what language to use, having narrowed it down to just those two.

Lisp is notorious for its mind-expanding freedom. I think perhaps this attribute of it is less unique than it used to be, but it is still present. While I have hard time recommending it for production usage, you can still learn a lot about programming from using it for a while. It is still one of the most programmer-empowering languages there is.

(Indeed, I think the vast bulk of the reason why it's not really all that great of a language and why it has never taken off is that it is too programmer empowering; it grants power the vast, vast bulk of programmers are not actually capable of handling well at scale. It makes it so that two programmers separated by some communication gap rapidly find it challenging to write code that works together. That said, deliberately spending some time in such an environment is an important step for the budding systems developer, I think.)


> It makes it so that two programmers separated by some communication gap rapidly find it challenging to write code that works together.

The problem is not within a team, distributed or not. That will work, even for larger teams with some guidance. It's more the application and libraries which the team produces over time. 'Syncing' them with the outside can be hard. But you'll experience that in Java applications which use large frameworks, too. Same in Javascript.

This made it important to use a common Lisp standard and was the reason DARPA called for the creation of Common Lisp. Every DoD contractor brought their own Lisp dialect with their application, which made maintenance and sharing difficult and costly.

The standard could have been about more, concurrency/graphics/..., but much of that was rapidly evolving technology and it made little sense at that time to create a real standard for graphics - later Lisp didn't have enough commercial backing for these things and people were moving on to simpler / more conventional languages like Java with lots of industry support.


I like Go and Guile. I love Common Lisp.

It's an expressive language with a powerful development environment in the form of the REPL plus Slime. I love using Lisp for exploratory programming, as it's so simple to write something that both works now and makes sense later.

It has a wider range of built-in programming paradigms more fully realised than Go or guile. In comparison to CLOS, all other object systems are just kidding. Or you can program functionally or procedurally if you like. This gives you a better toolbox.

It's a mature and stable environment. I can run code from 15 years ago or I can update it to use the latest libraries at my option.

And I just find the regularity of the parentheses has much less of a screeching brakes effect on the experience of the code. It's smoother to read and write.

So I find Common Lisp a productive pleasure to use. If I were working on a project other people had to read and maintain I'd use Go, though, as it's quicker for them to apply skills from more common languages to it.


> In comparison to CLOS, all other object systems are just kidding.

I almost agree, with the exception, oddly enough, of Perl (either Perl6 or Perl5+Moose). It's just about as full-featured as CLOS, complete with a usable MOP.


You can run the code you wrote 15 years ago, and (hopefully) you will be able to run in 15 years the code you write today.


Stability of (most) implementations and solid language standard.


Given a reasonable choice, I would almost never use a language like Golang that doesn't enforce memory safety (at least, by default). Null pointers, ugh.


Most of the traditional definitions of "memory safety" are not violated by having null pointers, as long as they just crash when used incorrectly. nils in Go don't let you start accessing things you shouldn't or anything. Golang is memory safe by most definitions. (Possibly not by a definition that includes concurrent memory safety. I expect in 20 or 30 years the term "memory safe" will indeed involve that. But at the moment, it doesn't, because almost no language has that right now.)

This doesn't invalidate your underlying point that you want to use a language without implicit nulls... it's just the wrong terminology.


I'd put that one under type safety, and it definitely seems bizarre to me to make a new language that claims to have strong, static typing and allow arbitrary types, or pointers to arbitrary types to be null.


Go's nil is statically-typed. The nil keyword is polymorphic (as numbers in the source code are) but individual nils that occur at runtime in Go are actually typed. You can actually put methods on them and they execute just fine:

     // Self-contained version of the snippet: package and import added.
     package main

     import "fmt"

     type S struct{}

     func (s *S) Test() {
         if s == nil {
             fmt.Println("Nil pointer")
         } else {
             fmt.Println("Not nil pointer")
         }
     }

     func main () {
         var p *S
         p.Test()
     }
That will print "Nil pointer", not crash.

While I'm actually still in agreement that I don't love a language with nil pointers, it is less bad in Go because they are less pointy and stabby. It's not like C where they are automatically a crashing offense; they are valid values treated in sensible ways in a lot more places than in C, because in Go they still have their type associated with them. But I'd still like to be able to declare a non-nillable pointer type.

Please do note carefully the difference between less bad and not bad.


The type of s/p there can't be guaranteed to be (S not nil) at compile time, which is a bizarre decision in an otherwise-statically-typed language where the type system is designed to provide runtime type safety. You still have to do a null check at runtime if you want to be safe here

While it's nicer that you can perform the check inside Test() instead of before the call, I don't understand why the type system doesn't just prohibit this and make you use a union type if you want (S or nil) given that a lot more situations call for (S not nil) than (S or nil).


Pascal has null. Does that mean it is dynamically typed?


It means every type in Pascal is actually a union of itself and null.


I don't use Go, but I thought that nulls in it were memory safe (i.e. they abort rather than corrupt); am I wrong?

If I'm not wrong, this is a non sequitur on a post about Common Lisp, as Common Lisp is untyped but (by default; some implementations allow you to disable safety) safe.


Technically Common Lisp is optionally typed. You can add static typing if you feel like it, and it can give you a significant performance boost when you do.

Fortunately, I don't have to bother with it most of the time since the amount of code that is time-critical enough for it to matter is very small.
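A small sketch of what those optional declarations look like in the time-critical bits (implementations such as SBCL use them both for optimization and for compile-time warnings):

    (defun sum-doubles (v)
      (declare (type (simple-array double-float (*)) v)
               (optimize (speed 3) (safety 1)))
      (let ((sum 0d0))
        (declare (type double-float sum))
        (loop for x across v
              do (incf sum x))
        sum))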



