GNU Guile 2.2.0 (gnu.org)
331 points by amirouche on March 16, 2017 | 144 comments

For me this is the most exciting part:

Complete Emacs-compatible Elisp implementation

    Thanks to the work of Robin Templeton, Guile's Elisp implementation is now fully Emacs-compatible,
     implementing all of Elisp's features and quirks in the same way as the editor we know and love.
This means we can finally have a proper GuileEmacs!

Also exciting is the "Fibers" functionality for I/O, as well as the updates to the VM instructions that will more naturally facilitate JIT compilation. Especially now that a 2.2 release has been cut, I wouldn't be surprised if JIT compilation makes it upstream soon:


While others in this thread rightly point out that there's still work to be done to make GuileEmacs, I'm interested in another use case that seems like it could potentially be realized in the near-ish term:


I love org-mode, but wish that some of the features could be ported elsewhere just to make the format more ubiquitous. Or a stand-alone org-mode notebook server. Or support in other text editors. Or...

Do we want the format to be more ubiquitous? Org-mode is fantastic, but the format itself is .... obviously organic.

So that I can use it everywhere and people won't look askance? Definitely!

If I compare RMarkdown and org-mode, for example, org is strictly better for academic writing or anywhere else where internal cross-references are needed. Org isn't an astounding format, but it is quietly competent.

There’s still work to be done in pre-compiling the Elisp in Guile Emacs to reduce the start time (a lot).

I never understood why people care so much about Emacs' start time. I start emacs maybe once a month and I'd say that anyone who's starting their Emacs so often that start time becomes a nuisance isn't using Emacs the way it's supposed to be used.

Actually, I just have a systemd unit launching the Emacs server for me. So starting an Emacs frame takes 0.2-0.3 s at best :-)
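
For anyone who wants to replicate this setup, a user unit along these lines works (a sketch; the unit name and paths are illustrative, not taken from the comment above):

```ini
# ~/.config/systemd/user/emacs.service
[Unit]
Description=Emacs daemon

[Service]
Type=forking
ExecStart=/usr/bin/emacs --daemon
ExecStop=/usr/bin/emacsclient --eval "(kill-emacs)"
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user enable emacs.service`, then open frames with `emacsclient -c`.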

So it stopped being an issue years ago!

The split here is between people who start it as a console application and those who start it as a GUI app. The former group start and stop it often, with separate instances running in each terminal. The latter don't, using emacs' server mode to get extremely fast "start up" times. The latter way is better, but if you really have to edit files on remote servers a lot then you might have to take the former approach. Yes you could learn to use tramp effectively also.

So it's not supposed to be started often? Where does it say that?

Wrong angle; the point is that Emacs is so useful that you want it open all the time. I have Emacs running always: I use Org-mode for time tracking the tasks in a sprint, I keep my personal diary in Org-mode, and I program in JavaScript, Python, and Ruby. I run the Python interpreter in an Emacs buffer. I use Emacs for file management tasks; Dired is much better than pretty much all other file managers except for the ability to drag and drop to other windowing applications. And the really serious Emacs users will have a much longer list.

In my experience, most people who use emacs as a primary tool run it in server mode. On modern hardware, its footprint is trivial, and that allows you to launch the interface in a fraction of a second. My machines all have an alias for emacs that checks and launches the server (if necessary) before the interface, so I only have to bear a slow start once per reboot.
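
One common shape for such an alias (a sketch; the name `ec` is arbitrary, not from the comment above) leans on emacsclient's --alternate-editor option:

```shell
# In ~/.bashrc or similar: with an empty --alternate-editor,
# emacsclient starts the Emacs daemon if none is running, then
# connects to it; -c opens a new frame.
alias ec="emacsclient -c --alternate-editor=''"
```

The first `ec somefile` after a reboot pays the full startup cost; every later one connects in a fraction of a second.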

A lot of people seem to use it with tmux or similar, where it gets started a lot more often, at least with no server running.

This is fantastic news. Are there any other major roadblocks for GuileEmacs?

Yes! String representation, for instance, and lack of UTF-8 filename support (at least there is a bug filed), but nothing that can't be fixed.

The question is more whether the strings are fast enough yet. You could have had GuileEmacs for a long time, but they weren't fast enough.

No, we couldn't have had GuileEmacs for a long time, since the compatibility layer with Elisp wasn't complete - that meant that not all existing Emacs packages could be used as-is with a GuileEmacs. This release specifically addresses that, and I am sure that now there will be a serious effort to bring about a "stable" GuileEmacs release.

The missing compatibility layer is peanuts compared to the string performance problems underneath. That was the important question, which is not answered in the release announcement.

You could force guile on everybody, just as e.g. perl5 did with their Test2 replacement, but not everybody will be happy with a 20% performance hit in the real world, and esp. when telling nobody about it. This would be pure marketing shit.

Also check out this personal blog post from Andy Wingo, the primary developer:


That whole blog is worth a good read through. Lots of interesting stuff, particularly if you care about dynamic language implementation.

For those excited about the "Guile's Elisp implementation" in this release: the last major Guile release was 6 years ago, and much of this GuileEmacs work is still highly WIP; from my searching on emacs-devel, it seems to have stalled in 2015 for lack of volunteers.

Just because Guile implements Elisp the language doesn't mean there isn't a ton of work to be done on Emacs itself to swap out its native VM for Guile, and it seems nobody's keen on finishing up that work.

Given that Guile was basically launched by an RMS FUD attack on Tcl more than 20 years ago, 6 years seems like a rounding error.


Tcl was accused of being a scripting language that is more suitable for simple scripts, so I went and took a look at the Wikipedia article to see if it too presented a similar view. To quote: "It is commonly used embedded into C applications,[9] for rapid prototyping, scripted applications, GUIs, and testing.[10]"

Is Wikipedia also doing a FUD attack on Tcl (as could naturally be the case), or is RMS's post simply a less diplomatic and less friendly way of saying that Tcl doesn't have the same purpose and use cases as languages like C and C++? Any Turing-complete language can of course be used for anything, but looking at how something is used often gives a good hint about where its strengths lie.

At the time, Tcl was being used for fairly large apps (and it still is, e.g. in WaveSurfer). As for large scale suitability, I would guess that both Tcl and guile nowadays are eclipsed by Lua for embedded scripting, which to me would suggest that "simple" won out over "powerful".

What is not mentioned in that thread (at least not in RMS' original message) is that what triggered his edict was that somebody extended gdb with Tcl as the extension language. This was an absolutely reasonable use case for Tcl - the lightweight syntax makes it perfect as an interactive, embedded scripting language.

This discussion led to the creation of Guile, the withering of Tcl support in gdb, and eventually the introduction of a gdb that used Guile as its extension language — in 2014, 20 years after RMS decided that the Tcl extension (which I believe pretty much worked already) was unacceptable.

Of course, as astute readers will already have guessed, Tcl was BSD licensed...

Your narrative omits that gdb added Python scripting support in 2009.

Of course, as astute readers will already have guessed, Python is permissively licensed.

Both true, but I did not want to further digress. I would argue that the fact that it took 15 years for scripting support to show up again is evidence that the original policy was counter-productive.

It's been 6 years since Guile 2.0, when Guile went from being an interpreter only to having a VM and AOT compiler.

You mean a JIT compiler. There is still no AOT compiler for GNU Guile, AFAIK, except janneke's work.

There is an AOT compiler from source to bytecode, and then there is a bytecode interpreter. Wingo is considering the possibility of having a bytecode-to-native JIT or a source-to-native compiler.

Guile uses an ahead of time (AOT) compiler. Someone wrote an experimental tracing JIT recently [0] but that's not part of Guile itself.

[0] https://github.com/8c6794b6/guile-tjit

I believe in theory it also uses Lightning[1], but I can no longer get Lightning to build since GCC 6 or so, so I can't verify that.

[1]: https://www.gnu.org/software/lightning/

that's fantastic context; thanks for pointing this out.

What's left to be done?

I'm not involved in Emacs/Guile/GuileEmacs development. I just searched emacs-devel / EmacsWiki before writing that comment.

Having said that, it seems to be in a POC state, but is very slow and needs optimization. These two wiki pages provide a decent overview: https://www.emacswiki.org/emacs/GuileEmacs & https://www.emacswiki.org/emacs/GuileEmacsTodo

This "Preview: portable dumper," thread from November 2016 and spin-offs appear to be the most up-to-date emacs-devel discussion on the subject: https://lists.gnu.org/archive/html/emacs-devel/2016-11/threa...

The older October 2015 "In support of guile-emacs" thread seems to have been written after development mostly tapered out, discusses some work to be done, and is a call for more development (which seems to have gone unanswered): https://lists.gnu.org/archive/html/emacs-devel/2015-10/threa...

All the work on the Emacs integration appears to have been done by Robin Templeton in early 2015 with no commit since May 2015: http://git.hcoop.net/?p=bpt/emacs.git;a=shortlog

> Thanks to the work of Robin Templeton, Guile's Elisp implementation is now fully Emacs-compatible, implementing all of Elisp's features and quirks in the same way as the editor we know and love.

Robin Templeton also seems to be the one who finished off the Elisp compatibility in Guile here, so it's probably because he saw that as important to get first.

One big item is that Emacs' internal string/buffer representation supports things like loading weirdly (or even erroneously) encoded data and saving it unchanged; this is sometimes quite useful.

Guile's strings (gnulib's) do not support that, and adding the support is quite a bit of work given all the corner cases that Emacs supports.

Now, Emacs and Guile do not necessarily have to use the same underlying string/buffer implementation, but it would definitely be a big plus, and I wouldn't be surprised if the Emacs maintainers required it.

This release has been a long time coming and I'm happy that the day has finally arrived. There's a small patch of mine in this release (my first compiler hack ever) that optimizes comparison operations for floating point numbers. If anyone is interested in hacking on compilers, I highly recommend checking out Guile as one of the easier points of entry into the space. Andy Wingo, the author, even wrote up a blog post with plenty of project ideas to improve things: http://wingolog.org/archives/2016/02/04/guile-compiler-tasks

For others looking for what Guile is: "Guile is an implementation of the Scheme programming language."

The mailing list announcement is better, both for the detailed content and for not being served as hard-to-read "justified" text on mobile: https://lists.gnu.org/archive/html/guile-devel/2017-03/msg00...

Thank you, I gave up reading it because of the justification. There was no "reader" version available for Safari either.

To try out Guile 2.2.0 easily from any GNU/Linux distro (from the full release notes):

    Bonus track!  This release also contains a new experiment, a binary
    installation package for the x86_64 architecture.
    The GNU Guix project (https://guixsd.org/) has assembled a graph of
    package definitions (for example, GCC, glibc, Guile, and so on) and is
    able to build that graph in an entirely deterministic way starting
    from only a handful of trusted bootstrap binaries.  Guix recently
    added a "guix pack" facility that can export build products from a
    Guix system, including all run-time dependencies.
    We have used the new "guix pack" to generate an experimental binary
    distribution for the Guile 2.2.0 release.  If you are on an x86_64
    system running GNU/Linux, begin by running the following commands:
      wget https://ftp.gnu.org/gnu/guile/guile-2.2.0-pack-x86_64-linux-gnu.tar.lz
      gpg --verify guile-2.2.0-pack-x86_64-linux-gnu.tar.lz.sig
    If verification fails, then see above for instructions on how to
    import the appropriate GPG key.  For reference, the pack's sha256sum
    Then in your root directory -- yes! -- do:
      cd /
      sudo tar xvf path/to/guile-2.2.0-pack-x86_64-linux-gnu.tar.lz
    This tarball will extract some paths into /gnu/store and also add a
    /opt/guile-2.2.0 symlink.  To run Guile, just invoke:

Technical question: What are .lz files? Is this LZMA compression? If so, why not using the more popular .xz format?

Here's why: http://www.nongnu.org/lzip/xz_inadequate.html

tl;dr: Because apparently some people believe that not only can they afford to exclude some users by avoiding zip's, but they must also bike-shed some more.

That seems like a well thought out article

I'm not sure but probably because it produced a smaller file than xz.

It seems that https://en.wikipedia.org/wiki/Lzip came out at the time between lzma and xz, possibly because lzma lacked proper file headers and checksumming at the time.

These days, some distros use lz and most use xz, probably mostly due to historical reasons.

Can anyone tell me if Guile is relevant? The list of example programs written in Guile is small. EmacsLisp and not Scheme seems to be the Gnu lisp of choice. The VM is not the fastest and not the most portable. Is there any driver behind it?

0) Guile is a GNU project

1) Guile has no Global Interpreter Lock.

2) Guile is a Scheme, so it is homoiconic, cf. https://en.wikipedia.org/wiki/Homoiconicity

3) Scheme (and lisp in general) are nice to write Domain Specific Languages.

4) Guile doesn't have particular overhead for calling simple functions, which makes it possibly as fast as C.

5) Guile has a very powerful object-oriented programming framework (GOOPS), going far beyond Python's and Ruby's OO systems.

6) Guile is optimised for immutability, which makes for safer code.

7) Guile has Guix which has 'guix pack' command which is awesome cf. http://git.savannah.gnu.org/cgit/guix.git/tree/doc/guix.texi...

8) Guile has a lot of supporters cf. https://lists.gnu.org/mailman/listinfo/guile-user

9) Guile has awesome maintainers

Also, there are not a lot of packages. But programs written in Guile or using Guile tend to be of higher quality; cf. http://guildhall.hypermove.net/ and http://sph.mn/content/3e73

> 0) Guile is a GNU project

Honest question: How does that help nowadays? Certainly in the early 90s, browsing the GNU ftp site was terrific for discovery, and in the 80s, GNU tapes were even more powerful, but nowadays, GNU seems to be irrelevant for discovery. I can't think of a single software project in the last 20 years that I've discovered through its GNU association. Similarly, GNU used to be important as an infrastructure provider for open source software, but nowadays is one hosting option among many, and not a particularly attractive one.

GNU has never helped much in developing software, the individual projects always stood and fell with their maintainers. So it seems to me that nowadays people put their projects under the GNU umbrella mainly because they buy into a particular worldview of how software should be licensed.

I'm not a participant in the GNU project, but, as I see it, the most important thing about GNU is that they actually have ethics and culture, and that's what keeps the project relevant for decades. They hold to their principles and are able to follow their own way without succumbing to pop culture of the modern IT.

So yes, being part of the GNU project is probably more of a cultural thing than an immediate practical advantage.

On the other hand, the GNU project tries to build a coherent operating system, which means that Guile, being part of GNU, is the preferred choice of the extension language for other GNU projects. This makes it very likely for Guile to stay relevant for a long time.

I would actually love to hear what someone deeply involved in a GNU project thinks about this: What their big benefits are, etc..

Many of these would apply to most if not all scheme implementations though. I'm genuinely curious if there are use cases that set Guile apart from, say, Chez Scheme.

EDIT: Given that the original question appeared to imply "relevant as compared to other schemes"

I was implying that yes. Or even more so, relevant to other Lisps.

Wait...I thought Guile was slow?

Slow compared to C? Yeah. Slow compared to other Scheme implementations like Chez Scheme? Yeah. Slow compared to Python or Ruby? No... something has changed with the computer language shootout site, but IIRC on most benchmarks Guile was middle of the pack for Scheme, which is to say quite a lot more performant than Python.


What does "cf." mean?

In this case it seems to mean viz. or e.g.

Cf. is an abbreviation of "confer" which is Latin for compare and is used to introduce material that should be compared to assertions made previously.

Viz. is an abbreviation of "videlicet", which is Latin for "namely" or "as follows" and is used to indicate a more complete statement or example of something that has just been asserted, whereas e.g. precedes examples that illustrate a point.

Viz. and cf. are often confused, as in this case.

I don't think "viz." is correct here, either, which isn't surprising given the total absence of hyperlinks in the Latin corpus. A plain "see" would be best.

It means compare. So typically you'd use it to give a contrasting example. In this case it seems to be (mis)used to mean 'see' though.

It is still very relevant as an extension language for C (and C-likes, I suppose).

If you have all these constraints:

- must be easy to use extensions languages functions from C, and the other way around

- must therefore be easy to convert values from the extension language to C, and the other way around

- must be possible to share this state in a multi thread program

- including, have several threads implemented in the extension language

- must be able to load code at runtime

Then Guile is not only a good choice, it's sadly the only possible choice (or at least that was the case last time I checked, a few years ago). Many languages that claim to be good at extending C are lacking one of the above (usually the multithreading part; Lua, for instance, allows you to start several VMs in several threads, but they are isolated).

Yes, it is relevant. For example, Guile powers an entire GNU/Linux distribution[0] where Guile is used for the init system, initial ram disk, and package manager.

[0] https://www.gnu.org/software/guix/

That looks cool and/or terrifying. How does it go usability wise?

It's quite usable, but like anything could be better. I run GuixSD as my daily driver OS on a laptop and desktop and it's very comfortable for my needs. OS upgrades can be easily rolled back in case I screw something up so I feel more confident to experiment with my system.

How does it compare to say Arch?

It's easier to declare package recipes using Guile than with the mix of XML/bash/config files that other distros use.

Back in 1994 rms told us all we should not use Tcl, that GNU was building their own scripting language. True believers have been using Guile ever since. http://vanderburg.org/old_pages/Tcl/war/0000.html

> One will be Lisp-like, and one will have a more traditional algebraic syntax.

What's that algebraic syntax'ed language he mentions?

Parts of GnuCash use Guile, principally the parts related to reports. I'm a GnuCash user myself, but I don't know how large the GnuCash user base is. Another example is GNU dmd, originally the init system for GNU/Hurd, which is also written in Guile.

Another important example is Lilypond [1], a music engraving program. It requires the user to write scores in text files, which are then compiled to PDF/PS files. The core is implemented in C++, and Guile is used as an extension language [2].

Personally, I find Lilypond much better than Sibelius and Finale. In the past I have developed several Scheme snippets that allow me to apply complex layouts to my scores.

[1] http://lilypond.org/ [2] http://lilypond.org/doc/v2.18/Documentation/extending/index

You usually don't write anything other than emacs extensions in emacs lisp -- it would be overkill to have to start emacs just to start an HTTP server. For these purposes, guile would be a better choice?


Though that does not mean that there are no people who use Emacs to run a http server. Emacs (with elnode) powers marmalade, for example: https://marmalade-repo.org/

Besides being controlled by GNU, not really. Guilemacs is probably not going to happen either.

I like Emacs and all, but I'm really not sure why we'd be excited about Guile. Andy is a really awesome guy but he's the only one contributing to Guile, so there's no way it's ever going to compete with things like Chez or Racket (which is moving to Chez soon as well).

Andy is not the only one contributing to Guile. He does a ton, for sure, but Guile is far from a one man show. There are 2 other maintainers and many other contributors.

Take a look at https://www.openhub.net/p/guile/contributors/summary

Are you saying this is inaccurate? Andy is far and away the top contributor, it's not even close. It really does seem to be a one man show.

It's almost accurate, off by not much. There are a few contributions that don't appear there, like guile-nash and guile-log, which are respectively a tracing JIT compiler and a Prolog-in-Scheme runtime.

Go away, troll.

What? It's a serious question. It'd be great if a flagship GNU project was a serious competitor with other programming languages, but it honestly looks like Guile is being kept afloat by one or two people. Why is mentioning this trolling?

Sorry, but it seemed like you were trolling. Guile is not a one man show. I have been active in the Guile community for 5 years now, so I know. I will admit that the bus factor in the compiler portion of the code is high, driven almost exclusively by Andy, but he has done a significant amount of writing about how the compiler works and even listed several other improvements that can be done in an effort to get more people involved. I have no experience with compiler hacking and yet I was able to implement an optimization and get it into the 2.2 release. The commit history alone isn't enough to understand the group of people that push Guile forward.

Shame you're being downvoted. Your work inspired me to learn to program with Guile and SICP a few years ago. Thanks for everything.

By "shame you're being downvoted", do you mean you agree that I'm trolling? Care to explain why? davexunit's contributions have nothing to do with this.

Congratulations to the Guile team for the release! They're great people, and the Lua community is happy for having shared a devroom at FOSDEM with them for two years in a row.

Interesting talk on Guile 2.2:

Guile 2.2 performance notes (FOSDEM 2016) https://www.youtube.com/watch?v=fU4Tly29Tps

Anyone tried GuileEmacs with this version and noticed any differences?

IIRC there is still some unmerged elisp implementation code out there that improves certain things, but performance is still bad because things are not yet optimized.

Congrats to the Guile team. Remember -- Guile goes with everything.

It seems that guile and lua have similar goals (one of the largest being to function as an extension language). Anyone have practical pros/cons between them?

How does Guile compare to Racket?

AFAIK+AFAIU Racket does have some specific kinds of macros (ported to Guile but not in the Guile distribution proper). There is guile-log, which has no equivalent in Racket. Both support multiple language frontends, but Racket seems to have more languages, and in general the Racket community seems bigger.

Any word about Windows support?

If you are using Windows 10, it should work under WSL (Windows Subsystem for Linux).

I saw on IRC that someone built it with cygwin. That's all I know, though.

How does this stack up against all the other popular Lisp variants out there?

For Schemes, see http://ecraven.github.io/r7rs-benchmarks/benchmark.html — my takeaway:

- Performance: Guile 2.2.0 is halfway between the medium-speed Schemes and the high-performance Schemes, on par with Larceny, MIT, Chicken, and Bigloo (and Racket IIRC, but that’s missing from the benchmarks right now).

- Compatibility: Guile 2.2.0 is among the 10 most complete r7rs Schemes, missing only circular lists (the other failed task succeeds when allowing a longer task time).

Notably, they were able to achieve this performance with a bytecode interpreter. Unlike the other mentioned Scheme implementations, Guile does not use a native compiler (neither JIT nor AOT).

Why are those brackets there in the syntax? What's the need? It looks hard to read when the programs are bigger. Is there any super advantage to it?

The super advantage of Lisp (including Scheme): Its format for defining data is the same as for writing code, making macros a natural part of the syntax: You can change your source code just like you’d change any other list (or rather tree) data type.

However all this can be represented fully without parentheses — and this is possible with Guile using a reader extension without losing any of its power. This is realized, for example, in wisp — a language frontend for Guile: http://www.draketo.de/english/wisp

It’s a full Scheme, but without the parentheses (behind the scenes it simply infers the parentheses, adds them and hands the result to the regular Scheme frontend in Guile).

Wisp is also standardized as Scheme Request For Implementation 119: https://srfi.schemers.org/srfi-119/srfi-119.html

    display "Hello World!" ↦ (display "Hello World!")
In short: I think this question is justified. But there are many people who start preferring parentheses when they get over the initial readability barrier.

> The super advantage of Lisp (including Scheme): Its format for defining data is the same as for writing code, making macros a natural part of the syntax: You can change your source code just like you’d change any other list (or rather tree) data type.

I've heard this repeatedly over the years, but the explanation unfortunately always stops right there. Could you please give an example of why you'd want to change your source code programatically? It's always assumed that the reader implicitly knows why this is a good and important thing. Perhaps a practical example showing

(A) how this would work,

(B) what the benefit is, and

(C) how the added work in reasoning is worth that benefit.

Note: I personally like the parentheses, as they group everything together so simply.

Could you please give an example of why you'd want to change your source code programatically?

This is my favorite example.


[Common Lisp's] DOLIST is similar to Perl's foreach or Python's for. Java added a similar kind of loop construct with the "enhanced" for loop in Java 1.5, as part of JSR-201.

Notice what a difference macros make. A Lisp programmer who notices a common pattern in their code can write a macro to give themselves a source-level abstraction of that pattern.

A Java programmer who notices the same pattern has to convince Sun that this particular abstraction is worth adding to the language. Then Sun has to publish a JSR and convene an industry-wide "expert group" to hash everything out. That process--according to Sun--takes an average of 18 months. After that, the compiler writers all have to go upgrade their compilers to support the new feature. And even once the Java programmer's favorite compiler supports the new version of Java, they probably still can't use the new feature until they're allowed to break source compatibility with older versions of Java.

So an annoyance that Common Lisp programmers can resolve for themselves within five minutes plagues Java programmers for years.

>Could you please give an example of why you'd want to change your source code programatically?

To create new syntax and be your own language designer. This is what macro systems allow for. Here's a very contrived and simple example. Guile comes with no equivalent of the `++` operator that we know from C, C++, etc. So in the event that we have some imperative code that is mutating a counter, we'd have to write something like this:

    (define counter 0)
    ;; do some stuff...
    (set! counter (+ counter 1))
Quite a lot more typing! It would be especially annoying if we had many such counters. Normally, when we want to factor out code, we'd write a function, but there's a problem: we can't write a function that mutates any given variable. So, this wouldn't work:

    (define (++ n) (set! n (+ n 1)))
    (define counter 0)
    (++ counter)
This is just mutating a local variable `n`, nothing happens to the variable `counter`, it's still 0. So what do we do? Instead of a function abstraction, we'll use a syntactic abstraction instead. Here is a macro that does what we want:

    (define-syntax-rule (++ var) (set! var (+ var 1)))
    (define counter-a 0)
    (define counter-b 0)
    (++ counter-a)
    (++ counter-b)
    (++ counter-a)
Now `counter-a` is 2 and `counter-b` is 1. The `++` macro is a program that writes programs. It takes `(++ counter-a)` and expands it into the code `(set! counter-a (+ counter-a 1))`.

The reason these syntactic abstractions are so easy to make is because of the homoiconic Lisp syntax. I hope this has made sense and is helpful. Moving on from this simple example, we can create entirely new languages that are embedded in Scheme if we wanted to, adding things that are too specific to a problem domain to ever be in the standard language's syntax but very useful for the problem we are trying to solve.

Is the last line in your failed function example supposed to read:

    (++ counter)


Yup, sorry. Fixed.

> I've heard this repeatedly over the years, but the explanation unfortunately always stops right there.

Lisp macros are something like C macros, chainsaws, hydrochloric acid, or anything else that's powerful-but-dangerous. This is to say that sometimes they are the one tool you have to use, but often they are unnecessary and should be avoided.

Coming up on ten years ago, I wrote this blog post on one of the dangers of Common Lisp style macros: http://www.mschaef.com/blog/tech/lisp/defmacro-coupling.html

Put succinctly, the problem I write about in the blog post is that macros are essentially always inlined into the output of the compiler. This has the effect of more tightly coupling the modules together than is evident in the surface syntax. (Note that macro invocation sites are syntactically indistinguishable from function invocation sites, which makes this problem worse.)

The upshot of this is what you'd expect: the extent of the logic encoded in macros should be minimized, with the macros translating the code pretty much straight away into something built on more functional abstractions.

This is not to say that Macros aren't useful... sometimes you _have_ to use them to achieve a goal. Just that they probably aren't as big a deal as might be expected given the amount of 'press' they get.

All computer code is a dangerous tool that is often unnecessary and should be avoided if possible. A macro is no more or less dangerous than a function, class, variable, module, or anything else.

C macros have the issue that even when everyone involved in the creation and use of a C macro understands its pitfalls, those pitfalls cannot be removed from the macro.

For instance, a certain C macro might evaluate some expression twice. Everyone knows that this is dangerous, but there isn't any way to fix it. They just document it.

ISO C itself says that getc may evaluate its argument multiple times; thus don't do things like getc(stream_array[i++]) unless you remove the macro definition with #undef.

Lisp macros do not have issues that are unfixable in this way.

Sometimes they have issues that are difficult, though not impossible. Usually that occurs when, to be perfect, the macro would have to do a full-blown code walk. Macros are written that do code walks (for instance the iterate macro).

> Put succinctly, the problem I write about in the blog post is that macros are essentially always inlined into the output of the compiler. This has the effect of more tightly coupling the modules together than is evident in the surface syntax. (Note that macro invocation sites are syntactically indistinguishable from function invocation sites, which makes this problem worse.)

Before you apply macros, you need a well-designed (and documented, and versioned!) API against which the macros will write the code. If all you care about is what the macro syntax looks like and don't put any design into how the expansion works (beyond just massaging it so that it somehow works), then you may run into problems.

Macros don't introduce any problems that writing the same code by hand against the same API's wouldn't introduce.

If someone has to write the code, I don't see how you can get around it: it's either going to be a human, or a macro.

> a certain C macro might evaluate some expression twice. Everyone knows that this is dangerous...ISO C itself says that getc may evaluate its argument multiple times; ... Lisp macros do not have issues that are unfixable in this way.

If your macro is 'fixed' to emulate function call semantics by evaluating its arguments only once, then maybe a function is a more appropriate abstraction in the first place. The whole point of macros is that they let you break the rules of function call application in hopefully useful and predictable ways.

Another way to look at it is that repeatedly evaluating an argument is what you do NOT want for a macro like 'getc', but probably what you DO want for a macro like 'repeat'. The danger lies in the fact that it's hard to tell the difference when looking at a call site in isolation.

Whether or not that danger is an acceptable risk is, of course, situation dependent.
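To make that call-site ambiguity concrete, here's a toy Scheme sketch (the `repeat` name is mine, not from the thread): a macro that *wants* repeated evaluation of its body — exactly the behavior that would be a bug in a getc-like macro — yet whose call sites look just like function calls.

```scheme
;; A macro that deliberately evaluates its body n times.
(define-syntax repeat
  (syntax-rules ()
    ((_ n body ...)
     (let loop ((i 0))
       (when (< i n)
         body ...
         (loop (+ i 1)))))))

;; At the call site this is indistinguishable from a function call,
;; but the body expression runs three times:
(repeat 3 (display "hi "))  ; prints "hi hi hi "
```

Whether repeated evaluation is a feature or a bug depends entirely on what the macro is for, and nothing at the call site tells you which.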

By the way, some 18 years ago, I came up with a system for catching the use of expressions with side effects in C macros. Basically, I introduced an API that you could use in your macro definitions to identify insertions of expressions which would cause problems if containing side effects. This API, at run-time, would parse the expressions, analyze them for side effects, and diagnose problems. (It would also cache the results for faster execution of the same macro site.)

All the programmer has to do is achieve run-time coverage to catch all the problems.

We could define a getc-like macro such that getc(*stream++) would be diagnosed, provided that the line is executed.

See sfx.h and sfx.c here: http://git.savannah.nongnu.org/cgit/kazlib.git/tree/

There are all kinds of macros that have to evaluate an expression exactly once, and cannot be made into functions.

  ;; cond evaluated exactly once;
  ;; then or else at most once, not before cond.
  (if cond then else)
getc doesn't have to be a macro. It illustrates just the point that the macro issues in C are so unfixable that broken macros have even been codified in ISO C.

Sure, functions can do that and more (borrowing a page from Smalltalk):

    (if* cond
        #'(lambda () then)
      #'(lambda () else))
All the macro does is eliminate the need to write out all the lambda syntax.

   (defmacro if (cond then else)
      `(if* ,cond
          #'(lambda () ,then)
        #'(lambda () ,else)))
This brings me back to my original point: "the extent of the logic encoded in macros should be minimized, with the macros translating the code pretty much straight away into something built on more functional abstractions."
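For reference, the same idea in Guile-flavored Scheme (a sketch; `if*` and `my-if` are hypothetical names): the control over evaluation lives in an ordinary function taking thunks, and the macro only hides the lambda boilerplate.

```scheme
;; A plain function: evaluation control comes from thunks, not macros.
(define (if* c then-thunk else-thunk)
  (if c (then-thunk) (else-thunk)))

;; The macro is a thin wrapper that writes the lambdas for you.
(define-syntax my-if
  (syntax-rules ()
    ((_ c then else)
     (if* c (lambda () then) (lambda () else)))))

(my-if (> 2 1) 'yes 'no)  ; => yes
```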

Works in C too, although not as nicely:

    void dscwritef_impl(const _TCHAR * format_str, ...);
    #define pdscwritef(flag, args) \
         do { if (DEBUG_FLAG(flag)) dscwritef_impl args; } while(0)
The funny thing is, I think we're largely in agreement.

I do not agree that an if macro stands for some specific lambda-based utterance. That isn't historically true, or in any other sense. The macro potentially stands for any and every possible way in which its semantics can be achieved.

> I do not agree that an if macro stands for some specific lambda-based utterance. That isn't historically true, or in any other sense.

Huh? Are you saying the use of lambdas does NOT give if* the ability to control the execution of 'then' and 'else'?

My point is that if you're concerned about how often you evaluate a block of code (0, 1, or n times), there are ways to achieve this goal that do not require macros. (And consequently, the macros mainly serve as they should: to clean up the syntax, if necessary.)

I never wrote that macros are required to control evaluation. Rather, what I wrote is that there are examples of macros for which evaluation is specified. In fact, most ANSI Lisp macros are like this; unless stated otherwise, those constituents of a macro call which are forms are evaluated once, and left to right. The whole point is that this sort of thing can be specified, because there is a robust way to write macros to meet the specification.

I gave if as an example; it was not intended to be an example of a macro which has to go out of its way to ensure once-only evaluation.

There are common examples of macros that use machine-generated unique variables to hold the results of evaluating an argument form, in order to be able to insert that value into multiple places in the generated code. An implementation of with-slots likely has to, for instance.
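In Scheme the once-only pattern is particularly painless, because syntax-rules introduces temporaries hygienically. A small sketch (my example, not from the thread): a macro that evaluates its argument once but inserts the resulting value into the generated code twice.

```scheme
;; `v` is introduced by the macro; hygiene guarantees it can never
;; capture or clash with bindings at the call site.
(define-syntax square
  (syntax-rules ()
    ((_ e)
     (let ((v e))   ; evaluate e exactly once...
       (* v v)))))  ; ...and use the value in two places

(define counter 0)
(define (next!) (set! counter (+ counter 1)) counter)

(square (next!))  ; => 1, and counter is 1, not 2
```

In Common Lisp the same effect requires an explicit gensym, which is exactly what macros like with-slots do under the hood.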

In documenting a library of C macros, we cannot specify a strict evaluation order without seriously constraining which of those macros are actually implementable.

> I never wrote that macros are required to control evaluation.

I think that came from me. My point was mainly that 1) control over evaluation is a significant reason to use macros 2) there are other ways to achieve that goal and 3) those other ways should be used to the extent possible. To me at least, this diminishes the value of one of the key headline features of the Lisp family of languages.

Note that this does not mean that I don't want to use the language. I've maintained a personal and professional interest in the language that dates back over 25 years. When I have the choice, I usually reach for Lisp (really Clojure these days) as the most effective way to write the software I have the time and interest to write. It's just that the reasons for this don't center around the idea of compile time code transformation. (As useful as that can be when needed.)

> There are common examples of macros that use machine-generated unique variables to hold the results of evaluating an argument form, in order to be able to insert that value into multiple places in the generated code. An implementation of with-slots likely has to, for instance.

I do know this, because I've written at least a few of them myself.




> ... insert that value into multiple places in the generated code.

Is that really what you meant to say? You're using a machine generated variable in a macro to 'insert a _value_ into multiple places in the generated code'? (As in, the value itself gets emitted in the generated code?)

The code I link to above does something slightly different. What it does is generate code that uses a machine generated variable to hold the result of a single execution of an expression. It then inserts references to that machine generated unique variable in multiple places in the generated code.

Macros don't "change source code". That is a serious, but common misconception: that they are somehow self-modifying code.

Lisp macros give meaning to syntax that doesn't previously have meaning. In this regard, they are the same as functions.

(foo (boonly) blarg) doesn't have any meaning because foo hasn't been defined.

We can fix that by writing a function foo. Then (boonly) and blarg have to be valid expressions and we are good.

Or we can make it mean something by writing a macro foo. The macro foo is a function that will operate on the entire form (foo (boonly) blarg) and calculate a replacement for it. The Lisp form expander (a feature of the compiler or interpreter) will call the macro and accept its return value as the replacement for the macro call. Then, the replacement is scanned for more macros; the entire process removes all macros until all that is left is special operators and functions.

A practical example of a macro is the ANSI Common Lisp standard macro called loop which provides a syntax for iterating variables, stepping through collections, and taking various actions or gathering results:

  (loop for bit in '(0 3 7 13)
        for mask = (ash 1 bit)
        summing mask) -> 8329 

This entire looping mini-language is implemented in the loop macro. When you call (loop ... args) the macro takes over, analyzes the phrases and compiles them into other syntax.
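Guile doesn't ship CL's loop, but the same computation hand-translated to a plain named let shows roughly what such a macro might expand into (this is my sketch of an equivalent, not the actual loop expansion):

```scheme
;; Sum the masks (ash 1 bit) for each bit in the list.
(let loop ((bits '(0 3 7 13)) (sum 0))
  (if (null? bits)
      sum
      (loop (cdr bits) (+ sum (ash 1 (car bits))))))
;; => 8329
```

At the Guile REPL you can inspect what any macro call actually expands into with the `,expand` meta-command.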

The natural script writing syntax¹ is an example: The `Enter` Macro defines macros which use their arguments as data (no need to quote anything), but execute any part prefixed with a comma as actual code.

(Enter (Arne))

(Arne (Hello!) (Who are you?))

The above is actual code which makes Arne say two lines of text.

Essentially the Enter defines a new specialized control structure which is optimized for the task of defining lines of text for a character in an RPG to say.

This removes cognitive overhead while writing a scene: You only write what each character should say.

Essentially you invest into creating a better-fitting tool to simplify all following work. With Scheme you can take this further than with anything else — short of taking over the whole language implementation (which for example Facebook did with their new PHP implementations. With Guile you can take the path Facebook took, without first needing to be a multi-billion-dollar company). But as any other power, use this with care: You won’t want to make your code so much different from what other Schemers do that others have a hard time joining in.

¹: https://fosdem.org/2017/schedule/event/naturalscriptwritingg... — also see the video and the slides which show the difference between this syntax and examples from other methods, including one of my earlier tries with Python.

Being able to create code with code, allows you to do away with boilerplate code. E.g. if you got a lot of code with just minor differences, you can make those differences into parameters in a macro which makes each of these code chunks.
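A small hedged illustration of that boilerplate-killing idea (names are mine): a macro that stamps out a family of near-identical accessor functions from one-line declarations.

```scheme
;; Each use of define-getter generates a whole function definition;
;; the only differences between the functions become macro parameters.
(define-syntax define-getter
  (syntax-rules ()
    ((_ name index)
     (define (name record) (list-ref record index)))))

(define-getter point-x 0)
(define-getter point-y 1)

(point-y '(3 4))  ; => 4
```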

That can be utilized both at a high level and a low level. GOAL is a cool example: https://en.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp. Basically you can express assembly code in LISP syntax and then create higher-level constructs like if, while, and for statements as macros composing these lower-level assembly instructions. Because data and code are the same in LISP, when Naughty Dog used GOAL on the PlayStation 2 they could easily swap chunks of code in and out of memory as needed, to maintain larger levels than the competition. It is also something that allows you to change a running system. I believe there was a case of a malfunctioning satellite running LISP code, which was debugged and fixed while it was running. Shutting it down for fixing was not an option.

I would also add another benefit of only using parentheses. It makes it really easy to build powerful tools for LISP. Have a look at s-expression based editing. Instead of working line by line, or word by word as we are used to with editors, these work with whole blocks of code at a time. Normal editor commands involve jumping word by word or line by line. With LISP editors you can view the code as a tree, with keyboard commands for jumping between tree siblings, up to a parent, or down to a child. Instead of selecting x number of lines or words, you can select a whole subtree and delete it, move it, duplicate it, or whatever. I think they usually call it paredit. Here is an animated demonstration of how it works: http://danmidwood.com/content/2014/11/21/animated-paredit.ht...

Disclaimer: I never really got used to LISP myself. I think it is cool, but it was too big of a jump for me when I started from C++ background. However I feel more used to it now as I've programmed a lot in Julia which is LISP like but with more normal syntax. Also when I first checked out LISP I didn't understand the need to change the way you think about navigating and editing code. If you edit the way you edit normal code I think it easily gets confusing. You lose track of the parenthesis. I didn't know about stuff like paredit then.

> why you'd want to change your source code programatically?

Performance: macros are executed at compile time. Also, sometimes the macro is easier to write than the equivalent non-macro code.

The syntax is minimal; here it is: (functionOrMacroOrSpecialForm arg1 arg2 ...)

That's the whole syntax. So now you know how to program in LISP.

There are no statements in Lisp, only the above expression repeated, which you can nest one in another, or sequence one after another. They're called S-expressions.

For me, that simplicity is the first advantage. It's just very consistent and quick to learn. It also lets you embed everything together, like assigning a value inside an if condition.

The second big advantage, is that this syntax is very easily parsed into a list and can be easily changed programatically. Lisp lets you do that by defining macros.

A) It works by defining a function which takes the parsed code as an AST, and returns a modified AST. To transform the AST you have the full power of LISP available, plus some convenient operators that make it easier.

B) There are many benefits. Conceptually, macros let you change the evaluation order of code. Normally a function evaluates its arguments from left to right and then operates on the resulting values. 99% of the time this is what you want. But what if you want to short-circuit the evaluation of the arguments when one of them evaluates to, say, false? That comes in handy when writing the AND function. A macro allows you to do it. In other languages, you're given special operators for this kind of thing, but they're limited and you cannot add new ones.
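The short-circuit case is a two-line macro in Scheme (a minimal sketch; `my-and` is a made-up name to avoid shadowing the built-in):

```scheme
;; This can't be a function: a function would evaluate both
;; arguments before it was even called.
(define-syntax my-and
  (syntax-rules ()
    ((_ a b)
     (if a b #f))))

(my-and #f (error "never evaluated"))  ; => #f, no error is raised
```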

Another practical case is aspect oriented programming. Imagine you want to print the argument to all functions and all their return values. With a macro you can do this:

(print-steps (+ (- 100 23 price) (* 12 34 56 people))).

print-steps is able to inject a print statement at multiple code points, around each argument and around every function. Without a macro, you would have had to write all this in manually, and then delete it again once you're done debugging. Some languages like Java give you a complicated framework that can do this to a certain extent, but AspectJ is a pre-processor, just like macros; it's just that Java's syntax makes this really complicated to implement, so you need a framework to do most of the work, or it's not practical.
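A full print-steps needs a code walker, but a one-level toy conveys the idea (my sketch; `trace-expr` is a made-up name, not a real library): wrap any subexpression, print it and its value, then return the value unchanged.

```scheme
;; Print the source form and its value, then pass the value through.
(define-syntax trace-expr
  (syntax-rules ()
    ((_ e)
     (let ((v e))
       (format #t "~s => ~s\n" 'e v)  ; 'e is the unevaluated source form
       v))))

(+ (trace-expr (* 2 3)) 4)
;; prints: (* 2 3) => 6
;; => 10
```

Because the macro receives the argument as data, it can print the original source text of the expression — something a function, which only ever sees the value 6, could never do.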

C) That's up to you to decide. You don't ever have to write a single macro, and should only use them when they are needed and make sense. Now reading code with macros is trickier, because every macro could be a custom operator you have never seen before, or an aspect that does things you're not sure about. To me, this just becomes a matter of don't write crappy unreadable code. Which is true in all languages. I'd suggest teams standardize their macros, and document them well, so new members can quickly get familiar with your team's macros. Once familiar with a set of macros, though, they make you a lot more productive and do make code more concise and even easier to read sometimes.

As a simpler solution, making brackets a lighter color so they don't stand out as much. Works really well.

I use rainbow-delimiters mode in Emacs; it's of great help.

One big advantage is that this unified recursive representation of code allows structural editing. With things like Emacs's paredit you manipulate code structure directly by splitting, joining, and moving subexpressions instead of editing code as flat lines of characters.

Btw, of modern Lisp dialects, I do not like Clojure for undermining this advantage by not syntactically grouping everything that is grouped semantically (e.g. binding pairs in let) just to use less parentheses. In my opinion, Scheme syntax (Guile is a Scheme implementation) is much better.

I think that's a good point; not sure why Clojure did that. I get confused because some macros expect grouping, others don't. I think it happened as an accident: different people wrote the different core macros and decided on slightly different parsing rules.

Many programming languages use braces, parentheses and brackets to convey structure.

Javascript and C do it this way:

  if (foo) {
    bar(42, zonk());
  }
Guile/Clojure/etc do it this way:

  (if foo
   (bar 42 (zonk)))
If you didn't have the separators, programs would be harder to interpret for compilers and human readers, as it's harder to tell what the programmer meant from just

  if foo 
   bar 42 zonk

It's a LISP dialect. Brackets are essential to tell the difference between arguments and function calls... take for example in C: foo(bar(1)) would in lisp be (foo (bar 1)).

If it were to just be "foo bar 1", you couldn't tell the difference between (C style again for clarity): foo(bar(1)) and foo(bar, 1).

Some newer functional languages have a composition operator which allows you to write this without the brackets, but the brackets still are easier to follow in many cases.

> If it were to just be "foo bar 1", you couldn't tell the difference between (C style again for clarity): foo(bar(1)) and foo(bar, 1).

Or `foo(bar)(1)`.

This "problem" is not unique to Lisps. Quite often, when I look at a piece of code written in C++, especially when it uses lambda functions inside calls, I can't help asking myself why there are so many brackets (and whether a Lisp would be a better alternative to C++, syntax-wise).

The syntax of the lambda itself in C++ is sort of funny: it requires to use all the bracket types at the same time!


But they're not interchangeable, so they give a person reading the code a strong hint whether it's an array index, function call/grouping, or a code block.

Oh, that's yet another issue: in the context of the above example (which, incidentally, was not merely a list of all kinds of brackets) the meaning of the pair of square brackets is changed from 'array index' to 'lambda'.

> and whether a Lisp would be a better alternative to C++, syntax-wise

IMO it's an open question. There aren't many statically typed Lisps AFAIK.

It is a Lisp thing. If you have an editor like emacs with Scheme support, it is possible to format in a way that makes it quite easy to read: in fact, the formatting reveals the AST.

A good explanation is in Chapter 1 of "SICP":


In lisp/scheme, you write the abstract syntax tree, skipping the parse step entirely.

> skipping the parse step entirely.

Not strictly true... there is still parsing involved in reading Lisp data structures from a character stream. (It's just much less involved than in traditional infix languages.)

The way to think of it is this: 1) Lisp has a much more comprehensive (and read/write) syntax for its core data structures. 2) Lisp, the language, is defined in terms of those data structures rather than in terms of character sequences and grammar productions.

I think of it as serializing/deserializing the AST to text :)

I can see that, but it doesn't feel quite right. The tree you get directly from deserialized source text is still a bit more fine grained than I'd expect a true AST to be.

To see what I mean, consider this:

    (if condition 1 2 3)
This expression can be deserialized into a Lisp data structure, but it still contains a syntax error, and I don't think a true AST would be able to represent that syntax error.
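This is easy to see at a Guile REPL: read happily parses the ill-formed `if` into a plain five-element list, and only the macro expander rejects it.

```scheme
;; Reading succeeds -- to the reader it's just a list of five items:
(call-with-input-string "(if condition 1 2 3)" read)
;; => (if condition 1 2 3)

;; ...but evaluating/compiling that same list signals a syntax error
;; at expansion time, because `if` takes at most three subforms:
;; (eval '(if condition 1 2 3) (interaction-environment))
```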

Lisps take a couple of months of programming before they start being easy to read if you're coming from only C-style languages, but it works out well in the long run. Sure makes refactoring a breeze.

Lisp nostalgia?

Nostalgia is an odd word to use here when Guile is a Scheme, and so is part of the Lisp family.

is lisp older than C?


It's the second-oldest high level programming language (after Fortran).

- Fortran: 1957

- Lisp: 1958

- ...

- C: 1972

I meant to say the Lisp family, so Fortran counts as part of the C family.

C is part of the Algol family. Fortran was/is syntactically and semantically quite different. Algol was almost contemporaneous with Fortran and, although did learn from Fortran, was, at least partly, a reaction to its perceived flaws.

Well, contemporary Fortran is probably part of Algol family too, since newer versions of Fortran (at least 77 onwards) incorporate the block structured programming concepts from Algol.

The "family tree" analogy works imperfectly for programming languages since it is more of a directed cyclic graph than a tree structure – Fortran influenced Algol and then Algol in turn influenced Fortran.
