> There was no latency, or waiting for the language at all. I could live in a running Lisp image and iteratively define my programs from the inside out, or in whatever way I wanted to, with immediate feedback after each change.
I am becoming more and more convinced that developer latency is the most important metric for being able to deliver value to your customer (broadly defined).
How fast does your code compile? How long for tests to run? How long to deploy to an integration environment? To a test environment with real users? To production?
Lisp in some ways is still unmatched in this metric, for the kinds of reasons cited by the author.
I have experienced this at work. We were working on a somewhat complex embedded system and seeing some unusual performance corner-cases. Everyone had a different theory for what was causing the problem.
So I fired up SLIME and wrote a high-level simulator for the system, starting simple and adding in the complexities. Once we were seeing the same performance profile as the embedded system, I stopped, and we now had a malleable system. Then I spent the next 15 minutes implementing each of the recommended performance fixes individually, and we found one that worked in the simulation and then implemented that one in the real environment (it took almost 15 minutes just to update the program on the embedded target, and much longer to run a full test-case).
Making it so you can check your guesses faster means you need not be as good at guessing. I should put in one additional warning though: don't fall into the trap of "make random changes until it looks like it works" and call it done. At this point we had only demonstrated that we knew where the issue was; the final fix was significantly different than the "quick and dirty" fix suggested by the simulation. The simulation I wrote definitely saved weeks
(if not months) of time because 90% of the team was convinced that the issue was in a different part of the system than it ended up being in. We found the issue only because "It will take less than a minute, so what's the harm in trying?"
> How fast does your code compile? How long for tests to run? How long to deploy to an integration environment? To a test environment with real users? To production?
I'm working on ultra-fast iteration development methods for C++ with https://www.youtube.com/watch?v=fMQvsqTDm3k and yes, it makes productivity absolutely unmatched compared to everything I experienced before.
Preferences seem to matter here, some people just seem to do better with longer cycles of feedback on more items than shorter cycles on fewer items. And some people really may need that "my code is compiling, it'll take a while and there's nothing I can do" break to mock-sword fight or whatever, almost like a pomodoro break between periods of intense focus. But there is some data on what could be, and you can of course measure your own latency on things.
More narrowly defined, in the Java world JRebel is a product that can eliminate many kinds of forced redeploys after code changes -- or more simply, stopping and restarting your program, waiting for everything to come back up, and reconfiguring any necessary state to get back to what you were doing. I saved so much time thanks to it. Their own marketing materials suggest that waiting on redeploys alone can quickly add up to 20% of salaried time. (https://www.jrebel.com/sites/rebel/files/pdfs/rw-mythbusters...) Using it almost gets within javelin distance of what CL has always had, and gives you the flexibility to develop and debug in a more interactive style without having to go TDD or rewrite in ABCL.
I haven't been keeping up (so would love to be informed things are otherwise) but I thought it was a shame that the JS front-end world seemed to finally be creeping up on where ClojureScript was in 2014 or so for interactive low-latency development, but then suddenly took several steps backwards with TypeScript -- with respect to interactivity at least, I don't doubt people's claims of steps forward on other metrics.
> And some people really may need that "my code is compiling, it'll take a while and there's nothing I can do" break to mock-sword fight or whatever, almost like a pomodoro break between periods of intense focus.
Yes, but working in a low-latency feedback environment, you can still schedule breaks. In a high-latency feedback environment, those breaks are impossible to avoid when you want to work quickly.
> I am becoming more and more convinced that developer latency is the most important metric for being able to deliver value to your customer (broadly defined).
This is generally true. Tighter feedback loops lead to greater responsiveness. You just have to make sure you don't get overwhelmed by them, too (feedback faster than you can respond to it).
I do mostly Lisp these days, but your question brings me back to work I did with Smalltalk, one of the better integrated environments. It was so responsive and had effectively no compile cycle. At the end of an 8 hour day, I would be exhausted, as there was effectively no down time. (No time for sword play.)
I have been mostly a Common Lisp developer since 1982 (probably 30% professional work, the rest of my paid for time split between Java, C++, Python, Prolog, and Ruby).
About three years ago I was very enthusiastic about Julia becoming my primary language because so much of my work was in deep learning. I created a small repo of examples using Julia for non-numeric stuff like string processing, SPARQL query client, etc. Julia was pretty good all around.
What kept me from adopting Julia was realizing that Python had so many more libraries and frameworks. Right now, I split my time between Common Lisp and Python.
This is where I am stuck (albeit with drastically fewer years of experience). I want to make every piece of code in our pipelines differentiable. But ramping people up to Julia, not to mention creating and maintaining specialty libraries for in-house use, is too much just for me. But we are a small shop and no one around feels like learning a new technology.
I can see needing Python instead of Julia for the libraries, but what about using Julia for new work that you'd normally do in Common Lisp, for the reasons cited in the source article?
Right now, at Mind AI. In the 1980s most of my IR&D at SAIC (neural networks, symbolic AI, UI demos on my Lisp Machine), and lots of consulting work. A lot of paid Clojure work also.
I was looking for a new data notebook toy to ease the tedium of reporting some metrics.
I found the clojure-based https://github.com/nextjournal/clerk which I like the look of and remember clojure being rather pleasing interactively.
That lisp and Julia might have some commonality made me look for something similar (that isn't jupyter) for Julia. Seems like https://github.com/fonsp/Pluto.jl is such a thing.
Does anyone have any experience of either they could share?
My problem with notebooks is that I don't get to use any of my highly-customized Neovim extensions. I'm sure Emacs users feel the same way.
That aside, I still have not found my ideal notebook client.
I don't particularly like the "notebook" format anyway. I prefer having my code and output side-by-side. I want to be able to toggle among the "code | output" vertical split layout, code-only, and output-only, on a per-cell basis. I also want the split to be adjustable in each cell, so I can make the code take up 90% of the width or 10% of the width if I want it to. I of course want cells to be vertically collapsible. And I want an automatic table of contents in a sidebar "drawer". I want reactivity between cells. I want an extension ecosystem with things like a nice file picker and a built-in Git GUI. I want to connect to remote kernels (this is supposed to be possible with the Jupyter ecosystem but practically it's not well supported). And most of all, I don't want to run it in a damn Web browser!
And if the text editor box was also an embedded Neovim frontend, I wouldn't complain about that.
So far I have little experience with Julia, but it looks to me very promising. Personally, I like a more settled environment and prefer languages defined by a standard, not by a single implementation.
"Infact, the first version of my most recent project saw an order of magnitude difference when ported from Common Lisp to Julia, without any effort or attention to optimization techniques."
That might have been the case, but one mustn't expect that generally to be so, see e.g. https://benchmarksgame-team.pages.debian.net/benchmarksgame/... which shows much closer results (and multiple attempts for either language for any given benchmark to get better results).
Seems that Julia struggles with the binary-trees benchmark. Then again, run-time performance is for many (and over time, more and more) applications not the most relevant criterion (otherwise Python wouldn't be so successful).
That benchmark in particular has different rules for GC languages vs non-GC languages. Absurd, but true. Non-GC languages are allowed to pre-allocate a memory arena, GC languages are not. For the naive implementation, Julia beats Rust in this benchmark.
> That benchmark in particular has different rules for GC languages vs non-GC languages. Absurd, but true.
Fair to point out, but since that benchmark is "A simplistic adaptation of Hans Boehm's GCBench", those rules are perhaps not quite so absurd. It also doesn't explain why Julia should be so much slower than Java or SBCL. I rather expect Julia to do better here in the future.
Yeah. The problem here is that Julia gives you a bunch of tools to avoid GC by aggressively stack allocating or preallocating so our GC algorithm is pretty meh. We're about to merge a parallel algorithm, and are looking into switching to MMTK, but if anyone from the Java/lisp/Go world wants to help us out, we would love people who can help improve our GC. There are a few things Julia does that makes it a bit harder (the stack allocation+LLVM makes getting roots a little more complicated, and we make it easy for C to take Julia memory so we can't do a compacting collector).
My personal experience tinkering with Lisp is that it can sometimes be very hard to actually achieve the legendary high performance of SBCL, without your code getting verbose and ugly. Yes, you could ostensibly wrap that ugliness up in macros, but then you're inventing your own DSL and a compiler in the macro engine, which I guess is what the true Lisp nerds live for, but I have too few brain cells and too little time/energy to shave such big yaks.
For one thing, Cython isn't a Python implementation.
As for the rest, of course not. Especially not those two in particular. But PyPy and Pyston are doing pretty well on compatibility. Not sure about Stackless, but I think it's mostly been subsumed by PyPy anyway (even though it's still being developed independently).
The main problem for alternative implementations of Python is the C extensions, much like with Ruby. But the Python community has started to take ownership of the problem and a new C interface is being developed, called HPy, that is easier to implement for non-CPython implementations, and library developers are taking an interest in targeting HPy instead of the traditional CPython interface. With that in place, Python will be in pretty good shape when it comes to standardization.
On the flipside, I was thinking of learning CL as a Julia developer and this post has somewhat discouraged me from doing so. What can learning CL do for me apart from realising that S-expression syntax is superior?
For context, I very often find myself building small libraries from scratch to solve very specific scientific problems which is somewhat performance critical. Julia has worked very well for me for this, but I recognise that moving outside of your comfort zone is the best way to become a better programmer.
It's not that S-expression syntax is superficially better or worse; it's that S-expression syntax acts as a vehicle for linguistic abstraction. What I mean by that is that Lisp lets you seamlessly integrate new constructs into your programming environment that didn't previously exist. Lisp doesn't have a "parallel for-loop"? Well, it's easy to add one in a few lines of code.
That, combined with the unrelenting support for interactive and incremental development, makes Lisp at least a novel experience, and hopefully a transformative one.
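To make the parallel for-loop claim concrete, here is a minimal sketch. PDOTIMES is a made-up name, and the sketch uses SBCL's built-in sb-thread package for brevity; portable code would reach for bordeaux-threads, and lparallel ships a production-quality equivalent.

```lisp
;; A toy parallel DOTIMES: one thread per iteration, join them all at the end.
;; PDOTIMES is a hypothetical name invented for this example.
(defmacro pdotimes ((var count) &body body)
  (let ((threads (gensym "THREADS")))
    `(let ((,threads
             (loop for ,var below ,count
                   collect (let ((,var ,var)) ; capture this iteration's value
                             (sb-thread:make-thread (lambda () ,@body))))))
       (mapc #'sb-thread:join-thread ,threads))))
```

With that defined, (pdotimes (i 4) ...) runs its four bodies concurrently and returns once all of them finish, and the new construct reads exactly like the built-in DOTIMES.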
Thing is, by now, we have plenty of languages with macros that let you do things like implement a parallel for-loop as a library - and it doesn't require keeping all your syntax that primitive. It does mean that macros are a bit harder to write, but I would argue that more readable syntax saves more time overall (since you aren't writing macros most of the time).
Lisp has a readability issue, where the syntax makes the author's intent not immediately clear with all the brackets. I think it shared too many of the issues RPL and Forth had to be sustainable outside academia.
When it comes to implicit parallel code, Julia hid a lot of the gruesome details in a friendly interface similar to Octave, Matlab, and Python. However, what surprised me most of all was its math ops are often still faster on a CPU than Numpy and many native C libs.
Julia still has a long way to go with its package version compatibility, but it is slowly improving as the core implementation stabilizes.
Lispers have been using structural editing since at least the 80s. Structural editing essentially allows modifications to a Lisp program to be made as edits to a data structure of tokens. It sounds radical but it's really quite simple, at least in a language like Lisp, where
(setf x 5)
is a piece of code that is truly and immediately represented by a data structure (a list containing the symbol SETF, the symbol X, and the number 5).
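You can verify that claim directly at the REPL; a quick sketch:

```lisp
;; READ turns the text of a program into the list structure it denotes.
(let ((form (read-from-string "(setf x 5)")))
  (values (first form)    ; the symbol SETF
          (third form)    ; the number 5
          (length form))) ; a list of 3 elements
```

There is no separate parse tree hidden inside the implementation; the list you get back is the program, which is what makes structural editing (and macros) straightforward.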
Let's do a routine example. In your editor, how would you get from
(f x (y|)) z
to
(f x y| z)
if the '|' was your cursor? Maybe you think you should click around and delete the right parentheses, and carefully balance them. With structural editing, a Lisp programmer might first do a "splice":
(f x (y|)) z
;; splice: press Alt+s
(f x y|) z
and then they will do a "slurp" (slang for moving the next thing after the cursor into the current enclosing parentheses):
(f x y|) z
;; slurp: press Control-<right>
(f x y| z)
We've managed to get to our destination by using only two structural commands that make a parenthesis-mismatch error fundamentally impossible.
These are examples written with a package called "Paredit" in mind, and it actually makes editing Lisp a distinct joy compared to character-based editing of other languages. Some Lisp programmers prefer slightly different styles of structural editing, but they all reduce to the same motivations: Edit a Lisp program's structure, not its characters.
So, contrary to popular belief, Lisp programmers do not count parentheses. Lisp programmers' eyes do not glaze over at the sight of Lisp. Syntax errors are probably a <1% occurrence out of all of the errors one encounters. Though this is purely informal and empirical, I'd be willing to bet that syntax errors are far less common in Lisp than in syntax-rich languages like C++, Haskell, etc.
People who dabble with Lisp for 15 minutes in notepad.exe invariably run into syntax errors, but that is not a reflection of what serious hobbyists and professional Lisp programmers do.
which I interpreted as a statement of generality, but I think you meant
Lisp is difficult for me to read
which I have no business arguing against, except to offer you guidance on what I believe are best practices for reading and writing Lisp. (In short, it's to use an editor specifically equipped to write Lisp. It gives you canonical indentation and structural editing. Emacs is the most popular choice, but there are extensions to Vim and VSCode; as well as full-blown commercial IDEs (plural!) by LispWorks and Franz Inc.)
1. How quickly can you understand what others have written in a large Lisp project? I found I was spending a lot of time interpreting what was going on rather than on the design and implementation process itself.
2. Unless some clown thought conditional tail recursion was super awesome for the next guy.
3. All code is terrible, but some of it is useful... and if it is only useful to a few, then it fails as a language.
Keep in mind I learned on an RPN machine, so understand the siren song Lisp sings. ;)
1. In an average project Lisp would rank fairly high for me. In fact for making meaningful changes in an average quality large code base I'd rank it at the top, my order being (best first):
Lisp, C, Java, C++, Python
It all depends, e.g., C++ would rank higher in a very high quality code base.
Tail recursion is non-idiomatic in Common Lisp, both because the standard does not guarantee tail-call elimination (as opposed to, say, Scheme) and because dynamic binding, which is idiomatic in Common Lisp, reduces the number of calls that are in a tail position.
Also, tail-recursion is entirely divorced from syntax; perhaps you meant semantics? For example GCC will optimize tail calls in a rather broad number of contexts.
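A small sketch of the dynamic-binding point (WALK and *DEPTH* are made up for the example): the recursive call sits inside a LET that establishes a dynamic binding, and that binding must be unwound after the call returns, so the call is not in tail position.

```lisp
(defvar *depth* 0) ; a special (dynamically bound) variable

(defun walk (n)
  (if (zerop n)
      *depth*                        ; base case: how many bindings are live?
      (let ((*depth* (1+ *depth*)))  ; fresh dynamic binding per call...
        (walk (1- n)))))             ; ...which must be popped on return, so
                                     ; this call cannot become a jump
```

Here (walk 3) returns 3, because three nested dynamic bindings of *DEPTH* are live at the base case; a compiler cannot eliminate those frames without changing the meaning.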
If the 172,000,000 figure from Google were not a brazen lie, it would indicate that for every developer on the planet using any stack, there exist several web pages mentioning Python syntax errors.
After a few years of seeing code build abominations, I am biased to believe that number may be plausible. Especially with automated testing frameworks dumping logs onto public areas of the web for the spiders to crawl.
I like data, but the hate in this place is unbelievable.
Have a great day, and happy computing my friend. =)
Like any language, Lisp has readability issues when you’re not familiar with it. You find it hard to read because it’s alien to you. To my eyes – because Lisp is what I’m used to – every non-Lisp language has readability issues. These days I do a significant amount of work in Python, Rust and Julia, and although there are things I love in all of these languages, I always miss s-expression syntax when using them.
Infix notation can be useful for arithmetic; as a "Lisp person" I acknowledge that.
S-exps can be good for arithmetic too though.
S-expressions allow n-ary operators:
(+ (* 3 x y) (* a b z w) (* r s t u v))
There is a systematic way to break up large expressions across multiple lines, which helps readability. Infix doesn't have a satisfactory way to do this:
(+ (* 3 x y)
   (* a b z w)
   (* r s t
      u v))
We see just from the
+ *
  *
  *
pattern that we have a sum of three products.
S-exps are unambiguous. You never misread an arithmetic (or any other) S-exp due to precedence, unless it's a macro that has created precedence rules among its arguments (vanishingly rare).
N-ary math functions associate left to right in Common Lisp: (+ a b c d) means we add a and b first, then c, then d. This makes a difference when you have mixed types, which have to undergo conversions, and when you have non-associative types like floating-point.
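A quick sketch of why the grouping matters with floats (the values here are chosen so that single-float rounding is visible):

```lisp
;; (+ a b c) groups as (+ (+ a b) c).  With single floats:
(+ 1.0f10 -1.0f10 1.0f-5)     ; => 1.0e-5 : the large terms cancel first
(+ 1.0f10 (+ -1.0f10 1.0f-5)) ; => 0.0    : 1.0e-5 is absorbed into -1.0e10
                              ;            before the cancellation happens
```

Because the standard pins down the left-to-right grouping, any conforming implementation gives the same answer for the n-ary form.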
If I'm cribbing some numeric calculation to use in my code (whether that code is Lisp or not), and have examples in half a dozen languages, none of which I use, I will crib it from that one that is a Lisp dialect, just because of the vanishingly small opportunity for getting it wrong due to some ambiguity issue.
This is a comment that I like, acknowledging the obvious unsophisticated but no less reasonable objection from the bystanders, and offering them an alternate viewpoint.
My reservation would be that most applications of arithmetic operators are binary and not ternary. A typical expression in one of my C programs at least has some function call, some subscript, some member access, some addition / subtraction / multiplication / division. But very few have more than one of each. Overall arithmetic is actually quite rare by itself in most code.
My windmill is simply that I still can't think in prefix for math.
It's trivial for me to write x = (-b + sqrt(b^2 - 4 * a * c)) / (2 * a).
But writing (setf x (/ (+ (- b) (sqrt (- (* b b) (* 4 a c)))) (* 2 a))).
That's a mind bender. Were this my source code, I'd probably put the (* 2 a) on the next line (properly indented) so it stood out as the divisor.
All that said, honestly, the physical task of writing Lisp code is just so much better than anything else. There's something satisfying closing up all of the parens. And I just love using-hyphens-for-identifiers vs CamelCase or using_underscores.
And if you're really desperate to write infix notation in Lisp, you can use CMU-INFIX [1]:
#I( (-b + sqrt(b^^2 - 4*a*c))/(2*a) )
That is completely valid Lisp if you need it. In fact, you can change your editor to highlight #I(...) if you want it to stick out a little more since it's "different" than usual prefix notation.
Suffice it to say, a binary expression like
(+ b (* m x))
probably doesn't warrant bringing in a library, but if you're calculating orbital dynamics of the solar system, then perhaps having the above library would be useful.
(setf x (/ (+ (- b)
              (sqrt (- (* b b)
                       (* 4 a c))))
           (* 2 a)))
I however think the latter is computationally clearer. Also, I think the difference is most striking when we use well-known infix formulas such as the quadratic formula. You would not have this effect with the cubic equation, for example - https://en.wikipedia.org/wiki/Cubic_equation#Cardano.27s_met...
This is a really interesting point. What I'm seeing is that infix lends itself to chunking subterm "nouns" of the whole expression, where prefix/function is "verb oriented" and feels more like trying to remember a series of dance steps rattled off by an instructor who knows them and can't remember what it's like to be a newbie.
I don't know how much of this is not being used to "dancing" and how much is being entirely used to dealing with trees of infix terms.
> Were this my source code, I'd probably put the (* 2 a) on the next line (properly indented) so it stood out as the divisor.
And that's exactly how written math does it (big bar for division of complex terms). Infix code notation has an extra set of '() around the numerator and denominator which helps in this specific case.
I can agree it's not the same, but what's the point? A more interesting disagreement is that I wouldn't say it's a downside (though yes, there are tradeoffs). Especially in Current Year when open source is fashionable and pretty much every language has a package manager to make pulling in or swapping out dependencies pretty easy, I don't see the issue. It's also interesting to note that of all the things Clojure did to "fix" shortcomings of past languages with a more opinionated (and often more correct I'll admit) design philosophy that users are forced to use (even when it's not more correct), infix-math-out-of-the-box wasn't one of them. I don't think that specifically really hurt Clojure adoption. (But of course Clojure is reasonably extensible too so it also has a library to get the functionality, though it's more fragile especially around needing spaces because it's not done with reader macros.)
I've brought the library up many times because CL, unlike so many other languages, really lets you extend it. Want a static type system? https://github.com/coalton-lang/coalton/ Want pattern matching? No need to wait for PEP 636, https://github.com/guicho271828/trivia/ If all that keeps someone from trying CL, or from enjoying it as much as they could because of some frustration or another, due to lacking <insert feature here> out of the box, chances are it is available through a library. (Or heck, even just tooling setup. Tab completion for symbols and jumping to function definitions are just two great things modern programmers enjoy in all sorts of languages, CL too, if someone doesn't have that setup I wonder if they must enjoy pain.)
I think you're writing it off as not actually being valuable in practice, which I don't think is fair.
Should we consider Python's NumPy as not a feature of Python because it's not a built-in library?
A commenter says they prefer to write infix math. Lisp has a completely ordinary, stable, well-known, 30-year-old library to do it. Thus the commenter's concerns are largely allayed.
Not having it in the standard library means people have a choice to use it or not, and it also means one can't expect all Lisp code which has infix math to use it, but it certainly does not mean the commenter will be perpetually hamstrung in reading and writing complicated math formulas.
Macros are a cure that's worse than the disease. Unless everyone agrees on which set of macros you're using, you can't trust any code to mean what it looks like it means. (And if you could agree on which set of macros you were using, or what sanity-preserving restrictions you'd put on your macros, surely you'd just build those into your language).
In my experience, this is hyperbolic and not reflective of actual Lisp code people write or read. A lot of code is pretty straightforward: functions and classes. Some domains (like quantum computing) benefit from macros that introduce syntax that is broadly known to that domain's community, and would otherwise be opaque and inscrutable were it not for macros.
Have people written bad, useless, nearly antagonistic macros? Sure, but that happens with any abstraction in any language, including humble functions.
Sure, because people have an intuition for what a macro should or shouldn't do. In practice you end up with an ad-hoc, informally specified, etc. implementation of half of a domain-appropriate language. Actually even (especially?) the general-purpose parts end up the same.
Functions (at least in languages where they really are functions) are a lot safer; they're inherently compositional, and they follow the rules of the language. Most people have an intuition for what a macro should and shouldn't do, but there are very few guardrails that actually make that concrete, and often different team members will disagree about where the boundary between "sensible syntax sugar" and "inscrutable" lies.
"alien to you" Nope, but I do classify lisp/scheme next to Haskell, Icon/Unicon, and snowball... as ivory-tower languages. However, you wouldn't be the first person to spend 3 years of their lives failing to prove the assertion false.
Is there utility in assuming everyone who doesn't share your opinion is an Alien? I find it just causes the 103rd purple proboscis to itch. ;)
I don't think it's the end of the world, but I'm pretty sure one can make solid physiological arguments why the first is a little easier to read.
If it wasn't, how come math notation relies heavily on infix operators and even subscripts, superscripts and more? It's because you want to optimize notation for the frequently used stuff.
No idea why you suddenly converted the values a, b, c, d to full-blown forms. But of course, baby duck syndrome will prevail and Minsky's comments are painfully true to this day: "Anyone can learn it [Lisp syntax] in a day, unless they know Fortran, in which case it's three days."
P.S. Making general comments about "math notation" is pretty dishonest argument. The math I studied was all commutative diagrams and blobs to represent manifolds; very little "infix operators" as far as I could tell. Mathematicians in general are flexible with notation (and definitions!) and will normally default to whatever is the loose consensus in their field :)
> P.S. Making general comments about "math notation" is pretty dishonest argument. The math I studied was all commutative diagrams and blobs to represent manifolds; very little "infix operators" as far as I could tell. Mathematicians in general are flexible with notation (and definitions!) and will normally default to whatever is the loose consensus in their field :)
It is exactly the point of what you are answering: mathematicians too use a flexible, ad-hoc notation. Presumably it is better for them. I don't see how it is dishonest; I find it quite on point.
I guess it needs less ink/chalk, and gives more of a visual clue about the structure and content of the equation than a more uniform, Lisp-like notation would.
> But of course, baby duck syndrome will prevail and Minsky's comments are painfully true to this day: "Anyone can learn it [Lisp syntax] in a day, unless they know Fortran, in which case it's three days."
I'm only finding the quote, no source. But here's something we can be sure he said:
> The complete syntax of LISP can be learned in an hour or so; the interpreter is compact and not exceedingly complicated, and students often can answer questions about the system by reading the interpreter program itself.
> how come math notation relies heavily on infix operators and even subscripts, superscripts and more?
"Mathematical notation" is an ad-hoc compilation of a huge number of historical accidents, conventions and personal preferences. For each function in math, there's at least 3 different notations for it, and somehow each of your professors ends up using a different one.
And to counter your argument, prefix operators appear very frequently in mathematics, including the most common function notation (although of course there are also postfix and infix notations), sigma notation, quantifiers, roots, and many other common operators.
By the way, your second example should be (print a b c d), I'm not sure why you made a,b,c,d functions there.
I agree you can make arguments, I like your explanation for the final form further downthread. For the second form, another choice could be (.x foo) or (. foo x). Or if you're trying to write something like System.out.println("x"), Clojure's .. shows it could be written as (.. System out (println "x")). Or, if you're using CL, you can use the access library (https://github.com/AccelerationNet/access) and write things like #Dfoo.bar.bast or (with-dot () (do-thing whatever.thing another.thing)).
In trying to further steelman a case where random Lisp syntax can be more difficult to read than, say, equivalent Python, two other areas come to mind. First is the inside-out order of operations; it trips people up sometimes. For example, the famous "REPL" (with a bad printer) is just (loop (print (eval (read)))), but in English we want to see that as LPER. Solutions include things like the arrow macro (Clojure did good work showcasing it and other simple macros that can resolve this issue in many places), and if you write one or pull one into CL, the REPL becomes (-> (read) (eval) (print) (loop)); how nice to read. But even the ancient let/let* forms allow you to express a more linear version of something, and you can avoid some instances of the problem with just general programming taste about expression complexity (an issue with all languages -- https://grugbrain.dev/#grug-on-expression-complexity shows one form).
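For what it's worth, a minimal first-position threading macro in the spirit of Clojure's -> takes only a few lines of CL. This is a sketch; libraries such as cl-arrows or arrows provide hardened versions.

```lisp
;; Thread X through FORMS, inserting the running result as the first
;; argument of each form; bare symbols become one-argument calls.
(defmacro -> (x &rest forms)
  (if (null forms)
      x
      (let ((form (first forms)))
        `(-> ,(if (listp form)
                  `(,(first form) ,x ,@(rest form))
                  `(,form ,x))
             ,@(rest forms)))))

;; (-> 5 (+ 3) (* 2) 1-) expands to (1- (* (+ 5 3) 2)), i.e. 15.
```

With this in place, (-> (read) (eval) (print) (loop)) macroexpands to exactly (loop (print (eval (read)))), so the left-to-right reading comes for free.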
The second area is on functions that have multiple exit points. A lot of Lispers seem to just not like return-from, and will convert things into cond expressions or similar or just say no to early exits and it can make things a bit more awkward. The solution here I think comes from both ends, the first is a broader cultural norm spreading in other languages against functions with multiple return statements and getting used to code written that way, the other is to just not get so upset about return-from and use it when it makes the code nicer to read.
Your second example isn't right, you've added an extra layer of parens. It's either (print a b c d) or your pre-translation should be print(a(), b(), c(), d()).
My thought here is that a, b, c, d are expression placeholders (and assuming they don't contain commas in the ALGOL syntax version). It's (print a b c d) if all arguments to the ALGOL version are already terminal symbols. If instead it was print(foo.a, foo.b, foo.c, x + y) it's now (print (get-a foo) (get-b foo) (get-c foo) (+ x y)).