Just what does “code as data” mean anyway? (2014) (adambard.com)
149 points by jxub 6 months ago | 171 comments



"Code is data" does not come from Lisp or macros; those are simply affordances.

It comes from computer hardware architecture.

The von Neumann architecture is named after the mathematician and early computer scientist John von Neumann. Von Neumann machines share signals and memory between code and data. Thus, a program can easily modify itself, since it is stored in read-write memory.

Any self-modifying program, even one written in Borland Delphi, expresses this code-is-data capability.


It's even deeper than that. It's not just how code is handled in a von Neumann architecture, it's how code is created.

Data is just a sequence of bytes. Source code is also just a sequence of bytes - it's just a data file with a particular structure and intent. And then an executable is just a sequence of bytes - it's another data file, though on Unix it's one with the executable permission set. A compiler reads in a data file and writes a data file - they just happen to be source code and executable programs, respectively.
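As a toy illustration of that loop (in Python rather than a compiled language, purely for brevity), here is "source is data, executable is data" in a few lines. The `double` function and the filename string are made up for the example:

```python
# Source code stored as plain data (a string of bytes, essentially).
source = "def double(n):\n    return n * 2\n"

# A compiler step: data in, executable object out.
namespace = {}
code_object = compile(source, "<generated>", "exec")
exec(code_object, namespace)

print(namespace["double"](21))    # 42
print(type(code_object.co_code))  # the "executable" part is literally bytes
```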

(Now, when Lisp people say "code is data", they mean much more than compile time. For other languages, that is less true, other than self-modifying code...)


The hardware is more fundamental (unless you want to invoke mathematics, e.g. term rewriting).

The compiler/sequence-of-bytes example requires von Neumann, because in that case it is the behavior of the whole system that is self-modifying.

You could not have the beautiful executable and interchangeable files of bytes without von Neumann.


I don't think that's right. There have been systems that were Harvard architecture (Motorola 88000, I think, and definitely others). But the code to run on them still started out as a sequence of bytes in a text file, that is, as data, and a compiler still took that data file and wrote another data file that was the executable image.


Sure, I guess


Code is data.

In Java, I could use a bytecode manipulation library to read in a class file, manipulate it by using the library's API to add, modify, delete members of the class. Then write the class back out as a file.

Code becomes data, is manipulated as data, and is written back out.

In Lisp, 'code is data' is so much more natural because code IS built from the most elementary data structures of the language. No need for a library to manipulate code behind an API. The API is the language primitives. The code is the data structure you manipulate using language primitives.
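For contrast, here's a hedged sketch of the "library behind an API" style in Python: like the Java bytecode example above, Python needs a dedicated module (`ast`) to treat code as data, whereas in Lisp the parsed form already is the language's basic list structure. The snippet and its values are invented for illustration:

```python
import ast

# Parse code into a data structure (an AST), edit it with ordinary
# object manipulation, then compile and run the result.
tree = ast.parse("answer = 6 * 7")

assign = tree.body[0]
assign.value.right = ast.Constant(value=8)  # change 6 * 7 into 6 * 8
ast.fix_missing_locations(tree)             # fill in line/col info for new nodes

namespace = {}
exec(compile(tree, "<edited>", "exec"), namespace)
print(namespace["answer"])  # 48
```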


And, as I wrote in another thread yesterday, it allows you to work with your text editor at a higher level of abstraction than non-lisp-coding people may have seen before (https://news.ycombinator.com/item?id=16386380).

Rather than thinking in letters, words, lines, or paragraphs, editor modes like paredit let you edit in terms of the structure of your code. I find this really hard to give up after lispy sessions.


I really value C syntax. I'm convinced it's much easier to navigate with the eyes than "oatmeal with fingerclips mixed in" (quoting Larry Wall).

And >90% of the time the relevant syntactic unit is on its own line or ranges of lines, and that's really super easy to handle in vim. I don't think there is much to improve by building more complex abstractions on top.

If you use vim as a C programmer, you should know the basic line operations like dd, p, and Shift+V (line range select). Also, I often use just { and } to navigate to the next/previous empty line. These keyboard shortcuts get those >90% covered.

Beyond that, you can use Control+V (block select), and XaY where X can be c (change) or d (delete) or v (visual select), and Y can be things like " (string literals), w (identifier), { (braced block) or ( (parenthesized expression).

I don't disagree that if all you have is parens, then you want something like paredit. But for a C programmer there is no need for such a thing.


For a while in the early '90s I was paid to mainly write Lisp (along with C and PostScript), having previously mostly done C.

My initial reaction to Lisp syntax was pretty much exactly what you describe. However, within a few months I was preferring Lisp over C.

It's really just what you are used to - both C and Lisp are great designs.

Edit: More recently when I started using Python I thought the significant whitespace thing was awful - I went on to completely change my mind (a bit like drinking G&T).


> I don't disagree that if all you have is parens, then you want something like paredit. But for a C programmer there is no need for such a thing.

If that's a good selection of tools for editing code, then why would Lisp programmers need Paredit? Most of those commands work just as well for editing Lisp. The advantages that Paredit gives over that model are semantic commands rather than character based ones, things like easily slurping and splicing things into parent lists (a block, the arguments to a function call, whatever) without having to think about the syntax needed for that transformation.

There's nothing about Lisp that makes the vi commands you use for editing C less useful for editing Lisp; people use things like Paredit because they allow for a much more semantic style of editing.
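A rough sketch of what "slurping" means, modeling Lisp forms as nested Python lists. The `slurp_forward` helper is invented for illustration and is not how paredit is actually implemented:

```python
# "Slurp forward": pull the sibling after the form at `index`
# into that form, as a single structural operation.
def slurp_forward(parent, index):
    sibling = parent.pop(index + 1)  # remove the next sibling...
    parent[index].append(sibling)    # ...and absorb it into the form
    return parent

# (let ((x 1))) (print x)  ->  (let ((x 1)) (print x))
form = [["let", [["x", 1]]], ["print", "x"]]
print(slurp_forward(form, 0))  # [['let', [['x', 1]], ['print', 'x']]]
```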


But there is no such thing as "parent lists" in C. There are far fewer parentheses. You don't make nested functions or data. You write simple, procedural code, and therefore you simply move lines up or down, and at most change the indentation level. (To fix indentation in vim, select a block of lines and use =. To shift left or right, use << or >>.)


Sure there is. Imagine you had this function:

    int f(void) {
        int x = 1;
        return x * 2;
    }
Then pretend that later you wanted to call f and provide your own value of x for it to transform, but you realise that f doesn't take a parameter, so you go to move the x declaration in f into its parameter list. What you really want to do here is delete the first element of the parameter list, and then move the first expression in its next sibling list (the function body) to the end of it. C syntax doesn't make this easy for editors to do, but the operations can still be thought of in terms of lists.
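The same edit can be sketched as a tree operation; Python's `ast` serves here as a stand-in for C, since the point is that it is a list/tree manipulation rather than a character-level one. Keeping the old value as a default parameter is an assumption made so the sketch stays runnable:

```python
import ast

source = """
def f():
    x = 1
    return x * 2
"""

# Structural edit: delete the first statement of the body and turn its
# target into a parameter, keeping the old value as the default.
tree = ast.parse(source)
func = tree.body[0]
assign = func.body.pop(0)  # remove `x = 1` from the body
func.args.args.append(ast.arg(arg=assign.targets[0].id, annotation=None))
func.args.defaults.append(assign.value)
ast.fix_missing_locations(tree)

namespace = {}
exec(compile(tree, "<edited>", "exec"), namespace)
print(namespace["f"](10))  # 20
print(namespace["f"]())    # 2
```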

You get nested expressions whenever you use if or a loop or call a function. C programs might be flatter on average than Lisp programs, but there's still plenty of structural editing you could do. Another example is moving an expression out of or into a loop, without concern for how many lines it takes up, whether it's a function call or a loop or anything.

The difficulty that C syntax creates for editing operations like this isn't C not having the need for Paredit-style editing, it's C not having good support for it.


Regarding the loop: that's exactly what I described; you just move lines and fix the indentation. It's super quick. There just isn't significant nesting in good code. It's almost always bad style to nest function calls. In statement blocks you don't have the "trailing parentheses" problem that you have in Lisp, since the closing brace is on its own line, so it really boils down to cutting and pasting lines >90% of the time.

Aside: in original C, this is how you wrote functions:

  int f(x, y)
          int x;
          int y;
  {
          int z;
          ...
  }


> Regarding the loop: that's exactly what I described, you just move lines and fix the indentation.

The experience you get with Paredit is not the same experience you get with C and vi, because you have to be concerned with things at the character level to make the transformation. What single command do you use to move the function call, for loop, label, or do-while loop after the closing brace of the current block into this block, whether it's 1 line or 100? What if you have something like f(x, y * 2) and you decide that you want to put y * 2 into a variable instead? Do you write every argument to every function on its own line?

Again, if you want to do things the C way in Lisp, it's not like Lisp makes it hard to do those kinds of edits. Things like Paredit are popular because people find them more efficient to use. Editors can support Paredit-like editing in C, but it's much more difficult to accomplish because of limitations with C's syntax.


Tooling is getting better for non-Lisp languages. In C#, with Visual Studio and ReSharper, for example, you could highlight the "int x = 1;" line, and hit your keyboard shortcut for ReSharper:Introduce_Parameter, which would do pretty much exactly what you're saying. Same for turning a chunk of code into a method, etc.


The tooling is there, it's not bad, and it is constantly getting better.

I use Cursive in IntelliJ to do all my Clojure programming. A lot of others also use Emacs but Cursive is an amazing plugin imho. It does the vast majority of things I want from an IDE, like jumping to definitions, some refactorings, switching easily to relevant test files. Paredit took a while to get used to but I immediately miss it when programming anything else.


So instead of lots of parens, you get lots of {}, [], and () spread across more lines, and curly braced blocks are often nested.


I find C syntax awkward, bloated, inconsistent, and gross compared to lisp syntax, but that’s an aesthetic preference.

Beyond the personal preferences of individuals (individuals who probably started with one kind of syntax or another) there are characteristics that have some objective utility.

I appreciate that a lot of people start with C-ish syntax, and a similarly large number of people prefer that syntax, but I’ve never heard anything that demonstrated an intrinsic benefit of C syntax.


"I appreciate that a lot of people start with C-ish syntax, and a similarly large number of people prefer that syntax, but I’ve never heard anything that demonstrated an intrinsic benefit of C syntax."

Short stuff is easier to type and maybe to read. There's that. As far as its "design" goes, it was made by tweaking BCPL to run on a PDP-7 and then a PDP-11. The assignment change was admitted as personal preference. BCPL itself was an ALGOL with Lisp features that had every feature for safety, maintainability, etc. chopped off so it would compile on a terrible piece of hardware they were stuck with. There was little to no design: I can't overemphasize that they literally just kept what that one machine could compile. As for C, even structs weren't originally in it, but got added after failed attempts to port UNIX from assembly. The presentation below has proof from historical papers written by the BCPL and C inventors.

https://vimeo.com/132192250

People just assume there was sensible design because of all the C code out there (argument from popularity). The brain then starts rationalizing attributes of it that were designed or hacked in for totally different reasons, in a past of constrained hardware that lacked the knowledge and tools of modern language design. That context no longer applies to most users of C. It's just myth-making by users reinforcing use of it.


> The brain then starts rationalizing attributes of it that were designed or hacked in for totally different reasons, in a past of constrained hardware that lacked the knowledge and tools of modern language design. That context no longer applies to most users of C. It's just myth-making by users reinforcing use of it.

That's just _your_ rationalization. I don't think it's commonly claimed that all was set in stone from the beginning. The history is there for everyone to read.

Another possible explanation is that C is so minimal (in spirit) and doesn't get in the way, so people are able to pull off impressive things, which makes them love C.

And now why exactly isn't (the gist of) C sensible design? I fail to see your argument. By the way, please enjoy this cool video: https://www.youtube.com/watch?v=khmFGThc5TI


It's obviously a lot shorter and more distinctive. It's highly optimized for assignments, pointer and array dereferences, address-of operations and data definitions. If that's what you do most of the day, you want C syntax.


> "oatmeal with fingerclips mixed in" (quoting Larry Wall)

I still chuckle at that quote and Larry Wall is great, but given the look of traditional Perl I wouldn't put much stock in Wall's language syntax aesthetic ;-)


>>I still chuckle at that quote

That quote isn't a standalone statement. It's basically part of a talk/essay, and he actually says a lot of nice things about Lisp in that essay, then mentions this as a joke to close it all.


> "oatmeal with fingerclips mixed in" (quoting Larry Wall).

That exact phrasing only seems to show up in 2 places: this comment, and a comment of yours from about a year ago. So I suspect you're paraphrasing...but my purpose for looking was to find that talk/essay/whatever. Would you happen to have a link?



> Rather than thinking in letters, words, lines, or paragraphs, editor modes like paredit let you edit in terms of the structure of your code.

Is there a single programmer on this planet who thinks about code in terms of letters, words, lines, or paragraphs?

Newsflash: it's 2018, not 1960, and code editors are capable of things most lisp "editor modes" can only dream about.


I wish people who downvote my comment would explain what they disagree with.

The entire discussion in this subtree centers around paredit (oooh, it can slice and slurp lists!) and vi commands.

Have you ever ever seen and experienced an actual modern coding environment? IntelliJ IDEA? Visual Studio? Hell, even Visual Studio Code.

The full power of those tools puts both vi and paredit to shame when it comes to actually working with code.


Since you were replying directly to me, I couldn’t downvote you. But your use of “newsflash” made it come across that you weren’t interested in a civil discussion. I frequently downvote comments that make a point in an unnecessarily hostile or patronizing way. I suspect others do as well.

As for full IDEs, they absolutely have a place. They can even infer things about the structure of your code fairly well. I still find the productivity of being able to work deterministically and consistently with my code at a structural level a pleasure.

I’d appreciate that even in an IDE, if for no other reason than that the structure editing could be done efficiently and leave me some extra system resources for other things :-)


You were downvoted because what you said shows that you do not understand what paredit is.

I'd recommend you read https://en.m.wikipedia.org/wiki/Structure_editor to better grasp the concept.

If you've ever used the XML structure view in Eclipse (it's the Design tab), you get a better idea, in contrast to its Source tab.


Can you explain it? For instance in IntelliJ I can refactor and have it automatically replace only instances of the word "call" that are actually invocations of the function call(). That's structured editing, not text-based.


Yeah, that's true, but your editor pane is a text editor nonetheless. The refactoring in this case is structured; even more so, it's semantically accurate to a specific language. But your editor isn't. A structural editor does not allow you to make edits that would go against the syntax rules of your language.


It sounds like you're saying a tool can only be a structural editor if it doesn't do anything else. I want an editor that can operate structurally AND as a text editor - I don't write new code as syntactically complete units in one keystroke.


I'd argue that IntelliJ is a structured editor. If you look under the hood, the entire system is built around a tree representation of the code. The "text editor" is a thin varnish on top of that syntactic representation.


> your editor pane is a text editor nonetheless

And the editor pane with paredit somehow becomes a magical structural editing machine?

Paredit knows exactly zero things about your code. The only thing it can do is match brackets/parentheses and move code between them. That's it.

In an actual code editing tool that actually understands code structure and semantics, I can:

- select and move semantically and structurally valid blocks of code

- extract parts of code into a separate variable

- extract parts of code into a separate function

- extract parts of code into a separate module

- safely rename variables and functions across multiple files

- simplify/split/extract/generify code

- ... many more ...


Let's see. Here's the original text:

> Rather than thinking in letters, words, lines, or paragraphs, editor modes like paredit let you edit in terms of the structure of your code

So. My question is: which modern programmer thinks about code in terms of letters, words, lines, or paragraphs?

The truly antiquated tools (like vi and emacs) may still handle code as if it was just text. The actual code editing tools have long been able to deal with the structure of the code. And in ways that paredit may only dream of.

> If you've ever used the XML structure view in eclipse, its the Design tab, you get a better idea.

Erm. The only "semantic thinking" that paredit pretends to do happens only because Lisp has a rather regular syntax. It doesn't take much thinking or work to move parts of code in and out of parentheses, or to be able to close a matching bracket.

Actual modern tools know the structure of the code and offer much greater editing and code handling capabilities than that.

Edit:

As a trivial example, here's IntelliJ and PHP code (yes, PHP). It understands the code and its structure, and offers context-based editing help based on code structure and semantics: https://dmitriid.com/i/gm2tmnztha4tenjq.png

Speaking of semantics: unlike the dumb "move things in and out of brackets", the actual tools understand the semantics of code. Once again, this is PHP (!), and the tool knows about the semantics of the code: https://dmitriid.com/i/gm2tmnbtga4tanjs.png

Editing capabilities available to some other languages would blow your mind.


You were probably downvoted too because you keep assuming other people talking about structural editing have never used IDEs for Java, PHP, C++, C#, etc.

Nothing is blowing my mind. Structural editing is different from what you're talking about, and enables different benefits, and different downsides, than those tools do.

I think the wikipedia article I linked does a good job at explaining the distinction:

> editors in some integrated development environments parse the source code and generate a parse tree, allowing the same analysis as by a structure editor, but the actual editing of the source code is generally done as raw text.


> You were probably downvoted too because you keep assuming other people talking about structural editing have never used IDEs

coming from a person who also wrote this:

> You were downvoted because what you said shows that you do not understand what paredit is.

> I'd recommend you read ... to better grasp the concept.

So people keep assuming that I for some reason have never tried paredit (or written anything in Lisp). So why don't I give you a taste of your own medicine.

I suggest that you go and read some documentation on the IDEs I mentioned and try and use them to better grasp the concept.

Paredit is a very dumb tool that only works because Lisp's syntax is regular. There's nothing semantic or structural about the ability to move some words in or out of parentheses or to close matching brackets.

Other languages might not have the same wondrous ability simply because their syntax is more complex. Their tools though clearly allow much better actual structural editing and actual semantic reasoning about code.


> So, people keep assuming that I for some reason have never tried paredit (or written anything in Lisp)

If you have, it's surprising that you believe IntelliJ's PHP editor to be a full structural editor, then.

> Paredit is a very dumb tool that only works because Lisp's syntax is regular. There's nothing semantic or structural about the ability to move some words in or out of parentheses or to close matching brackets.

Yes, it is a very dumb tool, but because Lisp's syntax is so simple, it makes implementing a structural editor for it trivial like that. So in return it's a benefit. That's why a lot of simple, regular-syntax languages like XML often have structural editors for them too, because of how easy it is to make one.

> Other languages might not have the same wondrous ability simply because their syntax is more complex. Their tools though clearly allow much better actual structural editing and actual semantic reasoning about code.

I'm not saying that certain IDEs for certain languages don't offer great features which allow edits to be made in ways that are semantically aware and maintain syntactically valid structure. I'm saying that those languages don't offer full-on structural editors for their code. So when writing and editing the code, you do so as text, without taking structure into account. Yes, your syntax errors will be highlighted; yes, you can perform certain structural refactorings; but the editor is textual and not structural. If you have experienced both, it should be pretty obvious that they feel, and are, very different.

I don't claim one to be better than the other; I think both are great, but it also depends on the language. If Java had a structural editor it might be more annoying than it'd be helpful. For Lisps it is amazingly useful, more so than what IntelliJ does for PHP. With Lisps, I'd rather have paredit than a background AST which allows me refactorings, auto-complete and error highlighting. But of course, those are fortunately not mutually exclusive, and I have those also.


> Yes, it is a very dumb tool, but because Lisp's syntax is so simple, it makes implementing a structural editor for it trivial like that.

Tell me, does it help you use the loop macro correctly? Paredit only helps you with brace matching; it doesn't (can't) understand the semantics of your s-expr, and hence isn't that useful without additional tooling. For other languages the brace-matching problem isn't so bad that you need paredit-like tools.


Correct, structural editing is about structure and not semantics. Paredit does help with the loop macro: it helps you write it in a structurally correct way based on the language's syntax rules.

Structural editors just make sure your code will parse, not that it is semantically correct. It doesn't know the meaning or behaviour of the loop macro, but it does know how to properly structure it so it can be parsed. And so it gives you ways to add, remove and restructure elements of your code such that it will always be valid for your language.

Lisp syntax is really simple, so yes, you don't need much out of your structural editor: just make sure everything is balanced at all times, and allow symbols to be moved in and out of parentheses and to change their order and nesting. That's all you need to guarantee proper structure of Lisp code.
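The balance invariant described here is tiny to check. A minimal sketch in Python, not how any particular editor implements it:

```python
# The invariant a Lisp structural editor maintains after every edit:
# the buffer's delimiters must still balance.
def balanced(text):
    depth = 0
    for ch in text:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # a closer with no matching opener
                return False
    return depth == 0

print(balanced("(let ((x 1)) (print x))"))  # True
print(balanced("(let ((x 1)) (print x)"))   # False
```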

Now, in Lisp, if you used a normal text editor, you'd quickly start to have wrongly balanced forms; you'd be annoyed at how you keep having to shuffle so many things around just because you now want to divide a whole form by 5, or change the order in which two things are performed. Using a structural editor removes all these pains, and suddenly it's even easier to do these things in Lisp than it is in, say, Java. Whereas without structural editing, it was way easier in Java.

So Java + text editor = Never had issues structuring and restructuring code.

Lisp + text editor = Omg, I hate all the parenthesis and nesting and prefix notation because it makes structuring code painful.

Lisp + structural editing = Everything I found difficult with a text editor is now not only easy; I find that it's even faster and easier to restructure the code than when I use Java with a text editor.

Java + structural editing = I don't know, because Eclipse, NetBeans and IDEA do not provide structural editors for Java that I know of. I suspect it would not be as much of a game changer as in Lisp, since Java + text editor is already satisfactory.

Everyone who says that structural editing isn't useful, and that code refactoring tools are way more useful, is just missing the point. Code refactoring is useful, but in Lisp, so is structural editing. You could even argue it is more useful.

Different languages have different pain points, and so require different tools to address them. I agree, I'm not looking for a Java structural editor, but it's also hard to go back to editing Java after having mastered structural editing of Lisp code. In the same way, it's painful to rename a variable in Lisp once you've done it in Java using a good code refactoring tool.


> If you have, its surprising that you believe IntelliJ's PHP editor to be a full structural editor then.

Because, unlike paredit, it actually knows about the structure of my code.

> That's why a lot of simple regular syntax languages like XML often have structural editors for them too, because of how easy it is to make one.

Yeah. It's not "structural". It's just paren matching and a few very basic actions like "surround with parentheses". Lispers praise it like a gift from god only because there are no proper tools.

> So when writing and editing the code, you do so as text, without taking structure into account.

I see this bullshit repeated again and again. All paredit does is: select this word, select this thing in matching parentheses, move them around. Somehow this makes it a magical structural editor unlike the "just text" of other editors.

Once again: no programmer in the world thinks of or works with code in terms of words and paragraphs. The only thing paredit does is select words between matching symbols. That doesn't make it more structural or semantic than editing PHP in IDEA. It's just a somewhat convenient way of editing text with a regular grammar.


> Because, unlike paredit, it actually knows about the structure of my code.

I feel like you mean semantics here, maybe. Anyway, knowing about the structure of code does not make an editor structural. It has to provide editing mechanisms that are based on the structure.

It's hard for me to explain, but basically it would be something like not being allowed to type code outside of a class or function. You'd need to first add a function, which would always insert a fully valid signature with opening and closing brackets, name and all, even if placeholders. Snippets sometimes do that, but they really do it as a textual convenience, and it's not enforced in any way. You would not be allowed to move half of an assignment by itself; you'd need the full assignment, things like that. So every edit performed would result in valid syntax. Valid syntax: not working code, not code that makes sense semantically, just syntax that is valid within the syntax rules.

Most IDEs do not provide this. What they do is syntax-check your text on every text edit. So as you add or remove characters, they re-run a syntax check and highlight your mistakes. But your edit commands are addCharacter() and removeCharacter(), not addFunction(), switchAssignment(), wrapInConditional(), deleteVariable(), etc. Some of those are available as commands on top of the text editor, but the editor is still textual, and you cannot fully write and edit code structurally. Again, I'll quote the Wikipedia article, which makes this pretty clear:

""" However, most source code editors are instead text editors with additional features such as syntax highlighting and code folding, rather than structure editors. The editors in some integrated development environments parse the source code and generate a parse tree, allowing the same analysis as by a structure editor, but the actual editing of the source code is generally done as raw text. """

> Yeah. It's not "structural". It's just parens-matching and a few very basic actions like "surround with parenthesis". Lispers praise it like a gift from god only because there are no proper tools.

You're just trying to pick a fight here. I'm making no claim in terms of Lisp vs. OtherLanguage. Paredit does meet the criterion for being classified as a structural editor for Lisp code. Yes, Lisp syntax makes meeting the criterion really easy, and paredit is not a complicated piece of engineering marvel, but it's a complete structural editor for Lisp code nonetheless. Eclipse with Java does not meet the criterion. IntelliJ's PHP editor does not meet the criterion. This does not mean that Lisp is superior to PHP or Java, or even that it has better tooling. It just means they don't have structural editors, and honestly, they probably don't need one.

In Lisp, the benefit of a structural editor is mind-blowing. That's why Lispers praise it like a gift from god. Without it, I would probably dismiss Lisp's syntax as too painful to work with. Yes, it provides the most powerful meta-programming of any language, but the small things, like messing up your parenthesis balancing, or having to juggle forms in and out of each other, are enough to throw off a lot of developers. So paredit is like a godsend, because it completely alleviates those pain points and turns them into strengths. In that regard, it is unfair to judge Lisp's syntax before you've mastered and used it with paredit or a similar structural editor.

> It doesn't make it more structural or semantic than editing PHP in IDEA.

It makes paredit a structural editor for Lisp code, whereas IDEA is not a structural editor for PHP code. That doesn't mean Lisp is superior to PHP, and it doesn't mean paredit is superior to IDEA. "Structural editor" is not a vague abstract concept; it's a concrete kind of editor: read the Wikipedia page. If you told me "hey, go use IDEA for PHP, it has a structural editor" and I bought a license, I'd be like: what the hell, this is not a structural editor. It's as simple as that. You can't just redefine what a structural editor is or isn't. They've existed for more than 30 years now; there's a common notion of what it is and what to expect.

> It's just a somewhat convenient way of editing text with regular grammar.

Yes, and that's what a structural editor is. It's when, instead of editing text freeform, you edit it with respect to the grammar. I quote the Wikipedia page again:

""" structured editors allow the viewing and manipulation of the underlying document in a structured manner """

That's all.

Having said that, yes, it can be confusing what is the difference between that and having an editor which syntax check as the text is edited. That's why there's a mention of this on the wikipedia page which I have now quoted many times.

Now, Eclipse has the UML editor for Java, that is actually a structural editor to some extent, but most Java devs hate it.


> So that every edit performed would result in valid syntax after they are performed. Valid syntax, not working code, not code that semantically make sense, just the syntax is valid within the syntax rules.

As I said. Dumb tool for basic syntax manipulation is elevated to godlike status. After which you end up with dumb statements like these:

- editor modes like paredit let you edit in terms of the structure of your code.

- your editor pane is a text editor nonetheless.

- the editor is textual and not structural.

- structural editors just make sure your code will parse

and other stuff that means exactly one thing: "our language doesn't have any other/proper tools, so we pretend our parens matcher is the best thing since sliced cheese".

It's just a dumb thing that selects words and matches parens. That's it.

> In that regard, it is unfair to judge Lisp's syntax before you've mastered and used it with paredit or similar structural editors.

Spare me your condescension.


All true and factual statements. It doesn't matter if you judge them dumb or not.

> and other stuff that means exactly one thing: "our language doesn't have any other/proper tools, so we pretend our parens matcher is the best thing since sliced cheese"

You're operating in the hive mind mentality of us vs them. I'm talking objectively about syntax families and types of editors. You'll probably keep being downvoted as long as you continue that kind of discourse on HN.

Given Lisp syntax, a structural editor adds tremendous value. Most users of Lisp-syntax languages find it an invaluable tool, and would not trade it in for code refactoring tools or linters. It's okay if you disagree. Depending on the Lisp language you pick, you can also have code refactoring tools and linters if you wish. Or maybe you just don't like the features of Lisp; that's okay too, use PHP. The fact that the tool is simple and easy to implement does not change the value it adds. Think of how awesome the invention of the wheel is, even though it's quite primitive.

So at this point, it's hard for me to understand your disagreement. IDEA's PHP editor is not a structural editor. This is not a criticism of PHP, or of IDEA's editor. Emacs offers structural editing of Lisp code. I personally dislike Emacs. That doesn't change the fact that it supports structural editing of Lisp code.

If you are curious, for example, given Clojure as the Lisp syntax language, Emacs does offer code refactoring and linting as well as structural editing. You can extract functions, auto-complete imports, rename variables, fold code blocks, have syntax errors highlighted, have certain code errors reported on the fly, jump to definition, see source and documentation, find all usage, auto-complete, snippets, auto-format, continuously run tests, debugging with breakpoints, etc.

I personally don't use Emacs though, because I like mouse support and modern GUIs. So for Clojure, I prefer IntelliJ Cursive which has all those features also, and Eclipse CounterClockWise which has most of them, minus refactoring. So I'm just pointing out that even given a Lisp which has all the linting and refactoring features you talk about, how structural editing is still highly valued and one of the best feature of all of those when wanting to code with the Lisp syntax.


> If you are curious, for example, given Clojure as the Lisp syntax language, Emacs does offer code refactoring and linting as well as structural editing. You can extract functions, auto-complete imports, rename variables, fold code blocks, have syntax errors highlighted, have certain code errors reported on the fly, jump to definition, see source and documentation, find all usage, auto-complete, snippets, auto-format, continuously run tests, debugging with breakpoints, etc.

Basically the stuff Lisp IDEs had in the 70s.


I'm unfortunately not old enough to know. But I've heard a lot of praise for the era of Lisp machines. It's unfortunate that most of that legacy has been lost. Common Lisp doesn't actually offer that much in terms of tooling, at least in the free department that I've tried.


Pre-Lisp-machine IDEs were already more advanced. See for example BBN/Xerox's Interlisp in the mid-70s.

Generally Lisp still offers a lot, but if you don't know what to look for, you might not see it even if it is before your eyes. Even though the Clojure community reimplemented much of SLIME.

It also might not be important to you. Stuff like having a Lisp compiler written in Lisp, having an actual interpreter, being able to dump images, low startup times by default, break loops, readable stacktraces, type checking compilers, resumable error handling, embedding in C applications, whole-program compilers, compilers which can create C code, ...


I downvoted because your comment sounds argumentative and condescending. So does this one.


When I was first learning C, I wondered what the point of macros was, especially with inline functions available.

It wasn't until I taught myself Racket and Clojure a few years later that I realized the utility, and it was immediately one of those "the world is different now" moments; I could actually augment the language without having to contribute to the core compiler.


It's very nice, but with great power comes great responsibility. Some people program so many things with C macros that it becomes very hard to read their code.


That's the beauty of Lisp: it's all very readable, because it's a real language, not a string-substitution preprocessor.

Yes, any idiot can write unreadable Lisp — but good Lisp is a work of art. The Art of the Metaobject Protocol should be required reading: it builds an entire object system, complete with classes and generic functions (multimethods), written in the language itself. It's a tour de force.


And I'm just pointing this out for those who aren't already aware -- C macros and Lisp macros aren't really comparable.

C macros are implemented as string substitutions in a separate step before compilation. Lisp macros can make use of the rest of the language features and allow for operations on the abstract syntax tree of the code, opening up things like adding new control constructs, generating boilerplate code, string substitution, and the creation of domain specific languages.
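A toy Python sketch of that difference (neither real cpp nor real Lisp, just an illustration; all names are made up): text substitution doesn't know where an expression ends, while a macro that receives code as a data structure does.

```python
# Textual "macro", like cpp's `#define SQUARE(x) x*x`: blind string substitution.
def c_style_square(arg):
    return "x*x".replace("x", arg)

print(eval(c_style_square("3")))    # 9, looks fine
print(eval(c_style_square("1+2")))  # 5, because it expands to 1+2*1+2

# A macro that receives structure (a tiny prefix-tuple "AST" here) gets the
# argument as one node and cannot split it by accident.
def lisp_style_square(expr):
    return ("*", expr, expr)   # like `(* ,expr ,expr)

def ev(node):
    # minimal evaluator for the toy AST: tuples are (op, a, b), else a literal
    if isinstance(node, tuple):
        op, a, b = node
        return ev(a) * ev(b) if op == "*" else ev(a) + ev(b)
    return node

print(ev(lisp_style_square(("+", 1, 2))))  # 9
```

The classic `SQUARE(1+2)` bug simply cannot occur in the structural version, which is the whole point of operating on the AST rather than the text.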


I'm still wrapping my head around macros, but I have been able to come up with a couple of use cases that I quickly realized were already covered by existing macros, the most recent one being a less flexible -> macro.

Anyways, I kept reading places that lisp macros and c macros were completely distinct, which seems untrue after a couple years of trying clojure.


> I kept reading places that lisp macros and c macros were completely distinct, which seems untrue after a couple years of trying clojure.

They're similar in that they're both compile-time functions that generate code, but they're very different in practice, because C macros are written in a text substitution language that knows virtually nothing about C, which naturally makes writing even simple macros very error-prone. Lisp macros, on the other hand, are written in plain old Lisp and receive regular Lisp data structures that you can use the whole language to operate on.

A lot of people learn from C that macros are very difficult to get right, so it's important when they move to Lisp that they give them a second look, because Lisp is a much more capable language for code transformation than cpp.


Right, that makes sense


>Anyways, I kept reading places that lisp macros and c macros were completely distinct, which seems untrue after a couple years of trying clojure.

I haven't used Clojure macros but as a programmer that uses both C and Common Lisp, I can say with good authority that C macros and CL macros are very different.


Yea, not the same. Similar motivations though, right?


The Elixir getting started link is now at: https://elixir-lang.org/getting-started/introduction.html


M-LISP: a representation-independent dialect of LISP with reduction semantics

In this paper we introduce M-LISP, a dialect of LISP designed with an eye toward reconciling LISP's metalinguistic power with the structural style of operational semantics advocated by Plotkin [28]. We begin by reviewing the original definition of LISP [20] in an attempt to clarify the source of its metalinguistic power. We find that it arises from a problematic clause in this definition. We then define the abstract syntax and operational semantics of M-LISP, essentially a hybrid of M-expression LISP and Scheme. Next, we tie the operational semantics to the corresponding equational logic. As usual, provable equality in the logic implies operational equality. Having established this framework we then extend M-LISP with the metalinguistic eval and reify operators (the latter is a nonstrict operator that converts its argument to its metalanguage representation). These operators encapsulate the metalinguistic representation conversions that occur globally in S-expression LISP. We show that the naive versions of these operators render LISP's equational logic inconsistent. On the positive side, we show that a naturally restricted form of the eval operator is confluent and therefore a conservative extension of M-LISP. Unfortunately, we must weaken the logic considerably to obtain a consistent theory of reification.


PigPen is an amazing wrapper over Pig that shows why and how macros are important and useful.

https://github.com/Netflix/PigPen


See what I did there?

Despite the good intention, no, because you're trying to explain lisp using lisp. If I knew lisp, I wouldn't need you to explain what code-as-data means.

Maybe it's time for a different strategy. As a matter of fact, I've tried to explain code-as-data using OOP just this morning: https://news.ycombinator.com/item?id=16389514


Your linked comment is great but it sweeps one thing under the rug.

There is a reason lisp didn't win. We put up with it (I use emacs, so I have to write it at times) but in general it's harder to reason about transformable lists of code than it is about plain old objects and their methods. Almost every problem elegantly solved in lisp has a parallel elegant solution in Ruby (using blocks, dynamic method definition, or otherwise) and I generally find the resulting code both more maintainable as well as more readable. Sometimes with lisp things get "spooky" in a way that just doesn't happen in practice with Ruby, even if it is theoretically possible to, say, redefine String#inspect. Though I completely agree that inheritance gets clunky and I strongly favour composition.


> has a parallel elegant solution in Ruby (using blocks, dynamic method definition, or otherwise)

> Sometimes with lisp things get "spooky" in a way that just doesn't happen in practice with Ruby

Over here, long-run practical experience with Ruby on two or three significantly involved codebases and assorted dependencies taught me over the years that such solutions are elegant for a month, then end up being hell to maintain, debug, and trace through. We definitely encountered our share of "spooky" things in Ruby-land. For maintainable code in the long run we now vastly favour boring, non-dynamic, explicit, down-to-earth code. Dynamic solutions, DSLs, or dependencies making use of those now have to sustain solid justification and heavy scrutiny before being greenlighted.


>There is a reason lisp didn't win. We put up with it (I use emacs, so I have to write it at times) but in general it's harder to reason about transformable lists of code than it is about plain old objects and their methods.

Sounds like your experience was with Emacs Lisp, a very clunky Lisp.

>Though I completely agree that inheritance gets clunky and I strongly favour composition.

Check out CLOS (the Common Lisp Object System), which is based on multimethods/multiple dispatch. It is very different from "traditional" OOP (that of most other languages, Julia and Dylan excepted).


>>Sounds like your experience was with Emacs Lisp, a very clunky Lisp.

Emacs lisp is actually a very practical, easy to use and fun lisp to do a lot of great work in.

Richard Matthew Stallman notes:

It was Bernie Greenberg, who discovered that it was (2). He wrote a version of Emacs in Multics MacLisp, and he wrote his commands in MacLisp in a straightforward fashion. The editor itself was written entirely in Lisp. Multics Emacs proved to be a great success — programming new editing commands was so convenient that even the secretaries in his office started learning how to use it. They used a manual someone had written which showed how to extend Emacs, but didn't say it was a programming. So the secretaries, who believed they couldn't do programming, weren't scared off. They read the manual, discovered they could do useful things and they learned to program.

From: https://www.gnu.org/gnu/rms-lisp.en.html

Of course, if secretaries and regular office people with no prior exposure to programming or STEM education could program Emacs using Emacs Lisp and do useful, productive stuff, programmers most certainly can.


Please do not conflate MacLisp and Emacs Lisp, they are different languages. GNU Emacs is not directly derived from Multics Emacs, it's one of the many reimplementations of the original Emacs. The closest thing to MacLisp today is actually Common Lisp, and it's much more advanced than Emacs Lisp, which is not surprising considering that Stallman was not interested in creating a state-of-the-art Lisp implementation with his limited resources but wanted to merely provide a minimal foundation for a programmable text editor.


Thanks for mentioning this.

I was just trying to point out the fact that it's not hard to code in Lisp. Especially given that non-programmers were doing it long ago, in an era when there was nothing like the Internet available to ask for help.


> The closest thing to MacLisp today is actually Common Lisp

I think you could make the argument that Emacs Lisp is as close. Emacs Lisp has some major similarities with Maclisp that CL doesn't share (like in variable scoping and obarrays and such). Elisp and CL are both very close relatives of Maclisp; Elisp was designed as a mini-Maclisp for a text editor, whereas Common Lisp was designed as Maclisp's successor.


> Sounds like your experience was with Emacs Lisp, a very clunky Lisp.

The lisp that didn't win is not an acceptable lisp.


So we have only unacceptable lisps. That doesn't really have any bearing on the fact that Emacs Lisp is a bad lisp to use as an example if you want to talk about the potential of the Lisp family of languages.


I agree, OO is ultimately good enough. The problem is, coding using OOP is like printing ideas on paper: you don't wanna do that because it's hard to discard ideas that have been printed on paper. You wanna use post-its instead because post-its are easy to discard. Refactoring is hard using OO, and refactoring is the key difference between waterfall and good software development. Refactoring is easier, even fun, using functional programming because the medium is more malleable.


Refactoring lisp gets harder the more complex the program gets. Approaches that are clever and reasonable for a 1,000-line program can become hell in a 100,000-line program.

Effectively, programming languages get less powerful the longer the program becomes. Breaking up programs into more powerful little pieces is never a clean separation, and you need to make real trade-offs between more separation and more power.


> Refactoring lisp gets harder the more complex the program gets.

I have found that to be true in dynamically or duck typed languages.

I have found it to be not true in strongly typed languages with good tooling. Example: refactoring large java program in, say, Eclipse.

> Breaking up programs into more powerful little pieces is never a clean separation

I would tend to disagree, to some extent.

In Lisp(s), there are lots of small, sometimes obviously correct functions that do powerful operations on abstract data structures. Part of the reason for this is that unlike OOP (and I write Java daily), which has lots of data structures, Lisp has very few. In OOP you model your domain with new classes. In Lisp you model your domain with combinations of a few basic structures (in Clojure: list, vector, set, map).

Maybe you have layers of separation. With one layer being between primitives that manipulate your domain objects, and another layer being the logic that decides how and when to do the manipulations. I want to solve a puzzle. The puzzle board has various states, pieces, and operations that transform a board into a new board state. Then the logic layer is an algorithm or algorithms written using the layer of primitive manipulations. That logic layer may, in fact, use canned algorithms, because the data structures are common primitives. Say, sort, search, A* search, depth first, breadth first, reduce, map fn over a sequence, etc.

You can end up with a large program. But it can be a lot easier to reason about than, say, a large Java program. It seems to take more deliberate effort to keep a large java program easy to reason about.
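A rough sketch of that layering (toy names and a trivially small "board" of my own invention, purely illustrative): the primitive layer manipulates a plain value, and the logic layer is a canned algorithm that only sees those primitives.

```python
from collections import deque

# Primitive layer: operations that transform a board into new board states.
# The "board" here is just an int 0..3 that can slide by one in either direction.
def moves(board):
    return [b for b in (board - 1, board + 1) if 0 <= b <= 3]

# Logic layer: a generic canned algorithm (BFS), reusable precisely because
# boards are primitive values that support equality and hashing.
def bfs(start, goal):
    seen, queue = {start}, deque([(start, [start])])
    while queue:
        board, path = queue.popleft()
        if board == goal:
            return path
        for nxt in moves(board):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))

print(bfs(0, 3))  # [0, 1, 2, 3]
```

Swapping in A* or depth-first search touches only the logic layer; swapping in a richer board representation touches only `moves`.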


> layers of separation

Layers are exactly the kind of power trade-off I am talking about. If code X can't(1) impact code Y, you have less freedom in code X because it can't do some things, but can more easily reason about code Y.

(1) either as an actual limit or a self imposed one.

Granted, in theory you may be able to find perfect points of separation such that you don't lose power by doing so.


>Refactoring lisp gets harder the more complex the program gets. Approaches that are clever and reasonable for a 1,000-line program can become hell in a 100,000-line program.

Can you substantiate this claim? One of the reasons Lisp was used in the past was because it was particularly suited to complex, big projects... like spacecraft autopilots, wind tunnel simulation, CAD/CAM systems, etc.

Lisp (as 'Common Lisp', the traditional dialect) has this concept of "packages" and "systems".

You can divide your code into "packages"; each package contains separate namespaces for classes, function names, variable names, symbol names, keyword names, and others. Thus packages don't collide with each other and names don't collide at all.

Many packages are combined into a "system" (by defining, for example, the required order for compilation, etc).

Thus you cleanly separate your code in packages and systems. The 100,000 line program becomes a combination of many shorter programs.

Of course this is nothing new, this is just standard practice for keeping big programs manageable.

>Breaking up programs into more powerful little pieces is never a clean separation

Care to explain? Well, I'll continue with my post. We'll assume that what you say is true, so I'll list other advantages that keep complex systems in check.

Now, here is a big plus for complex systems: In such a Lisp, the system is hot-patchable. You can redefine functions while your program is running. Without stopping the program. We're talking about doing this even on a production environment, if necessary.

You can also redefine classes while your program is running. Without stopping the program.
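Python can give a rough feel for this, though a live Lisp image goes much further: because a global function name is looked up at call time, rebinding it "patches" every later call without stopping anything. A minimal sketch (toy names):

```python
def greet():
    return "v1"

def run():
    # `greet` is resolved by name at call time, not at definition time
    return greet()

print(run())  # v1

# "Hot-patch": redefine the function while the rest of the program keeps running.
def greet():
    return "v2"

print(run())  # v2
```

In Common Lisp the same idea extends to compiled production code, and `update-instance-for-redefined-class` even lets existing objects migrate when you redefine their class.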

Another big plus is the condition-restart system, which very few languages implement. Lisp not only has the notion of "catching exceptions"; after catching the exception, it provides alternate ways of recovering from it, including being able to resume operation at the point where the problem was found.

This is another big plus for building reliable systems -- and complex systems benefit from being composed of reliable parts.
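Python has no real restarts, but a hand-rolled approximation (all names here are mine) shows the shape of the idea: the low-level code offers named recovery strategies, and a handler installed far away picks one, so execution resumes at the point of the error instead of unwinding the whole stack.

```python
def parse_int(token, restarts):
    # Low-level code: on failure, offer recovery strategies rather than just raising.
    try:
        return int(token)
    except ValueError:
        choice, value = restarts(token)   # ask the installed handler what to do
        if choice == "use-value":         # resume with a substitute value
            return value
        if choice == "skip":              # resume by dropping this token
            return None
        raise

def parse_all(tokens, restarts):
    out = []
    for t in tokens:
        v = parse_int(t, restarts)
        if v is not None:
            out.append(v)
    return out

# The caller decides the recovery policy, far from where the error occurred:
print(parse_all(["1", "oops", "3"], lambda tok: ("use-value", 0)))  # [1, 0, 3]
print(parse_all(["1", "oops", "3"], lambda tok: ("skip", None)))    # [1, 3]
```

In Common Lisp this is built in (`restart-case`, `handler-bind`, `invoke-restart`), and the debugger presents the available restarts interactively.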

I could go on with other big pluses as well. For example, multi-paradigm: some parts are more readable (and thus easier to understand and more maintainable) if written in the functional style. Others map better to the OOP style. Others map better to the imperative/procedural style. Others would better be expressed as logic programming (think Prolog).

Lisp allows you to use all these paradigms. So the code stays clear. I say this is very good for complex systems.


> Care to explain?

Some of this comes down to what you consider a 'large' program. Suppose you want to build an MMO in lisp; now you need to consider things running on both a client and a server, as well as rendering, path-finding, etc. Choosing reasonable primitives becomes difficult as you get pulled in several directions. Should I be running the same code on client and server, or should I have multiple different models communicate, etc.

My point is you start dealing with trade-offs, and no approach is universally better. Even with a clean API between different pieces of code you still get conceptual dependencies.


This looks more like a software design problem and not a Lisp problem.

Building anything complex requires, apart from tools, getting organized in a lot of other things as well.


That's part of it.

There is an old saying that enterprise software ends up mapping the organization that built it. If you have N teams you tend to break a large project into at least N parts.

Sharing code between teams works, but it's harder to share complex lisp macros than functions. Which is really closer to my point: the larger the project, the more tools get set aside as too complex/risky to use safely.


>I agree, OO is ultimately good enough.

And Common Lisp has arguably the most flexible and powerful OOP system around, CLOS.

Why not write OOP in Lisp? It is done successfully by lispers around the world.


>>Why not write OOP in Lisp?

Because most people understand OO as some kind of namespace scheme. Basically, to bunch together functionality under a name (class).

When people talk of OO, most of them, are actually talking about modularizing code.


That's modules or packages. OO includes inheritance, polymorphism and treating objects as black boxes (little computers in Alan Kay's mental model) with messages they respond to, or APIs for working with that kind of object.

It's a lot more than simply being a namespace. It's also that idea that an object has state and behavior, with appropriate semantics for dealing with both (depending on the language).

Objects (with classes or prototypes) act as a new kind of data type bundled with functionality for handling that data, which is more than just putting a bunch of functions and variables into a namespace. So you can treat your class/prototype as a built-in complex data type. You're extending the type system in an encapsulated manner.

It's not called Namespace-Oriented Programming.


You'll have to substantiate that, because refactoring Java is pretty easy if you have a good IDE. In many other languages, you're lucky if you can reliably rename a function without having to manually check for false positives, let alone something complicated like change signature or extract superclass.

Also, in any statically typed language, you can fall back on changing the function and using compile errors to tell you all the calls that need to be updated.


refactoring Java is pretty easy if you have a good IDE

rename a function

Refactoring in software development is not the same as refactoring in IDEs. Refactoring in software development is changing existing code, especially the architectural parts.


Could you explain what you mean? As far as I know it's the same process. Changing the architecture often involves a large number of small, safe code changes, many of which can be handled by IDE's and other software tools.


>>in general it's harder to reason about transformable lists of code than it is about plain old objects and their methods.

Well, actually, no. Other languages make it easier for beginners to get started, but they also keep you there, because it's harder to do the advanced stuff in those languages.

>>Almost every problem elegantly solved in lisp has a parallel elegant solution in Ruby (using blocks, dynamic method definition, or otherwise) and I generally find the resulting code both more maintainable as well as more readable.

It's the other way around. We solve trivial library-stitching problems in other languages, then we try the same in Lisp and don't see the point. But that is also a kind of mental handicap: you can't go beyond a certain point because the tools make it harder to keep going forward. And then you approach Lisp with the same philosophy and don't see the point.


Ultimately, I assert the truth is that we don't know why it didn't succeed more, so speculation is just that; take mine, which follows, as more of it. I would be delighted to know of a way to test some of these ideas.

Lisp didn't win because it used to cost money to get a good implementation, and there was no company throwing tons of marketing at it.

Now, I specifically don't think it is a case of worse ones winning. The reality is they are all impressive. Many newer languages got to make tradeoffs that were not viable years ago. They are still nicely done and I'm glad they are here.


>Lisp didn't win because it used to cost money to get a good implementation, and there was no company throwing tons of marketing at it.

This was one of the reasons.

In the late 70s, cheap computers (IMSAI, Altair) were almost unable to run Lisp for useful purposes. Too little memory.

Minicomputers (DEC PDP /etc) were able to run full Lisp implementations but it worked slower than using other languages.

Lisp Machines (MIT CONS, etc) were able to run Lisp fast but those were dedicated, specialized hardware. And expensive.

Enter the late 80s: Lisp ran great on personal computers, but implementations AFAIK were mostly commercial, so reserved for big budgets. I'd say Common Lisp did have success in industry, used for many things (3D, CAD/CAM, simulation, etc).

Meanwhile the rest of the world was on Pascal, C, and C++.

Nowadays things are different, there are many Common Lisp implementations that are free and are good (SBCL and CCL, for example, are lightning fast).

But additionally, it is difficult to understand what advantages Lisp would bring. For this, the developer would have to grok (completely understand) the enormous value of metaprogramming, and to understand as well how flexible CLOS is.


This is a thorough version of what I was saying, thanks for expanding. It would actually be neat to see an even more expanded version of the history. If you have a solid link, I'd love to read it.

And I agree that things are different nowadays. My main point is that this seems pretty compelling evidence to me. But, what it isn't, is a testable hypothesis. (Is it?)


Also, metaprogramming is no longer exclusive to Lisp, other languages are adding advanced features all the time, so there's fewer reasons to move to Lisp.


>Also, metaprogramming is no longer exclusive to Lisp, other languages are adding advanced features all the time, so there's fewer reasons to move to Lisp.

You'll see many people in such languages say things as "please don't use macros", "metaprogramming makes things difficult", "metaprogramming is hard", "macros make the code hard to read" and so on.

Metaprogramming in all other languages is usually difficult to implement OR is limited to text substitution (example: C), which makes them a mess!

In languages with true macros, the code gets transformed into an AST first, and your macro needs to transform this AST. Thus you need to learn a whole host of functions/methods/classes and the structure of this AST.

While in Lisp, macros are written using plain regular Lisp. There is almost nothing new to learn, really. This is because the language is formed of s-expressions, which are really easy to manipulate. So doing metaprogramming is almost no more difficult than writing a regular function. In Lisp, metaprogramming is not an advanced topic; quite the opposite, it is a beginners' topic.
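A sketch of that idea outside Lisp (a toy evaluator with names of my own, not any real dialect): if code is represented as nested tuples, a "macro" is just an ordinary function from code to code.

```python
def macroexpand_unless(form):
    # Rewrite ("unless", test, then, els) into ("if", test, els, then):
    # plain data manipulation, no special macro machinery needed.
    _, test, then, els = form
    return ("if", test, els, then)

def ev(form, env):
    # Minimal evaluator for the toy s-expression-like forms.
    if isinstance(form, tuple):
        head = form[0]
        if head == "unless":            # expand the macro, then evaluate
            return ev(macroexpand_unless(form), env)
        if head == "if":
            _, test, a, b = form
            return ev(a, env) if ev(test, env) else ev(b, env)
        if head == "+":
            return ev(form[1], env) + ev(form[2], env)
    if isinstance(form, str):           # symbols look up variables
        return env[form]
    return form                         # literals evaluate to themselves

print(ev(("unless", False, ("+", 1, 2), 99), {}))  # 3
print(ev(("unless", True, 1, 2), {}))              # 2
```

Because the macro receives and returns plain nested tuples, writing it requires nothing beyond the skills you already use for everyday list/tuple manipulation, which is exactly the claim being made about s-expressions.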

If such "other languages" wanted to make metaprogramming easy, then the language would have to be written in s-expressions.

Which means the language would probably be labeled as a "Lisp dialect", or as a "Lisp-like language."

>other languages are adding advanced features all the time

Yes, but in Lisp you can add them yourself, you don't need to wait for Google or Oracle or your Benevolent-dictator-for-life to approve the features you need, and don't need to wait for the next version of the compiler.


Most languages have token metaprogramming features, which most of the time are unusable. Things like C macros and source filters break all the time, and are largely useless for anything apart from global variable declarations.

So sorry, most languages don't have the features lisp has. Not even remotely close.


I theorize lisp isn't used more because of the massive amounts of parentheses.

I think the concepts are great, and it's a great language for certain task, but my right pinky finger hurts just looking at it.

New programmers and developers look at that and compare it to something like Go, Python, or JavaScript, and all those look easier to write (although deceptively complex in places) and are 1000x easier to read.

A prettier lisp would do the world good, but things like sweet expressions are confusing for absolutely new devs to implement and veterans feel like they don't need them.

Short version: Lisp wallows in obscurity thanks to its "look".


>New programmers and developers look at that and compare it to something like Go, Python, or JavaScript, and all those look easier to write (although deceptively complex in places) and are 1000x easier to read.

Are you sure?

Reading a file in Go

    func read(f string) {
        
        var text string
        var scanner *bufio.Scanner
        var err error
 
        file, err := os.Open(f)
        if err != nil {
            log.Fatal(err)
        }
        scanner = bufio.NewScanner(file)
        for scanner.Scan() {
            text += scanner.Text()
        }
        err = scanner.Err()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(text)
    }
    
in Common Lisp

    (defun read-file (filename) 
        (with-open-file (input filename)
           (loop for line = (read-line input nil)
              while line do (print line))))
              
Now tell me which one is easier to read. And the lisp example you can understand without even knowing any kind of Lisp language; it is almost English.


Not very convincing. Yes the words are simpler, but the density and mental paren parsing are roadblocks.

Perhaps a lisp with python-style indentation for blocks would be more easily read?

    def read (filename) 
        with-open-file (input filename)
            for line in (read-line input nil)
                while line do
                    print line


I find this the least compelling reason, to be honest. Especially when I check how many brackets and other decorations I have in a typical java file. The majority of the code, actually, has a similar number of brackets, and way more commas. :) (Specifically, most java could be a lisp if you just move the function name into its paren and remove all of the commas.)

Granted, the bike shedding that goes on in our industry is hilarious. I think many people blame semicolons as one of the major hurdles to learning c nowadays. The things we fixate on...


This is not true, for many reasons. Firstly, every time you begin a block of code in any language you add parentheses, so the parentheses count comes out about the same in all languages.

Secondly, parentheses are a non-issue in lisp. It's almost like reading English text with spaces. After a while (quite quickly, in fact) they are a total non-issue.

>>New programmers and developers look at that and compare it to something like Go, Python, or JavaScript, and all those look easier to write

Arguably the biggest problem today is programmers who are stuck at the early-intermediate level all their lives.


Sweet expressions are a bridge too far IMHO. I would be perfectly content with merely getting rid of those dangling ))))) thanks to simple parentheses insertion† based on a single very simple indent/dedent rule:

> if the next line is indented, anything from the beginning of the current line starting at the indent till a dedent matching the current line indent is wrapped in one set of parentheses.

Seriously, what about this?

    defmacro deftag [name]
      list 'defn name ['& 'inner]
           list 'str "<" (str name) ">"
                '(apply str inner)
                "</" (str name) ">"
    
    deftag html
    deftag body
    deftag h1
    deftag p
Now what about the DSL that now downright looks like slim-lang?:

    html
      body
        h1 "Hello World"
        p "How's it going?"
Or look at that:

    defn qsort [coll]
      if (empty? coll)
        coll
        let [pivot (first coll)
             remainder (rest coll)]
          concat
            (qsort (filter (partial > pivot) remainder))
            [pivot]
            (qsort (filter (partial <= pivot) remainder))
(even the qsort ones could be removed, but they balance [pivot] somehow)

I'm perfectly happy with writing (+ 1 2 3) or whatever, the only parentheses that bother me are those that are perfectly delineated by indentation already, just as newlines perform the same duty as semicolons. They're just duplicated noise††.

† Other languages perform similar semicolon insertion, with varying degrees of success and ambiguities (JS==terrible, Go==terrific).

†† I know, you're supposed to get used to them and "unsee" them. I know, "paredit!". But seriously, if you have to write a tool solely to produce noise, and you have to get accustomed to ignoring said noise, well, somehow, something's off.


Dylan. Dylan offered something like what you're asking for here. You had much the same power as traditionally structured lisps (it even has an alternative s-expression syntax).

https://en.wikipedia.org/wiki/Dylan_(programming_language)


You might want to take a look at Haskell. It comes from a family of languages called ML. And ML was known at one point as Lisp without parentheses.

Modern incarnations of the same like F# are attractive too.

But Lisp is what it is: list processing.


A lisp program will tend to have fewer parentheses than an equivalent Java program has parens + braces. They just stack up at the end of the expression.

For example, see https://gist.github.com/coding4food/1248505

Part of this is because lisp tends to be more concise, so there are fewer expressions required. There are relatively few places where lisp adds parentheses, rather than just moving them - mostly just basic operator expressions: (+ a b) vs a + b. Adding parens (a + b) doesn't make it any harder to read.

For function calls, Algol-descended languages already use parentheses and prefix notation, they just put the parens around the arguments instead of the whole expression:

  (function argument list)
vs

  function(argument, list)


I don't think that's the reason lisp didn't win. I'd say it's more because of Price, Slowness and Unfamiliarity.

I've used Clojure professionally now for a year, and I'm not finding the problem you mention to exist. Maybe it's exclusive to Emacs Lisp?


> in general it's harder to reason about transformable lists of code than it is about plain old objects and their methods.

I would say it's more fundamental: simply that a large majority find it difficult to interpret and reason about trees rather than sequential operations.


I didn't understand those concepts in Ruby until I learned clojure and then I found them to be overly complicated.


I actually added in that other post that part of the appeal is that you can easily explain most Lisp using Lisp.

This is a strong reason to keep the common syntax tiny. Learning to eval and what it means to apply code gets you ready to understand pretty much all of it.

That is, I suspect many of us like that you can often think of code in terms of other code. In that language.

Don't fall into the trap that this is the end game, though. I am not arguing that. Just that it is a helpful step along the way.


>>because you're trying to explain lisp using lisp.

There is no other way to explain it.

Simply put:

    (1 2 3 4)
Is a list

    (add-all 1 2 3 4)
Is also a list

    (add-all 1 2 (add-all 3 4))
Is also a list

In the above example the nested list is data to the main list. And that is all there is to it.

In the last example, code just went in as data to another list (code as data). Which is also code.
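For readers outside Lisp, the same "nested list is data to the outer list" idea can be sketched with a toy evaluator over nested Python lists (`add-all` is a made-up operation; this is purely illustrative):

```python
def evaluate(expr):
    # A nested list like ['add-all', 1, 2, ['add-all', 3, 4]] is both
    # data (a plain Python list) and code (something we can evaluate).
    if not isinstance(expr, list):
        return expr  # atoms (numbers) evaluate to themselves
    op, *args = expr
    if op == 'add-all':
        return sum(evaluate(a) for a in args)
    raise ValueError('unknown operation: %s' % op)

print(evaluate(['add-all', 1, 2, ['add-all', 3, 4]]))  # → 10
```

The inner list is just data handed to the outer one until `evaluate` walks into it, which is the point being made above.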


This is a terrible example. I see no reason why what you just gave isn't isomorphic to

    sum(1,2,sum(3,4))
Which is not "code as data". There's no reason for me to believe that the data isn't evaluated before being passed to the outer list.


> I see no reason what you just gave isnt isomorphic to

> sum(1,2,sum(3,4))

> Which is not "code as data"

Says who? In Prolog it would be. Whether or not some syntax is a data structure literal depends entirely on the syntax of your language. Your example is isomorphic if you make the corresponding change to your language's data structure literals.


Well, but that's my point. The given example doesn't take advantage of any code-as-data features. It's entirely transparent to the user whether code is data or not.

As a motivating example for why one should use lisp, or what lisp brings to the table, it's very poor, because it is wholly unmotivating and doesn't appear to demonstrate anything unique.


True, but.

    sum(1,2,sum(3,4))
isn't a list, so sum() isn't a list member and things passed to it aren't lists either.

Lisp code is more consistent.


No, it's equally consistent. In lisp, everything is a list. In the example I just gave, everything is a scalar.

Like I said, this is a very bad justification for lisp.


Not everything is a scalar in the example you mentioned: sum() is a function and the other things are numbers.

In Lisp there is only one rule: everything is a list, and the first element of the list is an operation applied to the remainder of the list. This is basically the whole language.


>In Lisp there is only one rule: everything is a list

Well no. `print` is not a list. `4` is not a list. `(print 4)` is a list containing the scalars `print` and `4`. The mechanics of lisp then say "apply the thing in the first position to the remainder of the list", but that's just notation.

In python, `sum` is a function that can apply to a splatted sequence of scalars (note that it can also apply to a list, but this is different), so `sum(1,2,3,4,5)` works. It returns a scalar value.

In lisp, `+` is a function that can apply to a list. It returns a scalar. `(+ 1 2 3)` resolves to `6`, not `(6)`. This is no different than python (or C actually, for this example).

I understand lisp, I understand why "code is data" is a powerful tool. Your example is a bad one, it does not demonstrate why "code is data" is a powerful tool, because it does not actually demonstrate that code is data.


You seem to be suggesting

    (sum 1 2 3 4 5)
and

    sum(1, 2, 3, 4, 5)
are the same because they both return 15. That's just one way of looking at things, and it holds only if that is the only way of looking at things.

Structurally, (sum 1 2 3 4 5) and sum(1, 2, 3, 4, 5) are NOT similar. They are not even close; in fact, they represent opposite schools of thought.

There is a reason we insist on everything being a list and being uniform that way: it means structures can eventually be used to modify themselves (recursion and macros).

Which is why Lisp's (sum 1 2 3 4 5) and Python's sum(1, 2, 3, 4, 5) can't be judged on what they return alone; they have to be judged on the structure they represent.

There is an interesting macro in Clojure named '->'.

This basically helps:

    (task3
      (task2
        (task1 work)))
to be

    (-> work
        task1
        task2
        task3)
These sorts of transformations are not easy with sum(1,2,3,4,5) kind of syntax, or even possible in many cases.


    task3(task2(task1(work)))
can, with the function

    from functools import reduce

    def apply(data, *funs):
        return reduce(lambda d, f: f(d), funs, data)
be rewritten as

    apply(work, task1, task2, task3)
It's possible in exactly the same set of cases, namely those where each task takes a single argument/is provided as a partial.

I expect that `->` is implemented in Clojure in much the same way (and [1] implies that, modulo some scaffolding and error checking, it is, although with a recursive, inlined implementation of reduce).

[1]: https://github.com/clojure/clojure/blob/08e592f4decbaa08de57...
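For what it's worth, a runnable version of that pipeline idea (renamed `apply_pipeline` here to avoid shadowing anything, and using functools.reduce with an initializer so the varargs tuple is handled cleanly):

```python
from functools import reduce

def apply_pipeline(data, *funs):
    # Thread data through each function in turn:
    # apply_pipeline(x, f, g, h) == h(g(f(x)))
    return reduce(lambda d, f: f(d), funs, data)

inc = lambda x: x + 1
dbl = lambda x: x * 2
print(apply_pipeline(3, inc, dbl))  # dbl(inc(3)) → 8
```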


Sorry, but again, using a workaround doesn't mean they are the same. When I say same, I mean structurally.


But they are structurally the same that's my point. I can implement -> practically directly. There is no structural difference that you've demonstrated.

That it's not idiomatic or common in Python doesn't mean that there is some structural difference. Basically, the transformations you're describing are doable with regex, which is the opposite of powerful.

What you've done is take code-is-data, which is powerful, and confuse it with "supports varargs", which is less powerful, and claim that the second is unique and powerful when it is neither unique nor uniquely powerful.


sum(1, 2, 3) and (sum 1 2 3) are just different read/print notations that can map to exactly the same data structure and therefore "do" everything the same way.

The -> macro invocation would just look like ->(work, task1, task2, task3).

There are good reasons for considering f(x, y) to be a bad notation compared to (f x y); but this isn't one of them.


That was just an example. In general, those representations are not the same. Structurally.


They're only not the same when one is in Lisp and the other isn't.

E.g. you load "infix.cl" into your Common Lisp so that you then have the #I read macro that gives you #I( f(x, y) ), then they are the same. This just denotes (f x y). It would be suboptimal for that to be any other way:

https://www.cs.cmu.edu/Groups/AI/lang/lisp/code/syntax/infix...

Now let's think about how silly it is to try to write a let block using this syntax:

  (let ((a 1) (b 2)) (+ a b))
becomes

  #I(  let(a(1)(b(2)), +(a, b)) )
In the ((a 1) (b 2)) part, the (a 1) becomes a(1). That then looks like a function applied to the argument b(2). Things that should be on the same level aren't.

It's like using Roman numerals instead of decimal.

(f x y) is one size fits all; perhaps not optimal for anything in particular, just for everything.


This was exactly what I was trying to say. The list notation just fits so nicely with everything else. And though we could talk about the possibilities in the mathematical sense, it's hard to do many things you do in Lisp in non-Lisp languages.


> you're trying to explain lisp using lisp

You may prefer this[1] explanation that uses XML (ant) to explain the basic idea.

[1] http://www.defmacro.org/ramblings/lisp.html


I think there’s a much simpler example of how code is data:

    (hello world)
That’s a list containing two symbols. So it’s data. However, if I evaluate that list, it will call the function “hello” with the argument “world”. So it’s also code.

Incidentally, lists and function calls are identical in lisp, hence code is data.

I think the article just complicates it by introducing macros.


    '(hello world)
is a list

    (hello world)
is a function call


Typically when writing about lisp code, it is also correct to talk about how "read" sees the expression, to avoid some confusion about what happens after "eval". That's how I interpret the parent comment when it says that (hello world) is a list of two elements.


Well, technically they're both lists, it's just that the ' prefix makes it a list literal rather than having the compiler try to macro expand it. A bit pedantic, but of massive importance when macros are in play.


Evaluate it, not macro expand it ;)


That example was great - I've been inspired by LISP and have been playing around with my own idea of an HTML-building set of functions already this week: http://staticresource.com/html-js-4.html

It's great to see such a relevant example :D


See also Mark Miller’s excellent blog post on the same topic, which really turned on a few lightbulbs for me: https://tekkie.wordpress.com/2010/07/05/sicp-what-is-meant-b...


For me, "code as data" means that the code I write provides a particular structure (whether to create a report, go through an editing task, or find particular information in the computer). I then use a table-driven "data" approach to finding the particulars that are significant for this particular class of information. That might mean that I have the unique code stored in a table which I look up using a relevant key, hence "table-driven code", or I have "setup code", "one iteration", and "teardown code" accessible through an indirect call, because the structure of the code to do the task can stay the same. It is also common to have a general "event" fire at the end of the code to inform any other programs that need to know this data structure or real-world event has been accounted for. I think this approach is amenable to better testing, and can be viewed as a higher-level abstraction similar to LISP macros (even though my language of choice, MUMPS, doesn't have LISP-style syntactic macros).
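A minimal Python sketch of the table-driven shape described above; the table keys and handlers are hypothetical, and the fixed setup/iterate/teardown structure is what stays constant:

```python
# Hypothetical handler table: per-class particulars live in data,
# while the surrounding control structure stays the same.
handlers = {
    'report':  {'setup': list, 'step': lambda acc, x: acc + [x * 2]},
    'summary': {'setup': list, 'step': lambda acc, x: acc + [x]},
}

def run(kind, items):
    h = handlers[kind]              # look up particulars by key
    acc = h['setup']()              # "setup code"
    for item in items:
        acc = h['step'](acc, item)  # "one iteration"
    return acc                      # "teardown"/event firing would go here

print(run('report', [1, 2, 3]))  # → [2, 4, 6]
```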


As someone who uses Clojure for only side projects, macros seem to get a lot of attention, both good and bad, for something I very rarely write. Maybe I'm missing something though, and programmers using these languages professionally resort to them more often than I do.


Code-is-data also enables structural editing: instead of typing strings we can transform data. There are gifs of that here: https://cursive-ide.com/userguide/paredit.html


Does anyone have some examples of great products that have been built largely by leveraging lisp macros?

emacs?


>Does anyone have some examples of great products that have been built largely by leveraging lisp macros?

Common Lisp itself.

The Common Lisp language is not only made up by data types, functions and control constructs: Many of the typical keywords a Common Lisp programmer would use, like defun (for defining a function), are macros themselves.

Many control constructs are macros as well.

So a good part of the language is built using macros as well.

The compiler itself (compiling Lisp to machine language) is also mostly made up of macros in many Lisp implementations.

Thus, the answer is: Common Lisp implementations are built largely by leveraging Lisp macros. And I'd say those are "great products."



Would it be possible to do the same thing in Haskell using partial function application, where the first argument is the name of the tag and the second argument makes the "<" ">" tags and applies string concatenation?


Absolutely. However, I will note that, while Haskell makes partial application very easy, there's also nothing Haskell-specific about it: you can write curried functions in many languages, possibly by using helper functions or just manually transforming your code:

    def tag(tag):
        def render(*body):
            return '<{tag}>{body}</{tag}>'.format(
                tag=tag, body=''.join(body))
        return render

    html = tag('html')
    body = tag('body')
    h1 = tag('h1')
    p = tag('p')
I also should note that, while the macros in this case could be obviated by approaching the problem slightly differently, I don't think that's true of Lisp macros in general. I don't want to be too harsh to this particular blog post—coming up with a two-or-three line motivating example that's accessible to a general audience without being contrived is very hard! But one consequence of the slightly-contrived nature of the example in this post is that it's very easy to come up with non-macro ways of solving the same problem, as demonstrated above.
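To make the comparison concrete, here is the curried version composed end to end (the `tag` helper is restated so the snippet stands alone):

```python
def tag(name):
    # Returns a renderer closed over the tag name, playing the role
    # the deftag macro generated in the article.
    def render(*body):
        return '<{t}>{b}</{t}>'.format(t=name, b=''.join(body))
    return render

html, body, h1, p = tag('html'), tag('body'), tag('h1'), tag('p')

print(html(body(h1("Hello World"), p("How's it going?"))))
# → <html><body><h1>Hello World</h1><p>How's it going?</p></body></html>
```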


You could get most of the value in Haskell using higher-order functions, but note that the macro-based code generates a complete function definition from (deftag tagname). Since definitions aren't values in Haskell (code isn't data), you can't just write

  deftag "html"
but instead would have to write something like

  html = tag "html"
I'm not sure whether you could get around that limitation using Template Haskell (never tried it).


Haskell comes from a family of languages called ML. And ML was known as Lisp without parentheses.

So it's Lisp all over again.


I don't know that I've ever seen someone call ML "Lisp without parentheses". Regardless, even with some familial relationship and a few principles in common (like a commitment to functional programming) the two lineages are far more different than they are alike: Lisp has a powerful dynamic core and intricate metaprogramming capabilities, while Haskell builds on a powerful type system and non-strict evaluation. Saying that Haskell is "Lisp all over again" is sort of like saying that cars are "trains all over again": a statement so reductive, it's somewhere between wrong and nonsensical!


>>I don't know that I've ever seen someone call ML "Lisp without parentheses".

My bad. It's called Lisp with types. From: https://en.wikipedia.org/wiki/ML_(programming_language)

Now, Haskell does look like an impractical Scheme. Beyond that, today if you want it, you have Typed Racket.

For a lot of people, you could just go ahead and use Racket/Lisp instead of Haskell.


This is exactly what we mean as "code as data" and why it is so profound and has such exciting potential:

vimeo.com/208899228/b9bc9eaaa4#t=13m50s

"The Future of Programming and Databases" ^ JSRemote Conf / NodeJS Italy


The best language to understand this is Prolog.

dog(fido).

This is data and it's also code.

It's data and code that states that there's a dog called fido.

I could have written it as

exists_dog(fido).


Why call `(map eval inner)`?


In Java, you have code as a string. That is, your code is represented using a big string. In Lisp, your code is represented using the list data structure.

So it's really code-as-data-structure.

A string is difficult to parse and modify; inserting things in the middle, removing elements, changing the order of the words, that's all really difficult with a string. So if you want to transform Java code, it's going to be hard and error-prone.

A list is easy to manipulate in contrast. Inserting elements in the middle is trivial, and so are deletes and swaps. So in Lisp, if you want to transform code it's pretty easy.
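A tiny illustration of that ease, using nested Python lists as a stand-in for Lisp code (the expression and the transform are made up):

```python
# Code represented as data: ['add', ['mul', 2, 3], 7]
# stands for (add (mul 2 3) 7).
expr = ['add', ['mul', 2, 3], 7]

def swap_args(e):
    # Structural transform: swap the two operands of a binary expression.
    op, a, b = e
    return [op, b, a]

print(swap_args(expr))  # → ['add', 7, ['mul', 2, 3]]
```

Compare doing the same over the string "add(mul(2, 3), 7)": you would need to find the matching parentheses first.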

Meta-programming is when you write a program that writes a program. An example: say you wanted to add a semicolon at the end of all your lines of code. You need a macro to do it for you. A macro is a program that acts upon your code to transform it. Eclipse has them. So now it's really easy to add a semicolon at the end of each line. What if you wanted to add a comma between all words in a selection? Now it's trickier if your macro operates over a big string; you might need a regex, for example. This is meta-programming, though. Instead of adding the commas yourself, which are required for the program to run, you write another program to add them for you.

Now if all the words were elements in a list, that macro would be a lot easier to write.

This is in essence what code-as-data(structure) means. It's in contrast with code-as-string. You don't have to choose lists as your data structure either, as long as it's something that allows you to represent a Turing-complete program and is easy to manipulate.

Now, homoiconicity is the fact that your text of code looks like a data structure too, making it trivial to parse into one. So back to my example: you could parse the Java code string into a list, then add commas, and then convert it back to a string. But Java code doesn't map logically into a list. Some constructs don't nest like lists, and how do you define what goes into each node of the list? Do you group public and String together? In Lisp, the syntax is an unambiguous AST already; it's thus trivial to parse into a list of lists. So you can easily get the AST you need to add commas where it makes sense.
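To give a feel for "trivial to parse", here is a toy s-expression reader in Python (a sketch, not a full Lisp reader; no strings, quoting, or error handling):

```python
def parse(src):
    # Tokenize by padding parens with spaces, then read recursively.
    tokens = src.replace('(', ' ( ').replace(')', ' ) ').split()

    def read(tokens):
        tok = tokens.pop(0)
        if tok == '(':
            lst = []
            while tokens[0] != ')':
                lst.append(read(tokens))
            tokens.pop(0)  # discard the closing ')'
            return lst
        try:
            return int(tok)   # numbers become ints...
        except ValueError:
            return tok        # ...everything else stays a symbol (string)

    return read(tokens)

print(parse("(add-all 1 2 (add-all 3 4))"))
# → ['add-all', 1, 2, ['add-all', 3, 4]]
```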

Finally, there's a third aspect. Code-as-data also implies that your language can accept code as an argument in the form of raw data. The best way to think of it is: how would you send a function over the wire to a program and have that program run the code you sent? You need a way to serialize that function, which is code, into raw data that can be transmitted over the wire. The receiving program doesn't have that function defined, so it needs to know how to deserialize it, but also at runtime it must be able to take this raw data, which represents code, and be able to parse it, compile/interpret it and execute it.

Think of SQL: SQL is often used in a code-as-data way. You have "select * from %s". You'd take this as a string, use a string replace, and replace %s with something the user picked in a drop-down. At runtime, you are dynamically creating the SQL code, and once you have it, running it. You might have methods that accept SQL and return SQL. Again, your SQL is a big string, which isn't ideal, but this is still an example of code as data. Now in Java, you cannot do that with Java code. Java does not have this concept of code-as-data. In other words, there's no eval.
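The SQL-as-string point generalizes to any language with an eval. A hedged Python illustration of code arriving as raw data and being executed at runtime (never do this with untrusted input):

```python
# Code serialized as data, e.g. received over the wire as text.
source = "lambda x: x * 2"

# eval turns the raw data back into runnable code at runtime;
# Java has no built-in equivalent for Java source.
double = eval(source)

print(double(21))  # → 42
```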

So when you combine all three aspects (a homoiconic syntax that parses easily and logically into a data structure which is easy to manipulate, and where you can execute data representing code at runtime and pass it around to other functions, even over the wire), you get a very powerful combo that turns into a meta-programming powerhouse. This is the strength of Lisps, the one strength all Lisps share.


One of the failures of conventional programming libraries is that parsing libraries such as yacc work in only one direction. In 2018 we could easily have libraries that work both ways by default, but we don't. Bidirectional parsing is great for code generation and opens up a lot of things you could do easily, but because common parsing libraries are unidirectional, people aren't aware of what you can do and don't clamor for bidirectional parsing.

It is very possible and practical to parse conventional languages down to an AST, work on the tree, and run that code. See

https://github.com/lihaoyi/macropy

Sometimes I wonder if the LISP cult is just trying to pretend Noam Chomsky was never born.


>It is very possible and practical to parse conventional languages down to an AST tree, work on the tree, and run that code.

Yes, of course.

>Sometimes I wonder if the LISP cult is just trying to pretend Noam Chomsky was never born.

Life is great at our cult, you see?

Now, seriously, what you propose is to transform a "conventional language" (say, Java) to an AST, work on it, and then spit out the conventional language again. This is fine.

The problem is that in those cases, when you write a "macro" (an AST->AST function), you then need to learn:

* the semantics and structure of the AST

* all the functions/methods/classes your Conventional Language tells you to use for manipulating the AST

The point of Lisp is that you are writing the code in what is an AST as well.

And this AST is written as Lisp lists.

And Lisp is very good at manipulating lists, it has a ton of built-in functions for them.

And thus, for writing the AST->AST function ("macro"), you don't need to learn anything new, if you already know Lisp.

An additional bonus is that your macros are mostly clear, easy to read. Because they are written in a mostly similar way to the rest of your code.

Another bonus is that Lisp was created with AST->AST transformation in mind from the ground up. Macros work almost transparently; you can specify them to work at read time or at compile time. They can also work at run time if necessary. You can do AST->AST at runtime and have the Lisp implementation compile it to machine language at runtime.

This isn't so easy (or practical) to do with conventional languages...


In Lisp, my understanding is that while you can manipulate the AST directly, you will likely introduce a bug unless you use the special functions for handling hygienic macros? And each flavor of Lisp has its own way of doing hygienic macros.


If you don’t know what you’re doing, you can introduce bugs regardless of programming language.


>>use the special functions for handling hygienic macros? And each flavor of Lisp has its own way of doing hygienic macros.

There are two main flavors of Lisp: Common Lisp and Scheme.

Scheme already has hygienic macros as default; and writing hygienic macros is trivial (and easy) in Common Lisp.

>you will likely introduce a bug unless

You won't "introduce a bug" if what you need is specifically an unhygienic macro. They have their uses, so it's good to be able to write unhygienic macros as well.


You can use gensym in common lisp to introduce a new symbol that is guaranteed not to exist already. If you do this your macros become hygienic. In my experience this isn't that big a burden. And sometimes you want to access existing symbols through your macros.
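For non-Lispers, here is a rough Python analogue of what gensym guarantees (the naming scheme is made up; CL's gensym actually returns uninterned symbols, which is a stronger guarantee):

```python
import itertools

_counter = itertools.count()

def gensym(prefix='g'):
    # Each call returns a fresh name, so macro-expanded code can bind
    # temporaries without capturing the user's variables.
    return '{}__{}'.format(prefix, next(_counter))

print(gensym(), gensym())  # e.g. g__0 g__1
```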


You have the liberty to write anaphoric macros in CL and with it comes the responsibility not to write them by accident.


The way I look at it, it is difficult enough to do "1d" programming correctly. This is "2d" programming, in a sense - programming at multiple levels, if I understand correctly. I would absolutely hate to try to parse code from someone else written in this manner. In my mind, this is wasted brain cells. Professional use of software favours simple, clean, maintainable code. Simple code saves brain cells for the actual problem, rather than the implementation of said problem. Or do I misunderstand what you mean by bidirectional parsing?


This is absolutely correct, and considerate Lisp writers make a point of using macros only when necessary for exactly this reason.

That said, Lisp writers also use macros every day -- for example, in Clojure, only `if` is defined as a special form, and other conditional operators (`when`, `cond`, `case`, `condp`, `if-not`, etc.) are implemented as macros that use `if` under the hood. Good macros are almost necessarily difficult to implement, because they're deployed to solve problems that functions can't, but if done well they can be easy to read and use.


I totally agree. I'd love to see more languages supporting bidirectional code transforms.

I've used jscodeshift [0] to refactor large JavaScript codebases, and it's absolutely amazing. It's incredibly empowering to write code to update your code. For example: unhappy with your test assertions library? Safely migrate all your tests with a quick script.

The React team has a react-codemod [1] repo which has scripts to help you update to newer APIs. That means they can make changes and iterate on improvements without leaving people behind. Quite frankly, I don't see why more tools, libraries, and frameworks wouldn't do the same.

[0] https://github.com/facebook/jscodeshift

[1] https://github.com/reactjs/react-codemod


If by code you mean actual text (as opposed to macropy, which generates ASTs that get directly compiled), what does that open up? I've found code generation to be mostly annoying, unless it's just fixing up code written by humans. What's the point of generating a bunch of code, so that it can be re-parsed to be executed? Might as well skip the intermediate step.

To me it seems like having the computer physically press its own keys rather than just generating virtual events.


For many of the databases I use I've written libraries that can parse the query language (SQL, SPARQL, Arangodb) do analysis and transformations on the queries and convert them back to text to send on to the server.

Sometimes you are working with objects that are intrinsic to your runtime and there is no need to generate text, but sometimes you are modifying somebody else's configuration file and you need to. Or maybe you are coding in Python and you want to generate C, etc. Or maybe you are busting HTML down to a DOM tree so you can template with a set of modification operators like you do in JavaScript as opposed to various forms of text templating that have the same problems as C macros.

In Java you can load classes from bytecode (even written in Clojure!) which opens up a huge opportunity for runtime code generation. The commonly used Spring framework uses it heavily.

Other times though you want to package some objects in a JAR to give to some system that doesn't have your code generator. The generated Java files work great with an IDE in terms of cross-referencing, autocompletion, documentation lookup, etc.


Can you please elaborate a bit more on what was such a nuisance when working with this method? I'm currently building something out that is using this technique to (hopefully) save lines of code, dynamic user form generation based on permissions, etc...


I oversimplified a bit, but the usual flow I've experience was:

The programmer runs the generator, which outputs some file(s) with code, which the programmer proceeds to edit: changing some areas, removing others, etc. Then, for some reason, the input to the generator (in your case, the permissions) or the generator itself changes, and so it has to be run again. Now the programmer is left with the task of merging all the changes made to the original files with the new generated output. Alternatively, the programmer says "to hell with that" and updates the code manually instead of using the generator, in which case the tool was useful exactly one time.

As PaulHoule correctly pointed out, though, there are many exceptions to this; generally, if the programmer has no reason to manually edit the files, then the problem is avoided. In some of those cases, though, there's no point in generating textual code, which will then be converted to some other format; you can output that format directly.


What do you actually mean by "bidirectional parsing" - re-generating the original input, or something equivalent to it, from the AST?


Yep.

"Something equivalent" is pretty easy.

To be able to regenerate the original input (spacing and other foibles) is a bit harder but also possible. Commercial vendors developed tools that can do this in the 00's to support UML and UML-like tooling.

Related to this is getting sensible output structures. For instance, many parsing libraries based on parser combinators output a "lispy" combination of lists/dicts/scalars which is dependent on the details of the grammar, for example:

https://github.com/neogeny/tatsu

it is particularly obnoxious when operator precedence is implemented by creating extra nodes in the parse tree, and even more obnoxious when you have to copy those already over-complicated structures in your grammar to deal with complex context.

Even in 2018 people are still using some compiler compilers that use a "callback hell" interface that dates back to yacc. That might have been OK in 1978, but it is one of the many unergonomic things that put compiler compilers out of reach for many application programmers.


You want a decompiler that produces the original language, not just marked up assembly. For pure VM or interpreted languages, you're right this doesn't seem impossible but has unacceptable tradeoffs for most people. For starters, most companies like that compilation is a one-way action.

Whitespace is easy, btw. Just enforce code style in the language like gofmt. You don't need to preserve what you can derive programmatically. As a bonus, gofmt is awesome.


Not a decompiler, but what would otherwise be called a "pretty-printer" of the AST or intermediate representation.

I've definitely got a use case for an XML version of this - a means of modifying a large file without affecting its indentation or re-ordering attributes in areas that have not changed.


Check out react-codemod [0] for practical React / JavaScript examples. To give an example with explanation, in the v15.5.0 release they deprecated React.createClass [1] in favor of regular ES2015 classes, with a fallback to a module.

[0] https://github.com/reactjs/react-codemod

[1] https://reactjs.org/blog/2017/04/07/react-v15.5.0.html#migra...


> clamor for bidirectional parsing

I get this from Lua. "Everything to a bytecode and back again" ..


(2014)

"Code as data" is a wonderful thing but I also enjoy writing Lisp, so take that with a grain of salt.


> "(2014)"

Thought it was odd when I saw the name "Nimrod", that explains why.


One of the greatest lies that Lispers tell is: "Lisp has no syntax". Syntax is defined as "the structure of statements in a computer language."

What Lisp has, and is, is a syntax for describing an AST. If you get the syntax wrong, your program won't run. And even that syntax isn't uniform across the various Lisps (some throw in special characters and constructs here and there to make dealing with common structures easier, etc.).
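To make the "syntax for an AST" point concrete, here's a toy s-expression reader sketched in Python (my own illustration, not taken from any real Lisp): the program text reads straight into nested lists that another program can walk and rewrite like any other data.

```python
def tokenize(text):
    """Split s-expression text into parens and atoms."""
    return text.replace("(", " ( ").replace(")", " ) ").split()

def read_sexpr(tokens):
    """Read one s-expression from a token list into nested Python lists."""
    token = tokens.pop(0)
    if token == "(":
        form = []
        while tokens[0] != ")":
            form.append(read_sexpr(tokens))
        tokens.pop(0)  # discard the closing ")"
        return form
    return int(token) if token.lstrip("-").isdigit() else token

code = "(select users (where active) (limit 5))"
tree = read_sexpr(tokenize(code))

# The program is now ordinary data...
print(tree)        # ['select', 'users', ['where', 'active'], ['limit', 5]]
# ...and trivially manipulable:
print(tree[3][1])  # 5
```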

It does make it somewhat easier to manipulate code as data if you wish to, and it makes reasoning about some parts of your code somewhat easier. But is it as good as it's glorified to be?

The author links to Korma as an example of the power of code as data, and macros:

    (select users 
      (where {:active true})
      (order :created)
      (limit 5)
      (offset 3))
:-\

To me, it's a chain of 5 functions which are neither shorter to write nor better than Java's jOOQ [1]:

       create.select(a.FIRST_NAME, a.LAST_NAME, countDistinct(s.NAME))
             .from(a)
             .join(b).on(b.AUTHOR_ID.eq(a.ID))
             .join(t).on(t.BOOK_ID.eq(b.ID))
             .join(s).on(t.BOOK_STORE_NAME.eq(s.NAME))
             .groupBy(a.FIRST_NAME, a.LAST_NAME)
             .orderBy(countDistinct(s.NAME).desc())
             .fetch();
Look, it even has a similar number of parentheses ;)

[1] Example from https://www.jooq.org/doc/3.10/manual-single-page/#sql-buildi...


That particular select instance could indeed just be an ordinary function, whereby the (where ...) and (order ...) are just evaluated argument expressions, also calling ordinary constructors for objects that influence the query. It doesn't really demonstrate the ability to manipulate syntax.

Things start to get more interesting when some of the inputs need to be lambdas. Still, the sugaring of those can be in the arguments, and select can remain a function. Now suppose that a clause can, say, bind a lexical variable that a later clause can somehow usefully refer to; things like that.
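The "ordinary function" version is easy to sketch in Python (a hypothetical mini-API, not Korma's actual implementation): each clause evaluates to a plain value, and `select` just applies them in order. What a function like this cannot do, and where a macro earns its keep, is introduce a new variable binding that a later clause can see.

```python
def where(pred):
    return lambda rows: [r for r in rows if pred(r)]

def order(key):
    return lambda rows: sorted(rows, key=lambda r: r[key])

def limit(n):
    return lambda rows: rows[:n]

def offset(n):
    return lambda rows: rows[n:]

def select(rows, *clauses):
    """An ordinary function: each clause is an already-evaluated argument."""
    for clause in clauses:
        rows = clause(rows)
    return rows

users = [{"id": 3, "active": True},
         {"id": 1, "active": False},
         {"id": 2, "active": True}]

result = select(users,
                where(lambda u: u["active"]),
                order("id"),
                limit(5),
                offset(1))
print(result)  # [{'id': 3, 'active': True}]
```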


Unless I’m missing something, everything you said applies to the jOOQ code.


> One of the greatest lies that Lispers have is: "Lisp has no syntax".

It's wrong; Lisp has a lot of syntax.

> What Lisp has, and is, is a syntax to describe an AST.

That's wrong, too. Lisp syntax does not describe an AST.

> To me, it's a chain of 5 functions which are neither shorter to write nor better than Java's jOOQ [1]:

That's not the point, the Lisp version is on another level:

* it does not expose its implementation

* the macro allows this code to be translated at something like compile time

The Java code is a bunch of function invocations that create a string, usually at runtime.
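To make the runtime/compile-time distinction concrete, here is a hypothetical runtime builder sketched in Python (roughly the shape of the jOOQ style, not its actual API): the query string is assembled from function calls every time this runs, whereas a macro could emit the finished string once, before the program runs at all.

```python
def build_select(table, where=None, order=None, limit=None, offset=None):
    # Runtime approach: the query string is assembled on every call.
    parts = [f"SELECT * FROM {table}"]
    if where:
        parts.append(f"WHERE {where}")
    if order:
        parts.append(f"ORDER BY {order}")
    if limit is not None:
        parts.append(f"LIMIT {limit}")
    if offset is not None:
        parts.append(f"OFFSET {offset}")
    return " ".join(parts)

sql = build_select("users", where="active = true",
                   order="created", limit=5, offset=3)
print(sql)  # SELECT * FROM users WHERE active = true ORDER BY created LIMIT 5 OFFSET 3
```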


Oddly enough, this code is pretty much completely paste-compatible with Lua, too. (Except the ; of course ..)



