Homoiconicity isn’t the point (2012) (calculist.org)
75 points by jxub 8 months ago | 76 comments



In my opinion, this article puts too much emphasis on reading, and too little emphasis on actually reasoning about programs in homoiconic languages like Prolog and Lisp, and due to this imbalance the conclusion is not sufficiently justified.

It is true: Being able to "read without parsing" is definitely nice.

But that is only a subset of the advantages that a homoiconic language gives you. An at least equally important advantage is that programs in homoiconic languages are typically very easy to reason about with the built-in mechanisms of the language itself.

For example, Prolog programs are readily represented as Prolog terms, and can be easily reasoned about by built-in mechanisms such as unification.

Since I regard it as a key advantage of homoiconic languages that their abstract syntax is completely uniform and can typically be easily reasoned about within such languages, I disagree with the main point that the article is trying to make.

One interesting fact about homoiconicity is that extremely low-level languages (like assembly code) and extremely high-level languages (like Prolog) are homoiconic, yet there is a large gap "in the middle", where there are many languages (like Java, C, Python etc.) that lack this property.


Agreed; I've written Python code that processes Python code (both using ast and redbaron), and the result was quite opaque, unlike my metaprograms in Prolog.
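For anyone curious what that looks like, here's a minimal sketch using only the stdlib ast module (the transformation itself is made up for illustration): a NodeTransformer that negates every integer literal, then recompiles and runs the module.

```python
import ast

class NegateInts(ast.NodeTransformer):
    """Rewrite every integer literal n into -n."""
    def visit_Constant(self, node):
        if isinstance(node.value, int) and not isinstance(node.value, bool):
            return ast.copy_location(ast.Constant(value=-node.value), node)
        return node

source = "result = 1 + 2 * 3"
tree = ast.fix_missing_locations(NegateInts().visit(ast.parse(source)))

# Execute the transformed module and inspect the result.
namespace = {}
exec(compile(tree, "<ast>", "exec"), namespace)
print(namespace["result"])  # -1 + (-2) * (-3) == 5
```

Even this tiny rewrite needs a visitor class, location bookkeeping and an explicit compile step, which is roughly the opacity being described.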

In fact, the macropy project[1] offers the "read" step (by abusing the import system), and while using its macros is pretty cool, I don't think the way the macros themselves are implemented is very nice.

[1] https://github.com/lihaoyi/macropy


> One interesting fact about homoiconicity is that extremely low-level languages (like assembly code) and extremely high-level languages (like Prolog) are homoiconic, yet there is a large gap "in the middle", where there are many languages (like Java, C, Python etc.) that lack this property.

There are languages like picolisp and guile that try and fill that gap a little :)


I'd like to know more examples of this



Have you not heard of Clojure?

https://clojure.org


> very easy to reason [sic] about by built-in mechanisms in that language.

Languages allow you to express your reasoning, but they don't do the reasoning by themselves. Also, there is no conclusive evidence that homoiconic languages have simpler semantics, especially of the denotational kind.


I think the meaning is that fewer things are implemented by bringing in outside capabilities. In Lisp, everything can be expressed in terms of other Lisp code, typically directly so. In other languages that are not as macro friendly, most keywords get the explanation "this is how the computer will act", followed by explanations of new behaviors.


> In Lisp, everything can be expressed in terms of other Lisp code.

Um, and how exactly are primitive forms defined?

> most keywords get the explanation of "this is how the computer will act" and then explanations of new behaviors.

Have you heard of Hoare logic? The meaning of ordinary ALGOL-style imperative programs can be given in terms of relating preconditions to postconditions. Suppose that you have the Hoare triples:

    {P} foo {Q}
    {Q} bar {R}
Then you can derive the Hoare triple:

    {P} foo; bar {R}
Note that `Q` is not mentioned at all. Hence, any implementation is free to translate the program

    foo; bar
into something that doesn't have `Q` as an intermediate state.


You seem to be arguing past me. If you are just upset that I said "everything" instead of "most things": yeah, obviously some things are defined elsewhere. To that end, how things are actually implemented takes a trip to assembly. (I mainly blame that I'm writing this on my phone, often while on the bus.)

My point was that you don't typically see C constructs explained in terms of other C constructs. This is quite common in Lisp: Lisp constructs explained in terms of other Lisp constructs, in large part because there are so few constructs.

You showing me that you can explain things using another logic is kind of my point. It is awesome that you can do this, and I recommend the skill. It is still not showing C or Java or Haskell or whatever in terms of themselves.

Note that I think you actually can do this, in large part. It is not typically done, though.


> To that end, how things are actually implemented takes a trip to assembly.

Don't confuse “defined” with “implemented”. This is the entire point to having an axiomatic semantics!

> My point was that you don't typically see c constructs explained in terms of other c constructs.

Languages can't be entirely defined in terms of themselves. At some point you need to drop down to something else. If most of Lisp is defined in terms of other Lisp constructs, that is perfectly fine, but, for my purposes, i.e., proving things about programs, there are two mutually exclusive possibilities:

(0) The semantics of Lisp is the semantics of its primitive forms, and derived forms are just Lisp's standard library.

(1) So-called “derived” forms have an abstract semantics of their own right, and their implementation in terms of so-called “primitive” forms is, well, an implementation detail.

So, my answer to “most of Lisp is defined in terms of Lisp itself” is “that's cute, but how mathematically elegant is the part of Lisp that is not defined in terms of itself?”


I said explained. Not defined. Not implemented. Definitely not proved. Just explained. There is typically exposition, as well.

So, by all means, keep arguing points I'm not making. I was offering what I suspect the parent post meant by it being easier to reason using the mechanics of the language. Nothing more.


> I was offering what I suspect the parent post meant by it being easier to reason using the mechanics of the language.

And it still doesn't make sense. “Reasoning about programs” is making inferences about their meaning, i.e., deriving judgments from prior judgments. How exactly do homoiconic languages make it any easier to make inferences about the meaning of programs, given that homoiconicity is largely a property of how concrete and abstract syntaxes are related to each other? (Not that homoiconicity makes things more difficult either. It is just completely unrelated to reasoning about programs.)


I learned a lot of math from other math. Even better if it was in the same symbology that I was already used to.

Specifically, learning algebraic manipulation is typically taught by showing the basic math that you are abstracting over. Multiplication is often taught in terms of addition.

Are there deeper understandings? Of course! I am again just saying that I see the appeal for this method and suspected that was the point that sparked confusion.


For one particular example where homoiconicity makes reasoning about programs easier, consider an important reasoning method called abstract interpretation:

https://en.wikipedia.org/wiki/Abstract_interpretation

Using abstract interpretation, you can derive interesting program properties. The uniformity and simplicity of Prolog code, as well as its built-in language constructs like unification and backtracking, make it especially easy to write abstract interpreters for Prolog.
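As a toy illustration of the idea (not the paper's method, and in Python rather than Prolog): a sign-domain abstract interpreter over arithmetic expressions represented as uniform nested tuples. All names here are made up for the example.

```python
# Abstract interpretation over the sign domain {'-', '0', '+', '?'},
# where '?' means "sign unknown". Expressions are nested tuples,
# e.g. ('+', ('*', 'x', 'x'), 1), echoing a uniform homoiconic syntax.

def sign(n):
    return '+' if n > 0 else '-' if n < 0 else '0'

def abs_mul(a, b):
    if a == '0' or b == '0':
        return '0'
    if '?' in (a, b):
        return '?'
    return '+' if a == b else '-'

def abs_add(a, b):
    if a == '0':
        return b
    if b == '0':
        return a
    return a if a == b else '?'   # mixed signs: result unknown

def analyze(expr, env):
    """Compute the abstract sign of expr, given signs for its variables."""
    if isinstance(expr, (int, float)):
        return sign(expr)
    if isinstance(expr, str):         # a variable
        return env[expr]
    op, lhs, rhs = expr
    l, r = analyze(lhs, env), analyze(rhs, env)
    return abs_mul(l, r) if op == '*' else abs_add(l, r)

# If x is positive, x*x + 1 is positive -- derived without running the code:
print(analyze(('+', ('*', 'x', 'x'), 1), {'x': '+'}))  # '+'
print(analyze(('*', 'x', -2), {'x': '+'}))             # '-'
```

Because the program is plain nested data, the analyzer is a ten-line tree walk; that uniformity is what makes the technique so cheap to apply in homoiconic settings.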

Here is a paper that applies this idea to derive several interesting facts about programs and their meaning:

Michael Codish and Harald Søndergaard, Meta-circular Abstract Interpretation in Prolog (2002) https://link.springer.com/chapter/10.1007%2F3-540-36377-7_6

Abstract interpretation is also applicable to other programming languages. However, it is much easier to apply to homoiconic languages like Prolog.


Abstract interpretation is entirely about the semantics of programs and has nothing to do with their surface syntax.


Yes, indeed!

Please note that what makes this reasoning method so easily applicable in this case is uniformity of the abstract syntax, not of the surface syntax, which is also called concrete syntax.

Homoiconicity is a relation between the concrete and abstract syntax tree (AST) of programs and the language's built-in data structures.


No idea about you, but when I reason about programs, I spend very little time manipulating syntax. Most of the time is spent manipulating semantic objects, like predicates on the program state, whose representation is independent from even the abstract syntax of a programming language.


Julia is homoiconic but has much more complex syntax than Lisp. I personally find Julia macros harder to think about than Lisp or Scheme macros, partly for that reason.


Please note that the creators of Julia no longer call it homoiconic [1].

The fact that you can access the AST in a language is not sufficient to make it homoiconic. There are several programming languages like Julia that let you access the AST yet are not (conventionally) considered homoiconic.

[1] https://stackoverflow.com/questions/31733766/in-what-sense-a...


Oh, I didn't realize that. Thanks!


> You simply can’t produce an AST without expanding macros first.

False; the syntax before expanding macros is an AST, so is the one after. It's an AST-AST transformation.

If anything, the one with macros is more abstract: because it, like, has the abstractions in their original abstract form!

Also, non-Lisp languages perform AST-AST transformations; just usually not with Lisp-like macros. For instance, the AST node for a while loop in a C compiler might be replaced with a combination of if, goto and statement label nodes (with generated labels: analogous to a Lisp macro's gensyms). That's a form of expansion. The input is an AST with while nodes; the output is one without.

So, no, the advantage really is that with Lisp we are reading rather than parsing. Or, alternatively, that the parsing is very simple and uniform, and that the language of the parser over-generates: it produces a large space of forms which do not have a meaning, but serve as arbitrary data or can be given a meaning with new abstractions.

In, say, C, we have a syntax in which there are numerous lists: lists of declaration specifiers in a declaration, lists of parameters, lists of structure members in a struct declaration, lists of global definitions, lists of statements in a statement body. These all have their own grammar productions with their own quirks. And none of them has an object model to which it corresponds.

In Lisp, the analogous things are all the same list type with the same syntactic representation. It corresponds to an object, and is operated upon by the same access and construction methods.

Homoiconicity isn't the point, because that just refers to storing procedures in the form in which they were entered ("code is characters, and nothing but"). Code is structured data is code is the point, with a nice, straightforward printed representation for working with the data textually.
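To show how little work that "read" step does, here is a minimal s-expression reader, sketched in Python (symbols become strings, integers become ints; no quoting, strings, or error handling):

```python
def read(text):
    """Turn an s-expression string into nested Python lists."""
    tokens = text.replace('(', ' ( ').replace(')', ' ) ').split()

    def parse(pos):
        if tokens[pos] == '(':
            items, pos = [], pos + 1
            while tokens[pos] != ')':
                item, pos = parse(pos)
                items.append(item)
            return items, pos + 1
        tok = tokens[pos]
        return (int(tok) if tok.lstrip('-').isdigit() else tok), pos + 1

    expr, _ = parse(0)
    return expr

# The reader needs no knowledge of what 'defun' or 'baz' mean:
print(read("(defun foo (x y) (baz (bar x) (bing y)))"))
# ['defun', 'foo', ['x', 'y'], ['baz', ['bar', 'x'], ['bing', 'y']]]
```

Note that nothing in it knows the language's special forms: it only matches parentheses, which is exactly the uniformity being claimed.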


> False; the syntax before expanding macros is an AST, so is the one after. It's an AST-AST transformation.

I mean, yes, in the same vacuous sense that the flat stream of tokens output by a lexer technically qualifies as an AST. If you like, your lexer could output a "tree" of 1+N nodes: a Parse node, and within it, a list of N arguments (the lexer-tokens.) You would then apply an AST-AST transformation that responds to the "parse" node by parsing its contents.

When we talk about an AST in the context of programming, we usually mean to refer not to any airy CS concept, but specifically to the output of an LR(k) parser—that is, a bottom-up, context-free parser. To parse the lexer-token stream in an LR(k) parser, you need to be able to output ("produce") a structure given a sequence of tokens, without having any context of the greater rule you're executing "within" (i.e. above on the call-stack) other than the fact of what the current rule is.

Homoiconicity is exactly the property of a programming-language grammar that allows code containing macros to pass cleanly through this initial "lexical parsing" step. Usually, this requires a separation of the grammar from the syntax of the language, such that the "lexical parsing" grammar will no longer directly produce any of the "special forms" of the language itself, but these will rather be later handled by a similar (or the same!) process as macro-expansion—consuming AST subtrees to produce other, more specialized AST subtrees.

Or, to put that another way: macros require top-down parsing. If you want to avoid using a full-on top-down parser, you can instead apply a traditional bottom-up context-free parser followed by applying a folding transformation to the generated tree to do "the rest" of the parsing. But, to achieve this parsing strategy, the property you have to imbue your language grammar with is called "homoiconicity."


Lisp macros (that the article is talking about) work with the same input and output language, not with two different languages/representations as is written. Macros can expand to other macros and even to themselves. It's an iterative process in which the intermediate pieces of tree have fewer and fewer macros until none are left. Other than not using macros, the target code is in the same language.

(In C, the preprocessing phase also works with the same input and output language. It's not a tree data structure, but rather a sequence of preprocessor tokens. Tokens in, tokens out. Parsing is then done in the next phases of translation. Macros have to do some light parsing in order to delimit the argument lists, of course, and to identify the operators like the token pasting ##, stringification # and whatever, plus to handle the preprocessing directives and #if expressions with arithmetic.)


I'm a little rusty remembering my compilers course, but do you mean separating syntax from inflection? So that you can parse just based on a "syntax grammar" and then you can trivially assume the inflection based on where things fell in the AST?

Can you explain more why this makes macros AST -> AST transformation only vacuously true? I didn't follow the argument. Also why macros require top down parsing?

Perhaps more interesting: would the property you described still be called homoiconic if the parser were context sensitive?


> False; the syntax before expanding macros is an AST, so is the one after. It's an AST-AST transformation.

In context, this is about producing an AST in a "base" language without all the language extensions the actual user program uses.


In context, the article says exactly this: "What's this intermediate syntax tree? It's an almost entirely superficial understanding of your program: it basically does paren-matching to create a tree representing the surface nesting structure of the text. This is nowhere near an AST, but it's just enough for the macro expansion system to do its job."

That is completely false. If the code happens not to use any macros, they are the same. One is exactly as "superficial" as the other.

Macros write code in the same language that the human user writes; they use other macros and sometimes even invoke themselves recursively.


What systems have you been using where reading involves a more than superficial understanding of the program? Normally, a reader doesn't even know how many arguments ought to be given to each function or syntactic form, nor does it know the difference between macro invocation and a function call. There is no need for the reader to know what the syntactic forms in the base or extended language are. The value of s-expressions (or any notation that explicitly encodes the tree shape) is that the reader can function without that knowledge.


This is the most pedantic post I have ever read on the subject. The point is that code is data, enough said. You manipulate your functions the same way you manipulate lists, because they are the same thing.

Stuff like this is frankly why so many programmers shy away from lisp and s-expression based languages.


> The point is that code is data

That's the popular slogan, but there's actually quite a lot more to it than that. After all, strings are data too, and C programs are represented as strings, so "code is data" in C too. But that is obviously missing the point.

What's really going on is that, in Lisp, code is a particular kind of data, specifically, it's a tree rather than a string. Therefore, some (but not all) of the program's structure is represented directly in the data structure in which it is represented, and that makes certain kinds of manipulations on code easier, and it makes other kinds of manipulations harder or even impossible. But (and this is the key point) the kinds of manipulations that are easier are the kind you actually want to do in general, and the kind that are harder or impossible are less useful. The reason for this is that the tree organizes the program into pieces that are (mostly) semantically meaningful, whereas representing the program as a string doesn't. It's the exact same phenomenon that makes it easier to manipulate HTML correctly using a DOM rather than with regular expressions.
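A toy contrast makes this concrete (the mini-language and names here are invented for the example): renaming a function is a trivial walk over a tree, while the naive string version trips over substrings, just like regex-on-HTML does.

```python
# The same tiny program held as a string and as a tree (nested lists).
code_str  = "(define (foo x) (foobar (foo x)))"
code_tree = ['define', ['foo', 'x'], ['foobar', ['foo', 'x']]]

def rename_tree(node, old, new):
    """Rename a symbol everywhere it appears as a whole node."""
    if isinstance(node, list):
        return [rename_tree(child, old, new) for child in node]
    return new if node == old else node

# Tree version: only whole symbols named 'foo' change.
print(rename_tree(code_tree, 'foo', 'baz'))
# ['define', ['baz', 'x'], ['foobar', ['baz', 'x']]]

# Naive string version also mangles 'foobar':
print(code_str.replace('foo', 'baz'))
# (define (baz x) (bazbar (baz x)))
```

The tree walk is correct because symbols are whole nodes; the string replace has no notion of program structure at all.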


Why would code being a tree rather than a string necessarily differentiate it from other types of data? Surely, one could have non-code data that were also trees (think, just to pick a concrete example, of coded natural language data which are naturally represented as trees).

I thought the point in lisp was that the syntax for non-code data and code are the same so treating code as data (and thinking about code as data) is easier than in many other programming languages.


Please re-read the comment you are responding to. The answer is there.


> The reason for this is that the tree organizes the program into pieces that are (mostly) semantically meaningful, whereas representing the program as a string doesn't. It's the exact same phenomenon that makes it easier to manipulate HTML correctly using a DOM rather than with regular expressions.

Mind. Blown. Your whole comment is incredible. I've never thought about it this way or realized what the value proposition was. Thanks!



Wow, thank you. Maybe I should write my own blog post. :-)


This is a very good comment and in my opinion much more valuable than the actual blog post.

Based on this world view, I wonder whether there are other useful representations beyond the popular choices of lists and strings. Apparently, lists (as in Lisp) are already so universal that they can represent any kind of structured data.


Thanks for the kind words. Yes, lists are universal, but so are many other kinds of data structures (including strings, obviously, since there is a 1-to-1 mapping from lists to strings). So mere universality is not the killer feature.

In fact, there is no one "killer feature". It's a confluence of lots of little details. Two of the details that turn out to matter most are having symbols, and not having commas as separators in the surface syntax. Those two things are the difference between S-expressions:

    (defun foo (x y) (baz (bar x) (bing y)))

and JSON, which can represent the exact same thing:

    ["defun", "foo", ["x", "y"], ["baz", ["bar", "x"], ["bing", "y"]]]

but obviously you wouldn't want to write your code like that.

You might be able to improve on linked lists as a code representation. You could, for example, use vectors instead of linked lists (i.e. cons cells) or maybe associative maps (i.e. dictionaries) as a core data structure. It's not hard to try out things like this in Lisp, so if you really want to know what happens, just grab yourself a Lisp interpreter (or even better, write one yourself) and try it.
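In that spirit, here is a minimal sketch of the "write one yourself" suggestion, in Python: an evaluator over nested lists, just enough to see that the core data structure is a pluggable choice. (The dialect and names are made up; this is nowhere near a real Lisp.)

```python
import operator

def evaluate(expr, env):
    """Evaluate a tiny Lisp-like dialect represented as nested Python lists."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):                  # symbol lookup
        return env[expr]
    op, *args = expr
    if op == 'if':                             # the one special form
        cond, then, alt = args
        return evaluate(then if evaluate(cond, env) else alt, env)
    fn = evaluate(op, env)                     # ordinary application
    return fn(*[evaluate(a, env) for a in args])

env = {'+': operator.add, '*': operator.mul, '<': operator.lt}
program = ['if', ['<', 1, 2], ['*', ['+', 1, 2], 3], 0]
print(evaluate(program, env))  # 9
```

Swapping lists for vectors or maps would only change the isinstance checks and the destructuring, which is why experimenting with alternative code representations is so cheap.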


Well said. And it got me thinking: I wonder if this article is well known:

http://www.defmacro.org/ramblings/lisp.html

If not then people might enjoy a deeper dive into this kind of thing! For anybody who's not read it - since it's pretty long I'll try and sum it up with a quote from about 2/3 of the way down:

"Lisp is executable XML with a friendlier syntax"

This bit always comes to mind when I think of lisp and "code as data / data as code".


Code is ... nicely structured data that's easy to manipulate, general so that it represents anything and not just the idioms of that code, and with a straightforward text notation so we can conveniently work with any instance of that data, whether or not it's an existing idiom of the code.


>The point is that code is data, enough said. You manipulate your functions the same way you manipulate lists, because they are the same thing.

I'd just like to interject for a moment.

To add on to what you have said: more precisely, Lisp programs are just lists, and in Lisp, lists are first-class data structures (that is, there is a ton of functions for working with lists). Thus, in Lisp, Lisp programs are first-class data structures as well.


> Stuff like this is frankly why so many programmers shy away from lisp and s-expression based languages.

I do it because the editors are terrible.

I love s-expressions, but it will be a cold day in hell before I waste more of my life with Emacs, Vim or DrRacket.


But you can do this without S-expressions.

DLang, for example, has free-form macros that they weirdly call "mixin" (as in, mixing in some text or declarations into the AST, syntax-wise).

Each mixin must be a valid AST subtree on its own, which gives the same guarantee that paren matching prior to macro expansion gives you.

Then, the compiler can interleave:

       parsing
    -> evaluating mixin strings
    -> resuming parsing of the mixin subtrees
    -> evaluating the deeper mixin strings
    -> ...

You get the power of Lisp macros, but without the Lisp syntax that is unattractive to many.


The languages I know typically have a fairly standard way of implementing functions - either values or references to values are passed as arguments and then the function does something. Maybe it returns a value. That means a construct that short circuits, such as:

`if is.open(file) && read(file) { ... }`

can't actually be implemented by a function. That is, you can't write a function:

`special_and(is.open(file), read(file), ...)`

Because the `read` is automatically evaluated before the function is called.

This means a programming language with lisp-style macros can very easily implement constructs that behave like `&&` and short circuit, because they are implemented as a macro instead of a function. This opens up new (& fast) ways to control program flow that aren't available to a lot of people. The profound impact is that lisp libraries can tack on new control structures in a way that, say, C can't.

I'm no language designer, so this is basically out there to see if I get corrected.
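To make the contrast concrete, here is a Python sketch (the name `special_and` is hypothetical, taken from the comment above): since arguments are evaluated eagerly, the trick is to wrap them in zero-argument lambdas, i.e. thunks, which the function calls only as needed.

```python
def special_and(*thunks):
    """Short-circuiting 'and' over zero-argument callables (thunks)."""
    for thunk in thunks:
        if not thunk():
            return False
    return True

calls = []

def is_open(f):
    calls.append('is_open')
    return False          # the file is closed...

def read(f):
    calls.append('read')  # ...so this must never run
    return True

result = special_and(lambda: is_open('f'), lambda: read('f'))
print(result, calls)  # False ['is_open']
```

A Lisp macro gets this delaying for free at the syntax level; without macros, the caller has to build the thunks by hand at every call site.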


Elixir has macros which work by accepting AST as data and returning transformed AST structures. It's difficult to work with them, but there are plenty of languages that have first-class macros with more complex syntax. From [0]:

  defmacro unless(clause, do: expression) do
    quote do
      if(!unquote(clause), do: unquote(expression))
    end
  end
[0] https://elixir-lang.org/getting-started/meta/macros.html


> This means a programming language with lisp-style macros can very easily implement constructs that behave like `&&` and short circuit, because they are implemented as a macro instead of a function.

With code blocks as values and syntax for unevaluated arguments, you can do this with normal functions without macros being a special, different thing; Rebol/Red do this, for instance.


Yep. Short circuiting is a great example. Macros and lazy evaluation are pretty much the only ways to implement it. You can fake lazy evaluation with a closure in other languages.


For some technical discussion of the order of evaluation of arguments: https://en.wikipedia.org/wiki/Evaluation_strategy#Non-strict... (Short-circuiting is specifically mentioned as having relevance)

Note that even in a language which is generally strict, laziness can be provided selectively for individual values, and vice versa. Haskell's laziness has proven to be a cause of troublesome performance problems, sometimes needing to be solved by "strictness annotations"; the newer language Idris is strict, but offers laziness as a type.

http://docs.idris-lang.org/en/latest/faq/faq.html#how-can-i-...

EDIT: phamilton beat me to it, maybe the links are still useful.


Scala also has lazy values and arguments as an option.


I wonder if this could be retrofitted by adding a 'defer' keyword, just as C# has 'out' and 'ref'.


Totally agreed on this. Twice I've set out to develop DSLs, only to discover I'd invented Lisp all over again.


Yup. And if that's the kind of thing you enjoy control over, you might like Mathematica :)


I would not call that 'syntax tree', but something like 'token tree' or 'nested token lists'.

For the Lisp reader the form (+ 1 2) in

  (first '(+ 1 2))
and

  (* (+ 1 2) 3)
looks the same. It has no idea whether the first is data or code represented as data. It also has no idea that the second is actual code, or which syntax (here, some prefix syntax) it uses.

All we represent is a bunch of tokens in nested lists.


Short but interesting thread at the time: https://news.ycombinator.com/item?id=3854262.


If I understand correctly, homoiconicity isn't really about whether the symbols used in the source file to represent code are the same as the symbols used to represent data structures. It's really about the parse tree of the program using the same data structures that programs use for data, so that you can operate on the parse tree just the same as you can on regular program data.


This is probably the raison d'être of homoiconicity, but it's a more specific concept than that. It's a property of the grammar of the language: there need to be control characters/words in the grammar, like spaces and parentheses, so that the parser can correctly generate nested lists, or trees matching the intended AST, even when it doesn't necessarily have full information about what the language's set of operative keywords is. Basically, the parser has to support adding new, user-defined operators.

And then, in addition to being able to parse user-defined operators, you have to have a decently organized, simple system for applying AST transformations and modifying tree objects. And there again, homoiconicity in the grammar can be useful if it makes it easy to textually serialize and set object attributes.


To distinguish homoiconic languages from others (where you may also "operate on the parse tree just the same as you can on regular program data"), a bit more qualification is needed though.

For example, in typical cases, reasoning about such data structures (lists in Lisp, terms in Prolog, bytes in assembly code etc.) is very convenient in homoiconic languages, and in fact I think one could rightfully regard this ability to conveniently reason about a program's abstract syntax tree via built-in language mechanisms as a key advantage of homoiconic languages.


The parse tree of Python uses the same data structures that programs use for data: objects. And you can definitely operate on them. Yet it's not homoiconic.


> None of this is to say that it’s impossible to design a macro system for languages with non-Lispy syntax

For me this was most poignantly demonstrated by https://chrisdone.com/z/


I'm starting to think that the reason Lisp has never taken off outside a minority of users is that homoiconicity is a downside for the human readers of the language; computers are happy to count brackets, but humans prefer either different types of brackets or other separators like ';' or 'newline'.


Brackets in lisp are like the stick shift in a car. It looks scary and dreadful to anyone who has never learned to use it.

Then, it turns out to be the easiest thing about the entire endeavour, and you forget about it in all of your first 10 minutes on the job.


That's your opinion, but I spent weeks with the Clojure book and various exercises and could honestly never get past the syntax.

Because of this I have zero interest in working with a lisp day-to-day, but there are multiple C-style langs I'd be happy working with for a day-job.

I think GPP is right in asserting that most people just won't ever get over it, and you shouldn't be so quick to hand-wave their opinion away.


I introduced Clojure to a Java shop. We took 30 Java developers from zero experience in lisp to 100% Clojure developers. I have to say that my experience agrees with the parent. People didn't struggle with the syntax after a few days.

They struggled with functional programming and immutability. At least to start with. I actually found it amazing how easily people moved over and how enjoyable they found it. Out of 30 people only one didn't take to Clojure. He moved to a C# team for a while but eventually decided to rejoin and pick Clojure back up.

Obviously everyone is different but this was quite a good sample.


> They struggled with functional programming and immutability.

So, i.e., they were struggling with just the stuff in Clojure that makes it a non-Lisp.

Those programmers should have been informed that there are real Lisps out there in which you don't have to do functional programming, and things are mutable.


Glossing over the "real lisp" part ;-)

Struggled to start with, sure. But ultimately for me those are the best parts. Immutability particularly. It takes a few weeks learn how to solve problems again but I do think it makes things simpler and removes a nice category of bugs.

Actually maybe Java interop was the best part. Without that I doubt we could have picked up Clojure. It's far less true today but this was almost 6 years ago. Back then knowing there would be a library, even if we had to quickly wrap it was essential.


Of course any true Scotsman that wants to program in a real lisp wouldn't use Clojure.


> true Scotsman

I'm not defending a generalization against counterexamples by trying to exclude them with a moving-goalpost definition.

Mutation and pure procedural programming are part of Lisp. They are part of Lisp when they are bad, and part of Lisp when they are good. I've never shifted a definition of what is Lisp to exclude or include these characteristics in order to suit an argument at hand.


Parentheses are no issue at all... dead serious there. Use something like parinfer (easy for beginners!) or paredit, and that problem goes away right now. You won't even see the parentheses--trust me. I agree that not having other special symbols to break up the code and make it easier to pick apart is kind of crappy. Enter Clojure... it elegantly supports other symbols in a non-eccentric (looking at you, Haskell) sort of way, which I find helps a lot: [] vector, {} map, #{} set... you get the idea.


I like Scheme, dislike noisy syntax (including semicolon and underscores) and I'm not counting parentheses. I let my computer take care of keeping the parens matched, e.g. with Emacs paredit.


Yes, but most people seem to prefer it the other way round. This is like the "cilantro soap taste" thing, you can't say people ought to like something if they just don't.


It took me a while to understand how to read lisp, compared to a C syntax it feels inside out and upside down (read code from the bottom up and nested forms outward). But once you get it it's easy.

I've also struggled with understanding header files in C++, with templates and operator overloading and all the different meanings of const. But C++ has a massive user base. My point is C++ syntax is hard to grok too. Maybe popularity is an accidental thing related to inertia and winner-take-all effects?


In my experience, that is an overgeneralization, though definitely a tempting one that is frequently encountered.

That being said, I find that Prolog code is often more readable than Lisp code. An important reason for this is that Prolog supports prefix, infix and postfix operators that can be defined as part of the concrete syntax. On the level of the abstract syntax tree, all terms conform to the inductive definition, so this is only a notational convenience.
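A quick SWI-Prolog sketch of that point: the infix 1+2 is pure notation for the uniform term +(1, 2), which =.. (univ) can take apart, and op/3 lets you declare new operators over the same term syntax:

```prolog
?- X = 1+2, X =.. [Functor|Args].
X = 1+2,
Functor = (+),
Args = [1, 2].

% A user-defined infix operator is just a declaration; terms underneath
% stay uniform:
:- op(700, xfx, ===>).
```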


I think there is an over-emphasis on the parens in Lisp. Yes, they are vital, obviously, but the real magic for me is that it's consistent all the way down. The prefix notation is incredibly powerful once you get the hang of it.

Consider this in Python (or any other language):

a = [1, 2, 3]

Now sort it.
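In Python there are (at least) two idiomatic answers, which is part of the point:

```python
a = [3, 1, 2]

# sorted() returns a new sorted list, leaving the original untouched...
b = sorted(a)

# ...while list.sort() sorts in place and returns None.
a.sort()

print(b)  # [1, 2, 3]
print(a)  # [1, 2, 3]
```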

In lisp, everything is:

(function [arg] ....)

You've conquered the entire syntax of the language. Now we can move on to getting things done and not have to worry about order of operations, variations on the basic syntax, and so on.

You can do newline type formatting with paredit and sane indentation habits, which aren't any different than other languages.


In Lisp, not everything is (function arg ...).

Take LET and LAMBDA. Neither follows the above pattern.
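For example, in Common Lisp:

```lisp
;; An ordinary call: operator first, evaluated arguments after.
(sort (list 3 1 2) #'<)      ; => (1 2 3)

;; LET's first subform is a binding list -- it is never evaluated as a call:
(let ((x 1) (y 2))
  (+ x y))                   ; => 3

;; LAMBDA's first subform is a parameter list, not an argument:
((lambda (x) (* x x)) 4)     ; => 16
```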


>I'm starting to think that the reason Lisp has never taken off outside a minority of users

Believe me, there are many reasons why Lisp hasn't gained more traction, and little of this has anything to do with the syntax or with the homoiconicity.


Guys, it's simpler than that, I think. First, homoiconicity means one representation for code and data: both are represented by the same data structure (in Lisp, the list).

So what's it good for? Well, it's a generalised way of doing objects. In OO code, when you're given an object, say as a parameter of a function, you're given data and, joy, you're given code as well. That's super handy, because now data and the code-that-runs-on-that-data come in the same package. You don't have to know the details (and more importantly, you don't care about the details) of how that piece of code-and-data was made--I'm looking at you, polymorphism--you can interface your algorithm to it and things will run the way they are supposed to. Notice how your programming has become more powerful. You've decoupled things here: now you don't need to know how the code works, but you can still interface to it. Other teams can supply pieces of code-and-data, and, as long as you've agreed on the interface, things will run. That's classy.

Now you could go one step further. You could go literally matrix on this concept, while changing virtually nothing. Let's just represent an object in a different, yet equivalent, way: as an ordered list of members and methods--which it literally is. What's that good for? Well, now you have a list, and you can splice it. You can add and remove code-and-data at will, which is what you were doing when using polymorphism (swapping methods, adding members, that kind of stuff).
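A minimal Common Lisp sketch of that splicing idea (eval is used here only for demonstration; macros do this at compile time):

```lisp
;; A piece of code, held as ordinary list data:
(defparameter *form* '(+ 1 2 3))

(eval *form*)                    ; => 6

;; Splice the list: swap the operator, keep the arguments.
(eval (cons '* (rest *form*)))   ; => 6, i.e. (* 1 2 3)
```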

What's it good for? Step back a minute: what is polymorphism good for? We mentioned it before; it allows you to decouple implementation from execution. Well then, homoiconicity is good at the exact same thing. That's it, there is nothing more to say. If you understand what inheritance and polymorphism are good for, you understand what homoiconicity is good for: they are tools for representing and manipulating code-and-data. Notice how polymorphism and inheritance are compile-time tools. Homoiconicity is most useful at compile time too, yet can be used at runtime as well.

All in all, that's why coding in Lisp will make you a better programmer. OO languages and Lisp have the same goals; one is just the nerd version of the other. Code in Lisp and you'll come back to OOP thinking "this looks like BASIC now".

Ultimately, OO is good. The only thing that's bad with OO is that it's clunky in practice (what a pain to change a class hierarchy) and therefore gets in the way of refactoring. Refactoring is the key difference between waterfall and good software development. I had a friend who used to say "you should be refactoring 30% of the time", and I believe he's right. So while OO's features are arguably good enough, programmers tend to waterfall with it, and that's a killer.


I have to admit I've never heard of the term homoiconicity before... Parsing it out, I clicked on the link thinking it might be an article about making icons or emoji on different platforms look similar.



