Don't Say “Homoiconic” (2018) (expressionsofchange.org)
88 points by dmux on Aug 11, 2019 | 68 comments



Like most people, I'm going to keep using the word "homoiconic" because it is a useful term.

The article just shows that if you insist on an absurdly strict definition you end up with nonsense. Even the "strings"-based definition doesn't work if you're being especially literal, since there are no strings; there are just electrical fields that may or may not be in physically nearby places.

Everything in software is, at some level, an abstraction. And that's okay.

S-expressions are merely a textual representation; Lisp systems don't normally store exactly the text representation. So what? The upshot is that homoiconic representations make certain kinds of transformations easy to use, understand, and display... and that's what people mean today.
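
For instance (a minimal Common Lisp sketch, names mine), a program fragment is just a list, so ordinary list functions transform it:

  (defvar *form* '(+ 1 (* 2 3)))
  ;; Walk the code as data, swapping + for -:
  (defun swap-plus (form)
    (if (consp form)
        (mapcar #'swap-plus form)
        (if (eq form '+) '- form)))
  (swap-plus *form*)        ; => (- 1 (* 2 3))
  (eval (swap-plus *form*)) ; => -5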

I agree that homoiconicity is not strictly boolean, but that doesn't make it a useless concept.


> The article just shows that if you insist on an absurdly strict definition you end up with nonsense.

I've sometimes caricatured this kind of argument as:

  1. "Blue" is Pantone™ 292.
  2. The sky is not Pantone™ 292.
  3. Therefore, the sky is not blue.


In language, as long as other people know what you mean, it's totally fine. Context is all that matters.

It's only a problem if there was already a meaningful term closely related to what you mean and you're simply using it incorrectly, which might confuse people. In that case it's okay to challenge the word use.

Otherwise, I've never understood some people's obsession with not reusing words in new places, other than to be "right" about something.

It's then really only useful as a historical look at the term.


There is a single context for the (ab)use of "homoiconic": the context of classifying programming languages/features.

Now, of course, there can be smaller subcontexts, like "Bob's paper about classifying programming languages". In such a thing, Bob can define black to be white, if Bob wants; however, that definition ends when we turn over the last page of Bob's paper and return to reality.


I have no idea what it means. You have to buy the 1960 paper "Macro instruction extensions of compiler languages" from Doug McIlroy to learn the exact definition; I can't find a free copy anywhere online.

Since the word was born with a definition, we should respect that rather than inventing one.

Note that Lisp systems don't necessarily store any representation at all, other than compiled code. To work with the source code, you have to intercept it while it is passing through compile time.

Intercepting a representation of the code at compile time can be done with technology on the level of the C preprocessor.

If that is homoiconic, then C is homoiconic.

Quite possibly, McIlroy's definition will in fact cover that.
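
For contrast, here's the Lisp version of that interception (a minimal Common Lisp sketch): the macro receives the code it wraps as plain list structure at compile time, not as preprocessor tokens.

  ;; LOG-FORM runs at macroexpansion time; FORM arrives as
  ;; unevaluated list data:
  (defmacro log-form (form)
    (format t "~&expanding: ~S~%" form) ; printed while compiling
    form)                               ; emit the code unchanged
  (log-form (+ 1 2)) ; prints "expanding: (+ 1 2)", evaluates to 3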

GNU Bash meets the "homoiconic" definition that the TRAC authors adopted. At any time, you can type the "set" command to view the definitions of all functions. They look the way they were entered, modulo reindentation and stripping of # comments.

"Homoiconic" may continue to be useful; but as a word that possibly extends to low-brow hacks like C preprocessing, it is not useful for describing the advantages of Lisp and its ilk, that's all.


How about "defining" homoiconic to mean that it's easy to manipulate programs? More specifically, say a programming language is homoiconic if (1) the standard library for the language includes a parser, and (2) the parser returns data structures that are easily manipulated by functions in the standard library.

Point (2) can be achieved by having the parser return generic data structures such as lists, arrays and hashtables, for which the standard library will probably have lots of convenient functions (this is what Lisps do); or by having a dedicated AST data type, but including a bunch of AST manipulation functions in the standard library (like Template Haskell).

Of course, this isn't a very precise definition, because it relies on the undefined terms "parser" and "easily manipulated". Does read_file_to_string() count as a (trivial) parser that produces strings, which can be easily manipulated by standard string functions? The idea is that this doesn't count, but I'll leave the problem of defining what counts as a parser to other philosophers of programming languages. :)
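
Under this definition, Lisps qualify trivially: the standard-library "parser" is READ, and it returns generic lists and symbols (a Common Lisp sketch):

  ;; READ-FROM-STRING parses text into plain list/symbol data,
  ;; not a dedicated AST type:
  (read-from-string "(if x (f 1) (g 2))") ; => (IF X (F 1) (G 2))
  ;; ...which ordinary list functions manipulate:
  (second (read-from-string "(if x (f 1) (g 2))")) ; => X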


Unless I’m missing something, there should be (3): these structures can be passed for evaluation immediately at runtime, linked to that runtime, including access to the current activation record’s data, global symbols and so on. Without that, (1)+(2) is simply an AST transformer, and as far as I understand it, that doesn’t count.

E.g.

  var st = new IfStatement({
    cond: new ArgExpr(1),
    body: [
      new AssignStatement({
        dst: new LocalVar('a'),
        src: new Constant(5),
      })
    ]
  })
  st(true)
  console.log(a) // 5
Both code and data may be a list, but may also be a struct/array hierarchy. With the modern attitude, though, eval(str) and non-local goto would seem like cute kittens compared to this power.
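
In Lisp terms, (3) is just EVAL over list structure built at runtime. A rough Common Lisp equivalent of the example above (a sketch; EVAL uses the null lexical environment, so the "activation record" access is limited to special variables):

  (defvar a nil) ; special variable, visible to EVAL'd code
  ;; Build the 'if' as data, then link it into the running image:
  (defvar *st* '(lambda (arg1) (if arg1 (setq a 5))))
  (funcall (eval *st*) t)
  a ; => 5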


Oh, you want the homoiconicity to serve a purpose. I guess that's reasonable. :) Although, I'd say that a dialect of Lisp with no runtime eval, but with compile time macros would still count as homoiconic and your suggested (3) doesn't seem to allow it.


But... macro compile time may be seen as just one pass of reading, evaluating in almost the same runtime, and saving an image for later reuse. If we put these (var st = x), (var y = 1), (function main), etc. in column 1 and then just ran one pass of js, we would be able to “compile” it, i.e. (exit (save (optimize (eval (read))))), and then run main() with no IfStatement defined. I mean, isn’t compile time a slightly vague term when macros are ‘evaluated’ in essentially the same rich environment, and it’s only a wanted coincidence that the point separating compile/run time is the same as the execution stage where no macros remain unevaluated? (ed: but still no IO performed except for source code and static resources)

When we write, say, heavily templated and constexpr’ed C++ to output the results of those exprs, do we pretend that we then run a binary, or that we ran a C++ program once and cached the results in an executable format?


(Follow up)

> the parser returns data structures that are easily manipulated by functions in the standard library

But is there a limit on how powerful a macro has to be? Think meta-meta-meta programming, where a macro constructs a macro for its own use, somewhat imperatively. Like compile time for another compile time for... etc.

I mean, if we can easily manipulate code at compile-time AND have a macro evaluator at hand, then don’t we have enough time to do something maybe more ambitious than the resulting binary chunk would ever do?


A language really should have some kind of quotation syntax, so rather than having to write out AST constructions by hand as in your example, you could instead do something like this:

  var st = @(arg1 | if (arg1) { a = 5 })
  st(true)
  console.log(a) // 5
So that way you don't have to construct the AST objects manually; the language parser does it for you. The @(...) expression in the above code would be compiled to the same object construction expressions as in your example. That is getting much closer to LISP, while still working for non-LISP syntax languages.

(My suggestion of @(args...|body...) as quote syntax is totally arbitrary and besides the point, pick something else if you don't like it.)

Where it gets even more powerful, is where you have some kind of escape syntax inside the body of @(), that is evaluated immediately, and returns either a constant value, an identifier, or another AST object. (A constant value or an identifier would be wrapped in the appropriate AST node type.) That's basically LISP backquote/quasiquote.
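
For reference, the Lisp form of that escape syntax, with backquote as the quote and comma as the immediate-evaluation escape (a small sketch):

  (defvar n 5)
  `(if arg1 (setq a ,n))       ; => (IF ARG1 (SETQ A 5))
  `(if arg1 (setq a ,(+ n 1))) ; => (IF ARG1 (SETQ A 6))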


Agreed. Since JSON is JS’s POD, it would be better to write {jsonhere}, but I was afraid I couldn’t type enough plain JSON from my phone to convey the meaning. new SomeClass() expands into much more data, but is easier to read.

Edit: removed @ before {, was a mistake.


You could serialise a JavaScript AST into JSON format. But writing/editing code in that syntax would be unpleasant compared to writing/editing it in the normal JavaScript syntax. That's why my suggestion was having some syntax extension that lets you write JavaScript in its normal syntax, but instead of e.g. compiling an if(){} statement to some kind of test-and-jump opcode(s) in the underlying VM bytecode (or equivalent machine code if JITing), it would instead compile it to opcodes that would construct the corresponding AST. (That AST could be just plain-old-data ala JSON, or objects as in your original example.)

(The point I'm making is nothing specific to JavaScript, just a general point that languages with very un-LISP-like syntax, such as those with C-derived or Algol/Pascal-derived syntax, could still implement facilities similar to backquote/quasiquote in LISP, just with some small but powerful syntax extensions.)


That's exactly how Elixir and Julia do it (with quote and unquote operators), for example in Julia:

  clause = :(3 > 0)
  expr = :($(clause) ? 0 : 1 + 3)
  eval(expr) # 0

Though Julia goes further than Elixir: not only is the quoted element a struct (Julia's base data type), but everything in the language is an expression represented by it, which has led people to argue that Julia is homoiconic:

  expr.head # :if
  expr.args # [:(3 > 0), 0, :(1 + 3)]

Though if you're strict about the "homo"-"iconic" definition, it certainly isn't; it's "dual"-iconic, with the surface syntax and the surface AST having different formats with a one-to-one mapping and the AST being valid code (or even "tri"-iconic, since Julia's SSA IR is also accessible as a data type in the language).


This is probably wrong. You’re basically saying that a statically compiled LISP isn’t homoiconic.


Easy is a side effect of the 'homo' part. You have one tool for any layer you want to manipulate.


I think that including a parser in the standard library is either too high a bar or too low a bar, depending on the scope of the language's standard library. Either way it's somewhat arbitrary.

It would perhaps be better to say that a (language idiomatic) parser is available for the language that is maintained and used by the core language team and easily importable by third parties.

That way you avoid the minimal vs. batteries included standard library philosophical arguments.


That's a good point. I'd get behind this amendment.


How about leaving "homoiconic" as is, referring to a language that stores textual or tokenized function definitions that can be regurgitated, edited and re-applied? For instance, GNU Bash.


The word you’re looking for is “reflective”, not “homoiconic”.


I disagree. Reflective is more about querying runtime properties (like data types, structure of data structures, etc.) at runtime.


> "In a homoiconic language, the primary representation of programs is also a data structure in a primitive type of the language itself."

I believe the author is misreading this definition.

> "First, consider that most languages have a primitive datatype for strings of text"

The definition says "data structure", not "datatype". A string is a datatype, not a data structure.

> "Second, for most languages, it is possible to write a parser in that language itself, that stores the resulting abstract syntax tree in a primitive type in the language."

This is completely beside the point, as the definition references the "primary representation", not a representation that can be built using a parser.


I would reword it as "the primary representation is also a data structure literal in the language itself". I think "literal" is basically what they mean by "primitive type", i.e. there is first-class syntax to create the structure.

Another example of a homoiconic language is XSLT, where the syntax of the language is based on XML, but it also has XML literals.


I think it’s less about strings vs not-strings and more about the practicality of metaprogramming with native data types. You can cobble some oddities together with Python strings and eval, but it’s entirely inconvenient and impractical. As such I’m not inclined to call Python “homoiconic”. I’m OK with homoiconicity simply being a subjective and slightly imprecise term.


> I believe the author is misreading this definition.

Agreed. Author has tied self in knots from overthinking it. Homoiconicity has nothing to do with external [text] representation and parsing (deserialization).

Homoiconicity in a programming language means that every complete program is a composition of the language’s core datatypes, and nothing more.

OP’s suggestion that homoiconicity could be a sliding scale is nonsense. A language is either homoiconic, or it isn’t.

Here’s a simple test: many languages have first-class functions/procedures/handlers, but most of them do not have first-class commands. Any language which has commands which are not values cannot be homoiconic.

A Lisp program is composed entirely of Lisp’s atomic types (`number`, `string`, `symbol`) and fundamental collection type (`list`). Lisp, being an exercise in extreme parsimony, does not define a discrete `command` type but instead overloads its `list` datatype to operate as either a command or a list according to context. Thus commands in Lisp can be created, manipulated, passed around, and evaluated at any time, using the exact same toolset as is used to create, manipulate, pass around, and evaluate Lisp lists.
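
To make the overloading concrete (a small Common Lisp sketch), the same list is data or command depending on what you do with it:

  (defvar *cmd* (list '+ 1 2))
  (first *cmd*)               ; => +  (inspected as data)
  (eval *cmd*)                ; => 3  (run as a command)
  (eval (append *cmd* '(10))) ; => 13 (manipulated, then run)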

Or my own kiwi language defines six core datatypes—`null`, `text`, `name`, `list`, `command`, `tag`—from which all kiwi programs are composed. Command values can be created, stored, passed around, and evaluated just the same as any other value. e.g. Here’s a slightly contrived example which stores a list of arbitrary commands, then later retrieves and passes them as the argument to another command (`vowels`), which passes them on to a primitive procedure (`apply to pattern`) where they’re finally applied to the input data:

    R> store value (x, (case (upper), bold))
    R>
    R> define rule (vowels (format), apply to pattern (“[aeiou]”, {$format}))
    R>
    R> ^^ hello world
    R> vowels ({$x})
    #  “hEllO wOrld”
Compare and contrast to a C-family language such as Python. While some language structures (integer, string, list, function literals) do have native datatype representations, other structures (operators, statements, function calls) do not. While it’s still possible to manipulate Python program structures using Python’s `ast` APIs, or pass around sequences of hardcoded commands by wrapping them in lambdas, everything is second-class, with all the additional indirection and complexity that entails.

Still, look at how popular and useful Python is, despite such expressive limitations, while Lisp, for all its meta-circular power, is stuck firmly on the fringes.

.

So how could homoiconicity be useful? Well, consider this thread: <https://news.ycombinator.com/item?id=20662232>, discussing the current challenges of building programs by voice dictation. Voice-coding languages such as C and Python require lots of special “control words” to input complex syntactic structures—operators, statements, punctuation, and so on. A language where all code structures are primitive data types only needs words to describe those data structures, and since fundamental data structures like quoted strings and lists are inherently self-describing, tedious mechanical details like precise punctuation can probably be inferred automatically from a combination of context-aware autocomplete and voice cadence (e.g. meaningful pauses, end-of-sentence pitch changes).

In the land of blind-dumb algorithms, Compositionality may yet be King.


Spot on. Even if you allow the string to be considered a (rather simple) data “structure”, when we talk about homoiconicity we (explicitly or implicitly) exclude strings from the set of data structures under consideration.


Unless we're talking about Tcl. Its homoiconicity is based on strings.


TRAC also. Probably why it was so slow, but a friendly language.

See Ted Nelson's Computer Lib (p18) for a nice description of TRAC.


A discussion of it is also linked from the main article: https://dl.acm.org/citation.cfm?doid=800197.806048 .


That feels a bit like cheating though.


"Well, the whole source file is a single string" just seems like a vacuous gotcha. It's not an interesting enough point to settle the matter.


I haven't found the term particularly useful either. What I do like to say is that in Lisp, the AST is a public interface. That is, the AST is defined by a specification and is not going to change, so that you can write code that manipulates it and know that the next compiler version won't break your code — indeed, it's portable across implementations. Also, the AST's structure is relatively simple to explain (though there are some "gotchas" you have to watch out for when manipulating it, like inadvertent variable capture), and macros provide a convenient interface for receiving an AST from the compiler or interpreter and supplying back a transformed AST.

Other languages could provide similar facilities, and indeed a few have done so. But the more complex the syntax of the language, the more difficult it is to do that in a usable way.
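
For example, the variable-capture "gotcha" and its standard fix, in a classic swap macro (a Common Lisp sketch):

  ;; The macro receives code as lists and returns transformed code.
  ;; A fresh GENSYM name can't capture any 'tmp' the caller uses:
  (defmacro swap (a b)
    (let ((tmp (gensym)))
      `(let ((,tmp ,a))
         (setf ,a ,b)
         (setf ,b ,tmp))))
  ;; (swap x y) expands to a LET over an uncapturable temporary.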


I agree with the conclusion, that what we call homoiconicity is a scalar, not a boolean. And I agree both in this case and in general that people should talk about languages more directly (“the syntax is easy to parse”, “the semantics of the language are completely defined on the single page of a book”) rather than label them as homoiconic or not.

If you'd like a good argument for writing in languages that call themselves homoiconic, see Bawden's "Quasiquotation in Lisp". Its section 2.1 compares writing a program that generates programs in C versus Lisp, first without quasiquotation, then with.

https://3e8.org/pub/scheme/doc/Quasiquotation%20in%20Lisp%20...


Nitpick, but booleans are perfectly ordinary scalars. Maybe the author (and you) might prefer something like “homoiconicity falls on a spectrum” or “homoiconicity isn’t a binary property” or some other ordinary way to make this statement.

I quite liked your link. Instead of deciding what the word “homoiconic” ought to mean, it just showed what you can do in a homoiconic language with quasiquote syntax. Delightful and to-the-point!


> Nitpick, but booleans are perfectly ordinary scalars.

I think most people use 'scalar' to mean something like "real number" (maybe "complex number", or maybe "real number in a specified interval"). In this context, a Boolean is not a perfectly ordinary scalar. That is to say, it is certainly easy to coerce a Boolean to a real scalar in [0, 1]; but this is a transformation, and not the identity—the Boolean True is not the scalar 1, nor is the Boolean False the scalar 0.

As evidence for this, I'd point out that there is no reason that we couldn't coerce the Boolean True to the scalar 0, and the Boolean False to the scalar 1. But, you'd point out, I'm wrong; with these coercions, the nice identities x AND y = xy and x OR y = x + y - xy would fail. I agree! But these nice identities show once again that Booleans, even once coerced, aren't scalars, because the algebraic operations on them differ from the algebraic operations on real scalars.
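
Concretely, checking the first identity under the swapped coercion (True to 0, False to 1): True AND False = False, which coerces to 1, while xy = 0 * 1 = 0. So x AND y = xy fails.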

(Or maybe your point was that Booleans are perfectly ordinary scalars in the field with 2 elements. I guess I can't argue with that, except to say that I don't think that most people envision "perfectly ordinary" scalars living there.)


Yeah—technically, a scalar is just whatever type can be used to multiplicatively scale a vector in some given vector space but, at least in computing, it’s pretty clear that people just mean “some kind of integer or real number, definitely not a collection/enumerator type”.


Wasn’t clear to me at all. A scalar is any kind of singleton number, but it doesn’t exist in contrast to boolean values; rather, it includes them. “Exists on a spectrum/scale” or “is a real number” would be universally understood ways of communicating that particular meaning.

The notion of “fuzzy logic” is especially relevant: https://en.wikipedia.org/wiki/Fuzzy_logic


> A scalar is any kind of singleton number, but it doesn’t exist in contrast to boolean values; rather, it includes them.

But this was exactly my point: scalars, regarded as 'singleton number's, don't include Booleans, because Booleans aren't numbers, singleton or otherwise. (Unless, again, you wish to regard them as belonging to the field with two elements.) Booleans can be coerced to numbers, but the way to do so, even if we agree that the two numbers are 0 and 1, is arbitrary—unless we require that certain algebraic identities hold, identities that indicate that the (usual) algebra of Booleans is not the (usual) algebra of real numbers.

It is a nitpick, to be sure; but (a) part of the point of types is to facilitate, and even automate, exactly this sort of nitpicking, and (b) it was in response to a nitpick, too.

> The notion of “fuzzy logic” is especially relevant: https://en.wikipedia.org/wiki/Fuzzy_logic

Agreed! It indicates that coercing Booleans to numbers suggests fruitful analogies, but, again, says to me that they are not actually numbers. (Once again, I point to their operators, which, from a computer-science as well as a mathematics point of view, are part of what a scalar, Boolean, or whatever is: https://en.wikipedia.org/wiki/Fuzzy_logic#Fuzzy_logic_operat... .) These indicate that perhaps one should think of Booleans as belonging to a continuum of values coming from a tropical-type structure (https://en.wikipedia.org/wiki/Tropical_geometry) rather than from a field as 'usual' scalars do.


"... the original definition: languages which have the same external and internal representations are homoiconic."

I would re-phrase that as: "Languages whose external and internal representations of their programs are (to a great degree) ISOMORPHIC, are, and should be, called homoiconic."

What does "iconicity" mean? It means isomorphism, meaning the structure of two things are similar, one can be taken to be a picture, an "icon" of the other.

Lisp clearly has this property. It is clearly homo-iconic. And most importantly most other languages do NOT have this property. Therefore "homo-iconic" is a very useful property which can be easily used to divide programming languages into two groups, those that are and those that aren't.

The original definition is bad: "which have the SAME external and internal representations". "Sameness" is not what homo-iconicity is about. It is about the structures of the external and internal representation being SIMILAR enough that one of them could be considered an ICON, a PICTURE of the other. Perhaps we should stop using words like "similar" as well? They are not "boolean" are they?

Clearly "homo-iconic" has a sound, well-founded meaning, even if not everybody gets it. It is a subtle concept, similar to understanding why something can be used as a metaphor for another thing. Not everybody gets metaphors either.


> Lisp clearly has this property. It is clearly homo-iconic.

Firstly "Lisp" is a language family; which member are you talking about?

ANSI Common Lisp only hints at a form of homoiconicity, but leaves all of it implementation-defined. Namely, the function ed:

http://clhs.lisp.se/Body/f_ed.htm

A Lisp implementation which compiles everything and throws away the source will not support function editing with ed.

Compiled functions are not similar to the source; they cannot be edited as source and recompiled.

I haven't heard of anyone organizing their Lisp workflow around an ed function, which would then count as relying on homoiconicity.

Bash is homoiconic: the set command with no arguments dumps out all function definitions, which are reformatted as text from a tokenized form. There isn't a nice feature to edit a function though, other than copying and pasting out of the output of set, or else recalling the original definition in the history.

Speaking of which, interactive environments, including command lines with history recall, provide a form of homoiconicity. The command histories or integrated edit buffers hold functions in their original form, allowing them to be edited and redefined. (But those stored forms are outside of the language; they are not the function definitions themselves. It's not the language per se being homoiconic.)

When people say "Lisp is homoiconic", they are certainly not talking about the obscure ed function in ANSI CL. Well, that's what they're not talking about. What they are talking about is anyone's guess, but probably they are using the wrong term for the concept of a language being written in terms of a generic tree data structure that expresses all possible syntax tree shapes, and which is manipulable in that language.


If a Lisp program is "compiled", the internal binary representation is no longer "Lisp"; it is machine code. So I think that is beside the point. What matters is the viewpoint of the programmer, what kind of "language" they read and write.

From a Lisp-programmer's point of view they write both their data and their functions as lists. For programmers of a homoiconic programming language the programs have the same general structure as data.

What better name for such a property than "homoiconicity"?


The compiled function is a particular end product. But there are stages between source code as text and a compiled function, where the source code as data plays a role: interning of data via the reader, macro expansion, etc. In an implementation with a Lisp interpreter it's obvious: the executing code is still the source code in data format.

> From a Lisp-programmer's point of view they write both their data and their functions as lists. For programmers of a homoiconic programming language the programs have the same general structure as data.

Right, that would be one aspect. Another is that the source-code-as-data property is actually used in the toolchain. For example, PPRINT (pretty print) will create an external representation of a program as text, from the source code in data form. EVAL will take the internal representation and execute it somehow. The macro expander gets the internal representation and transforms it.
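
A sketch of that round trip in Common Lisp (the exact WHEN expansion is implementation-defined):

  (defvar *src* (read-from-string "(when t (print :hi))"))
  (macroexpand *src*) ; => e.g. (IF T (PROGN (PRINT :HI)))
  (pprint *src*)      ; prints (WHEN T (PRINT :HI)) as text
  (eval *src*)        ; prints :HI and returns :HI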

> What better name for such a property than "homoiconicity"?

Don't know, but 'homoiconic' makes no sense to me. 'Iconicity' is a complex concept (or several related ones), and it's not clear to me what the obvious meaning of homo-iconicity would be...

One can define a meaning for homo-iconicity, though. But I haven't seen a clear and/or formal one yet.


I can manipulate my Lisp-program as a set of nested lists, which is the same structure as I see in my source-code-editor.

What I see in the editor is the same structure of nested lists as when I program and execute my macro-definition. My source-code has a 1-to-1 correspondence to the data-structure which is the internal representation of my program (when I execute the macro).

The internal (what I manipulate in my code) and external (what I manipulate in my text-editor) representations have the same structure. They are not the "same". But they have the same structure. That is a pretty fantastic property for the language, since it keeps things simple: I don't need to do a mental translation from the source-code to the structure I can manipulate in my program.

Now it seems to me that someone somewhere came up with the name 'homoiconicity' for this property. I think that is a much better name than anything else I can think of.

'homo' means 'same' (Greek) and 'icon' refers to “likeness, image, portrait”. So the source-code of a Lisp-program is a "picture of the program itself, a picture of itself". Very homo-iconic.

> One can define a meaning for homoiconicity, though.

It's not like we have words for which we must "define a meaning". It is that we have important concepts which deserve to have a word denoting them. We don't need to "define" a meaning for the word 'homoiconic', we need a word for the generally interesting property of languages like Lisp discussed above. Ah but we already have a word for that: "Homoiconic".


> structure

I don't see the connection of this kind of structure with 'iconicity'. Iconicity is usually more concerned with the form of a symbolic shape and its connection to its meaning. Structural aspects can play a role, but they are still connected to some meaning (which connects them to human understanding).

Source code in a text editor is not a 'picture' or 'shape' or 'icon'.


Source-code in a text-editor has a shape.

Lisp source-code in a text-editor has a structure expressed with parentheses. 'icon' refers to “likeness, image, portrait”.

The structure of Lisp source-code is the same as the structure of the internal representation of the same program produced from that source-code which your Lisp program can see and manipulate. There is "likeness", meaning iconicity, between the two representations, because the two structures are "alike".


> Source-code in a text-editor has a shape.

not an iconic shape

> 'icon' refers to “likeness, image, portrait”.

source code is none of that. it's text.


Source-code is not only text. It has structure which must follow specific syntactic rules. It is not free-form text like what we are writing here. It has structure which is the same thing as saying it has 'shape'. And that shape "forms" an image, is iconic, for the internal structure of the program which you see when you manipulate it within your Lisp-program.

An image is not the same thing as the thing being imaged. Ceci n'est pas une pipe. But you can see the structure of the thing being represented by looking at the image, the "icon". Why? Because the structure of the image is (to a large degree) the same as the structure of the thing being imaged. Similarly, the syntactic structure of Lisp source-code lets us see the structure of the program that is generated by that source-code.

It is iconic. It is iconic to the thing it represents, the program, which in a sense is the same thing, just a different, but structurally isomorphic representation of the program itself. It is homoiconic.


> It has structure which is the same thing as saying it has 'shape'.

Not really. Shape is a distinct visual quality.

> Similarly, the syntactic structure of Lisp source-code lets us see the structure of the program that is generated by that source-code.

I find that questionable.

> It is iconic. It is iconic to the thing it represents, the program, which in a sense is the same thing, just a different, but structurally isomorphic representation of the program itself. It is homoiconic.

No, 'iconic' is concerned with different things.

> but structurally isomorphic representation of the program itself

That has very little to do with iconic representation. Seeing textual source code as an iconic representation of the machine representation is stretching the concept beyond its intended meaning (about shape, form, graphical representation, ...).


Even if that fits under Doug McIlroy's definition of "homoiconic" (which sits in a paywalled 1960 paper), the C preprocessor probably also meets the definition, as do function definitions in Bash. "Homoiconic" is not a word that denotes something special about Lisp that separates it from low-brow hacks. The authors of TRAC were familiar with McIlroy's paper and its neologism, and cheerfully applied the term to their purely text-based system. Either they didn't understand the word, or else it really does cover that kind of thing.


A compiled Lisp function is a first-class Lisp object.


An excellent talk by Stuart Sierra covers much of the same ground: "Homoiconicity: It Is What It Is" https://www.youtube.com/watch?v=o7zyGMcav3c

Spoiler: his talk does not finish with a definition of "homoiconic" either.


My response: clever article, but whatevs.

Lisp is homoiconic because the syntax for code matches the syntax for a list. Keep saying it.


To me this article feels like someone working really hard to find an objection to something (may or may not be true, but it feels that way to me).

The first line of the article's conclusion is "Homoiconicity is a term surrounded by much confusion." I simply do not agree with this assertion.

I was introduced to the term many years ago, and I have read it in numerous contexts, and the meaning has been quite clear in every context I can recall.

I also will keep using it the way I am now, since I see absolutely no reason to stop.


My only problem is that the term is a feature not a benefit. The two most powerful ideas in lisp are the lack of statements (everything is an expression, even atoms) and the macrology (which pretty much requires expression-only, and of course requires the Mooers/Deutsch coinage of...homoiconicity).

Eval is great but unavoidably requires special forms. That's just life: you can't be circular; at best you can be metacircular. It's like the second law of thermodynamics.


This is a great article. It takes a pedantic point about an answer that "everybody knows", and manages to make a compelling case that the way we use this vocabulary is suboptimal.

Great find!


I'm not a big fan of 'homoiconic' either. I'm more in the 'source code as data' camp.

> The first objection concerns the external representation. In the above we simply stated that the external representation is an s-expression. In most practical programming environments, however, the actual representation of program sources is as text files which contain strings of characters. It is only after parsing this text that the representation is really an s-expression. In other words: in practical environments the external representation is not an s-expression, but text.

The point is that Lisp s-expressions are an external representation of data and that Lisp programs are represented as s-expressions. Various programmatic interfaces see this source code as data: for example the function EVAL or the macro expander.

> The second objection concerns the internal representation. Practical implementations of Lisp interpreters do generally not operate actually directly on s-expressions internally for performance reasons.

Actually they do. The definition of a 'Lisp interpreter' is that it operates on source code as data (lists, symbols, ...). Lisp has other ways to execute (as compiled byte code, as compiled native code, etc.), but those are not a Lisp interpreter.


I always had the impression homoiconicity could be used for interesting machine learning applications, because the program could easily modify itself. Did this happen?


It depends on what you mean by application. Self-evolving/learning programs are basically impossible at this point because of the extremely large search space of possible code within the syntax (how many possible programs exist within, say, 10 lines, and of those, how many would actually run without errors and produce something useful?), not to mention it would produce unmaintainable code in the first place.

But what is useful is giving language users the ability to write their own domain-specific compilers: for example, extending the syntax to represent the math more closely, making it easier for non-programmers and better for self-documenting code; doing symbolic analysis to simplify the code and find the best way to solve it; exporting directly to other formats like LaTeX; and even extending the compiler to automatically generate and compile optimized gradient code for any function you write.


It wasn't successful.

It seems that it's more important for the algorithms to be embarrassingly parallel, and so work on big data, than to be flexible.


What exactly does that have to do with machine learning and neural networks?


> embarrassingly parallel

What is embarrassing about being parallelizable?



http://taeric.github.io/CodeAsData.html is where I took my stab. Lisp is the main language I know where you can write a function, and write code that can both symbolically take its derivative and let you use that derivative just like any other function, without having to dive into a bunch of AST-specific structures to do it.

It really is a shame that people think the eval of javascript and python is equivalent to the eval of lisp. It is crazy just how different they are, once you see it.
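
Roughly the kind of thing the linked post demonstrates (my own minimal sketch, sum and product rules only):

  ;; Differentiate an expression held as list data:
  (defun deriv (e x)
    (cond ((numberp e) 0)
          ((eq e x) 1)
          ((symbolp e) 0)
          ((eq (first e) '+)
           `(+ ,(deriv (second e) x) ,(deriv (third e) x)))
          ((eq (first e) '*)
           `(+ (* ,(second e) ,(deriv (third e) x))
               (* ,(deriv (second e) x) ,(third e))))))
  (deriv '(* x x) 'x) ; => (+ (* X 1) (* 1 X))
  ;; The result is code; compile it into an ordinary function:
  (funcall (compile nil `(lambda (x) ,(deriv '(* x x) 'x))) 3) ; => 6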


Kind of echoes a reddit comment I wrote two years ago:

https://www.reddit.com/r/lisp/comments/5h2kdc/homoiconicity/...


What about using the word "homoiconic" to mean what its parts say: "homogeneous icons", in other words a regular and equivalent representation of all its tokens? The article sort of tries to validate its point from a historical approach, but kind of ignores the meaning of the word's compounds. Will continue using it.


The HP 48 calculator seems fairly homoiconic. Binary compiled programs and symbolic ones alike can be pushed on the stack, stored, dropped.


Way ahead of you there, bud.


Ugh. Comp sci has enough Andy Rooneys already.



