
Homoiconicity isn’t the point (2012) - pcr910303
http://calculist.org/blog/2012/04/17/homoiconicity-isnt-the-point/
======
zenhack
Fwiw, I think what's important about homoiconicity isn't so much that the
language uses "boring" data structures for its (intermediate?) syntax tree,
but that both the syntax itself and its representation as an AST are
_simple_ and _obvious_.

Haskell has TemplateHaskell, which can be used for macro-like things, but it's
substantially less ergonomic, not because Haskell isn't "homoiconic", but
because the grammar is actually really complex and non-obvious. There are tons
of little things that you don't think about when writing Haskell code, but you
have to deal with when manipulating it. For example:

[https://hackage.haskell.org/package/template-
haskell-2.15.0....](https://hackage.haskell.org/package/template-
haskell-2.15.0.0/docs/Language-Haskell-TH-Syntax.html#t:SourceStrictness)

That's a node in the AST that stands for something that is at most one
character in the source text, and usually zero. So code manipulating this
stuff gets really verbose and clunky. It's still powerful, but it's not the
same.

As a side project, I'm actually working on an ML-family language with a macro
system. It still has a more traditional ML-style syntax, but it is _simple_,
so working with it should be comparatively ergonomic. In an ML you wouldn't
frequently want to be working with loosely defined data structures anyway; the
first thing you'll do is convert it to a more strongly typed form that
captures what you really want to be manipulating.

~~~
hinkley
I hope I stay active in the industry long enough for people with influence to
start talking about Human Factors as they apply to development tools. Ten
years ago I thought that might be right around now, but today I'd still say
ten years from now. I might be in my fusion powered self-driving car waiting
for that boat to come in.

When I'm digging through a large body of code looking for subtle bugs, I want
the code to be boring but not bland. By that I mean, yes, all of the bits
should be obvious, because I'm having to contend with the Cartesian product of
all of the bits. _But_ if everything is self-similar top to bottom, there are
no landmarks. It becomes very easy to get 'lost' in the code and have trouble
telling whether the next candidate for debugging is 'up', 'down', or sideways
in the call stack.

Fractals are really cool to look at, but they're murder for navigation
purposes.

~~~
coldtea
> _Fractals are really cool to look at, but they're murder for navigation
> purposes._

Are they? They imply that the structure is self-similar, which is a good trait
for a structure, and makes it easy to read it at any level and get what's
going on.

That's what trees are, lists of lists are, strings of characters are, etc.

> _But if everything is self-similar top to bottom, there are no landmarks._

The specific functions called at each level are the landmarks.

~~~
hinkley
What specific functions? That's my point. If you go all in on recursive
design, all the functions, variables, and object names are the same all the
way up and down your graph. There are no specific functions. It's all grey
goo.

~~~
thom
Forgive my failing imagination, but can you give some concrete examples of
what you’re describing?

~~~
tabtab
I cannot speak for the others here, but with languages like, say, JavaScript,
the symbols usually represent something: {...} will usually represent a block
of code, and [...] will usually represent an array-like index.

With Lisp you don't have such visual cues; you have to read the function name
and perform a mental translation (lookup) of that function name to "find" a
purpose in order to know what the "parenthesized unit" is. Thus, it takes more
mental steps to compute the general meaning of the code.

One is mentally searching (mapping) on name, not visual appearance.
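For what it's worth, the contrast can be sketched in a few lines of JavaScript
(the names here are invented for illustration): each delimiter shape alone
tells you what category of thing you're looking at, before you read any names.

```javascript
// The delimiter shape alone signals the category, before any name lookup:
const xs = [10, 20, 30];          // [...] => array literal / index
function sum(a, b) {              // {...} => code block
  return a + b;                   // (...) => call or grouping
}
const total = sum(xs[0], xs[2]);  // mixing all three is still visually parseable
```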

Lisp fans seem to perform this lookup faster than average. Whether it's
because they've been doing it for so long or they have an inborn knack is
unknown. It would make a fascinating area of research.

I tried to get the hang of name-based fast recognition, but was progressing
too slowly for my comfort.

~~~
lispm
Lisp code usually has a complex, tree-like indentation and layout. Indentation
is provided as a standard service by editors and by Lisp itself. See the
function PPRINT as the interface to the pretty printer, which does layout and
indentation of source code.

We look for visual tree patterns. For example the LET special form:

    
    
      LET         
             BINDINGS
    
         BODY
    
    

or more detailed

    
    
      LET         
             VAR1  VALUE1
             VAR2  VALUE2
             ...
    
         BODYFORM1
         BODYFORM2
         ...
    

The list of binding pairs is another common pattern.

There is a small number of tree patterns which are used in the core operators
of Lisp. Once you've learned them, reading Lisp is much easier than most
people think.

    
    
      CL-USER 13 > (pprint '(let ((one-number 1) (two-numbers 2) (three-numbers 3)
      (four-numbers 4) (five-numbers 5) (six-numbers 6))
      (+ one-number two-numbers three-numbers
      four-numbers five-numbers six-numbers)))
    
      (LET ((ONE-NUMBER 1)
            (TWO-NUMBERS 2)
            (THREE-NUMBERS 3)
            (FOUR-NUMBERS 4)
            (FIVE-NUMBERS 5)
            (SIX-NUMBERS 6))
        (+ ONE-NUMBER
           TWO-NUMBERS
           THREE-NUMBERS
           FOUR-NUMBERS
           FIVE-NUMBERS
           SIX-NUMBERS))

~~~
tabtab
All indentation does is tell you that there is a hierarchy of some sort. The
fact that something is one level deeper than another still doesn't tell me
generally what it does, because that depends on what the parent(s) does. And
it's not a difference maker, because "regular" languages can also use indents.

Further, the coder decides the indentation, and I'm not convinced it's
consistent enough. In my example, a curly brace is a curly brace regardless of
a coder's preference. It's enforced by the rules of the language, not the
coder.

Another thing I'd like to point out is that languages like JavaScript provide
two levels of common abstraction. Using curly braces, parentheses, and square
brackets as an example, you spot them and immediately know the forest-level
category something belongs to: code block, function call, or array/structure
index.

With Lisp, all you are guaranteed to have is the function name (the first
element), for which you have to do a mental name-to-purpose lookup that could
involve thousands of names. Splitting the lookup into levels improves the
mental lookup performance, at least in my head.

It also helps one quickly know to ignore something. For example, if I'm
looking for a code block, I know I can probably ignore array indexes, and vice
versa. It's forest-level exclusion, which is hard to do with single-level
lookups because the list is too long: it has to contain the general category
of each name. It's extra mental accounting.

Hard-wiring big-picture categories into the syntax allows quickly making
forest-level reading decisions, and individual coders generally can't change
them.

When it comes to team-level readability, consistency usually trumps
abstraction and many other things.

Maybe Lisp can do something similar with "let", bindings, and body, but then
it starts to resemble "regular" languages, along with their drawbacks, which
typically means less abstraction ability. Consistency "spanks the cowboys",
both the good cowboys and the bad cowboys. But at least you know what you
have, and can estimate and plan accordingly.

~~~
nickbauman
With Lisp you stop thinking about syntax rules (because the program is in a
very simple form), so you can focus purely on semantics. Some things, like you
say, are too overloaded in Clisp, for example:

    
    
      (defun averagenum 
        (n1 n2 n3 n4)
        (/ ( + n1 n2 n3 n4) 4))
    

Clojure recognizes this goo problem and expresses the parameters in a vector
instead of in a list, the way Clisp does:

    
    
      (defn averagenum 
        [n1 n2 n3 n4]
        (/ ( + n1 n2 n3 n4) 4))
    

Helps a lot.

~~~
kbp
That isn't how that defun would normally be formatted, though. Lisp knows that
the 'body' part starts with the 3rd argument (the same way Clojure knows it
starts with the second), so it indents the lambda list farther right (if it
doesn't fit on the first line, otherwise it would go there).

To my eyes, [] aren't distinct enough from () to make the second style
preferable. I'd rather have indentation to set it apart.

~~~
nickbauman
I just made them consistent. It certainly helps me visually. Also a list
implies you intend to evaluate it by executing a function. A vector implies
you do not intend to call a function.

~~~
lispm

      (defun averagenum (n1 n2 n3 n4)
        (/ (+ n1 n2 n3 n4)
           4))
    

The typical pattern is

    
    
      DEFSOMETHING name arglist
       
        body
    

A Lisp programmer reads those structural patterns, not the delimiters.

Lisp programming is more about thinking of trees of code and their possible
manipulation - even independent of a visual notation and especially
independent of the exact delimiter used.

In shape recognition, the delimiters are much less important than the shape
itself.

~~~
nickbauman
"A Lisp programmer" ... Sheesh, I am a Lisp programmer, man, and I assure you
I know what trees of code independent of visual notation are. You're making a
point that doesn't need to be made here. It's this kind of phrasing that
really turns people off from the Lisp community.

I wrote it that way so it's easier for non-lisp programmers to compare with
what they're more used to as well.

~~~
lispm
That doesn't help them. Explain it like it is. Lisp is different from what
they are used to.

------
DonHopkins
This may be missing the point, but PostScript is not only homoiconic, but also
point-free!

[https://en.wikipedia.org/wiki/Tacit_programming#Stack-
based](https://en.wikipedia.org/wiki/Tacit_programming#Stack-based)

[https://en.wikipedia.org/wiki/Talk%3AHomoiconicity#PostScrip...](https://en.wikipedia.org/wiki/Talk%3AHomoiconicity#PostScript_is_homoiconic)

[https://news.ycombinator.com/item?id=18317280](https://news.ycombinator.com/item?id=18317280)

>The beauty of your functional approach is that you're using PostScript code
as PostScript data, thanks to the fact that PostScript is fully homoiconic,
just like Lisp! So it's excellent for defining and processing domain specific
languages, and it's effectively like a stack based, point free or "tacit,"
dynamically bound, object oriented Lisp!

[https://medium.com/@donhopkins/the-shape-of-psiber-space-
oct...](https://medium.com/@donhopkins/the-shape-of-psiber-space-
october-1989-19e2dfa4d91e#506e)

>Interacting with the Interpreter: In PostScript, as in Lisp, instructions and
data are made out of the same stuff. One of the many interesting implications
is that tools for manipulating data structures can be used on programs as
well.

This is the point:

[https://www.youtube.com/watch?v=z5y6L6He8Bo](https://www.youtube.com/watch?v=z5y6L6He8Bo)

~~~
agumonkey
Maybe that's why I always loved both stack-based concatenative languages and
point-free Haskell.

------
choeger
The point of homoiconicity is that your macro language is your language. And
your data structure language is your language. As well as your AST language.
They are all the same.

So the author is right to observe that the "intermediate AST" and the "AST"
share the same concrete syntax, and that macros transform one into the other.
But macros are also _defined_ in that language. Furthermore, the output of
your program is definitely in that language.

~~~
hinkley
There are two features of Jai I hope turn up in other languages. I'm
fascinated by the 'struct of arrays' data pattern (columnar vs row-oriented
storage, in effect), but also the ability to declare functions that run at
compile time instead of run time, instead of macros.

~~~
pcwalton
> declare functions that run at compile time instead of run time, instead of
> macros.

Lots of languages have this, including C++.

~~~
thethirdone
Jai does it to a degree that is certainly not common.

C++ in particular only has true guaranteed compile time execution for
constexpr functions in static_asserts and a few other small cases.

Jai on the other hand can do pretty much anything at compile time.

~~~
unlinked_dll
constexpr is Turing complete, and its only limit is the recursion depth of the
compiler, which can be changed.

And as of C++17/20, some of the weird restrictions, like not being able to use
conditionals and such, aren't present anymore.

Also if you’re a masochist, the template system is Turing complete.

~~~
thethirdone
I was not talking about the turing completeness of constexpr.

There are only a few contexts in which constexpr functions are guaranteed to
be evaluated at compile time. This contrasts with Jai's compile-time-only
functions.

Additionally, Jai can perform arbitrary IO at compile time, which is simply
not possible with C++. One of the early demos had a program which played a
video game at compile time.

------
dang
A thread from 2018:
[https://news.ycombinator.com/item?id=16387222](https://news.ycombinator.com/item?id=16387222)

Discussed at the time (a little but quite well):
[https://news.ycombinator.com/item?id=3854262](https://news.ycombinator.com/item?id=3854262)

------
didibus
Homoiconicity means same representation. Literally, homo means same, and icon
means sign or representation. The source code and the data structures are
represented the same way, using the same iconography, aka syntax.

What's the point of that? Well, let's see... Why is working with JSON in
JavaScript so much better than in most other languages?

When the syntax for data structures, code, data serialization, configuration,
etc. is exactly the same, it is really harmonious to work with. That's one of
the great things about homoiconicity.

The article is saying that macros are the point, and macros are great for
sure, and they are one of the points, but homoiconicity itself is also a great
point. It is useful even in non-macro scenarios, and it is even more useful
when combined with macros, since it makes writing and reading them that much
easier.

------
DonHopkins
Maybe what Lisp needs to make it popular with kids these days is a hip new
syntax.

Instead of nested in-and-out bubbles like (foo (bar)), it could have nested
up-and-down ramps like \foo \bar// or /foo /bar\\\, to represent a "change of
level".

That way you could have positive and negative nesting, and turn programs
inside-out!

Y-Combinator:

    
    
        \defun Y \f/
          \\lambda \g/ \funcall g g//
          \lambda \g/
            \funcall f \lambda \&rest a/
                         \apply \funcall g g/ a//////
    

Y-Uncombinator:

    
    
        /defun Y /f\
          //lambda /g\ /funcall g g\\
          /lambda /g\
           /funcall f /lambda /&rest a\
                        /apply /funcall g g\ a\\\\\\

~~~
gchamonlive
Aren't you just exchanging the pair (,) for /,\ or \,/? What real benefit does
that imply, apart from aesthetics?

~~~
DonHopkins
Purely aesthetics! ;) I wouldn't want to mess up a good thing. You would just
have a positive and a negative way of writing the same thing.

You could turn the paren bubbles inside-out like )foo )bar(( but that doesn't
look as cool to me as flipping the ramps upside-down like /foo /bar\\\ and
\foo \bar//.

------
martyalain
I wonder what you could say about this foreign dialect of lambda-calculus,
lambdatalk:
[http://lambdaway.free.fr/lambdaspeech/?view=lambda](http://lambdaway.free.fr/lambdaspeech/?view=lambda)
or [http://lambdaway.free.fr/](http://lambdaway.free.fr/), where the evaluator
is mainly built on a single regexp going back and forth on the code string,
directly replacing s-expressions with words. Just read and replace, without
parsing.
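Not speaking for lambdatalk itself, but the general "evaluate by textual
replacement" idea can be sketched in JavaScript (the `{add ...}`/`{mul ...}`
forms below are invented for illustration): a single regexp finds the
innermost form and rewrites it to its value until nothing is left to replace.

```javascript
// Evaluate by repeated text substitution, with no parse tree:
function evalByReplace(src) {
  const inner = /\{(add|mul) ([^{}]*)\}/;  // innermost form: no nested braces
  while (inner.test(src)) {
    src = src.replace(inner, (_, op, args) => {
      const ns = args.trim().split(/\s+/).map(Number);
      return String(op === "add" ? ns.reduce((a, b) => a + b, 0)
                                 : ns.reduce((a, b) => a * b, 1));
    });
  }
  return src;
}
// evalByReplace("{add 1 {mul 2 3}}") === "7"
```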

~~~
DonHopkins
TCL is kind of like Lisp with strings instead of s-expressions, if you squint
at it right and hold your nose. All evaluation is simply text substitution.

But TCL is not nearly as powerful or efficient as Lisp.

TCL's historic advantages (which were unique and important in 1988) are that
it's free, easy to integrate with C code, and it comes with a nice user
interface toolkit: Tk, which also has a great interactive canvas drawing api.

Tk is nice because it was designed around TCL from day one, which vastly
simplified its design, since it didn't suffer from Greenspun's tenth rule like
most GUI toolkits do, because it already had half of Common Lisp: TCL.

[https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule](https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule)

>Any sufficiently complicated C or Fortran program contains an ad-hoc,
informally-specified, bug-ridden, slow implementation of half of Common Lisp.

Advantages of Tcl over Lisp (2005) (tcl.tk)

[https://wiki.tcl-lang.org/page/Advantages+of+Tcl+over+Lisp](https://wiki.tcl-
lang.org/page/Advantages+of+Tcl+over+Lisp)

[https://news.ycombinator.com/item?id=15578238](https://news.ycombinator.com/item?id=15578238)

------
lmm
Yeah, that's... exactly what homoiconicity means?

------
skybrian
This seems similar to Ant's relationship to XML?

~~~
DonHopkins
That Ant is a domain specific language for translating XML into Java stack
traces?

~~~
skybrian
I meant that Ant is built on a generic language for representing data.

But XML isn't a great choice and JSON wouldn't work well either. S-expressions
are popular with Lisp programmers and unpopular with most other people.

It seems like there might be some other solution?

~~~
DonHopkins
My remark was just an old Java joke I repurposed for Ant!

"Java is a DSL for taking large XML files and converting them to stack
traces." -Andrew Back

[https://www.reddit.com/r/programming/comments/eaqgk/java_is_...](https://www.reddit.com/r/programming/comments/eaqgk/java_is_a_dsl_for_taking_large_xml_files_and/)

But in all seriousness:

OpenLaszlo used XML with embedded JavaScript in a way that let you extend XML
by defining your own tags in XML+JavaScript. I've done a lot of work with it,
and once you make your peace with XML (which seemed like a prudent thing to do
at the time), it's a really productive enjoyable way to program! But that's
more thanks to the design of OpenLaszlo itself, rather than XML.

[https://en.wikipedia.org/wiki/OpenLaszlo](https://en.wikipedia.org/wiki/OpenLaszlo)

OpenLaszlo (which was released in 2001) inspired Adobe Flex (which was
released in 2004), but Flex missed the point of several of the most important
aspects of OpenLaszlo (first and foremost being cross platform and not locking
you into Flash, which was the entire point of Flex, but also the declarative
constraints and "Instance First Development" and the "Instance Substitution
Principle", as defined by Oliver Steele).

[https://en.wikipedia.org/wiki/Apache_Flex](https://en.wikipedia.org/wiki/Apache_Flex)

[https://blog.osteele.com/2004/03/classes-and-
prototypes/](https://blog.osteele.com/2004/03/classes-and-prototypes/)

The mantle of constraint based programming (but not Instance First
Development) has recently been taken up by the "Reactive Programming" craze
(which is great, but would be better with a more homoiconic language that
supported Instance First Development and the Instance Substitution Principle,
which are different but complementary features with a lot of synergy). The
term "Reactive Programming" describes a popular old idea: what spreadsheets
had been doing for decades.

OpenLaszlo and Garnet (a research user interface system written by Brad Myers
at CMU in Common Lisp) were exploring applying automatic constraints to user
interface programming. Garnet started in the early 1990's. Before that, Ivan
Sutherland's Sketchpad explored constraints in 1963, and inspired the Visual
Geometry Project in the mid 1980's and The Geometer's Sketchpad in 1995.

[https://en.wikipedia.org/wiki/Reactive_programming](https://en.wikipedia.org/wiki/Reactive_programming)

[http://www.cs.cmu.edu/afs/cs/project/garnet/www/garnet-
home....](http://www.cs.cmu.edu/afs/cs/project/garnet/www/garnet-home.html)

[https://en.wikipedia.org/wiki/Sketchpad](https://en.wikipedia.org/wiki/Sketchpad)

[http://math.coe.uga.edu/TME/Issues/v10n2/4scher.pdf](http://math.coe.uga.edu/TME/Issues/v10n2/4scher.pdf)

[https://en.wikipedia.org/wiki/The_Geometer%27s_Sketchpad](https://en.wikipedia.org/wiki/The_Geometer%27s_Sketchpad)

I've written more about OpenLaszlo and Garnet:

What is OpenLaszlo, and what's it good for?

[https://web.archive.org/web/20160312145555/http://donhopkins...](https://web.archive.org/web/20160312145555/http://donhopkins.com/drupal/node/124)

>Declarative Programming: Declarative programming is an elegant way of writing
code that describes what to do, instead of how to do it. OpenLaszlo supports
declarative programming in many ways: using XML to declare JavaScript classes,
create object instances, configure them with automatic constraints, and bind
them to XML datasets. Declarative programming dovetails and synergizes with
other important OpenLaszlo techniques including objects, prototypes, events,
constraints, data binding and instance first development.

Constraints and Prototypes in Garnet and Laszlo
[https://web.archive.org/web/20160405015129/http://www.donhop...](https://web.archive.org/web/20160405015129/http://www.donhopkins.com/drupal/node/69)

>Garnet is an advanced user interface development environment written in
Common Lisp, developed by Brad Myers (the author of the article). I worked
for Brad on the Garnet project at the CMU CS department back in 1992-3.

[https://news.ycombinator.com/item?id=17360883](https://news.ycombinator.com/item?id=17360883)

DonHopkins on June 20, 2018 | parent | favorite | on: YAML: probably not so
great after all (2017)

>That was also one of the rationales behind TCL's design. John Ousterhout
explained in one of his early TCL papers that, as a "Tool Command Language"
like the shell but unlike Lisp, arguments were treated as quoted literals by
default (presuming that to be the common case), so you don't have to put
quotes around most strings, and you have to use punctuation like ${}[] to
evaluate expressions.

>TCL's syntax is optimized for calling functions with literal parameters to
create and configure objects, like a declarative configuration file. And it's
often used that way with Tk to create and configure a bunch of user interface
widgets.

>Oliver Steele has written some interesting stuff about "Instance-First
Development" and how it applies to the XML/JavaScript based OpenLaszlo
programming language, and other prototype based languages.

>Instance-First Development: [https://blog.osteele.com/2004/03/classes-and-
prototypes/](https://blog.osteele.com/2004/03/classes-and-prototypes/)

>The equivalence between the two programs above supports a development
strategy I call instance-first development. In instance-first development, one
implements functionality for a single instance, and then refactors the
instance into a class that supports multiple instances.

>[...] In defining the semantics of LZX class definitions, I found the
following principle useful:

>Instance substitution principle: An instance of a class can be replaced by
the definition of the instance, without changing the program semantics.

>In OpenLaszlo, you can create trees of nested instances with XML tags, and
when you define a class, its name becomes an XML tag you can use to create
instances of that class.

>That lets you create your own domain specific declarative XML languages for
creating and configuring objects (using constraint expressions and XML data
binding, which makes it very powerful).

>The syntax for creating a bunch of objects is parallel to the syntax of
declaring a class that creates the same objects.

>So you can start by just creating a bunch of stuff in "instance space", then
later on as you see the need, easily and incrementally convert only the parts
of it you want to reuse and abstract into classes.

>What is OpenLaszlo, and what's it good for?
[https://web.archive.org/web/20160312145555/http://donhopkins...](https://web.archive.org/web/20160312145555/http://donhopkins.com/drupal/node/124)

>Constraints and Prototypes in Garnet and Laszlo:
[https://web.archive.org/web/20160405015129/http://www.donhop...](https://web.archive.org/web/20160405015129/http://www.donhopkins.com/drupal/node/69)

[https://news.ycombinator.com/item?id=11232154](https://news.ycombinator.com/item?id=11232154)

>DonHopkins on Mar 6, 2016 | parent | favorite | on: Garnet – a graphical
toolkit for Lisp

>Yay Garnet! ;) I worked on Garnet with Brad Myers at CMU, on the PostScript
printing driver. Brad is really into mineral acronyms, and I came up with an
acronym he liked: "GLASS: Graphical Layer And Server Simplifier".

>Garnet had a lot of cool ideas in it, especially its constraints and
prototype based object system.

>A few years ago I wrote up a description of a similar system called
OpenLaszlo, and how OpenLaszlo's constraint system compared with Garnet's
constraint system. Garnet had a lazy "pull" constraint system, while Laszlo
had an event/delegate based "push" system. Each used a compiler to
automatically determine the dependencies of constraint expressions.

>Constraints and Prototypes in Garnet and Laszlo

>[https://web.archive.org/web/20160405015129/http://www.donhop...](https://web.archive.org/web/20160405015129/http://www.donhopkins.com/drupal/node/69)

>The problem we ran into with supporting PostScript with Garnet is that we
wanted to use Display PostScript, but Garnet was using CLX, the Common Lisp X
Protocol library which was of course totally written purely in Lisp.

>Of course CLX had no way to use any client side libraries that depended on
XLib itself. I'd steer clear of using anything that depends on CLX for
anything modern. (Does CLX still even exist?)

>Brad Myers produced "All the Widgets," which must have some Garnet demos in
there somewhere! [1]

>[1] All the Widgets, sponsored by the ACM CHI 1990 conference, to tell the
history of widgets up until then:
[https://www.youtube.com/watch?v=9qtd8Hc90Hw](https://www.youtube.com/watch?v=9qtd8Hc90Hw)

mpweiher on Mar 6, 2016 [-]

>>
[https://web.archive.org/web/20160405015129/http://www.donhop...](https://web.archive.org/web/20160405015129/http://www.donhopkins.com/drupal/node/69)
"Constraints are like structured programming for variables"

>Love it! Having been very interested in constraint programming and also
dabbled a bit here and there[1][2][3], what do you think is holding back
constraint programming?

>[1]
[http://2016.modularity.info/event/modularity-2016-mvpapers-c...](http://2016.modularity.info/event/modularity-2016-mvpapers-
constraints-as-polymorphic-connectors)

>[2] [http://blog.metaobject.com/2014/03/the-siren-call-of-kvo-
and...](http://blog.metaobject.com/2014/03/the-siren-call-of-kvo-and-cocoa-
bindings.html)

>[3] [http://blog.metaobject.com/2015/09/very-simple-dataflow-
cons...](http://blog.metaobject.com/2015/09/very-simple-dataflow-constraints-
with.html)

DonHopkins on Mar 6, 2016 [-]

>> what do you think is holding back constraint programming?

>All the constraints. ;)

>As you mentioned, there are tricky two-way mathematical constraints, like
Sutherland's Sketchpad [1] and descendants [2], and Gosling's PhD Thesis [3],
where the system understands the constraint expressions mathematically and
transforms them algebraically.

>[1] Ivan Sutherland's Sketchpad:
[https://en.wikipedia.org/wiki/Sketchpad](https://en.wikipedia.org/wiki/Sketchpad)

>[2] Geometer's Sketchpad:
[https://en.wikipedia.org/wiki/The_Geometer%27s_Sketchpad](https://en.wikipedia.org/wiki/The_Geometer%27s_Sketchpad)

>[3] Algebraic Constraints; James Gosling; CMU CS Department PhD Thesis:
[http://digitalcollections.library.cmu.edu/awweb/awarchive?ty...](http://digitalcollections.library.cmu.edu/awweb/awarchive?type=file&item=362626)

>And there are simpler one-way data flow constraints like Apple's KVO
notification [4], Garnet's KR based constraints [5], and OpenLaszlo's
event/delegate based constraints [6].

>[4] Introduction to Key-Value Observing Programming Guide:
[https://developer.apple.com/library/mac/documentation/Cocoa/...](https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/KeyValueObserving/KeyValueObserving.html)

>[5] KR: Constraint-Based Knowledge Representation:
[https://docs.google.com/viewer?url=http%3A%2F%2Fwww.cs.cmu.e...](https://docs.google.com/viewer?url=http%3A%2F%2Fwww.cs.cmu.edu%2Fafs%2Fcs%2Fproject%2Fgarnet%2Fdoc%2Fkr%2Fkr-
manual.ps)

>[6] Oliver Steele: Instance-First Development:
[http://blog.osteele.com/posts/2004/03/classes-and-
prototypes...](http://blog.osteele.com/posts/2004/03/classes-and-prototypes/)

>As you pointed out, KVO constraints simply say that object.x = otherObject.y,
so there's not much to them.

>I think one thing holding back constraint programming is that they require an
interpreter or compiler to understand them, or the programmer to write code in
a constrained syntax.

>Garnet's KR constraints are written as Lisp expressions implemented by Lisp
macros, that parse the expressions and recognize certain expressions like
"gvl" for "get value", and named path expressions. KR wires up the dependency
graph based on that information, and marks the constraint as invalid if any of
the links along the dependency path change, as well as when any of the final
values the expressions reference change. But it doesn't understand the
mathematical expressions themselves. At the time I was working on it, it
didn't know how to figure out which branches of conditional expressions
mattered, so it would assume it depended on everything in the expression
(i.e. for the C expression "size = window.landscape ? parent.width :
parent.height", if window.landscape is true, it depends on parent.width,
else it depends on parent.height). It only recalculates the constraint
values lazily when you read them ("pull" constraints).

>OpenLaszlo constraints are written as JavaScript expressions that the
OpenLaszlo compiler parses, and it creates some JavaScript data and hidden
methods behind the scenes that go along with the class, which are used at
runtime to keep track of all the dependencies. You don't have to use special
expressions in constraints to read values, but you do have to use
object.setValue("key", value) to write values. (This was because OpenLaszlo
was targeting the Flash runtime, and that was the most efficient trade-off,
since Flash didn't support property setters like modern JavaScript does.)

>OpenLaszlo constraints used a "push" model of propagating all dependent
changes forward when you called "setValue", because that was the best trade-
off at the time for speed and usability and how it was intended to be used.
But with getters and setters you could implement a more convenient constraint
system that didn't put so many constraints on the programmer and how you use
them.

------
fakegalitarian
> I’ve never really understood what “homoiconic” is supposed to mean.

The author gives a great candidate definition but doesn't want us to call it
homoiconicity, for some reason.

> What’s this intermediate syntax tree? It’s an almost entirely superficial
> understanding of your program: it basically does paren-matching to create a
> tree representing the surface nesting structure of the text. This is nowhere
> near an AST, but it’s just enough for the macro expansion system to do its
> job.

I was taught that macro expansion is just another AST transformation, and
homoiconicity makes this easy because the syntax tree can be transformed
without understanding the grammar.
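That's how it was usually demonstrated to me as well. A rough sketch in
JavaScript, using nested arrays as a stand-in for s-expressions (the `square`
"macro" is invented for illustration): the transformation only needs the tree
shape, not the grammar.

```javascript
// Macro expansion as a plain tree transformation: ["square", e] => ["*", e, e]
function expand(form) {
  if (!Array.isArray(form)) return form;             // atoms pass through
  const [head, ...rest] = form.map(x => expand(x));  // expand subtrees first
  if (head === "square") return ["*", rest[0], rest[0]];
  return [head, ...rest];
}
// expand(["+", 1, ["square", 4]]) => ["+", 1, ["*", 4, 4]]
```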

But I guess this whole article is an exercise in un-defining things.

Also, (2012).

------
gowld
This is a better article than the OP:
[https://en.wikipedia.org/wiki/Homoiconicity](https://en.wikipedia.org/wiki/Homoiconicity)

