
Lisp as the Maxwell Equations of Software (2012) - jgrodziski
http://www.michaelnielsen.org/ddi/lisp-as-the-maxwells-equations-of-software/
======
cousin_it
I admit that I don't completely understand Lisp's claim to fame. Yes, programs
in the language are represented by a built-in data type, and you can write a
self-interpreter quite easily. But the same is true for a simple assembly
language, if you know the instruction encoding! You can represent a program as
a code pointer, and it's easy to write an analog of "eval" by hand, using just
a handful of arithmetic instructions and conditional jumps. What's more, such
an "eval" won't even depend on a garbage collector or other niceties that Lisp
self-interpreters take for granted.
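
Concretely, in Python and with a made-up toy encoding rather than any real ISA, such an "eval" is just a fetch/decode/dispatch loop:

    def run(program, acc=0):
        # Toy "eval": fetch an (opcode, operand) pair, dispatch, repeat.
        pc = 0
        while pc < len(program):
            op, arg = program[pc]
            if op == "ADD":        # acc += arg
                acc += arg
            elif op == "JNZ":      # jump to arg if acc is nonzero
                if acc != 0:
                    pc = arg
                    continue
            elif op == "HALT":
                break
            pc += 1
        return acc

    # Count down from 3: ADD -1, loop back while nonzero, then halt.
    print(run([("ADD", -1), ("JNZ", 0), ("HALT", 0)], acc=3))  # -> 0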

~~~
RodgerTheGreat
Lisp gets a great deal of undue credit as being some kind of timeless
enlightened wisdom, a view I see as ignorant of the history of the language's
development.

It took years to develop approaches to garbage collection (the earliest
prototypes used bump allocators and crashed when they exhausted the heap),
years to finalize the syntax of the language (you'll notice that the
"Maxwell's equations" are written using M-expressions, a syntax which was
discarded for ease of implementation and later rationalized because
homoiconicity can be handy), and decades to realize how important lexical
scope (versus dynamic) is to maintaining encapsulation. Modern Lisps are
substantially different in syntax and semantics from these earliest ideas;
we're just still calling them Lisp.

It's very easy to make Lisp look elegant when you brush all the implementation
details under the rug.

~~~
Delmania
No language is perfect on release. The timeless wisdom of Lisp, as you call
it, rests largely on macros and lambdas. Those ideas are powerful in their
ability to let you build abstractions and customize the language to your
needs. Basically, Lisp allowed for DSLs before the concept of a DSL was even
recognized.
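
Python can sketch the lambda half of that point, though not the macro half
(macros rewrite the program itself before evaluation, which closures can't
do). A rough embedded mini-language built from closures, with all names made
up for illustration:

    # Closures as a tiny embedded language; every name here is made up.
    def const(v):  return lambda env: v
    def var(name): return lambda env: env[name]
    def add(a, b): return lambda env: a(env) + b(env)
    def let(name, val, body):
        return lambda env: body({**env, name: val(env)})

    # (let ((x 2)) (+ x 40)) written in the embedded DSL:
    expr = let("x", const(2), add(var("x"), const(40)))
    print(expr({}))  # -> 42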

~~~
vinceguidry
You can't call it timeless if it took time to develop.

~~~
nights192
Yeah, you're right. Pythagoras' theorem obviously isn't timeless; the fool
took time devising the formula!

~~~
vinceguidry
Pythagoras did not discover the theorem, it's just named after him. He _may_
have been the first to record a proof of it. It's conceivable that the very
first peoples to become seriously interested in geometrical calculation would
have discovered it as a matter of course. It does meet the definition of
timelessness whereas Lisp does not.

As Wikipedia notes: "Mesopotamian, Indian and Chinese mathematicians are all
known to have discovered the theorem independently and, in some cases, provide
proofs for special cases."

~~~
nights192
... which is irrelevant. The point posed was that the ephemeral form heralds
the so-called 'revolutionary' concepts, and stating that a prolonged delivery
of the concepts invalidates their usefulness is the result of an overly
literal interpretation of a common phrase.

~~~
vinceguidry
Usefulness? When did I say Lisp wasn't useful?

------
jgrodziski
Hi, submitter here.

I stumbled upon this article in my bookmarks while populating my latest side
project ([http://www.learn-computing-directory.org/languages-and-progr...](http://www.learn-computing-directory.org/languages-and-programming/compilers-fundamentals.html))
and thought about submitting it to HN. By the way, feel free to give me
feedback about the directory! (It's still in a very alpha state, but already
online.) For the moment I've only filled in the "Algorithms and Data
Structures", "Compilers" and "Theory of computation" topics.

Two great articles that complement the submitted one perfectly are Norvig's
Lisp interpreters in Python:
[http://norvig.com/lispy.html](http://norvig.com/lispy.html) and
[http://norvig.com/lispy2.html](http://norvig.com/lispy2.html). Also, the book
"Understanding Computation"
([http://computationbook.com/](http://computationbook.com/)) can be a great
companion, as it has a section about the lambda calculus.

Jérémie.

------
nabla9
While this is a nice academic view, as a Lisp programmer I'd say the power of
Lisp comes from it being a big ball of mud, see:
[https://en.wikipedia.org/wiki/Big_ball_of_mud#In_programming...](https://en.wikipedia.org/wiki/Big_ball_of_mud#In_programming_languages).

A usable and efficient Common Lisp implementation can be built from just ~25
primitives (more than the core LISP, but still very elegant). An elegant
derivation of the Lisp world does not matter when the atoms and the molecules
can't be distinguished. Is the object system and meta-object protocol
implemented as primitives by the people who designed the Lisp implementation,
or is it an external library? Who cares. Only performance and correct
functioning matter.
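
Those ~25 primitives aren't enumerated here, but for a feel of what
"primitive" means, McCarthy's classic seven operators (roughly the ones the
article's interpreter is built on) can be sketched in a few lines of Python;
a real Common Lisp layers numbers, arrays, CLOS and much more on top:

    # Not the ~25, just McCarthy's classic seven for flavor; Python
    # lists stand in for cons cells, which is a simplification.
    def atom(x):    return not isinstance(x, list) or x == []
    def eq(a, b):   return atom(a) and atom(b) and a == b
    def car(x):     return x[0]
    def cdr(x):     return x[1:]
    def cons(a, d): return [a] + d
    # quote and cond are not functions: they control evaluation itself,
    # so they must live inside eval rather than in a library.

    print(cons("a", ["b", "c"]))     # -> ['a', 'b', 'c']
    print(eq(car(["a", "b"]), "a"))  # -> True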

------
samatman
I understand why Alan Kay said what he did, but a closer analogy would be to
Euclid's Elements. If one follows Lisp's axioms, one ends up with Lispian
Computation: elegant, clean, tail-call optimized.

Other axioms of computing lead the user in different directions, notably
Hindley-Milner. Lisp carves territory in mathematical state space, not
physical reality.

~~~
abecedarius
Maxwell's equations in the usual Heaviside form aren't even the most elegant
way to write them: [http://www.av8n.com/physics/maxwell-
ga.htm](http://www.av8n.com/physics/maxwell-ga.htm)

I'm curious how people might carry that through the analogy.

~~~
MeadowTheory
I'm rather partial to the differential-form version. Something about the neat
almost-symmetry (in the layman's sense) of the two equations. And how
deceptively innocent-looking they are.
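
For reference, modulo unit conventions, the pair being described (F the
electromagnetic field 2-form, J the current 3-form, * the Hodge star) is:

    dF = 0,        d*F = J

and the geometric-algebra form linked above collapses even that near-symmetry
into the single equation ∇F = J.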

------
adamgravitis
I just noticed this was authored by Michael Nielsen... co-author of _the_ text
on quantum computation, and now a huge open-academia advocate. And he's open-
sourced the micro-lisp he wrote in this article:
[https://github.com/mnielsen](https://github.com/mnielsen)

------
mvc
If you're near Boston/Cambridge, the Clojure Meetup Group is showing a live
video of William Byrd talking about this tonight.

[http://www.meetup.com/Boston-Clojure-
Group/events/218650142/](http://www.meetup.com/Boston-Clojure-
Group/events/218650142/)

------
quarterwave
Any chance of a resurgence in Lisp machines? Especially in view of the changes
in CPU architecture due to semiconductor scaling challenges.

~~~
hga
I've been thinking hard about this lately, and the first question for me is
"What would a 21st Century Lisp Machine _mean_?"

Lisp Machines were created in part out of the desire to get the most
performance possible back in the days when CPUs were made out of discrete
low- and medium-scale integration TTL (there were also ECL hot-rods, but
their much greater costs across the board, starting with design, limited them
to proven concepts: mainframes of proven value, supercomputers, and the Xerox
Dorado after the Alto etc. had proven the worth of the concept).

Everyone was limited: maximum logic speeds were pretty low, you could try to
avoid using microcoded synchronous designs (but e.g. Honeywell proved that to
be a terrible idea), and, as noted elsewhere, memory was very dear. E.g. the
Lisp Machine was conceived not long after Intel shipped the first generally
available DRAM chip, a whopping 1,024 bits (which was used along with the
first model of the PDP-11 to provide graphics terminals to the MIT-AI PDP-10),
etc. etc.

So there was a lot to be said for making a custom TTL CPU optimized for Lisp.
And only that, initially: to provide some perspective, the three major
improvements of LMI's LAMBDA CPU over the CADR were using Fairchild's FAST
family of high-speed TTL, stealing one bit from the 8 bits dedicated to tags
to double the address space (no doubt a hack enabled by it having a two-space
copying GC), and adding a neat TRW 16-bit integer multiply chip.

The game radically changed when you could fit all of a CPU on a single silicon
die. And for a whole bunch of well-discussed reasons, to which I would add
Symbolics being very badly managed and LMI being killed off by dirty Canadian
politics, there was no RISC-based Lisp processor; Lisp Machines didn't make
the transition to that era. And now CPUs are so fast, so wide, and have so
much cache (e.g. more L3 cache than a Lisp Machine of old was likely to have
in DRAM) that the hardware case isn't compelling. Although I'm following the
lowRISC project because they propose to add 2 tag bits to the RISC-V
architecture.

So we're really talking about software, and what the Lisp Machine was in that
respect. Well, us partisans of it thought it was the highest-leveraged
software development platform in existence, akin to supercomputers for
leveraging scientists (another field that's changed radically, in part due to
technology, in part due to geopolitics changing for the better).

For now, I'll finish this overly long comment by asking if a modern,
productive programmer could be so without using a web browser along with the
stuff we think of as software development tools. I.e., what would/should the
scope of a 21st Century Lisp Machine be?

~~~
quarterwave
Thanks for the detailed perspective.

My limited & roseate view of a 21st century Lisp machine is based on an old
theme - a massively parallel computing system using bespoke silicon logic
blocks.

As you have noted below, not only are the cache sizes in a modern CPU
monstrous, there are also the compilers optimized for these caches, the
instructions, the branch prediction units, etc. There's no point in ending up
with a chip that is much slower than an equivalent one running a
specially-designed virtual machine, which is itself much slower than MPI.

Dreaming on, such a Lisp machine would need a vast collaborative academic
effort with substantially new IP design, in say the 32nm silicon process node.
That's the most advanced node where lithography is still (somewhat) manageable
for custom IP design.

~~~
hga
Well, there's the first Connection Machine architecture, very roughly
contemporaneous with Lisp Machines. (I had to regretfully tell my friend Danny
Hillis that LMI wouldn't be able to provide Lisp Machines in time for Thinking
Machines Corporation, which had to be formed because the project needed 1-2
analog engineers, whom MIT was structurally unable to pay: no one gets paid
more than a professor. He was really, legitimately pissed off by what
Symbolics did with Macsyma, a sleazy licensing deal to keep it out of
everyone else's hands (they tried to get everyone in the world who'd gotten a
copy of it to relinquish it). It was later neglected, even when it became the
Symbolics cash cow.)

Anyway, if you're not talking ccNUMA, the limitations of which have got me
looking hard at Barrelfish
([http://www.barrelfish.org/](http://www.barrelfish.org/)), e.g. if you're
talking stuff in the land of MPI, again it's going to be very hard to beat
commodity CPUs.

Although, in that dreaming, look at lowRISC:
[http://www.lowrisc.org/](http://www.lowrisc.org/). Looking at things now,
they propose taping out production silicon as soon as 2016, and say 48 and
28nm processes look good. From the site:

 _What level of performance will it have?

To run Linux "well". The clock rate achieved will depend on the technology
node and particular process selected. As a rough guide we would expect
~500-1GHz at 40nm and ~1.0-1.5GHz at 28nm.

Is volume fabrication feasible?

Yes. There are a number of routes open to us. Early production runs are likely
to be done in batches of ~25 wafers. This would yield around 100-200K good
chips per batch. We expect to produce packaged chips for less than $10 each._

And with a little quality time with Google, the numbers look good. Ignoring
the _minor_ detail of NRE like making masks, a single and very big wafer
really doesn't cost all that much, like quite a bit less than $10K.

And we now have tools to organize these sorts of efforts, e.g. crowdsourcing.
But it's not trivial, e.g. one of the things that makes this messy is modern
chips have DRAM controllers, and that gets you heavily into analog land. But
it's now conceivable, which hasn't been true for a very long time, say
starting somewhere in the range between when the 68000 and 386 shipped in the
'80s.

------
unoti
There's nothing inherently unnatural about taking a bottom-up approach to
solving problems. Sometimes, knowing you're going to need a particular tool
and building it first makes sense. I think of it like the cooking technique
mise en place [1], where you lay out all the ingredients before you start
cooking. Neither bottom-up nor top-down is always the most natural way to
attack a problem. Sometimes you can attack a problem from both sides at once.

[http://en.m.wikipedia.org/wiki/Mise_en_place](http://en.m.wikipedia.org/wiki/Mise_en_place)

------
gobengo
What does Lisp provide on top of e.g. the lambda calculus? I'd expect the
latter to be even more fundamental. But I suppose one could reasonably argue
that that wasn't 'Software', just Math.
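
As a rough illustration of how bare the lambda calculus is, here are Church
numerals in Python, numbers built from nothing but functions; Lisp adds
concrete data (atoms, cons cells), a reader, and eval on top:

    # Church numerals: numbers as pure functions, no data types at all.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):
        # Decode by counting how many times f gets applied.
        return n(lambda k: k + 1)(0)

    three = succ(succ(succ(zero)))
    print(to_int(plus(three)(three)))  # -> 6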

------
jheriko
Surely the claim should actually be that a short Lisp compiler or interpreter
is the Maxwell's equations. The analogy is way off the mark for software as a
discipline in total: you can know nothing about Lisp and still understand
pretty much everything... that's not true of classical electromagnetism and
Maxwell's equations.

~~~
one-more-minute
Not necessarily. You learn a ton about electromagnetism before you ever get to
Maxwell's equations.

~~~
MeadowTheory
Faraday sure did.

------
keithgabryelski
For what it's worth, here is a version of a Lisp interpreter I wrote in
Python:
[https://github.com/keithgabryelski/plisp](https://github.com/keithgabryelski/plisp)
I used _Interpreting_LISP_ by Gary D. Knott as a guide.

~~~
lisper
Lisp in Python in 137 LOC:

[http://www.flownet.com/ron/l.py](http://www.flownet.com/ron/l.py)

~~~
keithgabryelski
not really, but it is interesting.

------
kazinator
That half a page of code isn't "Lisp in itself".

I do not see any low-level I/O routines, or a reader to scan expressions and
convert them into objects.

I don't see the actual function call mechanism: where the subroutine linkage
is set up and torn down, and what goes into what register.

I don't see a garbage collector.

A whole bunch of hand-written assembly language made the code on that page
work.
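
For a sense of scale, even the smallest of those missing pieces, the reader,
is real code the half page quietly assumes. A minimal sketch in Python,
loosely in the style of Norvig's lispy linked elsewhere in this thread:

    def tokenize(src):
        # Split source text into parenthesis and atom tokens.
        return src.replace("(", " ( ").replace(")", " ) ").split()

    def parse(tokens):
        # Build a nested list from the token stream, consuming it.
        tok = tokens.pop(0)
        if tok == "(":
            lst = []
            while tokens[0] != ")":
                lst.append(parse(tokens))
            tokens.pop(0)  # drop the closing ")"
            return lst
        return tok

    print(parse(tokenize("(cons (quote a) (quote (b c)))")))
    # -> ['cons', ['quote', 'a'], ['quote', ['b', 'c']]]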

------
charlieflowers
Shouldn't [2012] be appended to the title? I've seen the article discussed on
HN before. [1]

[1]
[https://news.ycombinator.com/item?id=3830867](https://news.ycombinator.com/item?id=3830867)

~~~
eridal
I thought HN won't allow re-posting the exact same URL, so to bypass that,
people add noise to the URL.

Is the date of posting also considered?

~~~
andyjohnson0
As far as I know, re-posts are allowed after some period of time has elapsed.
I don't think this is documented anywhere though.

~~~
samatman
I've always figured the URLs just fall off a queue, so there's no absolute
period of time involved. No actual knowledge, just a guess.

~~~
eridal
Sweet, so we could attempt to find out the queue size!!

------
GFK_of_xmaspast
Disagree with the premise; Maxwell's equations are useful in practice.

------
phkahler
I hate that. The half page of code is not Lisp. Not even close, and I'm not
talking about a standard library. There is no parser in there. Eval is
operating on Lisp data, not Lisp source code. There is no support for
efficient math, or strings, or anything really.

If that is Lisp, then a bytecode interpreter loop is every interpreted
language: read_opcode, inc_pc, call a function from a table indexed by
opcode. It can be written in one line of C. This, BTW, translates to hardware
a lot easier than the half page of Lisp. That one line of code by itself is
also just as useless as the half page of Lisp.
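
In Python rather than one line of C, and with made-up opcode names, that loop
is:

    def vm(code):
        # read_opcode, inc_pc, dispatch through a table -- that's all.
        stack = []
        table = {
            "PUSH": lambda arg: stack.append(arg),
            "ADD":  lambda arg: stack.append(stack.pop() + stack.pop()),
        }
        pc = 0
        while pc < len(code):
            opcode, arg = code[pc]
            pc += 1
            table[opcode](arg)
        return stack

    print(vm([("PUSH", 2), ("PUSH", 40), ("ADD", None)]))  # -> [42]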

It's still interesting, but people need to stop claiming it's Lisp defined in
this tiny little block of code.

