
Lisp as the Maxwell’s Equations of Software (2012) - newswasboring
http://www.michaelnielsen.org/ddi/lisp-as-the-maxwells-equations-of-software/
======
pjc50
I've said this before, but I've come to think of Lisp and Forth as the left-
handed scissors of software: for 10% of the population they're significantly
easier to use, to such an extent that it can feel like a revelation. The
remaining 90% of the population tries it, finds it harder to use, and doesn't
get why others are raving about it.

This would be down to very fundamental differences in thinking and
conceptualisation that are difficult or impossible to "just" teach over. It
requires those arguing over "ease of use" to recognise that it's not a
property of the tool itself, but a function of both the tool and the user and
how well the tool fits the particular, individual user.

~~~
evdubs
What are the fundamental differences that are "difficult or impossible to
'just' teach over"? Looking at a language like Racket, you have classes and
objects, (first class) functions, recursion, "for" iteration, threads,
mutability, etc. Aren't these features found in plenty of languages?

Anecdotally, there seems to be a dislike of Lisp just because of the parens-
based syntax. Do you think the different syntax counts as being such a
fundamental difference for 90% of developers? Is it really so bad to have to
write:

(define x 1)

instead of

let x = 1;

That difference makes it "left-handed scissors"?

~~~
lispm
Racket is not a good example. People learn it as non-interactive. The project
is now also switching to a conventional syntax without s-expressions.

The big 'problems' are a) interactive use (much of Lisp usage is
'interactivity first'), b) the code as data thing and lack of static feedback
(type checks, ...).

The 'code as data' thing is a real hurdle: what is code, what is data, what is
transformed code, what are code transformers, ...? One needs a mental model for
working with code that makes heavy use of metalinguistic programming.
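A minimal sketch of that hurdle, using Python lists to stand in for s-expressions (illustrative only; the names and encoding are mine, not how any real Lisp implements macros):

```python
# In Lisp, code is the same nested-list structure a program manipulates as
# ordinary data, so a "code transformer" is just a function over lists.

# The expression (+ 1 (* x 2)), written as data:
program = ["+", 1, ["*", "x", 2]]

def double_to_add(expr):
    """Macro-like rewrite: turn every (* a 2) into (+ a a)."""
    if isinstance(expr, list):
        expr = [double_to_add(e) for e in expr]
        if len(expr) == 3 and expr[0] == "*" and expr[2] == 2:
            return ["+", expr[1], expr[1]]
    return expr

print(double_to_add(program))  # ['+', 1, ['+', 'x', 'x']]
```

The transformer never evaluates anything; it only rewrites structure, which is the part newcomers have to build a mental model for.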

~~~
evdubs
> Racket is not a good example. People learn it as non-interactive. The
> project now also switches to a conventional syntax without s-expressions.

I can understand that people who have achieved "lisp enlightenment" will view
Racket this way. With respect to replacing s-expressions, I am hopeful that
the original s-expression-based Racket retains prominence even when the
Rhombus language is fleshed out. I know I will continue to use s-expressions.

> a) interactive use (much of Lisp usage is 'interactivity first')

To me, having used Clojure (which hopefully counts as an interactive Lisp) and
worked in environments like A+ (in the family of APL/J/K), this is an amazing
feature and I struggle to see this being such a "fundamental difference ...
that is difficult or impossible to 'teach'" (paraphrasing OP).

> b) the code as data thing and lack of static feedback (type checks, ...).

With many people programming in Javascript, Python, and Ruby, the lack of
static feedback seems less compelling to me. With the "code as data" thing, as a
Racket programmer, I rarely if ever think about macros or readers or code
transformers. I am sure that can be understood as missing the whole point of
lisp, but I have been productive in an environment whose underlying bits can
make use of all of that and I can just write code as code and treat data as
data in a manner similar to many other languages. Maybe I'm weird, but
programming in a manner similar to Java but having code written with
s-expressions feels better to me.

~~~
lispm
> which hopefully counts as an interactive Lisp

Not that much, in my view. I don't see Clojure as a particularly interactive
Lisp; I actually see it only as 'derived from Lisp and others'. 'Lisp' is
usually much more interactive, with stateful images (running copies of 'object
seas'), mix of interpretation and compilation, interactive error handling,
all-layers Lisp (where most of the layers can be inspected/manipulated/changed
at runtime, incl. its own implementation) and internal development
environments (not externally attached).

For an explanation see:
[https://news.ycombinator.com/item?id=22326853](https://news.ycombinator.com/item?id=22326853)

------
Beldin
Interestingly, the 4 equations shown in the article are due to Oliver
Heaviside [1]. Maxwell came up with 20 equations, which captured
electromagnetism. But they were... rather unwieldy. Heaviside reformulated them
into 4 equations. Basically, Maxwell managed the breakthrough and Heaviside
cleaned up after him and made it palatable.

Why is that interesting? Because it raises the question: is the analogy made
in the article more akin to Maxwell (breakthrough, brilliant but close to
unusable), or is it more akin to Heaviside (taking known concepts and making
them usable in practice)?

[1]
[https://en.m.wikipedia.org/wiki/Oliver_Heaviside](https://en.m.wikipedia.org/wiki/Oliver_Heaviside)

~~~
nmyk
A somewhat related idea: the discovery that light is an electromagnetic wave
falls right out of Maxwell's equations in a vacuum. Simply rearranging terms
yields two wave equations in 3D - one for the electric field, and one for
magnetic. The term in both wave equations representing the speed of
propagation (i.e. the speed of light) is a constant. It depends only on the
physical properties of the vacuum, which never change as far as we know. That
means an observer traveling at any constant velocity with respect to light
will always measure the speed of light to be the same. Maxwell died in 1879 at
the age of 48. It is not only plausible but likely that he would have come up
with special relativity had he lived a bit longer, and also likely that it
would have been years before Einstein published his 1905 paper on the subject.
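The rearrangement described above can be sketched (standard vacuum form, SI units):

```latex
% Maxwell's equations in vacuum:
%   \nabla\cdot\mathbf{E}=0, \quad \nabla\times\mathbf{E}=-\partial_t\mathbf{B},
%   \nabla\cdot\mathbf{B}=0, \quad \nabla\times\mathbf{B}=\mu_0\varepsilon_0\,\partial_t\mathbf{E}
% Take the curl of Faraday's law and use
%   \nabla\times(\nabla\times\mathbf{E}) = \nabla(\nabla\cdot\mathbf{E}) - \nabla^2\mathbf{E} = -\nabla^2\mathbf{E}:
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \, \frac{\partial^2 \mathbf{E}}{\partial t^2},
\qquad
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^8 \ \mathrm{m/s}
```

The same steps applied to the Ampere-Maxwell law give the identical wave equation for the magnetic field, with the same constant speed.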

So in addition to the Stigler's law [1] situation with Heaviside, the
formulation of special relativity may have been more or less inevitable after
all of Maxwell's laws were put in the same room together. Not that Einstein
wasn't a once-in-a-generation talent, but the reality of how progress is made
in science is often glossed over in favor of assigning glory to particular
individuals.

Even Maxwell's laws taken individually are named after different people, but
together they all belong to Maxwell. I get that this practice is intended to
be a convenient way to put a label on an abstract concept rather than a way to
write history, but I still think it has an effect on how we think and talk
about the history of science.

[https://en.wikipedia.org/wiki/Stigler%27s_law_of_eponymy](https://en.wikipedia.org/wiki/Stigler%27s_law_of_eponymy)

------
dang
If curious see also

2015
[https://news.ycombinator.com/item?id=9038505](https://news.ycombinator.com/item?id=9038505)

Discussed at the time:
[https://news.ycombinator.com/item?id=3830867](https://news.ycombinator.com/item?id=3830867)

------
minerjoe
Yay! A bunch of people who haven't programmed (much?) in Lisp teeing off with
a few who have, dragging out all the old tired arguments (LISP?!)!

------
tromp
Most recently discussed about a month ago in the thread

“Maxwell's equations of software” examined (2008)

[https://news.ycombinator.com/item?id=23321955](https://news.ycombinator.com/item?id=23321955)

------
vindarel
FYI CL might have more libraries than you think:
[https://github.com/CodyReichert/awesome-cl](https://github.com/CodyReichert/awesome-cl) just saying.

------
amelius
Maybe it's just me but it seems silly to compare equations to Maxwell's
equations just because Maxwell (or his peers) originally used the same number
of equations. In the modern mathematical framework of differential geometry
and tensor calculus, Maxwell's equations can be reduced to only a single
equation. See e.g. [1].
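For reference, the differential-forms version alluded to here (sign and unit conventions vary):

```latex
% With F = dA the electromagnetic field 2-form and J the current,
% Maxwell's equations become
dF = 0, \qquad d{\star}F = {\star}J
% and in the spacetime-algebra formulation these combine further
% into the single equation \nabla F = J.
```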

Also, Maxwell's equations have the important property that they are linear.
How would that work in the Lisp analogy?

[https://en.wikipedia.org/wiki/Maxwell%27s_equations#Alternat...](https://en.wikipedia.org/wiki/Maxwell%27s_equations#Alternative_formulations)

------
magicmouse
LISP has an insidious parenthetical notation, and is well known to be a "write
only" language, where nobody but the author can understand large programs
written in LISP. It is avoided commercially for this reason.

~~~
vincent-manis
That's why GNU Emacs is written in C and Cobol.

More seriously, I have seen a lot of badly written Lisp/Scheme, as much as in
any other language. Lisp DOES scale quite well, if the code is well-designed.

~~~
lmm
Every language scales well if the code is well-designed, almost by definition.
The more salient question is how well large codebases written by mediocre
programmers can be maintained by other mediocre programmers, because that's
what any long-lived codebase will have to deal with.

------
ncmncm
This keeps popping up, and it is no more persuasive the tenth time than the
first.

Universal Computation was demonstrated a long time before Lisp. Any Turing-
complete language is universal, Lisp included. Church's lambda calculus is one
such system, and Lisp is a notation for it. So, you can write a Lisp
interpreter in Lisp, and it's pretty compact because (surprise!) a Lisp
Interpreter has a lot of support for the stuff you need to do to Interpret
Lisp. There's nothing deeply meaningful about it beyond what you start out
with from Church.
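For what it's worth, a toy illustration of that point, with Python standing in for Lisp (names and the list encoding are mine): the evaluator below is short mostly because the host language already supplies symbols, numbers, lists, and dispatch.

```python
import operator

# Minimal environment: symbols map to host-language functions.
ENV = {"+": operator.add, "*": operator.mul, "-": operator.sub}

def evaluate(expr, env=ENV):
    if isinstance(expr, (int, float)):   # self-evaluating atom
        return expr
    if isinstance(expr, str):            # symbol lookup
        return env[expr]
    op, *args = expr                     # a list is an application
    if op == "if":                       # one special form, as a sample
        cond, then, alt = args
        return evaluate(then if evaluate(cond, env) else alt, env)
    return evaluate(op, env)(*[evaluate(a, env) for a in args])

print(evaluate(["+", 1, ["*", 3, 4]]))       # 13
print(evaluate(["if", 0, "+", "-"])(10, 4))  # 6
```

Note how much is inherited rather than implemented: arithmetic, recursion, garbage collection, and the list structure itself all come for free from the host.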

It is much more impressive, and subtly meaningful, that a NAND gate is
universal. You can build a machine that runs Lisp from nothing but NAND gates,
and people have. There have even been commercially successful discrete-
transistor mainframes made of practically nothing but NOR gates (plus core
memory), which are identically universal.
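The functional-completeness claim is easy to check mechanically; a small sketch (the encoding of bits as ints and the gate names are mine):

```python
# NAND is functionally complete: NOT, AND, OR built from NAND alone.

def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)              # NAND of a signal with itself inverts it

def and_(a, b):
    return not_(nand(a, b))        # invert NAND to recover AND

def or_(a, b):
    return nand(not_(a), not_(b))  # De Morgan: a OR b = NOT(NOT a AND NOT b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_(a, b), or_(a, b))
```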

Transistors are therefore universal. But because their operation is described
in analog terms, they don't quite fit the mathematical formalism. The universe
doesn't care about that, so it allows us to build up Universal Computation out
of analog transistors. The more transistors you put in, the more computation
you can do in a second.

~~~
tr352
You're speaking of the functional completeness of boolean operators, which I
think is quite unrelated to theories of computation. For computation you need
some notion of state, and while a circuit made of logic gates may possess some
notion of state, a logic gate in itself does not. As for transistors being
universal (whatever that means), if this is true then vacuum tubes, relays,
pneumatic valves, etcetera are also universal.

~~~
ncmncm
Do you understand that you can construct flip-flops from NAND gates? You can
construct registers from flip-flops.
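A sketch of that construction, simulated in Python (an active-low SR latch from two cross-coupled NANDs; the fixed iteration count is an arbitrary settling bound, not real gate timing):

```python
def nand(a, b):
    return 1 - (a & b)

def sr_latch(s_bar, r_bar, q, q_bar):
    """Iterate the two cross-coupled NANDs until the outputs settle.
    s_bar/r_bar are active-low set/reset; q/q_bar is the stored state."""
    for _ in range(4):
        q, q_bar = nand(s_bar, q_bar), nand(r_bar, q)
    return q, q_bar

q, qb = 0, 1
q, qb = sr_latch(0, 1, q, qb)   # set: q -> 1
q, qb = sr_latch(1, 1, q, qb)   # hold: inputs idle, q stays 1
q, qb = sr_latch(1, 0, q, qb)   # reset: q -> 0
print(q, qb)                    # 0 1
```

The "hold" case is the point: with both inputs idle the circuit remembers its last state, which is the step from pure combinational logic to memory, and from memory to registers.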

