

Lisp is Too Powerful - gnosis
https://c2.com/cgi/wiki?LispIsTooPowerful

======
raganwald
The notion of programming languages being “too powerful” rests on fallacious
assumptions.

The faulty reasoning is this: If Hacker Hortense does something with powerful
language L that Newbie Nathan finds confusing, we assume that language J won’t
permit Hortense to do that, and therefore if we standardize on language J,
fewer bad things will happen.

This reasoning is faulty. First off, if Hortense and Nathan don’t see eye to
eye on how to write programs, no language will solve the problem, because you
have a disparity of experience and/or education. If what you want is code
Nathan will like, and you don’t want to raise Nathan to Hortense’s level, you
have to drag Hortense down to Nathan’s level with coding standards. And you
could have done that in Language L just as easily as in J. If you ban L, what
will you do when Hortense implements Parser Combinators and Monads in J?
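To make that concrete, here is a rough sketch of the parser-combinator pattern in Python (purely illustrative, all names invented): the idiom needs no special language features at all, so banning L does nothing to ban it.

```python
# A minimal parser-combinator sketch. Each parser is a plain function from
# an input string to (result, remaining_input), or None on failure. No
# macros, no metaprogramming: just first-class functions.

def char(c):
    """Parser matching a single literal character."""
    def parse(s):
        if s and s[0] == c:
            return (c, s[1:])
        return None
    return parse

def seq(p1, p2):
    """Run p1, then p2 on what remains, collecting both results."""
    def parse(s):
        r1 = p1(s)
        if r1 is None:
            return None
        v1, rest = r1
        r2 = p2(rest)
        if r2 is None:
            return None
        v2, rest2 = r2
        return ((v1, v2), rest2)
    return parse

def alt(p1, p2):
    """Try p1; if it fails, try p2 on the same input."""
    def parse(s):
        return p1(s) or p2(s)
    return parse

ab = seq(char('a'), char('b'))
print(ab('abc'))                         # (('a', 'b'), 'c')
print(alt(char('x'), char('a'))('abc'))  # ('a', 'bc')
```

Whether Nathan finds this confusing has nothing to do with which language it is written in; the combinator style itself is the hurdle.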

The first flaw is the assumption that language _features_ introduce
complexity, when in reality it isn’t the features, it’s the _solution to the
problem_ that confuses.

This indirectly leads into the second fault with the reasoning. There is a
hidden assumption that Hortense is going to write so much code, and there will
be so many “problems” Nathan will have with his code, and every problem we
eliminate is one fewer problem in the result. This is not how software works.

Let’s say Hortense writes some code in L, and Nathan complains that there are
seventeen things he doesn’t understand. Ten of them rely on features of L,
seven are independent of the language. If we tell Hortense to rewrite things
in J, we cannot assume that the result will have seven things Nathan doesn’t
understand. It might have even more as Hortense works around J’s
deficiencies.

For a demonstration of this, look at any modern Java Dependency Injection
Framework. These complex and opaque software machines are made up of XML,
interfaces, and classes. They exist in no small part because Java lacks the
reflection and meta-programming of languages like Lisp or even Ruby. Are these
things simple by virtue of being written in Java instead of Lisp?

Imagine for a moment that DI frameworks don’t exist, and that Hortense had
built something in Lisp with macros for dependency injection. Nathan is
nonplussed, so Hortense rewrites it in Java and rolls her own DI
infrastructure. Will the result really have fewer things that Nathan doesn’t
understand? Or will it be even more inscrutable thanks to the accidental
complexity of working around Java's limits?

The “problem” is not that some languages are too powerful; it is that people
imagine that teams of programmers can work together on complex problems
without communicating, and that when there is a disparity of experience or
knowledge, people can work together without educating or learning from each
other.

~~~
sirclueless
I'm not so sure I agree with you, because there is something to be said
against your counterexample. In java, I can reason as follows: the Java
Dependency Injection Framework is gigantic and convoluted, therefore it is
going to be complex and difficult to manage.

In Java there is a sort of ceiling, one might call it an upper bound on the
mindfuckery per square inch of code. In 100 dense lines of lisp, there is no a
priori reason to assume that no one has written a recursive descent macro-
expanding s-expression handler for arbitrary routing of network packets in a
rules-based DSL. In the same amount of dense java code there are rarely more
than 200 method calls, all statically checked, and rarely more than 100
variables, typed and scoped.

From John Carmack's recent article on static checking: "If you have a large
enough codebase, any class of error that is syntactically legal probably
exists there." Now, he is concerned with actual defects, but the same rules
apply even more strongly for stylistic rules. As the size of a codebase
increases, the probability that any particular language feature is absent
approaches zero. And in a language like lisp, where you can write your own
language features, this is moderately horrifying.

The fact that you can replicate java's dependency injection in a few orders of
magnitude less code in lisp is not a comfort to me. Because the 10,000 lines
of code in Java's Dependency Injection Framework is a red flag to me. The
chances that someone who writes the same thing in lisp has drastically
simplified the implementation are not so high.

~~~
raganwald
_the 10,000 lines of code in Java's Dependency Injection Framework is a red
flag to me._

We agree.

 _The chances that someone who writes the same thing in lisp has drastically
simplified the implementation are not so high._

We disagree.

First, when I roll my own, I scratch only my own itch. I don’t need to build
something that works for everyone, everywhere. It’s like Microsoft Word: MSFT
boasts that most users only need 5% of what it does, but every niche of users
uses a different 5% of the whole thing.

But can I roll my own? Well, I suggest that the answer is more likely to be
“yes” in Lisp than in Java. First, folklore suggests that defects are constant
per line of code. Therefore, if I need fewer lines of Lisp than of Java, I
should have fewer defects to contend with. I assume three things. First, I
need much less than the full framework’s functionality. Second, Lisp is more
expressive than Java, so I need fewer lines of Lisp for any functionality than
of Java. Third, I suggest that languages with meta-programming support are
particularly well suited for tasks like dependency injection, reducing the
amount of code I need to write even further.
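A hedged illustration of how small “rolling my own” can be when I only scratch my own itch (a toy in Python, all names hypothetical, covering one project's needs rather than everyone's):

```python
# A deliberately tiny dependency-injection container: a registry mapping
# names to zero-argument factories, resolved lazily and cached. A few dozen
# lines, no XML, no interfaces to implement.

class Container:
    def __init__(self):
        self._factories = {}
        self._instances = {}

    def register(self, name, factory):
        """Associate a name with a factory that builds the dependency."""
        self._factories[name] = factory

    def resolve(self, name):
        """Build the named dependency on first use, then reuse it."""
        if name not in self._instances:
            self._instances[name] = self._factories[name]()
        return self._instances[name]

# Usage: wire a fake 'database' into a 'repository' in two lines.
c = Container()
c.register('db', lambda: {'users': ['hortense', 'nathan']})
c.register('repo', lambda: {'db': c.resolve('db')})

print(c.resolve('repo')['db']['users'])  # ['hortense', 'nathan']
```

This obviously does a fraction of what a full framework does, but that is precisely the point: I may only need that fraction.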

Now, the big DI framework is written by someone else. So is it free? No. I
need time to learn it, time to use it, I can make mistakes using it, I can get
an XML configuration wrong, I can implement an interface when I am supposed to
extend an abstract class, I am not immune from defects just because I am using
a library.

So the net question for me is whether the chances of successfully rolling my
own feature in Lisp for my project’s specific needs are greater than the
chances of successfully using an existing framework in the Java world.
Parallel to that: will someone else working with my code find it easier to
decipher the XML configuration, interfaces, and classes I have written to
work with a Java DI framework, or to work with the smaller, simpler, more
compact Lisp code written for this specific project?

Reasonable people can go either way on this, but I find it hard to believe
that Java is “obviously” a win, especially for anyone who has used one of
these big frameworks with their many gotchas (as I have).

p.s. Of course, the wild card is that there are plenty of libraries in Lisp as
well. I reject the notion that every Lisp programmer reinvents everything from
scratch: <https://github.com/lprefontaine/Boing>

~~~
vannevar
Keep in mind that a less verbose expression is not necessarily

~~~
vannevar
Sorry, I didn't see that this actually posted. What I intended to say is that
a less verbose expression is not necessarily easier to understand. The article
is really about the balance between elegance (or performance) and
accessibility. An expert coder working with a powerful language like Lisp can
implement a lot of functionality very quickly, but there is a point where less
skilled programmers working with _common understanding_ of a less flexible
language can implement more functionality more quickly, simply because there
are more of them working in parallel.

~~~
gruseom
This is the kind of thing we say about programming all the time without
evidence. We don't know this, or anything like it.

~~~
vannevar
Strictly speaking yes, it's a hypothesis. But the fact that the programming
ecosystem looks as it does constitutes some evidence in favor of that
hypothesis. Were it otherwise, you'd expect the professional programming world
to be economically dominated by Lisp and a handful of super-programmers. Yet
that isn't what we see. Why not?

~~~
gruseom
That's the stock objection. Here's my answer: historically speaking, we've
barely started. Software is the first mass endeavor of its kind that humans
have tried. It belongs to a post-industrial era that can be expected to take a
long time to work itself out. Under such conditions, social proof doesn't
work. Whatever the rational way of making software turns out to be,
statistically speaking it hasn't been tried yet.

Will it turn out to be "Lisp and a handful of super-programmers"? I don't
know. What we need is an age of experimentation. The great thing is that
startup costs are now so low that we are beginning to see that happen.
Emphasis on _beginning_.

~~~
vannevar
That argument seems a little too convenient; we are after all talking about a
field (and a language, Lisp) that's been around for over 50 years. I could
certainly see pockets of inefficiency persisting after such a time, but I
would hardly expect the exception to be the rule at this point.

Keep in mind that I'm only suggesting that a crossover point exists, I don't
pretend to know where exactly it is. In order for me to be wrong, a single
superior programmer would _always_ have to be better than two _slightly_
inferior programmers working with a _slightly_ less expressive language. I
strongly doubt that this is true. The simplest explanation for what we observe
is that in fact a team of inferior programmers working in parallel _can be_
more efficient than a single superior programmer working alone. Not always,
but often enough to prevent more expressive but less comprehensible languages
from becoming dominant. What constitutes "expressive" and "comprehensible"
will evolve over time, as you suggest (maybe Lisp will someday become
tomorrow's Java!), but the underlying scaling law will remain.

~~~
akkartik
This is a fascinating conversation. I've always had trouble working in teams,
so I'd _like_ to believe that superior programmers will out in the end. Or at
least that they will in a few problem domains.

But I wonder if this is wishful thinking, if this isn't just another case of
the prisoner's dilemma. Perhaps like how cities of mostly poor people
collaborated many times in history to conquer neighboring barbarians, even
though the barbarians had more freedom and were thus _richer_. (See
<http://en.wikipedia.org/wiki/Fates_of_Nations.>)

Then again, there's reason for hope. Perhaps the parallelizable sort of
programming is more menial. It certainly seems that way with the way
communication costs overtake large teams. It's almost like Vernor Vinge's
zones of thought (<http://en.wikipedia.org/wiki/A_Fire_Upon_the_Deep>,
<http://www.youtube.com/watch?v=xcPcpF2M27c>) - as your team grows bigger you
can just watch the members grow dumber in front of your eyes as more and more
of their cognitive effort is eaten up by internal communication, leaving less
and less for externally-useful work. If this is true, there's hope that
advances in programming will automate the low-cognition tasks and allow
programmers to focus on the high-cognition ones, leveling the playing field
for small, high-cohesion teams.

\---

Me, I've been obsessed with something raganwald said when he spawned this
tendril of conversation: exercising _explicit_ control over the space of
inputs my program cares about. My current hypothesis: eliminate fixed
interfaces, version numbers, and notions of backwards compatibility. All these
are like petri dishes of sugar syrup for code to breed more code. Replace them
with unit tests. Lots of them[1]. If I rely on some code you wrote, and I
want to pull in some of your recent changes, I need to rerun my tests to
ensure you didn't change an interface. Programming this way is less
reassuring, but I think it empowers programmers where abstraction boundaries
impose mental blocks. Great programmers take charge of their entire stack, so
let's do more of that. I'm hoping this is the way to prove small teams can
outdo large ones.

[1] Including tests for performance, throughput, availability. This is the
hard part. But I spent a lot of time building microprocessor simulators in
grad school. I think it's doable.
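A hedged sketch of what I mean, with invented names: instead of trusting a version number, my tests pin exactly the slice of your interface I rely on, and I rerun them after every pull.

```python
# Sketch: a consumer pins the bits of a dependency's interface it actually
# uses with its own assertions, rather than trusting a version number.
# `fetch` stands in for code you wrote that I depend on.

import inspect

def fetch(url, timeout=30):
    """Stand-in for an upstream function my code calls."""
    return {'url': url, 'status': 200}

# My tests encode exactly what I rely on, nothing more. If you rename a
# parameter or change a default, these fail the moment I pull your change.
def test_fetch_signature():
    params = inspect.signature(fetch).parameters
    assert list(params) == ['url', 'timeout']
    assert params['timeout'].default == 30

def test_fetch_behaviour():
    assert fetch('http://example.com')['status'] == 200

test_fetch_signature()
test_fetch_behaviour()
print('interface still holds')
```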

~~~
gruseom
_I'd like to believe that superior [solo] programmers will out in the end_

I think you're wrong (sorry!) because it's impossible to talk about superior
programmers without talking about teams. Building complex systems is a team
sport. There's no way around this. But you can't have good teams without good
programmers.

The phrase "scaling complexity" has at least two axes built into it: the
abstraction axis -- how to get better at telling the program to the computer
-- and the collaboration axis -- how to get better at telling the program to
each other. Most of this thread has been about whether we suck at the former.
But I say we _really_ suck at the latter, and the reason is that we haven't
fully assimilated what software is yet. Software doesn't live in the code, it
lives in the minds of the people who make it. The code is just a (lossy)
written representation.

We can argue about how much more productive the best individual working solo
with the best tool can be- but there's no way that that model will scale
arbitrarily, no matter how good the individual/tool pairing. At some point the
single machine (the solo genius) hits a wall and you have to go to distributed
systems (teams). One thing we know from programming is that when you shift to
distributed systems, everything changes. I think that's true on the human
level as well. (Just to be redundant, by "distribution" here I don't mean
distributed teams, I mean knowledge of the program being distributed across
multiple brains.)

Maybe you wouldn't have trouble working in teams if we'd actually figured out
how to make great teams. So far, it's entirely hit and miss. But I think
anyone who's had the good fortune to experience the spontaneous occurrence of
a great team knows what a beautiful and powerful thing it is. Most of us
who've had that experience go through the rest of our careers craving it
again. Indeed, it has converted many a solo type into an ardent collaborator.
Like me.

I was originally going to write about this and then decided not to go there,
but you forced my hand. :) Just as long as it's clear that when I say "team" I
mean nothing like how software organizations are formally built nowadays. It's
not about being in an org chart. It's about being in a band.

~~~
akkartik
_The phrase "scaling complexity" has at least two axes built into it: the
abstraction axis -- how to get better at telling the program to the computer
-- and the collaboration axis -- how to get better at telling the program to
each other. Most of this thread has been about whether we suck at the former.
But I say we really suck at the latter, and the reason is that we haven't
fully assimilated what software is yet. Software doesn't live in the code, it
lives in the minds of the people who make it. The code is just a (lossy)
written representation._

Ah, you're right. I was conflating the two axes.

I'd like to be part of a 'band'. I've had few opportunities, but I've caught
the occasional glimpse of how good things can be.

Since that whole aspect is outside my ken, I focus on expression. Hopefully
there's no destructive interference. I would argue that what you call
abstraction is about 'communication with each other' more than anything (even
though it breaks the awesome symmetry of your paragraph above :)

~~~
gruseom
No, you're right. They're not axes.

------
ohyes
I use lisp in a commercial setting.

The main point about lisp is 'there is no accounting for taste'.

Lisp leaves most things up to the programmer's taste.

You can write Lisp that looks like Fortran, or C++/Java, or Scheme.

You can make a DSL that directly models the problem.

You can use objects or no objects, do everything in CLOS or do everything with
structs.

You can write your own object system.

You can make it fast or slow, you can use correct data structures or you can
do everything with lists.

You can use macros for everything or you can never touch macros ever.

You can make your program one big macro.

None of these things are 'bad taste.'

Most people have different taste from you.

Most people only end up learning the part of the language consistent with the
paradigm that they like.

If you only know that piece of the language, you will have difficulty working
on someone else's project when they are working in a different paradigm.

None of the individual pieces of the language are particularly difficult.

People complain macros are difficult to understand. Macros are easy. If you
can understand a program that concatenates lists to make a new list, you can
understand a macro. Macros are quite literally 'just lisp code'.
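A rough analogue in Python, treating nested lists as stand-in s-expressions (a toy, not real Lisp): the "macro" below is an ordinary function that builds a new list from an old one before evaluation.

```python
# Code as data: nested lists play the role of s-expressions, and a "macro"
# is just a function from one list to another. Here `unless` expands into
# an `if` with the test negated.

def expand_unless(form):
    # (unless test body)  ->  (if (not test) body None)
    _, test, body = form
    return ['if', ['not', test], body, None]

def evaluate(form):
    """A toy evaluator for the handful of forms used here."""
    if not isinstance(form, list):
        return form                       # literals evaluate to themselves
    op = form[0]
    if op == 'unless':                    # macro call: expand, then evaluate
        return evaluate(expand_unless(form))
    if op == 'not':
        return not evaluate(form[1])
    if op == 'if':
        _, test, then, alt = form
        return evaluate(then) if evaluate(test) else evaluate(alt)
    raise ValueError(f'unknown form: {op}')

print(expand_unless(['unless', False, 42]))  # ['if', ['not', False], 42, None]
print(evaluate(['unless', False, 42]))       # 42
print(evaluate(['unless', True, 42]))        # None
```

That expansion step, a function gluing lists together, is all a macro is.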

Are some macros written in a way that you can not personally understand? Most
likely, but that is not an issue inherent to macros. You have to be careful
about checking inputs, and creating the proper debugging and type checking at
macro-expansion time. Just like any other program. There does seem to be a
stupid tendency for people to cram an entire macro into a single function.
This is foolish; the whole point is that you have the entire power of the
lisp runtime. There is no reason to write it like C preprocessor garbage.

So, how does one write good lisp code? Well, one way is to pick some
standards. This is as difficult as having someone in charge willing to gently
say 'this doesn't really match up with the style of the code around it'. If
I've inherited some code, when I do a bug fix, I'm going to do my damnedest to
stick with the style that it is written in (unless it is truly tagbody/go
awful, in which case I might rewrite).

This kind of turned into a rant, I apologize. I guess my point is that sure,
lisp is powerful, but the real issue is the number of options that it
provides. At some point, you have to pick a subset and a style, and go with
it. And then you have to be comfortable learning if you inherit something that
you don't know yet.

~~~
ScottBurson
_Lisp leaves most things up to the programmer's taste._

I don't entirely disagree, but I think there's more to it than that. While the
language certainly does offer some free choices, more often there are
advantages and disadvantages to each, so that in any particular situation,
some choices are better than others. Becoming an expert Lisp programmer
requires learning about these tradeoffs, which takes experience and, usually,
guidance from existing experts.

That's true of any language, of course, but some of the facilities Lisp offers
are rare among other languages, so that people coming to Lisp from some other
language are unlikely to have experience with them.

Oh, one point about macros in particular. If you have to resort to reading the
implementation of a macro to understand what it does, the person who wrote it
screwed up. Macros should _always_ have documentation strings explaining their
syntax and semantics. If you find yourself in that situation, the best thing
to do is to go to the REPL and use `macroexpand' interactively to see the
expansions of the macro calls you're interested in.

~~~
ohyes
_I don't entirely disagree, but I think there's more to it than that. While
the language certainly does offer some free choices, more often there are
advantages and disadvantages to each, so that in any particular situation,
some choices are better than others. Becoming an expert Lisp programmer
requires learning about these tradeoffs, which takes experience and, usually,
guidance from existing experts._

There is a saying 'perfection is the enemy of done.' In any particular
situation, there is probably not a solution that is both optimally efficient
and also optimally elegant. (This is true in any programming language). If you
work towards that goal too much, you will likely miss your deadline.

But the point is, the little advantages and disadvantages don't matter until
they do. There isn't going to be a big difference in most programs between
using a loop to iterate a sequence, and using map nil with a lambda, and using
do* (for example). In fact, whether there is any difference at all in the
resultant assembly or byte code will depend entirely on the compiler
implementation. It is a style thing... so try to be consistent, and work
within what you are comfortable with.

 _That's true of any language, of course, but some of the facilities Lisp
offers are rare among other languages, so that people coming to Lisp from some
other language are unlikely to have experience with them._

Currently, the only thing I can really think of that is actually really
_unique_ is the macro facility (and that is only because, as soon as a
language adopts the macro facility, it _becomes a lisp_).

As a programmer, the focus of my job is learning new things. I am basically a
mechanism for translating the new things that I have learned into computer
code. If I can't learn a few measly language features, what good am I going to
be as a 'thing I just learned to computer' translator? And like I said, use
what you are familiar with, until you are faced with something someone else
wrote, or you have the time to learn new things. But don't punt.

If someone doesn't have experience with a given piece of the language that has
been used, I expect them to pick up a book or online resource about it (and
then, possibly most importantly, play with it). It is not hard, but it does
take effort. There is nothing in common lisp that requires genius level
intellect. (God knows, I'm certainly not that bright.) No one requires that
fresh-faced C interns be pointer arithmetic gods, but I'm sure they are
expected to learn it if it is part of the job.

 _Oh, one point about macros in particular. If you have to resort to reading
the implementation of a macro to understand what it does, the person who wrote
it screwed up. Macros should always have documentation strings explaining their
syntax and semantics. If you find yourself in that situation, the best thing
to do is to go to the REPL and use `macroexpand' interactively to see the
expansions of the macro calls you're interested in._

I'll add that in addition to doc strings for syntax and semantics, there
should also be assertions written into the macro about the syntax and
semantics. If I am passing a number or list where the macro is expecting a
symbol, an error should get thrown during the macro-expansion phase. Macros
are programs like anything else. Validating inputs and throwing an error at
the earliest possible time is a good rule to go by.
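A sketch of that rule in Python, again using nested lists as stand-in s-expressions (a toy analogue, names invented): the expander checks the shape of its input and fails loudly at expansion time, before any generated code runs.

```python
# "Validate at expansion time": the expander below refuses malformed input
# with a clear error, instead of silently emitting broken code that fails
# mysteriously later.

def expand_with_var(form):
    # (with-var name value body): `name` must be a symbol (a string here).
    _, name, value, body = form
    if not isinstance(name, str):
        raise TypeError(
            f'with-var: expected a symbol for the variable name, got {name!r}')
    return ['let', [[name, value]], body]

print(expand_with_var(['with-var', 'x', 1, ['+', 'x', 2]]))
# ['let', [['x', 1]], ['+', 'x', 2]]

try:
    expand_with_var(['with-var', 42, 1, 'body'])
except TypeError as e:
    print(e)  # with-var: expected a symbol for the variable name, got 42
```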

So, macroexpand-1 is a good start, if the macro is implemented correctly, does
what its documentation says, and you are simply flubbing the syntax. (Of
course, it should be yelling at you for flubbing the syntax).

However, when there is a bug in a macro, the only thing that macroexpand-1
will tell you is that the macro doesn't work. You'll macro-expand it and say
'yup that's the wrong generated code.' It doesn't really tell you anything
about how to actually fix the macro unless you are already familiar with the
macro's code. Having examples of inputs with bad outputs will aid in
pinpointing the problem, but not unless I already understand how the program
works.

Macros are lisp programs, and can be as complicated as any arbitrary lisp
program. Writing a more complicated macro is not screwing up (I think this is
an important distinction to make)... inadequately documenting, explaining, and
bulletproofing it is. Someone might have to debug it later, so strive to write
readable macro code. It isn't hard, as you are just constructing lists and
writing normal lisp code with minimal efficiency requirements.

So I guess that was a roundabout way of me saying "I agree, mostly."

------
spacemanaki
There's some definite trolling down at the bottom half of that page: "Huh?
I've seen no objective evidence that BrainFsck is more challenging for
business applications and systems software programming than is Lisp. I invite
you to provide clear evidence that it is more challenging."

Leaving that aside though, other people smarter and more experienced with Lisp
than I am have suggested that this "problem" is not unique to Lisp, and may
not be much of a real problem. I really think it's just a question of having
good documentation and sensible style which explains what you need to know to
use the magic even if it doesn't explain what's underneath. It's true that a
lot of "lone hackers" aren't going to be writing good (or any!) documentation,
but that's not a Lisp problem.

Ruby's metamagic is a common example. I already know Ruby and recently I've
been learning Rails. I know there's a lot of magic going on behind the scenes,
and occasionally I'll read some example or something and know that underneath
there's magic that I do not entirely understand. I can continue studying the
framework as a user and probably even get away using it for a while without
completely understanding the magic underneath. I doubt that every person who
has used Rails commercially completely understood every part of the framework
that they use, since that would be a huge drag on their ability to ship their
thing.

Another interesting example: This past week I talked to someone at a nearly
pure Scala company and he described this DSL that they wrote (or just use? I
can't remember) to interface with SQL databases. It exploited the fact that
infix operators are really just methods and that you can define implicit
conversions on existing library types that allow you to sort of extend them (I
don't actually know Scala, so this may be slightly off base). The snippet of
code he showed me, while it looked like Scala, would be turned into an SQL
clause. Programmers who use this don't necessarily need to understand all of
the gritty details so much as they need to understand what the designer
intends and understand the semantics of SQL and of the DSL.
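A hedged sketch of the technique he described, transposed to Python since I don't know Scala either: operator overloading turns ordinary-looking comparisons into SQL text (the column names are made up).

```python
# Operator overloading as a query DSL: comparisons on Column objects return
# SQL fragments instead of booleans. A sketch only; a real library would
# build a syntax tree and escape values properly.

class Column:
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):   # col == value  ->  "col = 'value'"
        return f"{self.name} = {other!r}"

    def __gt__(self, other):   # col > value   ->  "col > value"
        return f"{self.name} > {other!r}"

age = Column('age')
name = Column('name')

print('SELECT * FROM users WHERE ' + (age > 21))
# SELECT * FROM users WHERE age > 21
print('SELECT * FROM users WHERE ' + (name == 'ada'))
# SELECT * FROM users WHERE name = 'ada'
```

A user of such a DSL needs the semantics of the mapping to SQL far more than the mechanics of how the overloading works.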

In Common Lisp, "lambda" is a macro. I don't really know how it works, but I
know that it expands into lower level code defining a function and a closure.
It's the same way with standard macros (in some Lisps) like let, letrec, cond,
if, and, or etc... Most Lisp programmers who are not experts can use them
understanding that they are macros but maybe not knowing their complete
implementation because they are very well defined (and because they are pretty
simple).

More complicated macros like "with-open-file" can be used by programmers who
probably have some idea of how the macro works, but not a complete
understanding. As long as your own macros and Lispy magic are documented
sufficiently and designed sensibly programmers should be able to use code in
your "world" without understanding it completely. At least to start.
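Python has the same idea under a different name: a context manager bundles open, bind, and guaranteed close, and people use it daily on the strength of that contract alone. A rough re-creation of `with-open-file`'s core behavior (simplified, ignoring most of its options):

```python
# Sketch of with-open-file's contract in Python: open the file, hand it to
# the body, and close it no matter how the body exits.

import os
import tempfile
from contextlib import contextmanager

@contextmanager
def open_file(path, mode='r'):
    """Open `path`, yield the handle, and always close it afterwards."""
    f = open(path, mode)
    try:
        yield f
    finally:
        f.close()

path = os.path.join(tempfile.mkdtemp(), 'demo.txt')
with open_file(path, 'w') as f:
    f.write('hello')

with open_file(path) as f:
    print(f.read())  # hello
print(f.closed)      # True
```

You can write and read files with this all day without ever looking at how `contextmanager` (or the macro expansion, in the Lisp case) actually works.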

N.B. I haven't used Lisp in a commercial setting, so it's entirely possible
all this is bunk and I'm just being naive. Wouldn't be the first time.

 _edit_ Upon re-reading this monster comment I realize it might come across
that I'm advocating some kind of "cargo cult" or copy-and-paste style of
ignorant programming. I'm really not. My point is that while understanding
your tools is important, so is knowing when you don't need to peel back the
curtain and instead need to address the problem at hand.

~~~
LearnYouALisp
I think they are poking fun at the often-misused technicality that "all
Turing-complete languages are equivalent", which is obviously beside the
point.

------
breckinloggins
Lisp, more than any other class of language, is what you make of it. Also, it
is just as susceptible to the whims of culture as other languages (perhaps
MORE so). Take the Clojure world, for example...

Because it is being used by a generation of programmers who cut their teeth on
Rails and were frustrated by J2EE verbosity, popular Clojure code tends to be
written such that APIs are quite readable and bereft of too much cleverness
(the cleverness is usually hidden in the implementation of the API, rather
than the interface).

Examples: Compojure, Ring, ClojureQL, Incanter, etc.

S-Expressions don't befuddle people. People befuddle people.

------
gruseom
How is it not an obvious oxymoron to call a language "too powerful"? Imagine a
physicist or a mathematician calling a theory "too powerful". It explains too
much! You can prove too much with it!

We're not even close to understanding how to reliably make good programs, so
naturally we don't understand how to do that with Lisp either. Lisp is just a
particularly pure medium for programming. Most of the things people are saying
about Lisp ("it allows people to invent their own little worlds") are really
just statements about programming.

Oh and one other thing. A pox on the terms "readable" and "unreadable"
floating around as free variables. They are hopelessly relative and their
prominence in any discussion of programming (or should I say _every_
discussion of programming) renders said discussion pointless. We literally
don't know what we're talking about.

As pnathan brilliantly put it in another thread
(<http://news.ycombinator.com/item?id=3387432>): "Intuitive is just what
you've seen before." Bravo, pnathan.

~~~
ezyang
'Imagine a physicist or a mathematician calling a theory "too powerful". It
explains too much! You can prove too much with it!'

Actually, I can perfectly reasonably imagine a physicist complaining that a
theory is too powerful. If a theory has the capability to explain any possible
observation, it similarly does not have the capability of being falsified...

~~~
gruseom
That seems like a degenerate trivial case. A better objection would be Occam,
that a simpler theory is preferable where adequate. But even that breaks down
here, because we don't have any adequate "theories", only the hard problem of
how to build complex software systems.

I can see refusing an approach on the grounds that it doesn't work, but to
refuse it on the grounds that it works too well?

------
6ren
Instead of comparing Lisp with other languages, let's consider the problem
stated, of people inventing their own little worlds - i.e. DSLs.

Brooks said that a "programming product" (meaning one that can be used by
other people) takes x3 the work of a "program" that works. He talks about
documentation, testing, generalization and "can be run, tested, repaired by
anyone". I think this means careful API _design_ for usability,
discoverability and understandability - not just efficiency - is important.

So, inventing new worlds (DSLs) is not a problem; inventing your own little
worlds that are hard to understand and use is a problem. But it takes x3 as
much work to do it right, and it usually isn't worth it unless it is
explicitly intended to be used by others (e.g. it's a library for sale; or a
utility for use within a large organization; or a web API).

Secondly, an example from the history of relational databases. Codd had the
idea of relations plus a high level language. He designed a couple of high
level languages, but no one liked them. Instead, Boyce and Chamberlin came up
with SQL (originally "SEQUEL"), which was usable by mere mortals. " _Since Codd
was originally a mathematician (and previously worked on cellular automata),
his DML proposals were rigorous and formal, but not necessarily easy for mere
mortals to understand._ "
<http://webcache.googleusercontent.com/search?oe=utf-8&rls=org.mozilla%3Aen-US%3Aofficial&client=firefox-a&gs_sm=e&gs_upl=59466l59466l0l59756l1l1l0l0l0l0l224l224l2-1l1l0&hl=en&q=cache:gLdUndrvpwAJ:http://mitpress.mit.edu/books/chapters/0262693143chapm1.pdf>
Sometimes, designing a DSL for others is so hard that it takes a different
person to do it.

This isn't specific to Lisp. It's an issue for designing DSLs and APIs (and
languages in general), which can be done in any language. Lispers may invent
more often and with more variation, because lisp is more powerful. Power -->
Responsibility

~~~
6ren
Summary: poor abstraction is worse than no abstraction.

In Java, the "DSLs" you tend to get are at the class level, in different
files. Like any abstraction, these can be well- or ill-designed. There are some
differences from Lisp: (1) the abstractions are less flexible/powerful, so
there's less to understand; (2) having them in different files makes it
harder to grasp the whole than if all were in one file (or one screen); (3) the
syntax is fixed, so you can at least understand the symbols without
understanding anything else.

I think that inventing a new language has the best chance of making something
that is a genuinely better solution. But if you want it to be understandable,
it's helpful to link it to existing concepts that are already known,
understood, and with known modes of use and application, perhaps via metaphor
- i.e. adoption through familiarity.

But the general case for adoption seems to be that something must be x10
better (or compelling in some way) for people to go through the pain of
adoption, of learning new syntax, new concepts, new ways of working, new
infrastructure, new tradeoffs, new gotchas, new shortcuts, new consequences,
new policies, new standards, new training, new suppliers, and so on.

If you can reduce the pain of adoption, adoption is more likely.

Put another way: there are two kinds of pain: the pain your solution
addresses, and the pain your solution creates. The reason to adopt your
solution is to reduce pain, but if the solution itself brings too much pain,
it's simply not worth it.

Relational databases are an example of this. The relational concept solved the
pain of storage change, but also created pain (difficult to use; x10-x100
slower). As those secondary pains were solved (with SQL; with optimization
strategies and Moore's Law), its adoption accelerated.

So, sometimes a fundamental improvement needs to place power and flexibility
over ease-of-use - but to be adopted, sufficient ease-of-use is essential. And
your abstraction has to be, not just _good_ or _better_ , but _x10 better._

------
sigil
Always lacking from these kinds of debates about code: actual code.

Does anyone have examples of crazy unmaintainable Lisp code we could look at?

On the other side, what examples of powerful / elegant Lisp code do you feel
best make the case for Lisp?

I fully realize this is subjective ("no accounting for taste"), and that a
handful of anecdotes doesn't really settle anything. Nevertheless, I'm
interested in what the failure modes might be for Lisp, and whether they have
analogs in the languages I'm more familiar with.

I'm also bothered by a tendency in the Lisp community to say, "Lisp is the
best, and I have all this awesome Lisp code, but no, I'm not going to show it
to you." If someone asked me for great C, C++, Python, or Perl code, I have
favorite examples I'd point them to without hesitation. What gives, Lispers? Is
your Lisp code so personalized or specific to the problem that you fear it
really wouldn't make any sense to an outsider? If so, how come this doesn't
translate into maintainability problems?

~~~
pavelludiq
A well-engineered Lisp program (in any other language, actually) isn't a
stream of beautiful lines, but a set of components, any of which might have a
complicated implementation, but all of which have a good, clean interface. Lisp
is beautiful not because it allows you to write beautiful 20 line programs,
but because it allows you to design large systems that still have a chance to
be maintainable, despite the complexity of the problem they are solving.

I can still show you examples of beautiful and horrible 20 line lisp programs,
but I'd rather show you examples of large scale design. The most popular might
be Emacs. Emacs has a million lines of elisp; imagine if it were written in
Java or C++. Scary thought :)

In a large system like Emacs you'll find many examples of beautiful and ugly
code, but the overall system is still beautiful and maintainable. This talk by
Stuart Halloway might explain what I mean by that: <http://vimeo.com/1013263>

In a nutshell, Emacs is big, but small for its size, meaning that those
1,000,000 lines of Lisp do much more than a million lines of Java will ever be
able to do. That property comes in part from using a Lisp as an implementation
language (and not a very good Lisp at that :)

~~~
dmansen
Great talk, thanks for the link. The power of emacs isn't immediately obvious,
until you try to use it for something it wasn't explicitly designed for.

------
machrider
You may want to link to http (rather than https), since this site has a bad
certificate. Big scary warnings on Firefox here.

~~~
sp332
I was expecting a self-signed certificate, which I usually accept, but this
one shows up as "localhost.localdomain". No thanks, I don't want to trust this
cert to sign for my localhost :)

------
ken
Besides the superficial ("parentheses!"), I've heard two major complaints
about Lisp. One is that it's just too powerful. The other is that macros don't
really let you do anything you can't do with lambdas in other languages, just
with (much) easier quoting.

I don't think you can have it both ways. (I'm not saying that any one person
is making both of these points, but person B's anti-Lisp argument is arguing
against person A's anti-Lisp point.) If you can't use Lisp in a team because
somebody might write a macro that you don't understand, how can you deal with
Python or Ruby or C# code that inevitably tries to fake it by taking
functions-that-return-other-functions as parameters?
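For a concrete sketch of that style (the names here are made up for
illustration), consider a retry combinator in Python built from nothing but
functions returning functions, which is exactly the nesting a Lisp macro
would hide behind cleaner syntax:

```python
# Hypothetical example of "faking" a macro with higher-order functions:
# with_retry(times) returns a decorator, which returns a wrapped function.
def with_retry(times):
    def wrap(fn):
        def wrapped(*args, **kwargs):
            last_error = None
            for _ in range(times):
                try:
                    return fn(*args, **kwargs)
                except Exception as e:
                    last_error = e
            raise last_error
        return wrapped
    return wrap

@with_retry(3)
def flaky():
    return "ok"
```

Whether you find that more or less readable than a `with-retry` macro is the
whole debate in miniature: the nesting is real either way; only the syntax
that exposes it differs.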

(The other common alternative I see these days is to make your DSLs in XML.
That means they're more verbose, you can't step through them in your debugger,
and so on -- plus you still have the problem that allowing arbitrary DSLs is
too powerful! I suppose there's also a third alternative: don't even try to
use higher-level abstraction, and build giant apps out of low-level spaghetti
code.)

I've seen perfectly readable code in Lisp, even making extensive use of
macros. (There are conventions, and good programmers do follow them.) I've
also seen perfectly undecipherable code in every language I've ever seen. When
I've watched individuals write code in multiple languages, those that write
bad Lisp tend to write bad anything. Nothing that I've seen leads me to
believe that there is any significant set of programmers who can only write
bad Lisp code, but can write great code in lower-level languages. We can give
them Lisp, though, and they will at least write less of it!

------
sjs
You could make the same argument about Unix, but I still say that with great
power comes great responsibility. If you can't handle the responsibility maybe
you should use watered-down tools. That's somewhat bleak, though, and following
that logic you end up with Java. Java is not the worst language on Earth, but I
doubt it's your favourite, and no one writes it without a code-generating and
refactoring IDE.

~~~
RodgerTheGreat
This is both glib and anecdotal, but Java _is_ one of my favorites, and I never
write Java with an IDE. It has an extensive standard library that I can count
on having available without requiring users to install optional packages, it's
easy to write portable code, and I can write software for everything from a
cheap feature-phone to a high-end server.

There's a lot of horrible Java code on the internet, mostly because there are
_tons_ of people writing Java. Nevertheless, in good hands, Java can be quite
elegant and succinct.

~~~
chimeracoder
> Nevertheless, in good hands, Java can be quite elegant and succinct.

Could you elaborate, perhaps with an example? Not trying to start a flamewar
or anything - but I honestly can't imagine a case where Java is succinct
relative to other languages. And as far as elegance goes, Java is so overly
verbose and fixated on classes ('too many classes? there's a class for that!')
that I have a hard time thinking of it as elegant.

But then again, maybe I'm just used to reading bad Java code everywhere, and
you've been lucky enough to find the good stuff!

~~~
sjs
There's no way to reduce something like:

    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            // ...
        }
    });

It's just the nature of the language. I honestly don't mind recent versions of
Java all that much but without an IDE I would lose my mind in about 3 seconds.
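For contrast, in a language with lightweight anonymous functions the whole
`Runnable` dance collapses to one line. Here's a sketch in Python, where
`run_on_ui_thread` is a made-up stand-in that simply invokes its callback
(a real one would post to an event loop):

```python
# Hypothetical stand-in for a UI-thread dispatcher: with first-class
# functions, the anonymous-class boilerplate disappears entirely.
def run_on_ui_thread(callback):
    # a real implementation would enqueue this on the UI event loop;
    # here we just call the function directly
    return callback()

result = run_on_ui_thread(lambda: "done")
```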

And once you start writing something significant, or something that needs to
work cross-platform, say hello to design patterns to work around the
straitjacket.

I'll take Lisp, JavaScript, Ruby, or Python any day of the week. All languages
have warts but to me it seems that Java has warts by design.

------
billrobertson42
C/C++ can be taken to extremes with preprocessor abuse. If you've never looked
at the obfuscated C programming contests, then you should. As far as Scala
DSLs go, didn't somebody post one the other day that made an ASCII art
picture of a Christmas tree into a valid Scala program? Isn't there also a
Scala DSL that lets you make it look like BASIC?

Sure, there are languages that don't let you get at the meta, but just because
the ones that let you do can be abused does not invalidate the usefulness of
the notion.

~~~
tikhonj
I don't know about Scala, but somebody did embed BASIC into Haskell:
[http://hackage.haskell.org/packages/archive/BASIC/0.1.5.0/do...](http://hackage.haskell.org/packages/archive/BASIC/0.1.5.0/doc/html/Language-
BASIC.html)

However, it's at least partly taking advantage of the fact that normal Haskell
actually looks (in shape, anyhow) vaguely like BASIC as is. At the very least,
Haskell doesn't force you to use parentheses or braces everywhere.

Here's an example: [http://augustss.blogspot.com/2009/02/is-haskell-fast-lets-
do...](http://augustss.blogspot.com/2009/02/is-haskell-fast-lets-do-
simple.html)

------
j_baker
_This is why Lisp is a HackerLanguage instead of a commercial language:
hackers are generally loners who don't care if others can figure out their
code (at least while they are in the mode or role of hacking). Thus, they
build their own little world in it that fits themselves nicely so that they
can hack fast, but the rest of the world be damned._

I think calling this a straw man would be generous. Hackers use lisp _because_
they feel it makes code more easily readable. You don't have to agree with
them, but at least take time to understand their arguments before you refute
them.

------
hluska
For the most part I enjoyed reading this article; however, I'm uncomfortable
with one assertion:

"This is why Lisp is a HackerLanguage instead of a commercial language:
hackers are generally loners who don't care if others can figure out their
code (at least while they are in the mode or role of hacking)."

I don't know many hackers to whom this applies. Rather, the DRY ethos seems to
extend into readability and accessibility - most acknowledge they'll one day
pass the code to someone else to maintain and too much complexity makes that
near impossible.

------
bitcracker
If Lisp is too powerful then you are too weak :-)

Lisp requires a new way of thinking - in recursion, lambdas, mapcars, etc. - to
write good code that reflects the awesome abilities of Lisp. Unfortunately,
many people don't grasp it. They don't want to think, or to learn superior
ways, if they can just use a language that lets them solve their problem.
The way to the solution doesn't matter much if the solution itself works.

BTW, the same phenomenon happened with Ada. The Ada 95 language is awesome. I
admired it; it was real fun to use. But average programmers are simply
overwhelmed. That's the reason why Ada died.

Many people also complain about Unix and Linux but if you take the effort and
learn it seriously you will love it.

~~~
stevecooperorg
What awesome abilities are you talking about, particularly? Recursion, lambdas,
and mapcar are available in just about every modern language, be it VB, Ruby,
JavaScript, or PHP.
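To illustrate: the classic mapcar and recursion idioms translate directly into
mainstream languages, e.g. this Python sketch:

```python
# mapcar's equivalent is map (or a comprehension); recursion over a
# list works in Lisp style with xs[0]/xs[1:] standing in for car/cdr.
nums = [1, 2, 3, 4]
squares = list(map(lambda x: x * x, nums))

def total(xs):
    # recursive sum; base case: empty list
    return 0 if not xs else xs[0] + total(xs[1:])
```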

~~~
pavelludiq
Here is a short list of awesome Lisp abilities, some of them not unique to
Lisp, but still an interesting read: <http://random-state.net/features-of-common-
lisp.html>

~~~
bitcracker
Interestingly, this article doesn't dive deeply into the most awesome Lisp
feature: macros. Here's a quick glimpse:

<http://www.apl.jhu.edu/~hall/Lisp-Notes/Macros.html>

And here's an insightful statement from someone who was sceptical but became a
Lisp convert:

<http://www.defmacro.org/ramblings/lisp.html>

Lisp is so different from all other languages that you have to use it to
understand it. Just reading about it is not enough.

Note that there are many variants of Lisp. Older Lisp versions are suitable
for experts only. For beginners, I would recommend Scheme, which is a well-
defined successor of Lisp. As an SDK I would recommend Racket (<http://racket-
lang.org/>). It is suitable for beginners as well as for professional
development.

~~~
pavelludiq
Scheme is a successor to Lisp the same way motorcycles are a successor to
trucks. Despite the syntactic and historic link, they are pretty much
different languages (as is Clojure). Scheme is a decent intro to Lisp-like
languages, and Racket is an awesome environment, both for teaching and
probably for actual work (I haven't done any in it, so I'm only speculating
about this). But one piece of advice to those who choose to start with it:
keep in mind that Scheme is only one way to look at what Lisp is (and IMHO not
the most enlightening or useful one), and it's important to know what
assumptions its creators made about what programming should be like.

As I pointed out in a comment at the beginning of the thread, Scheme teaches
you some habits that don't translate well to other Lisps. So to those thinking
of picking it up: be mindful of the assumptions of the language, and when you
decide to look into other Lisps, don't assume they will hold there as well.

In fact, I would actually recommend learning Clojure or Common Lisp before
Scheme. I consider both of them to be better languages, but I have my own set
of assumptions that might not be shared by others :).

------
motxilo
Related: <http://www.winestockwebdesign.com/Essays/Lisp_Curse.html>

------
tete
Perl has a similar (non)problem with TIMTOWTDI.

------
langsamer
I like how most languages nowadays are trending back towards LISP-like
functional languages by supporting constructs like closures, lambda
expressions, etc. It's all about modularity and making the programmer most
productive, which LISP seems to do quite well. I guess John McCarthy was on to
something.

------
raganwald
Argument for the prosecution, “The Rule of Least Power:”

<http://www.w3.org/2001/tag/doc/leastPower>

~~~
postfuturist
The paper describes a completely false dichotomy between powerful and simple.
In the context of an extremely simple and powerful language like Scheme or
other Lisps, the argument is almost meaningless.

------
algoshift
How about FORTH then?

~~~
RodgerTheGreat
Forth can be a pretty good way to go about implementing a Lisp, actually. I
have a book[1] that uses Forth to build a set of list manipulation facilities
inspired by Lisp, and then in turn uses those to construct a Prolog-like DSL
for expressing expert systems. I've actually been tinkering with a pair system in
my own Forth dialect that could be interesting if you've ever wondered what
recursive list operations would look like in a concatenative language.[2]

[1] [http://www.amazon.com/Designing-Programming-Personal-
Expert-...](http://www.amazon.com/Designing-Programming-Personal-Expert-
Systems/dp/0830626921/) [2] <http://pastebin.com/q9hMS1rE> (warning: WIP)

------
mnemonicsloth
Progress.

The standard answer to If-Lisp-Is-So-Great used to be "It isn't."

