The real power of this isn't just differentiating a given function; as others have pointed out, you can do this in, e.g., C with function pointers. Having first-class procedures and closures means you can actually return the derivative as a function. This lets you do things that the simple example doesn't show. For a physics example, see https://mitpress.mit.edu/sites/default/files/titles/content/... : given a Hamiltonian (generalized energy) describing a mechanical system, you can automatically construct a function that computes the associated Hamiltonian vector field. Pass the vector field and an initial condition into a numerical integrator, and out pops the trajectory.
To clarify, the relevance is that deriving the vector field from the Hamiltonian requires taking partial derivatives; see the linked section from Structure and Interpretation of Classical Mechanics (SICM) by Sussman and Wisdom with Mayer. Closures let one do this by composing functions in a way that mirrors the mathematical structure. Note that in the SICM implementation the partial derivatives are computed exactly, not numerically, but the resulting ODEs are integrated numerically.
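To make the closure point concrete, here's a minimal sketch (in Common Lisp, and numerical rather than exact, purely for illustration; SICM's derivatives are computed exactly):

    ;; Return the derivative of F as a new first-class function.
    ;; Central-difference approximation; H is an assumed step size.
    (defun derivative (f &optional (h 1d-6))
      (lambda (x)
        (/ (- (funcall f (+ x h)) (funcall f (- x h)))
           (* 2 h))))

    ;; (funcall (derivative (lambda (x) (* x x))) 3d0)  => ~6.0

The point isn't the arithmetic; it's that the result is an ordinary function you can pass along, e.g. into a numerical integrator.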
Having used Lisp (in the form of Racket) for many years in a production environment, I've observed:
1. People do complain about the parentheses but they generally get to work dealing with them right away. It's the loudest complaint, but it doesn't really even slow people down.
2. This is a similar complaint, and does scare people off from jumping in if they have a real choice.
3. Racket in particular solves this one to an extent. We've used it with Redis, JSON, MySQL, Perforce, etc etc productively. More commentary below, though...
4. This one is a non-issue as far as I've seen. If you hire someone who can program, they can quickly learn and be productive in Lisp.
5. This is tied in with 4, a non-issue.
6. This is an issue with perception. The problem is more that people believe they know something about Lisp because they've heard of it, and they come in with the expectation that it'll be weird or difficult, which is a bit of a speed bump to getting to work right away.
Ultimately the biggest issue we've seen is that the broader ecosystem of organizations using Lisp (or Racket) in production environments is missing. As such, there are many sharp corners that you have to file off yourself as you deploy your usage. Useful resources like Stack Overflow just don't have the answers prepackaged because it's likely you're the first to use Perforce to deploy a precompiled version of your Racket application on Windows (for example).
It's easy to underestimate just how much work people have put into making ecosystems in C++, Java, and Python contain all the pieces, experience, and knowledge you'll need to solve your particular problems quickly and effectively.
Another piece missing along with that ecosystem: outside of the implementation of the language itself, no one has taken an application of significant size and age and added a major feature to it. This is something that happens in production all the time -- and it happens on a schedule with a budget. Virtually no one using a small or academically minded language ever does this, though. And it's not until you try that you discover all those sharp corners and missing support tools and libraries.
The hesitation I'd have with Lisp(s) as an outsider, which ties in to number 3, is that the language itself is compact, elegant and extensible. It's the opposite problem to C++ (the language is too large) - the language is too small and it encourages you to build other languages inside it. So it'd be very hard to share code with others, because as soon as you start building abstractions, your language and worldview diverge from everyone else's version of Lisp - to use a library you have to learn someone else's mini-language. Maybe this is an invented concern, though. Did you find integrating other libraries a problem?
Did you have any problems with development speed or performance in production or was that fine?
With Racket, at least, the language encourages you to build very small pieces and assemble them into larger systems. It's fairly straightforward to connect the pieces together to share code.
It does also encourage you to create mini-languages, but our approach was usually a large amount of modular Racket-y code with many fairly thin layers of syntax transformation used in each mini-language case. In that sense the problem of mutually incompatible languages didn't really arise (in fact, separate mini-languages coincided with separate usage areas quite well).
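A hedged illustration of what such a thin layer of syntax transformation looks like (a textbook toy in Common Lisp syntax for concreteness; the Racket equivalent is a similarly small syntax-rules definition, and this is not code from our codebase):

    ;; A "mini-language" feature in one line of syntax transformation:
    ;; a while loop built from the primitives LOOP, UNLESS and RETURN.
    (defmacro while (test &body body)
      `(loop (unless ,test (return)) ,@body))

Everything underneath a layer like this stays ordinary, modular code.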
Performance was an issue, and we generally needed to use the precompilation feature of Racket to be successful. The issue we ran into is that Racket's precompilation system produces fairly fragile and coupled binary files -- and is generally difficult to separate cleanly from the execution environment.
"Lisp is small" without being specific about the dialect is meaningless.
ANSI Common Lisp's one and only standard, from 1994, is over 1000 pages long. The Scheme reports are a lot smaller, though growing; I think they're still under 100 pages.
I developed a Lisp dialect whose dense reference manual is over 530 pages long, with no table of contents or index.
I think that objection is likely true of Scheme. Scheme is like the assembly language of Lisps. To build it up to the level of usefulness requires a lot of wheel re-inventing, and those wheels are likely to be not quite standard.
Clojure and Common Lisp are larger languages and don't really suffer this problem.
7 - It's too easy to make a mess. And the mess made by brilliant people who are looking for any excuse to put higher order metaprogramming and functional concepts into production is considerable.
8 - It has consistently lost in the marketplace in the last 20 years. We had most top CS grads in North America groomed on SICP at one point in history. You'd think many of them would want to use Lisp in production. Many of them did. Now MIT uses Python to introduce programming, and Lisp code bases tend to be (horrifying) legacy systems. Virtually no startups base the core of their business on Lisp anymore, and it's not because the technical founders aren't aware of it.
(Lots of skunkworks Clojure projects are out there bearing load though.)
SICP uses Scheme, which is not Lisp. It was taught at an introductory level because some concepts of computation can be taught nicely with Scheme, up to making a compiler for another language (perhaps a "real" language in the students' minds?), but it doesn't actually teach you Lisp, and I imagine leaves a bad taste in many students' mouths at the nonsense no-for-looping-no-mutation they had to suffer through, which isn't a requirement in Lisp (nor necessarily in Scheme). SICP doesn't even have something as quick to go through as e.g. this series http://malisper.me/2015/07/07/debugging-lisp-part-1-recompil... to get a feel for what it's actually like to work with Lisp.
Clojure has been very successful, though it seems like your point #7 would apply more so to it (since Lisp isn't as functionally oriented), unless you mean crazy-in-production things like closures and mapping functions. =P I don't even think you can call all that many Clojure projects skunkworks ones, because the language is quite visibly successful. Something like https://www.ptc.com/cad/elements-direct/modeling you might call a skunkworks success for Lisp...
Not only does SICP use Scheme, but it uses a really old and primitive Scheme, so that's what a lot of students who've only had exposure to SICP think all Schemes are like -- almost a toy language. They usually have no idea what a modern, full-featured Scheme like Chicken Scheme or Racket is like: light years ahead of the Scheme used in SICP, with a rich ecosystem of libraries.
I mean Scheme is not Common Lisp, and Common Lisp is Lisp. Both are "a Lisp" (as is Clojure), or perhaps "a dialect of Lisp", but it's questionable if that really means anything beyond s-exps and macros.
Common Lisp is backwards compatible to Lisp I and Lisp 1.5 in many ways.
Code like that can be relatively easily ported to Scheme (naming and syntax are slightly different). There are examples which are more difficult. But generally simple (!) Scheme is not that far away from that old core.
I read that answer; I didn't find it helpful, on two points.
1. There's this really nice analogy about the German and English languages, but it doesn't have any specifics. In particular, the crux of his entire argument is this statement:
> The two languages and their attendant communities have drifted so far apart that there is nothing of value in their intersection.
He simply states this as fact, without any sort of follow-up or justification. And I disagree. Consider this Common Lisp snippet from Wikipedia computing the answer to the birthday paradox:
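(Reproducing the snippet from memory, approximately as the Wikipedia article gives it:)

    (defconstant +year-size+ 365)

    (defun birthday-paradox (probability number-of-people)
      (let ((new-probability (* (/ (- +year-size+ number-of-people)
                                   +year-size+)
                                probability)))
        (if (< new-probability 0.5)
            number-of-people
            (birthday-paradox new-probability (1+ number-of-people)))))

    ;; (birthday-paradox 1.0 1)  => 23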
Literally replace "defconstant" with "define" and "defun birthday-paradox (probability number-of-people)" with "define (birthday-paradox probability number-of-people)" and that's valid Scheme.
I'll readily admit they are two different languages, but pretending they are as different as C and Python or something is absurd.
2. Even if you convince me these two languages are so different, there's nothing here to convince me Common Lisp deserves the title of the "real Lisp" or "true Lisp". Scheme can also run LISP code from 1960 with a very small driver.
Is it that absurd? Where is Scheme's CLOS, condition system, built-in debugging framework, and batteries-included standard library? Type declarations? Dynamic scoping? Multi-methods?
Maybe Racket (or Chicken or Chez or Guile or...) has all of these things (Racket is pretty awesome), but those things aren't standardized, whereas I can get them in Lisp regardless of whether I use SBCL/Clozure/clisp/etc. Maybe it's closer to the difference between C (with a bunch of different compiler extensions) and C++ than C and Python, but it's a pretty big difference.
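For readers who haven't met it, here's a small sketch of one of those features, the condition system with restarts (all the names below are made up for illustration):

    (define-condition bad-entry (error)
      ((entry :initarg :entry :reader bad-entry-entry)))

    (defun parse-entry (x)
      (if (numberp x)
          x
          ;; Signal an error, but offer a named way to continue:
          (restart-case (error 'bad-entry :entry x)
            (use-value (v) v))))

    (defun parse-all (entries)
      ;; A handler higher up picks a recovery strategy, without unwinding:
      (handler-bind ((bad-entry (lambda (c)
                                  (declare (ignore c))
                                  (invoke-restart 'use-value 0))))
        (mapcar #'parse-entry entries)))

    ;; (parse-all '(1 "two" 3))  => (1 0 3)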
I agree they are pretty different languages, and you give some solid points as to why: CLOS, dynamic scoping, multi-methods, etc. My point was your link mentioned exactly none of those examples, in fact gave no examples at all, and so wasn't very convincing.
The second point: the argument that Common Lisp is the "true Lisp" because that's what it was made to be is a faulty syllogism. Just because that was the goal in creating Common Lisp doesn't necessarily mean that is the case.
And Common Lisp isn't the only language with a standard: Scheme had an IEEE standard (1990) before ANSI Common Lisp was even finalized (1994), and has had several RnRS updates since then.
Long story short, I don't think any language should be considered the "true Lisp". Maybe the original Lisp, but even then only as a technicality. And trying to declare a language as the "true Lisp" comes off as a little insulting to related languages and a little bit arrogant. It implies, to a certain degree, that you look down on other languages as lesser, inferior, failed attempts to reach the true zen of Lisp. I'm sure you didn't mean it that way, but as a fan of Scheme (and Common Lisp too!) it reads that way.
Maybe there's no platonic ideal Lisp, and that's the problem, since the thing that makes programmers feel good is some vague sense that there is such an ideal. A sort of god. ;) On the level of communication and definitions though, it seems totally reasonable to me that if your goal as a community of all sorts of versions/dialects of Lisp is to unify into one standard, then you get to call your new unification the Lisp.
I see your point about arrogance, though I think the insulting tone could be turned around easily -- you have these Lisp-like languages that don't even have Lisp in their name nor a lot of Lisp's crucial features trying to appropriate/claim an inheritance on all the hard work and glory and name recognition that went into Common Lisp and its predecessors. Let them have their name. I'm a pretty big fan of Clojure (and do like what Racket is doing when I read up about it) but imagine trying to sell Clojure to an old Lisp hand with "it's like Lisp" and watching them type (inc "a") vs (1+ "a"). https://pastebin.com/kBfNvxei vs https://pastebin.com/hPB2cy1X and there's no contest. (You might win them over with further persistence but they'll probably be wondering where all their favorite parts about Lisp are in this so-called Lisp-like.)
>Is it that absurd? Where is Scheme's CLOS, condition system, built-in debugging framework, and batteries-included standard library? Type declarations? Dynamic scoping? Multi-methods?
Does Lisp from 1960 have these things? Then I guess it isn't really Lisp?
How I see it, Lisp 1.5 was Lisp, but as time went on and many 'versions' of Lisp for different systems with different capabilities came into being, there was no longer really a singular Lisp. Then there was a great unification that spit out Common Lisp, and that became Lisp. So if Lisp 1.5 was released today as something new, it wouldn't be Lisp, but merely another Lisp-like. It's too different. No one today thinks that if you drop "Lisp" in a conversation you really mean Lisp 1.5. (Maybe if you write it as LISP...) It's either going to mean Common Lisp, or maybe a reference to the vague Lisp Family. I don't think Lisp will ever be synonymous with "Scheme".
Maybe you'd appreciate some Lisp history, to also answer your question about "what other dialect[s]"?
I do know some of the history. I was just asking those questions as a challenge. Regarding the question of the other dialect, the parent used the term as a way of distancing Scheme from what I assume is the one true Lisp: Common Lisp. But the point the passage they had an issue with noted two dialects. The other, of course, being that one true Lisp.
The way I see it, the things that make Lisp "Lisp" are not exclusive to either one. One might say that the intersecting characteristics aren't interesting, but these characteristics are what Lisp is truly about.
SML and Caml are both ML. Haskell 98 and Glasgow Haskell are both Haskell. Dyalog APL and APL2 are both APL. And so on.
He was just giving those as examples of ways in which Scheme and Common Lisp differ. Which is a fair point. Of course it does raise the question: are those features essential attributes of the Platonic ideal that is True Lisp, or simply nice addons Common Lisp provides?
> Even if you convince me these two languages are so different, there's nothing here to convince me Common Lisp deserves the title of the "real Lisp" or "true Lisp"
It's the lineage of Lisps that Common Lisp belongs to, which is Lisp:
Lisp I, Lisp 1.5, Standard Lisp, Interlisp, Maclisp, New Implementation of Lisp, ZetaLisp, Spice Lisp, Common Lisp, Emacs Lisp, ISLisp, ...
Those are all compatible to some extent and share a common core. Code moved a lot along these dialects. Macsyma had been written in Maclisp, moved to Zetalisp, then to Common Lisp. Reduce was written in Standard Lisp and then was ported to Common Lisp (without adoption). Hemlock was written in Spice Lisp and moved to Common Lisp. LOOPS was written in Interlisp and then moved to Zetalisp and later Common Lisp. CLOS was written in Common Lisp and was moved to Emacs Lisp and ISLisp. Even much of Common Lisp was moved to Emacs Lisp. ISLisp can be implemented as a package in Common Lisp relatively easily (Kent Pitman did this to check that ISLisp is compatible and has nothing which is not 'easily' implementable).
It was also usual that some Lisp implementations ran multiple Lisp dialects side by side, sharing everything: data, memory, I/O, ... For example Interlisp-D had a Common Lisp implementation side-by-side with Interlisp. The Symbolics Lisp Machine ran Zetalisp and four variants of Common Lisp side-by-side.
Then there are a lot of derived languages which either have different syntax, different semantics, different data structures, etc.
Examples are Logo, MDL, Scheme, Racket, Dylan, ML, Clojure, Javascript and a bunch of other languages.
Scheme initially shared some code with Lisp. Dylan tried it with a Lisp transpiler, ML initially was written in Lisp, ...
> Scheme can also run LISP code from 1960 with a very small driver.
The old Scheme is relatively near the Lisp mainstream, the newer Scheme less and less. Twenty years ago there was some code sharing between Lisp and Scheme, now almost none.
For me there are two definitions of Lisp:
* the mainline of compatible, code-sharing Lisps
* the general abstract family of Lisps which share an undefined subset of a set of features: lists, s-expressions, lambdas, parentheses, ... But you can find lots of languages in the larger Lisp family which have only a rough subset of this, like Javascript or Logo.
> The "Lisp" Scheme is a dialect of is no longer the current meaning of
"Lisp". It is somewhat like calling English a dialect of German because
of ancient history that has since between invalidated by each of their
separate evolution, Fahrvergnügen, Weltanschauung, Kindergarten, and
Pennsylvania to the contrary notwithstanding. There is also a very
limited value in talking about "Germanic languages" in terms of your
actual ability to use any of the Germanic languags. You do not order the
"vertebrate" in a restaurant, but generally choose between fish or bird
or meat. In other words, there is a time when an abstraction and a
commonality has completely ceased to be valuable.
I think Erik Naggum was a bit imperialistic in wanting 'Lisp' to mean Common Lisp. One nice thing Clojure did was to revitalize the Lisp-as-language-family meme.
I feel like there should be a "No True LISP" fallacy in CS...
It expresses pretty much all of the core principles of a LISP. If it looks like a LISP, thinks like a LISP, and behaves like a LISP, then do you really gain anything by trying to separate it from other LISPs?
Is Python a LISP? Is JavaScript a LISP? Is Java a LISP? What do you mean by LISP, by its core principles? What does a different set of core principles look like that lends itself to a different family? I threw in my vague classification above: sexps and macros. But you can have macros without sexps, and sexps without macros (lots of "toy lisp interpreters" do that), are they LISPs? Lastly one wonders why we don't go calling all these C-like languages ALGOLs. What do you gain by trying to separate from other ALGOLs?
Or less extremely, why don't we consider Python and Ruby to be the same family, or Java and C#? Those pairs are arguably more similar than the pair of Common Lisp and Scheme.
Is Picolisp a Lisp? Is Newlisp a Lisp? Is Emacs Lisp a Lisp?
Maybe it's because I first learned programming in the 1980s in BASIC, but I'm pretty used to the idea of programming languages having dialects. Apple ][s, Commodore 64s, and Atari 800s all supposedly were programmable in BASIC, but it didn't mean you could take a program written on one and expect to run it without changes on another.
Surely that's the whole point - it's a dialect, so it has the same core but not 100% of it, so you have to translate some of it to get to the other.
The dialects of Dutch in Belgium contain completely different words and phrases - but the core is still Dutch.
But for large parts of the speech they are recognisably similar.
So I'd say yes, all the Lisps you mentioned are dialects of Lisp. You can't run one directly in the other, but they use recognisably similar programming constructs.
> Lastly one wonders why we don't go calling all these C-like languages ALGOLs.
Referring to the broad Algol-family isn't uncommon (well, at least, I do it quite a bit.)
> Or less extremely, why don't we consider Python and Ruby to be the same family, or Java and C#? Those pairs are arguably more similar than the pair of Common Lisp and Scheme.
Java and C# are frequently described as being part of a single close language family (much closer than the broad Algol-family, sometimes with an intermediate-breadth family in between), and Ruby and Python are much less similar than Scheme and Common Lisp.
Sure, we see there's a grouping, and might identify it when trying to create a cluster, but it's not as pervasive as the Lisp Family meme. When Clojure came out, it was advertised as "a Lisp" (not "Lisp", but "a Lisp"). When Go came out, it wasn't advertised as "an ALGOL" or "a curly brace language". What makes so many programmers want to associate only good qualities with this floaty abstraction of "Lisp" (that isn't simply Common Lisp), such that the arguments over how Lispy Clojure/Scheme/EmacsLisp/whatever inevitably keep happening, and such that things can advertise as "a Lisp" and get attention despite lacking some pretty core Common Lisp functionality? The only other language where I see something similar is Python. Go programmers say it's like programming Python but loads faster and with type safety; Nim programmers say it's like programming Python but loads faster and with type safety; I may have even heard something similar from an OCaml user once. But you look at either of them and they're both missing key Pythonic features.
Edit: In hindsight I should have been more careful with the "Scheme is not Lisp" comment (at least I didn't say "Scheme is not a Lisp")... I'm only continuing these threads because I read the original submission pretty close to when it was posted, and back then I only knew a bit of SICP, had tried enough Lisp to duplicate some SICP examples, and was frustrated with the Lisp-2 syntax along with what I perceived as a REPL freakout on any error, so I put Lisp aside. It would have been helpful for someone to tell me "Scheme is not Lisp" back then and get me to actually explore all that Common Lisp has to offer.
To be fair, all code bases tend to be horrifying legacy systems if you wait long enough, and "long enough" is pretty much always shorter than the amount of time you'd want the system to still be in use.
I strongly suspect that it is possible for a language to be too powerful.
The more capacity a language has to let you do wonderful things, the more capacity it has to let you do truly awful things as well. ("With great power comes great responsibility" and all that).
Also, something about the expressivity of the code means that your code is wonderful, but other people's code is potentially a nightmare.
> 1 - All those parenthesis. (Still a top objection)
If you actually count parens fairly you will find that Lisp has no more than any other language, it's just that other languages use a mix of parens, square brackets and curly braces whereas Lisp only uses parens. Also, if you really don't like parens, you can get rid of a lot of them using macros. See e.g.:
and in particular the binding-block (BB) construct.
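A toy sketch of the idea (this is not the actual BB construct, just an illustration of flattening binding forms; bb* is a made-up name):

    ;; Alternate variables and values without per-binding parens.
    ;; (bb* x 1 y (+ x 1) (list x y)) acts like LET* and returns (1 2);
    ;; the final form is the body.
    (defmacro bb* (&rest forms)
      (if (cddr forms)
          `(let ((,(first forms) ,(second forms)))
             (bb* ,@(cddr forms)))
          (first forms)))

Compared with the equivalent nested LET*, that's two pairs of parens saved per binding.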
> 2 - Lisp doesn't look like or work like what I'm used to.
Neither does anything else until you get used to it.
> 3 - Lisp doesn't have as many libraries as the most popular mainstream programming languages.
That used to be one of my objections, but it's simply not true any more. And with Quicklisp accessing the (very large) collection of available libraries is incredibly convenient. (Thanks Zach!)
> 4 - There aren't nearly as many Lisp programmers, so it'll be hard to find more to join your project/company if you use Lisp.
That becomes a self-fulfilling prophecy. But it's no harder to create new Lisp programmers than it is to create new X programmers for any other value of X.
> 5 - There aren't nearly as many Lisp jobs, so why bother learning Lisp if you're going to have a hard time finding work using it?
Because knowing Lisp is a huge lever for learning other languages. Once you know Lisp everything else becomes a lot easier to learn (but a lot more frustrating too because now you will be aware of the shortcomings of other languages).
> 6 - Lisp is ancient, and anything that old is useless and primitive compared to new and shiny languages.
Chasing the shiny new thing is usually a mistake. Most old things suck simply because most things suck in general. But the things that don't suck continue to not suck even after they become old. In fact, standing the test of time is one of the defining characteristics of non-sucky things.
The part about Lisp being primitive was rather silly. Many languages don't have anything close to what Lisp has in its macro system, not even other functional languages, even those with templating systems. Perhaps Perl 6 comes close (I haven't had the chance to use P6 macros yet), but to say that Lisp is 'primitive' is rubbish, unless you're going to claim that case insensitivity is enough to declare a language primitive.
F# and all ML derivatives are an aberration because they use juxtaposition to denote function calls. That eliminates a lot of parens but personally I find the result less readable.
But compare to, say, C where no one complains about the parens:
    int fact(int n) {
        if (n==0) {
            return 1;
        } else {
            return n*fact(n-1);
        }
    }
That's six pairs of parens and braces. The only reason it's not more is because some function calls in C are disguised as operators, and those just happen to be the functions that factorial uses. If we wanted a semantically equivalent function that handled bignums, every one of those operators would become an explicit function call, each adding its own pair of parens.
> F# and all ML derivatives are an aberration because they use juxtaposition to denote function calls. That eliminates a lot of parens but personally I find the result less readable.
It makes a lot of sense when all functions are curried. Parenthesised function calls treat all of these as different things: f(a, b, c, d), f(a, b, c)(d), f(a, b)(c, d), f(a)(b, c, d), f(a)(b)(c)(d).
If functions aren't curried, then these all mean different things. When functions are curried, all of these have the same semantics (call f with a, call the result with b, call that result with c, and call that result with d); ML's juxtaposition syntax makes these semantically-identical expressions syntactically-identical too: f a b c d.
Yeah, you know what else I'm realizing? All the different kinds of punctuation let my eyes skim over different parts of code so easily, and lisp doesn't give me that.
You know how, when you're reading natural language, you don't actually read individual letters, not once you have any reading fluency? That's how I read code, too, I'm realizing. When I see the parentheses in C#, my brain just goes "oh that's a method/function call" and I can either skip over the contents of the parens or read them if I need to. Reading comprehension is pretty quick.
Looking at all the parens of even that simple Lisp factorial, I realize that I'm feeling a lot of cognitive load that would simply never go away, because the punctuation doesn't let me filter any of it out. I'd have to read everything more carefully.
> Looking at all the parens of even that simple Lisp factorial, I realize that I'm feeling a lot of cognitive load that would simply never go away, because the punctuation doesn't let me filter any of it out. I'd have to read everything more carefully.
Completely the opposite here.
Parens mean a function call or a list. Lists are usually denoted by a leading quote.
I don't read the parens, I skim everything, reading words. And the punctuation is really simple.
For example:
    if (number? a)
        + a b
        error "Not A Number" a
I stripped most of the parens there (like Wisp [0] does), because I don't really see them. I see a function call and arguments, because there isn't anything else.
Scheme has the very least syntax out of just about any programming language I've used. It has a tiny cognitive load compared to most.
Try putting the closing parens on a line aligned with the opening one, as if they were K&R style closing braces (ignoring that open-paren goes before keyword/func-name, rather than after)
    (define (fact x)
        (if (zero? x)
            1
            (* x (fact (- x 1)))
        )
    )
Square brackets work in some dialects, as well:
    [define (fact x)
        (if (zero? x)
            1
            (* x (fact (- x 1 ]
Almost all Lisp code will put the closing parens all together at the end. And the idea is to not care about how many there are by using an editor that provides the appropriate assistance.
Especially since most IDEs seem to go out of their way to make sludge out of the text, unless you have hours to spend training each one, and can figure out how to do so...
... until your coworker edits the file in the out of the box IDE and munges it.
I guess nowadays, our languages and frameworks are so ugly they need a "bag" over their [inter]face while you are doing them :-)
To me, that's a sign that maybe a language has too little syntax.
I can read C++, Javascript and PHP just as well in plain text because the other non-paren elements provide necessary contextual cues, along with nesting. Syntax coloring is helpful, but it shouldn't be necessary.
I can at least understand nesting closing parens on their own lines, the way Roboprog did above, but throwing all of them on a single line just seems like needless noise.
It's not something we ever really think about. That list of closing parentheses just signals termination of the top-level S-expression.
And there's virtually nobody coding in a Lisp that doesn't use a structural editing mode (like Paredit or SmartParens). E.g., any time you type "(", you get "()", enforcing balance.
What we rely on is proper indentation to indicate the important groupings.
Exactly. IMHO, this is quite similar to Python indentation rules, except it is up to writers of Lisp code to be disciplined enough to maintain the proper indentation.
Wait, even Lisp does that for me, because it has a built-in source code pretty printer:
Call the pretty printer to format this mess:
CL-USER 41 > (let ((*print-case* :downcase))
(pprint '(defun add-text-padding
(str &key padding newline)
"Add padding to text STR. Every line except for the first one, will be
prefixed with PADDING spaces. If NEWLINE is non-NIL, newline character will
be prepended to the text making it start on the next line with padding
applied to every single line."
(let ((str (if newline (concatenate 'string (string #\Newline) str) str)))
(with-output-to-string (s) (map 'string (lambda (x) (princ x s) (when (char= x #\Newline)
(dotimes (i padding) (princ #\Space s)))) str))))
))
It prints wonderfully formatted and indented code:
    (defun add-text-padding (str &key padding newline)
      "Add padding to text STR. Every line except for the first one, will be
    prefixed with PADDING spaces. If NEWLINE is non-NIL, newline character will
    be prepended to the text making it start on the next line with padding
    applied to every single line."
      (let ((str
             (if newline
                 (concatenate 'string (string #\Newline) str)
                 str)))
        (with-output-to-string (s)
          (map 'string
               (lambda (x)
                 (princ x s)
                 (when (char= x #\Newline)
                   (dotimes (i padding) (princ #\Space s))))
               str))))
Now one would only need to fix this beginner level code.
No. Our editors auto-indent the code for us. This is one of the reasons Lisp syntax is better than Python's: Python syntax does not have enough information to auto-indent, so there it really is up to the programmer. Not so with Lisp.
I hate that when I code in Python and refactor, moving stuff around and such, I can't just copy/clip around and then do a final 'indent all my work correctly' pass like I can in a language like Lisp.
I am kind of developing a grudge against whitespace-sensitive languages. Not because they force the programmer to indent properly, but because they disable my editor's ability to do it for me.
Actually, Python has the equivalent of open-parens: the colon. What it doesn't have is close-parens, but if you use Emacs then it will outdent on PASS and RETURN statements. In my personal coding style I always put PASS statements at the end of blocks so my code auto-indents properly. Hard-core Pythonistas hate this, but I really don't care what they think.
Thanks. Almost nobody liked my comment (downvoted), but it sure kicked off a long discussion :-)
I haven't done any actual Lisp since the 80s on a CDC Cyber mainframe and a dedicated expansion card on an Apple II, so I'm "a wee bit" out of date.
I'm very visual / "geometric", so I like to see the "shape" of things. I can't stand looking at stuff "formatted" (?) in the typical enterprise java (8) "staircase of doom" with minimal whitespace, incredibly long lines, with a few arbitrary line wraps that indent to random spots way off on the right.
I swear, many programmers have never considered how newspapers and textbooks are formatted. They don't sprawl text endlessly across even if the paper is wide, they keep it reasonably narrow so your eye can follow it.
Seriously, if you had shown me the text of the code I'm working on now when I was a year into college in the early 80s, I think I would have changed my major.
To any Lisp enthusiasts who wonder why the language family isn't more popular: this is 99% of why. You have to use an editor where "copy" is called "kill-ring-save" and is invoked with "Meta-w". Any attempt to use any other tool will be met with incredulous responses of "just use Emacs".
It's such a shame because the development environment for Lisp could be so much better than other languages. It could be mindblowingly futuristic, instead of mindblowingly archaic.
I've just started using Clojure (another Lisp) and I use IntelliJ IDEA with the Cursive plug-in, with Parinfer to deal with the parens.
It works like a charm. I can have a REPL and my code right in front of me, I can cmd-down to go to a function definition, and it's overall a very pleasant developer experience.
Now, I did experiment with Emacs, so I do understand what you're talking about -- kill-ring-save and the related key bindings made zero sense to me, and I was having trouble doing things I'd do easily in IntelliJ.
> You have to use an editor where "copy" is called "kill-ring-save" and is invoked with "Meta-w"
You are misinformed. I'm using a GNU Emacs (-> Aquamacs) on the Mac and cut/paste/copy are invoked with the usual command keys. But usually I use the LispWorks editor, which is also in the Emacs family, and even there cut/copy/paste is the usual command sequence... Both editors are built with excellent Lisp support. One is written in C + Emacs Lisp and the other one in Common Lisp.
> mindblowingly archaic
You haven't even understood the old part, how would you be able to deal with the futuristic part?
He didn't talk about old technology like 'smartphones', which existed as products (Nokia Communicator, ...) 20 years ago, but claimed that much better 'futuristic' IDEs for Lisp are possible, though what he claims may currently only exist in his imagination...
Any large piece of software will have its own terminology (car, cdr, frame, window, buffer, yank, kill, meta, ...). It is a shame that people would rather fixate on these superficial things instead of the minimal effort to get familiarized.
To me the hard part about Lisp is that every programmer that uses Lisp tailors it so much to their own taste that it can become quite hard to read the 'top level' of a Lisp program without first having gone through all the lower layers. It is as if every project in Lisp somehow magically develops its own DSL. That's a high hurdle for newbies to clear.
Sounds like a common objection to C++, where the language is so large that every team chooses its own subset of features they'll use, and ignore the rest :)
That's much less of a problem, because if you know the whole thing, then any subset of it is also known. It only becomes a problem if you need to change the code, because then you have to be aware of the subset to restrict yourself to.
C++ actually has a similar problem with developing idiosyncratic in-house solutions to common problems - it's not uncommon to find hand-rolled collections, reference-counting pointers etc, especially in older codebases from before STL and Boost became more widely accepted. But because the language doesn't offer anything close to the flexibility of Lisp macros, those idiosyncrasies are also much easier to parse.
Knowing the language itself is not that hard. It's big, but manageable. Obviously there will always be corner cases where you need your language lawyer hat (and the copy of the Standard), but those exist for pretty much any language.
The library is more complicated, especially once you get to iostreams and locales. But also less of an issue in this scenario.
> It is as if every project in lisp somehow magically develops its own DSL. That's a high hurdle for newbies to clear.
I'm not sure it is; most Lisp code I've seen is very readable, and you don't need to understand the internals most of the time.
Like in Scheme, if the function name isn't followed by a !, then you can assume that it is functional, so you don't need to worry about anything other than input and output.
It's a DSL, it's created a non-standard list at the end, an X-Expr, but it's readable, and predictable, without me trying to work out the internals of how a string-port functions, or how read-html works beyond it reads from a port.
I mean, the "state of the art" is to have readable code everywhere. What readable really means is "have each component defined in terms of the standard library". This involves a lot of repetition and a lot more code, with the advantage that any part of the program can be understood by a new programmer.
The Lispy/Bottom Up way is to write a language, then solve your problem in terms of the language you just wrote. Only at the very low levels are you really dealing with standard library stuff. The advantages are obvious, but the disadvantage of this is you have to learn the DSL as well.
> The objections the author had way back in the day are no longer the objections programmers of mainstream languages have to Lisp today.
> Today the objections I hear are more along the lines of:
> 3 - Lisp doesn't have as many libraries as the most popular mainstream programming languages.
Amusingly one of the "objections" to Common Lisp when it was standardized was its absurdly large library. Things sure look different today.
I have to say though that many languages have a large number of libraries; the quality is, naturally, variable.+ The real library strength of a language is the power of its standard library. (Admittedly some of that can come via lore or acculturation, e.g. numpy. On the other hand, what legs will, say, React have? Who knows?)
+ no examples because I have no desire to insult anyone -- rather thanks for releasing your code!
The number of bad libraries doesn't affect the number of good libraries, so that is still an important distinction. Also, if an average quality library is good enough that I don't need to spend a few weeks doing it myself, that's still a bonus.
To be fair, the one time I decided to do something biggish and kinda important in Lisp, I stopped at #3. Not exactly because "it does not have many libraries", but because "library for X, Y and Z are not there and I don't want to write them".
Oddly, Haskell has a similar problem, but there going through C code via the FFI does not feel like a problem. I don't know why it does in Lisp; it may be just a matter of better documentation.
The only difference between using Hy and regular Python is the macros, though. What can you do easily with macros that is cumbersome to do with regular Python?
Macros aside, Hy removes the single-statement limitation of Python lambdas and the reliance on indentation. The fact that it's a Lisp also introduces a higher level of composability.
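One concrete answer: macros can introduce new binding and control forms, which a Python function can't do because its arguments are evaluated eagerly. A hedged sketch in Common Lisp syntax (Hy's defmacro is analogous; if-let here is illustrative, not a built-in):

    ;; Bind VAR to TEST's value, then evaluate exactly one branch.
    (defmacro if-let ((var test) then &optional else)
      `(let ((,var ,test))
         (if ,var ,then ,else)))

    ;; (if-let (hit (gethash key cache)) hit (recompute key))

In Python you'd have to wrap both branches in lambdas, or wait for the language itself to grow a feature (as happened with the walrus operator).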
I'm working on FFI for TXR Lisp. Until this morning, it was in the stage of "collection of working API functions, written in C, exposed with Lisp bindings".
I have almost everything I want working, including callbacks. There is a decently rich type system which supports pointers that annotate data passing directions for automatic malloc and free, and this works in both directions: out calls and callbacks (closures).
I support by-value passing and returning of structures.
And also of C arrays! Even though C doesn't have by-value arrays. You see, my FFI type (array 42 int) is actually equivalent to the C type struct __anon { int __anon[42]; } and not the C type int [42].
Just this morning, between 6 and 6:45 a.m., I wrote a partial implementation of the FFI declaration language. Here is what it looks like, with actual output:
Load the under-construction FFI macros:
(load "ffi")
Define a show macro for showing an expression and its value as expr -> value:
    (defmacro show (expr)
      (let ((val (gensym)))
        ^(let ((,val ,expr))
           (format t "~s -> ~!~s\n" ',expr ,val)
           ,val)))
Now the FFI test. Some structures we need, and declarations of FFI typedefs for them giving type info. These typedefs aren't required; we could just write out the (struct ...) syntax inline in the later FFI declarations.
    ;; Lisp struct representing C struct ldiv
    (defstruct ldiv nil quot rem)

    ;; FFI typedef for it
    (deffi-type ldiv (struct ldiv (quot long) (rem long)))

    (defstruct pipe nil rfd wfd)
    (deffi-type pipe (struct pipe (rfd int) (wfd int)))

    (defstruct dirent nil
      ino off)

    (deffi-type dirent (struct dirent
                         (ino int)
                         (off int)
                         (nil (array 62 int))))
;; the member named nil here is anonymous padding
Now, declare some FFI functions. Very succinct and expressive:
    (with-dyn-lib (libc "libc.so.6")
      (deffi c-abs "abs" int (int))
      (deffi ldiv "ldiv" ldiv (long long))
      (deffi c-getenv "getenv" (buf 3) (str))
      (deffi c-getenv-2 "getenv" str (str))
      (deffi wcslen "wcslen" ulong (wstr))
      ;; Two ways to call pipe: using a pointer to two-int
      ;; structure, or an int[2] array. They are binary identical.
      (deffi c-pipe "pipe" int ((ptr-out pipe)))
      (deffi c-pipe-2 "pipe" int ((ptr-out (array 2 int))))
      (deffi c-read "read" int (int buf int))
      (deffi c-read-str "read" int (int (ptr (array 10 char)) int))
      (deffi opendir "opendir" cptr (str))
      (deffi readdir-r "readdir_r" int (cptr (ptr-out dirent) (ptr-out cptr))))
Finally, the test: with two lines of interactive input from the TTY to satisfy two read calls from file descriptor 0. For both of these, I enter the text abc[Enter]:
Look what is happening in (c-read-str 0 str 10). We pass in a Lisp string which contains "xxxxx...". Magically, it becomes "abc\nxxxxxx..." after the read, as if we were working in C. This is because the type of the parameter is (ptr (array 10 char)). The FFI framework recognizes arrays of char and makes them correspond to strings in a two-way manner.
It's all manual memory management in the FFI layer, driven by the semantics of the type expressions. The type expression syntax is compiled to a tree of objects. Those objects are walked before and after a FFI call to do the right actions related to memory management, encoding and decoding. Any temporary buffers needed by FFI are cleaned up by it. The Lisp objects going in and out are stock Lisp objects without any special memory management related to FFI.
Also on the front page of HN today: SQL is 43 years old.
So, exactly what is the problem with being ancient? Nobody objects to SQL on those grounds. So I suspect that 6 is really "I want to object, and I found a plausible-sounding reason to do so". (It could be people who actually evaluate stuff based on how new it is. But I'd prefer to think that there aren't people who really evaluate languages that way...)
1) You can become familiar with the parentheses. One might say, "all that white space" about Python, but that doesn't make Python any less powerful a language.
2) It does not look like what you're used to, but it certainly can work like you're used to since it is a multiparadigm language. Of course, programming Lisp like you program C or Python loses a lot of the power.
3) As a Lisp programmer, this is my biggest gripe.
4-5) True. If you aren't a contractor/freelancer that can choose your tools, Lisp is a hard sell.
6) Lisp is ancient. That said, the rest of the programming world has yet to catch up to the some of the concepts in Lisp like conditions or CLOS. Many of the "advances" in the new shiny languages have been in Lisp for decades.
Lisp: 9 punctuation symbols vs. 13 for JavaScript (excluding keywords and return).
2, 3, 4, 5, 6 -> chicken-and-egg problems: Lisp is not popular enough, and this makes things difficult.
Lisp's real issues are:
1. Fragmentation: Scheme, CL, Clojure, ELisp, Racket...
2. Tooling: package manager, build system, support outside Emacs.
3. Concurrency and safety: some do better than others, but it was added too late.
True, but the issue is that most academic code and the vast majority of paid coding roughly look like the JavaScript example (C/C++/Java/JS/etc), so to most people the Lisp style looks odd and hard to parse; it's not the raw symbol count.
Lisp style is more elegant: almost everything is either data or a function. The other languages you mention are massively more complex and difficult to understand. For me C++ is some kind of clusterfuck of symbols/keywords/operators and plain magic.
After you learn prefix notation, Lisp is super easy. Unfortunately, in the world where "worse is better" we discard the beautiful and smart: Smalltalk, Lisp and ML. Instead we build our towers of Babel using PHP, JavaScript and C++.
It is so ridiculous that people choose to spend a whole career using PHP because it is easy to learn.
When it comes to learning anything, we humans have a massive tendency to prefer that which is familiar. This means that when a human is faced with an obstacle and must choose between:
- An unfamiliar tool or paradigm which is custom built to solve this and all future problems (should they arise) such that time and effort is saved
- A familiar tool which has been extended this one time to solve this one problem, and which can be inelegantly extended to solve future problems, potentially at the cost of time and effort
the familiar tool usually wins.
I find the argument that Lisp needing fewer symbols means that it's simpler to be suspect. Imagine if we took English, and replaced all punctuation with parentheses. Would it be "simpler"? In some very academic way, probably. I'm not at all confident it would actually be easier to read.
This is really something that needs input from people who professionally deal with the way our brains process visual information, especially text. It might be that there's value in having more varied and fancy punctuation, e.g. because the differences between ( and [ and { serve as anchor points for visual pattern matching.
It would be interesting to have an experiment whereby people not previously exposed to any PL have to analyze a code snippet in, say, Lisp vs JS vs Python side by side, and discover its structure (e.g. by drawing it as a block diagram).
For me it's not the parentheses, but that Lisp posts always show low-level code. At least half of my programming is tying together high-level services and libraries. I know how I can express those concepts succinctly in Java, C++, JS, Swift etc. I'd love to see some examples of Lisp for something like a REST controller, where I call services, repositories etc.
> At least half of my programming is tying together high-level services and libraries. I know how I can express those concepts succinctly in Java, C++, JS, Swift etc. I'd love to see some examples of Lisp for something like a REST controller, where I call services, repositories etc.
You should look at Racket. Racket is a Scheme-like[0] Lisp that aims to be "batteries-included". It includes things like a web-server out-of-the-box.
[0] it is technically a hybrid of R5RS and R6RS, so pedants can argue over whether it is really "a Scheme" or not.
It shows a little JSON-based REST service which stores a list of messages and allows listing them and adding to them. With more URLs you'd likely use something like this: http://docs.racket-lang.org/web-server/dispatch.html for dispatch, instead of a hand-coded function.
It's notable for only using stdlib, no additional packages (beyond purely syntactic, Clojure-like, `~>` support).
To be honest, the APIs exposed in the stdlib feel kind of awkward - like using SimpleHTTPServer in Python. Still, it's just a couple of lines and easy (automatic, in this example) in-process concurrency is not bad.
I have written an example web application for my cloud computing lecture. The code is not pretty, nor 100% functional, and there are some bugs, but it shows how my applications work. It's just for demonstration purposes and therefore kept simple.
* Lisp does actually work a lot like stuff you are used to: evaluation of argument expressions to argument values, which are passed by value: much like C or Java. Functions return a value or multiple values.
It has familiar features like mutable lexical and global variables, control constructs for selection and iteration and so on. There is even a form of goto.
Aggregate objects like structures, class instances, lists, vectors and so on are actually referential values, like in many languages.
It is said that Javascript is a dialect of Lisp. If you understand how Javascript evaluates expressions, that goes a long way toward Lisp. Ruby is sometimes called MatzLisp, after the surname of its creator, for very good reasons. Lisp has inspired many features found in other languages. The comma, ?:, && and || operators in C appear to be Lisp inspired, as is the very idea of "expression statements": for instance when we call a function in C as a statement, it is an expression with a value, which is discarded, just like in Lisp.
Lisp lists lack encapsulation; they are not opaque bags with which you do things like (add list item). That takes getting used to: always capturing the result value of a list construction. It doesn't take that much getting used to for programmers coming from C, who understand a bunch of ways of representing lists, including representations in which a null pointer represents an empty list.
A container-like list data type is easily written in Lisp, either as a function-based ADT with a couple of functions around a small state struct (or perhaps a cons cell or vector), or full-blown OOP.
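For instance, a minimal sketch (the names here are illustrative):

    ;; An encapsulated, mutable bag with (add-item bag item) semantics,
    ;; hiding the underlying cons cells behind a struct.
    (defstruct bag (items '()))

    (defun add-item (bag item)
      (push item (bag-items bag))
      bag)

    ;; (let ((b (make-bag))) (add-item b 1) (add-item b 2) (bag-items b))
    ;; => (2 1)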
> Lisp lists lack encapsulation; they are not opaque bags with which you do things like (add list item). That takes getting used to: always capturing the result value of a list construction.
You're conflating two properties: lack of encapsulation and functional update. It's true that lists expose their internal structure as conses, and it's also true that they're usually updated functionally (requiring, as you say, capturing the result), but these properties don't have to go together. It's entirely possible, and arguably desirable, to have functional collections -- where instances are immutable, but which provide operations for creating new instances from existing ones -- which are also opaque data types; see for example FSet [0]. Conversely, it's possible to have fully-mutable lists that expose their internal structure; you just have to wrap each list in a mutable cell. It would be ugly, and I don't know why you'd do it, but it's entirely possible.
You're right; encapsulation doesn't mean mutable state; it means combining code and data (making a capsule).
Lisp lists aren't ... whatever you call those stateful collection things that you can mutate with list.add(42)-type code, which people are used to in a lot of scripting languages nowadays. That will trip up people who are used to that sort of thing.
Yeah, back when I tried Common Lisp #3 was a stumbling block. I hear Clojure solves that problem pretty well, maybe because it's a little more mainstream than other lisps and also because my understanding is it's pretty easy to work with existing Java libs from Clojure.
To reduce confusion, JavaScript programmers follow the Lisp convention and put parens around expressions like the latter. Once you take the 5 minutes needed to get used to the difference between x and (x) it's actually less confusing than popular alternatives.
I'm glad for its parens. I would never have checked out Haskell (and gotten into purity and lazy evaluation) if Lisp's syntax were like Haskell's (no mandatory braces except by choice, no mandatory parens except by choice: indentation is enough, and in practice we all mostly end up indenting all code in all languages anyway, so it works for me).
The parentheses go away pretty fast once you start writing. I remember it took me a day or so. Mind you, I mostly work in Clojure, where parentheses are not used everywhere (function arity is described using a vector [ ], for example).
7 - Lisp has a tendency to lure programmers up the river of insanity [0] due to the sheer power of macros (particularly reader macros). This can leave the rest of the team (or any successors) struggling with a mountain of technical debt. Overall, I think the power of Lisp makes it difficult for a community to organize without a BDFL to keep everyone on the same page.
Yeah, this seems to be a common objection, but in my experience of actually having used a Lisp (Clojure) in production, this was never a problem. Sure, you'll likely still end up with some technical debt, but no more than the Java projects we had at the same company. And not because of macros.
When people first learn about macros, they go crazy trying to do all kinds of things that weren't possible without macros. But once developers gain a little experience with macros, they learn to use them judiciously and tastefully.
I think one problem with certain 'declarative' styles is that imperative code is, in a sense, temporally ordered.
When you lose this, you need to replace it with some other enforced order, that helps you navigate the code.
Think how much debugging is based on locating buggy code by walking through the execution path and/or data path. When you lose that kind of strictly linear execution, you still need a layer of heavy modularization to compensate.
Consider reaching a state of https://www.emacswiki.org/emacs/DotEmacsBankruptcy : you can easily modify the editor in any way from any file, so you really need to use built-in tools to navigate how the environment is affected by the config files.
_My_ main objection to lisp is that it's confusing.
I don't mean the syntax is hard to grasp. I mean the way it looks. It becomes very confusing when the program grows in size.
This is not unique to Lisp. It applies to all dynamic languages that don't have strong tooling support (IDEs with IntelliSense).
(Maybe I am wrong about Lisp being dynamic; my only experience has been with "arc"; and I have had someone tell me before that common-lisp has static types)
I used to be happy about programming python and javascript in a plain text editor (vim).
However, as I was doing my first intermediate size project, I realized that I've hit a limit.
My project was in Javascript (frontend) and I couldn't keep track of what was going on anymore. Each unit of code is understandable on its own but the whole thing is a mess to look at. When I need to change the inputs or outputs of a function, I have no idea if I've done it correctly or not; due to the lack of type checking.
I absolutely need to make sense of what is what. Having everything look the same makes me confused. And I've come to realize that tooling support is of the utmost importance. Having an editor like vim with cool tricks is nice, but what I really need is to "rename variable" or "rename function" automatically across all usages in the project.
If every feature of a programming language syntax can be used in about 100 kilobytes of code, and your project is 3000 kilobytes, that's a 30:1 "sameness ratio" at best. If you use every language feature in that program, any 3% (if not more) of your program will look the same as some other 3%.
Most programs don't exercise every language feature throughout most of their code, so their pieces look even more self-same than that.
In Lisp, we have to learn to factor in the identity of the operators into what is "same": learn not to see (defclass bear (animal) ..) as being the same "same" as (block foo (init) ...).
When you're reading Lisp code, you're seeing chunks of that same outward shape over and over.
I never thought that I would miss writing C, but writing Python in a fairly large system, I miss the type checking.
Some of the ease of use of dynamicity is paid back in the added costs of writing more tests.
In my experience a big downside of highly dynamic languages that let you hack the language itself, including Python and Lisp, is that they give programmers quite a bit of room to shoot themselves in the foot by designing language-altering atrocities. On the Python system I currently work on, I've found atrocious uses of metaclasses that make classes behave in unintuitive ways.
Don't get me wrong, I love using Python and Lisp for my own projects, and I don't like Java very much, but I've come to appreciate its one big advantage for large software systems: it limits the damage that can be done by mediocre programmers.
> When I need to change the inputs or outputs of a function, I have no idea if I've done it correctly or not; due to the lack of type checking.
Well, the good news is Common Lisp has runtime types and some implementations even support limited forms of compile time checking.
The Common Lisp Object System allows you to use classes.
One then gets nice runtime errors.
CL-USER> (defmethod add ((a number) (b number)) (+ a b))
#<STANDARD-METHOD COMMON-LISP-USER::ADD (NUMBER NUMBER) {1003ECEF53}>
CL-USER> (defmethod add ((a string) (b string)) (concatenate 'string a b))
#<STANDARD-METHOD COMMON-LISP-USER::ADD (STRING STRING) {1003FDC323}>
This works then:
CL-USER> (add 1 2)
3
Now this will get a runtime error, because there is no method for these arguments:
CL-USER> (add "1" 2)
One can also declare types. For example that something is an integer or a subset of integers...
CL-USER> (defun baz (a)
           (flet ((foo (a b)
                    (declare (integer a b))
                    (+ a b)))
             (foo a "b")))
The SBCL compiler then complains that the above code has a type error:
; in: DEFUN BAZ
; (FOO A "b")
;
; caught WARNING:
; Constant "b" conflicts with its asserted type INTEGER.
; See also:
; The SBCL Manual, Node "Handling of Types"
This could be simply familiarity. Back in the 80s I wrote a large system (100s of Kloc) in Common Lisp (Cyc) and it was clear and easy to follow...for a Lisp programmer.
I don't say "familiarity" as some sort of insult. I find human-generated Lisp, Forth, PostScript, and TECO clear. I find APL code impenetrable, but I know my APL-using friends (well, some Wall Street variant) find it as clear as a bell. And machine-generated C code is, well, opaque.
Now what I like about Lisp syntax is precisely what (if I understand you correctly) you don't: it fades into the background so I can concentrate on the algorithm. This means the C/C++ code (mostly C++ these days) I write for myself or in my small workgroup is visually unlike traditional style guides:
node*
foo(node& first, node& last){
    if(first_fails_qualifier(first)) return nullptr;
    if(last_fails_qualifier(last))   return nullptr;
    if(!some_other_precondition(first, last)) return nullptr;
    // OK, we have valid arguments
    ...do some processing on the args...
    return (wanted_first_p() ? &first : &last);
}
I find the aggressive use of { on its own line distracting; you want the interesting stuff in your fovea (so the preconditions are easy to scan, the algorithm is set apart, and the return clearly returns one type, doing a late discrimination).
There are IDEs, and if you think about it for a minute, Lisp languages have to be the easiest to implement IDEs for, as their syntax is exactly the AST and easy to parse.
> if you think about it for a minute, Lisp languages have to be the easiest to implement IDEs for, as their syntax is exactly the AST and easy to parse
I wouldn't think the parsing stage has ever been the chief bottleneck or source of complexity for IDE/tooling developers.
Learning lisp or functional programming is more of a conceptual experience than a practical one.
That is not to say it is not practical to learn lisp, but the state of the programming world today makes it less practical to learn lisp than to learn something like java.
But luckily, newer versions introduce sane features like modules, which are syntactic sugar over the closure trick that previously had to be used to fake them. And things like Promises instead of passing callback functions everywhere.
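To make that closure trick concrete, here is the pattern sketched in Scheme (a toy make-counter of my own; the JavaScript module version just wraps the same idea in an immediately-invoked function):

;; A "module" faked with a closure: count is private state,
;; and the returned procedure is the module's public interface.
(define (make-counter)
  (let ((count 0))
    (lambda (msg)
      (case msg
        ((inc) (set! count (+ count 1)) count)
        ((get) count)))))

(define counter (make-counter))
(counter 'inc)  ; => 1
(counter 'inc)  ; => 2
(counter 'get)  ; => 2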
The functional paradigm in Javascript is used everywhere because that's all there was.
I have used javascript extensively. I can tell you the functional paradigm is almost NEVER used in javascript. Javascript just contains libraries, features, and concepts brought over from functional programming, but I have never seen anyone program javascript functionally.
If a single one of your functions in javascript contains more than one imperative procedure, you have exited the functional paradigm.
Weird lineage-jumping: Bill Siebert, in his other career as a hearing researcher, is my academic grandfather.
His textbook on signals and systems is one of the two classics in the field -- I wonder, now, how much commonality there is between that and his LISP experience.
I was similarly impressed when reading the first chapter of SICP (https://mitpress.mit.edu/sicp/), which shows the same code examples. In a very few elegant lines, without explaining much syntax, some of the fundamentals of computing are laid out. I can only recommend reading the book, even if "Lisp programmer" is not on the career plan.
If you're interested in the math-abilities of LISP you should check out Gerald Sussman's talk on Flexible Systems [0]. It's THE Sussman doing THE LISP.
I'm having trouble understanding why the derivative example was so impressive to the author. Can someone explain? It seems trivial to do in any language where functions are first class citizens.
> I'm having trouble understanding why the derivative example was so impressive to the author.
Says someone 35 years later. :)
Probably every (commonly found) derivative example, and the majority of languages with first-class functions, trace back to this example.
MIT introducing Scheme and those two professors writing SICP are a large part of the reason for the current functional programming landscape. Without them I believe they'd still be as niche as APL.
The first two being based on lisp, and the third being what inspired lisp.
It's like saying: I learned low level imperative programming from using Go, C++, and register machines. Which, sure, but the C language was hugely influential to all of that.
Because lisp is good at anything? But especially alternative evaluation systems (of which theorem proving is one).
Because they took lisp and added types to it, via the ISWIM language document ("The Next 700 Programming Languages"), which was based on the original lisp.
Because the lisp language family is a formulation of lambda calculus and has first class functions, unlike any other language family at the time, and that is necessary for theorem proving that is based on lambda calculus.
Because they likely prototyped the system in a set of lisp macros before writing the language (all the authors knew lisp and had taught it to each other).
No one has really written a paper about the history of all this (I also mentioned this paper [1] in one of my other comments). But if you read the original ML paper [0] you'll see how they developed the language (notably by using ISWIM's syntax as a base). Note also that they are solving a problem with LISP (and dismissing Algol languages entirely) by building in strong types.
Note that the simplistic approach doesn't scale well, as terms tend to "explode" in size and hence computation time.
There is a whole sub field of mathematics / computer science called "algorithmic differentiation", also known as "automatic differentiation".
The goal is to take an existing computer program and transform it into another computer program that calculates the derivative just as efficiently (up to a constant factor of ~2 to 5, depending on how you measure), and with about the same numeric stability (by usual measures).
And this is just the start. Classic differentiation is also called "forward" mode. Understanding the "reverse mode", and why you want this for gradients, is even more mind-bending.
There is a whole world to discover here. The book "Evaluating Derivatives" is a good starting point for anyone interested.
It doesn't stop at forward-mode, and reverse-mode, because finding the optimal mode relates to a complicated graph problem, which, IIRC, is NP-hard in general. Not only that, but most of the difficulty is in getting these tools to work on arbitrary codebases, making it more a software engineering problem rather than a maths problem.
> It doesn't stop at forward-mode, and reverse-mode, because finding the optimal mode relates to a complicated graph problem, which, IIRC, is NP-hard in general.
Yes, there are combined forward-reverse-mode approaches as well as "cross-country" approaches. That's part of what I meant by a "whole world to discover".
> Not only that, but most of the difficulty is in getting these tools to work on arbitrary codebases, making it more a software engineering problem rather than a maths problem
Depending on your viewpoint, it is neither SE nor math, but a PL (programming languages) issue, meaning that viewing the program as, e.g., a register machine (classic assembly) leads to different approaches than viewing it as a lambda-calculus expression (which has produced great results, though they are hard to transport back to languages like Fortran or C++).
Exactly. I've just started exploring dual numbers which are needed for automatic differentiation [0]. It's mind-blowing stuff, and it's easy to code in Lisp.
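For a taste, here's a minimal forward-mode sketch in Scheme (toy names of my own; a dual number carries a value together with its derivative, and the arithmetic propagates both):

;; A dual number is (value . derivative); eps^2 = 0 falls out of d*.
(define (dual v d) (cons v d))
(define (dual-val x) (car x))
(define (dual-der x) (cdr x))

(define (d+ x y)
  (dual (+ (dual-val x) (dual-val y))
        (+ (dual-der x) (dual-der y))))

(define (d* x y)  ; the product rule, built into multiplication
  (dual (* (dual-val x) (dual-val y))
        (+ (* (dual-val x) (dual-der y))
           (* (dual-der x) (dual-val y)))))

;; Differentiate f at x by seeding the derivative slot with 1.
(define (diff f x) (dual-der (f (dual x 1))))

(define (cube x) (d* x (d* x x)))
(diff cube 2)  ; => 12, exact -- no finite-difference error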
This was back in 1983, when most (all?) other languages did not have first-class procedures. If you can imagine programming without them and then being exposed to that, it would be a big deal. We take them for granted today.
Can relate. I remember programming in BASIC, then C, then Java... and at one point thinking 'man, I wish I could send this function to this other function somehow'. Granted, I didn't understand function pointers, but when much later I learned about functional languages, it was an instant return to my childhood obsession with programming.
Well, in 1992 (IIRC), I did a numerical integration in C. For a "first class function", it just took a function pointer. That approach would have been available in C in 1983...
That's not really the same thing at all. What's missing is the ability to close over values, which is a key part of first-class functions. You could not re-create the original example in C, which is to return a new function. You'd have to return some sort of object that keeps a reference to the function pointer, and provides a special mechanism for calling it, i.e. a poor man's closure.
Sigh. How soon they forgot. You create a thunk that provides the closure. Which used to be a common thing to do when you wrote C code.
Note: I'm not saying that C could express closures and first-class function objects anywhere near as neatly as Lisp. But the idea that only Lisp allowed you to use them, and C was the land of straightforward procedural code... does not represent what I saw back then.
Just as you can write FORTRAN code in any language, you can transfer ideas from other languages to C. What's great about Lisp is that it made the ideas easily accessible.
There's no way to create a thunk in portable C even today, much less back then. You had to do some form of machine code generation on the fly, which instantly limited it to some particular architecture / calling convention at the very least.
OK, for the example given, I don't see how closures are relevant. I could write the exact same derivative function in C in 1983.
Are you saying that in general, closures are essential to first-class functions? Or are you saying that, in the derivative example, there's something going on with closures?
See smitherfield's examples. Yes, the inner function needs access to f. This means that it has to be passed in to the inner function. But in my view, that's not any different than (in the original example) having deriv-cube calling deriv with cube as an argument (or composing deriv and cube, if you prefer).
You may say that the difference is that in Lisp, deriv-cube is a partial application, whereas in C you really can't do that. And you'd be right. But for this example, I don't see what difference it makes.
> What's missing is the ability to close over values, which is a key part of first-class functions.
This is not true. You can have first-class functions which are not closures: every dynamically-scoped Lisp works that way; see Emacs Lisp without `lexical-binding: t` and the `lexical-let` implementation.
And the derivatives example will not work, because it needs to close over the variable f.
(setq dx .0001)
(defun deriv (f)
  (lambda (x)
    (/ (- (funcall f (+ x dx)) (funcall f x))
       dx)))
(defun cube (x) (* x x x))
(setq cube-deriv (deriv #'cube))
(funcall cube-deriv 2)
Error: Symbol's value as variable is void: f
Setting lexical-binding to t, so that deriv returns a closure, fixes the problem.
Incidentally, I think comparing the elisp and the Scheme versions is instructive because it shows how much is going on "behind the scenes" in this lecture. Abelson and Sussman made it all seem natural and effortless, but the key ideas in this 1983 lecture come from Sussman and Steele's research on Scheme in the late 1970s.
The blog post author marveled that the compiler could optimize all tail-calls, which was a new emphasis of Scheme (earlier Lisps only did a best-effort job), and which required e.g. changing the function calling conventions.
He appreciated how the definition of derivatives looks like the mathematical definition; but it only looks this clean because of Scheme's choice to have a single namespace for functions and values (1-Lisp versus 2-Lisp), so we get rid of elisp's setq/funcall/#' cruft.
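For comparison, the same definition in Scheme looks something like this, with f sitting directly in call position thanks to the single namespace:

(define dx .0001)

(define (deriv f)
  (lambda (x)
    (/ (- (f (+ x dx)) (f x))
       dx)))

(define (cube x) (* x x x))
(define cube-deriv (deriv cube))
(cube-deriv 2)  ; => approximately 12.0006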
And he wondered if the substitution model (which lets you think about programs like high school algebra, instead of modelling the execution in detail) could really always work. But it only works because of Scheme's choice to use lexical binding and closures.
All of these were difficult design problems at the time, and different Lisps explored different choices. We really have Scheme to thank for a lot of programming language concepts which we take for granted nowadays.
He is right, you could do something roughly equivalent in 1983-era C with function pointers, albeit a fair amount clunkier due to the lack of partial application or (any kind of) type-checking.
If I am not mistaken, you will need to know the signature of the function. Even if you use void* you would need to know the number of parameters that the function takes.
Sure. For numerical (one dimensional) integration, it took one (double) parameter. In 1992, C was getting to the point where I could make that type-safe. In 1983, not so much.
SymPy does it beautifully but it's much much more complicated than the equivalent in Scheme.
Symbolic differentiation is a topic covered in the early chapters of SICP. Doing the same in Python requires intimate knowledge of the dark corners of Python and is not at all suitable for an example in an introductory computer science class.
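To give a flavor, here is a condensed, unsimplified sketch in the spirit of SICP's symbolic differentiator (the book builds it up more carefully, with constructors that simplify the result):

;; Differentiate a quoted expression with respect to var.
(define (sym-deriv exp var)
  (cond ((number? exp) 0)
        ((symbol? exp) (if (eq? exp var) 1 0))
        ((eq? (car exp) '+)
         (list '+ (sym-deriv (cadr exp) var)
                  (sym-deriv (caddr exp) var)))
        ((eq? (car exp) '*)  ; the product rule
         (list '+ (list '* (cadr exp) (sym-deriv (caddr exp) var))
                  (list '* (sym-deriv (cadr exp) var) (caddr exp))))
        (else (error "unknown expression type" exp))))

(sym-deriv '(* x x) 'x)  ; => (+ (* x 1) (* 1 x))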
Are you sure random people are going to recognize "3/4" notation? It could be different for "3 ÷ 4", but I think "3/4" is not widely used outside of programming?
It is a country-specific convention. What's written as e.g. ½ in US would be more commonly written with a horizontal dividing line in many other places, or simply as 0.5. Similarly, 2.5 instead of 2½ etc.
The tricky thing here isn't first-class functions, but closures. For the example to work, the inner function construction has to implicitly capture the argument to the outer function.
Yeah, I understand why this is difficult if not impossible to do in BASIC, but functions are first-class in just about any assembly language.
That said, remember that the author is talking about his experience as a first-year college student. When I went to college, I had extensive experience with BASIC (GW, Q, TI, and a bit of Visual), a year of C++ in high school, and a few vague attempts at assembly (386 and Z80) that got me nowhere. The idea of function pointers was one of those scary advanced C things that I had picked up to avoid, and certainly the corresponding assembly concept didn't occur to me. Of course I knew you could jump to a computed value, but the mindset was foreign. I sort of knew that this was doable with objects in C++ - I could define a base class with an abstract method calculate(), and make a Cube subclass - but thinking of functions as first-class still wasn't obvious.
Yes, pointers to pointers etc. are difficult to grasp, but then again, so is passing functions as arguments. Also, information wasn't as readily available then as it is now.
Anyways, to make a long story short, here's a "mega-deriver" numerical calculus package I just whipped up in COMMODORE BASIC 2.0: http://imgur.com/94eNuSX
It should run (unmodified?) on all MS BASIC dialects of the time (including Apple II and VIC-20). Note the higher precision of floating-point numbers than the puny m68k lisp the OP linked to. This one's probably faster, too ;)
You can create closures in CBM BASIC V2 by referencing arrays, or create thunks as an extension. But that's beside the original point. Implementing them is not a lot of work, but since my mega-deriver already solves the problem (derivation) elegantly and efficiently (most likely more efficiently than the original m68k lisp program), I really don't see any reason to use closures here. Do you?
> It seems trivial to do in any language where functions are first class citizens
When you put that in the historical context, you will see that first-class functions with dynamic scoping were just coming around at that time. Also, Scheme, which was the LISP used in the story, was the first to introduce proper support for lexically scoped first-class functions. I would guess that Javascript inherited this from Scheme.
Did that for simple polynomials back in the early 80s. The approximation of an arbitrary function is more interesting to me, precisely because I didn't do that, I guess :-)
Cal State (Stan) only had (some) Lisp available because I was interested in AI. We only had shallow treatment of it (GC scoffed at...), and I've only been discovering deeper features second hand by reading about it the last 10 or 15 years.
I have to admit, the slightly misleading title got me. I'm currently going through SICP myself (albeit at a snail's pace), but it seems like the article's author and I both had the same initial objections to LISP, only to be blown away by its simplicity and expressiveness with a few keywords and lines of code. I'm still partial to other programming languages, but LISP holds a special place in my hard drive.
A lot of people handwave the parentheses and prefix notation as something you get used to, but it really is the thing that I think most people can't get a handle on.
There's a reason why DSLs and languages that look like real languages are sought after - they make conversion from business logic/requirements to code easier.
It makes maintenance easier - it's easy to make sure you made the right changes when the changes to your code look like the changes to your business logic.
According to RMS [1], even the secretaries in Bernie Greenberg's office were extending Emacs in Emacs Lisp. Nobody had told them what they were doing was programming and they were able to pick it up from a manual.
I would restate that as people not pushing through past their initial uncomfortable feelings and preferences. Similarly, of all the people who have ever tried Prolog only a small fraction got past the "this is weird" stage. Or Forth.
This unwillingness to learn something more than slightly different is limiting, and usually expresses itself in other ways as well. That does NOT mean that such people aren't good developers. You do run risks asking them to work outside their comfort zone, though.
> Slower than assembly? Maybe for table lookups, but who cares about something as mundane as that? I want to do magic. If I have to look up something in a table, maybe I'll use assembly code.
Best line.
Also, I wonder how Go compares? Can you do basically the exact same things?
I wanted to try the examples. Installed MIT/GNU Scheme on Windows. It gives me some heap errors. There's a bug open about this from year 2010. I despise GNU software.
> An Introduction to Programming in Emacs Lisp
> 1 List Processing
> To the untutored eye, Lisp is a strange programming language. In Lisp code there are parentheses everywhere. Some people even claim that the name stands for "Lots of Isolated Silly Parentheses". But the claim is unwarranted.
Some people show code in other programming languages to argue that Lisp's parentheses are silly or that other languages are more elegant.
BUT: I can't find any other language that can treat code as data and data as code.
AND: I can't find any language except Lisp where there is just one syntax: ().
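That claim is easy to demonstrate; a small sketch in Scheme (assuming an R5RS-style interaction-environment for eval):

;; Code is data: a program is just a list you can build, inspect, and run.
(define expr (list '+ 1 2))             ; construct code as an ordinary list
(car expr)                              ; => + (the symbol, a piece of data)
(eval expr (interaction-environment))   ; => 3 (now it's code)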
Yeah, but it's goddamned ugly and unreadable. It considers repetition to be a design feature. If you're going to sell people on the benefits of functional programming, I think you should really be pushing more for SML or Haskell or something like that.
x.f(y, z) has just as many parens as (f x y z). And Haskell in particular comes with many syntax quirks. There is a lot of cruft you need to learn to get to the underlying functional core of Haskell. With Lisp you can get 100% of the cruft out of the way within a few days to a week. And the rest is just understanding programming concepts.
I don't use any LISP on a regular basis, but it still amazes me how often people are willing to dismiss a language based on parens.
It reminds me of a friend who would never try food from any other country because: Look at all the weird colors and ingredients. I'm not trying that.
In the end, no one will ever force anyone to try something - but my view in life is: if everyone raves about something, and my main concern with it is (by my own admission) superficial, I still go and give it a try.
SICP gives you a new view on programming regardless of whether you ever touch lisp again or not.
> And Haskell particularly comes with so many syntax quirks
Wouldn't agree per se, I'd rather formulate "it gives the user-land a lot of freedom & powers for introducing syntax quirks". You can code super-clean Haskell, or you can roll in 2 dozen language extensions and libraries with a 100+ custom operators, and use TemplateHaskell --- then yeah the syntax will begin to look horribly "quirky". Many practitioners end up limiting themselves in this regard after a while, or insist on "going back to basics" as much as feasible
The super-cleanest Haskell still has a ton of syntax to learn compared to Scheme. If your goal is to teach concepts of programming, almost any time spent on the syntax of a particular language is time wasted. Particularly so during a semester in university.
Similarly, if you won't be using FP for your day job, and would be using it for the insight gained (and have never used it), a language where you can begin learning concepts without needing to internalize a bunch of syntax or infix operator precedence rules is also the better choice.
A handful of keywords (same as any language), fewer (mandatory) parens & braces than any other language, no (mandatory) semicolons, the same sort of indentation one tends to apply in any language anyway --- Essential-Haskell's "ton of syntax" in a nutshell.
Precedence rules for the built-in `Num` and `Bool` operators are equivalent to other languages'. Other operators are either custom-defined (outside the scope of the language's "syntax") or entirely inessential convenience vehicles, such as for composition (unnecessary, just handy in practice) or for avoiding even more parens (ditto) --- of which there are 2. Any tutorials or textbooks that present such operators as part of the "syntax" are, in that respect, simply a bit flawed.
What you are also glossing over are the precedence rules for operators. In this case it works in favor of your point and LISP indeed needs more parens.
However in other situations, you might need to re-parenthesize (language with flat precedence like APL) or consult your favorite chart[0] in order to figure out what is going on.
I find lisp to be the most beautiful and readable language. Lisp has practically zero syntax; all code can be written equally. You can even create your own "syntax" / language features without waiting years for the approval of a committee (I'm looking at you, JavaScript).
That's a pretty subjective opinion... I don't have much trouble with lisp but Haskell makes my eyes bleed. Of course I'm not a Haskell programmer, so it stands to reason that those who are find its syntax enjoyable.
I think haskell's syntax is extremely beautiful. However code that uses two dozen operators in half as many lines is incredibly ugly in any language and doing that can be quite tempting in haskell.
For code that non-haskellers have to work with I would just write everything out which is what lisp does and probably results in better code. Otherwise there still is a balance, I think. Like
Lispy:

    n <- textInput (set textInputConfig_inputType "number"
                      (set textInputConfig_initialValue "0" def))

Operators for precedence:

    n <- textInput $ def & set textInputConfig_inputType "number"
                         & set textInputConfig_initialValue "0"
Only on hacker news do people think there are objective opinions! The syntax is, let's say, "divisive" which is enough of a reason to think that if you want functional programming to grow, something needs to change about it.
yeah, I don't think that's true. Neither Clojure nor Haskell seem to have any problems finding adherents, nor does the functional paradigm itself seem to be in danger of... anything. Your personal distaste for parens isn't much evidence to support the idea that they are antithetical to the growth of functional programming, nor my own distaste for Haskell evidence of the same.
Unreadable? Personally, I think that Lisp has the clearest possible syntax - because it basically doesn't have any. It's just straight ASTs - something that I have to visualize myself in other languages which have more syntax sugar.
Where other languages require separating commas, Haskell has minimal syntax for function application:
f x y z
with no parentheses needed. (Technically, this is a triple application ((f x) y) z, masked by the convention of application being a left-associative binary operator.)
Right, Haskell's function application syntax is more "minimal" than Lisp's, at least in the simple case.
But golergka said that Lisp's syntax is the clearest, not the smallest. And while that's probably something that depends on one's knowledge and past experience, I think there's something to it. The parens mean that it's unambiguous, and there is very little work on the reader's part to figure out where an expression begins and ends.
There's no operator precedence to keep in mind, no special whitespace rules, etc. It's about as minimal as you can get while still being completely explicit about syntactic constructs.
That is "minimally punctuated syntax". What we want is a "minimal correspondence between written syntax and the AST"; that is what is really meant by "Lisp has minimal syntax".
What you have there is loaded with ambiguity that has to be resolved by semantics. And after the dust settles, you're left without variadic functions.
Partial evaluation and currying are very valuable techniques. They are still succinct if there is a visible, explicit partial application operator to denote a function that is constructed by binding arguments around an expression.
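In a Lisp, such an operator is a few lines; a toy partial helper of my own, in Scheme:

;; partial binds some leading arguments now; the rest are supplied at call time.
(define (partial f . bound)
  (lambda rest
    (apply f (append bound rest))))

(define add3 (partial + 1 2))
(add3 10)  ; => 13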
That's if you choose to think about them as arithmetic expressions and not function applications. If you work with arithmetic expressions a lot, you should probably use an appropriate domain language - in fact, you can build it in Lisp itself.
You can write a reader macro (in lisps that permit them, otherwise you've got no hope) but that's just lisp-speak for "program that parses strings". A language where you can use infix notation within the language is much nicer.
SML is uber cute, probably the only syntax I enjoy more than sexps. I never got the people who have problems with lisp's thin syntax.
- there's no universal syntax for everything [1]
- lisp entices you to make tiny DSLs as sets of correlated functions on some domain for which there's no syntax... [related to 1]
- close to zero magic
- parsing complexity removed and built-in, jump straight into problem solving [3]
- some advanced academics don't see classic math as the ultimate notation (one SICP author made SICM for Lagrangian mechanics); he likes the consistency and smallness of sexps [related to 3]
[1] I know, people like infix math
[2] I failed an Ada exam because I couldn't find how to typecast a 1-char string to a char type... try to guess
[3] instead of "how cute is my <incompatible-and-not-more-powerful-lang-of-the-day> looking"... Also, Brown Uni. had a chapter on not wasting time on syntax in education. Although they have made the pyret lang since... but syntax is not the only reason.
> I think you should really be pushing more for SML or Haskell or something like that.
There are two clues in this sentence (mentioning SML and not OCAML, "something like that") that suggest you do not regularly program in a "something like that" language. I wrote my first Haskell program over a decade ago. I find Haskell very hard to read. The grammar is very complicated[1] and is actually context-sensitive[2].
I do like OCaml, but have never had the opportunity to write more than a few toy programs for fun. I put SML in there just to be like "and other ML-family functional programming languages." Haskell is complicated, but some complication is needed and that's more of a learning-curve problem. If humans were capable of simple, we'd all be typing 1s and 0s. We do better with grouping, context, similar things being similar and dissimilar things being dissimilar. I don't think Lisp is the way forward for functional programming.
> Haskell is complicated, but some complication is needed and that's more of a learning-curve problem.
That is a junk argument. No one needs context-sensitive grammars. I think designing a language that cannot be parsed with an LL(1) or LALR parser is just dumb. It guarantees that your compiler will be slow right from the start. In addition that applies to all the tooling - syntax highlighting and navigation in the editor, linters, pretty-printers, etc. And in practice the tooling is not just slow but also has a ton of bugs around corner cases.
Functional programming is but one of the many paradigms that you can use in Lisp, along with imperative, object-oriented, relational, etc. When you say a language is a functional language, you are saying functional is the only paradigm it supports, which is not the case here.
A language need not force purity to allow for purely functional programs. Besides, purely functional languages all need a way to cheat to actually do anything useful. Any nontrivial program needs a mix of paradigms.