The truth about Lisp (2006) (secretgeek.net)
120 points by 0xmohit on Aug 5, 2016 | 155 comments



This really is a great piece of affectionate satire. Lisp evangelism is annoying, even to Lisp programmers.

In any case, Lisp may be the best language, but it's not the most powerful.

What is the most powerful? The language from which god wrought the universe (https://xkcd.com/224/).


I found the number of broken links in a 10-year-old blog post really interesting, quite apart from the fact that the blog post still exists after 10 years.


One of those links has a domain of “webhost4life.com”. That must have been a short life.


The link in the following line:

> Lisp is so simple to learn that you can learn lisp in just a few minutes.

can be found on the wayback machine:

http://web.archive.org/web/20061011071033/http://blog.amber....


The same thing is probably going to happen in a few years, but with URL shorteners.


I am fairly deep into learning Haskell. After I am done, should I learn Lisp (SICP)? Smalltalk? Prolog?

I am learning these languages for mind-expansion.

I am also studying AI; will learning Lisp be relevant (e.g., help me reason about and write AI systems)?


Haskell and Lisp complement each other in a lot of ways. Lisp is about empowering the programmer, and building on that empowerment. Haskell is about restricting the programmer, and then building on those restrictions. For a clear example, consider STM, how it works in Haskell, and how it doesn't work particularly well in Lisp because of the inability to restrict transactions in the necessary way.
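
Clojure, notably, does ship an STM, which makes the contrast concrete. A minimal sketch with refs and dosync: nothing in the language prevents the side effect inside the transaction from running again if the transaction retries, which is exactly the restriction Haskell's types can enforce.

    (def account-a (ref 100))
    (def account-b (ref 0))

    (defn transfer! [amount]
      (dosync
        (println "attempting transfer")  ; may run once per retry!
        (alter account-a - amount)
        (alter account-b + amount)))

    (transfer! 25)
    ;; @account-a => 75, @account-b => 25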

Lisp is good to learn, but because it has "won" in a lot of ways I'm not convinced it's as important as it used to be. Lisp acquired its reputation in a world where a C programmer would pick up Lisp and holy shit dynamic typing, first-class functions, garbage collection, REPL, macros, powerful syntax-aware editors (which C got, but later), atoms, recursion, violent foaming at the mouth and falling down in sheer awe. If it's still your first language after C, sure. But if you learned, say, Python in school, the list comes back to "macros, which you really shouldn't make much use of in the more modern understanding, and a bigger focus on recursion". Less likely to make you pass out in awe.

(Since I find I sometimes piss people off by claiming Lisp is less than the bee's knees, remember, it's because Lisp won in a lot of ways. It blazed a trail that every modern language today has traveled to some extent or other.)

Haskell still has a counter-cultural element, because Lisp's approach is quite dominant right now: most languages are about empowering the programmer to do anything at any time. The restrictions-based languages are just starting their ascent (currently led in practice by Rust, I think; if it's not already bigger than Haskell, it probably will be within a year), but it's still pretty early.


"Lisp is about empowering the programmer, and building on that empowerment. Haskell is about restricting the programmer, and then building on those restrictions."

Best, briefest way I've seen it described. I think that's how I'll start describing it, too.


It's the Perl versus Python argument of functional programming languages.


Creative. Yeah, I can see that.


> But if you learned, say, Python in school, the list comes back to "macros, which you really shouldn't make much use of in the more modern understanding, and a bigger focus on recursion". Less likely to make you pass out in awe.

It really depends which Lisp you are talking about.

If this is true, then why do I find Clojure so many orders of magnitude more powerful, more reasonable and more pragmatic than Python? I've done Python for a decade, but nowadays I would choose Clojure over it every single time.

Clojure over Python is so very much more than just macros. The default data structures are far superior, the time abstractions are fantastic, the concurrent and parallel programming features are better, the asynchronous support is miles ahead, and it has parts, like core.logic and core.match, that just don't exist at all in Python.

I also think the whole is greater than the sum of its parts. Yes, a lot of this stuff exists in Python, but not in the same elegant and idiomatic way. In Python the onus is on you to get it right. And one rarely does. Clojure, however, steers you in the right direction, making it idiomatic. When your code is right, it looks right. And when your code is wrong, it looks wrong.
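
A small sketch of the data-structure point at a REPL: "updates" return new persistent values and never disturb the original.

    (def base {:name "Ada" :langs #{:clojure :python}})

    (def updated (update base :langs conj :prolog))

    base     ;; => {:name "Ada", :langs #{:clojure :python}}
    updated  ;; => {:name "Ada", :langs #{:clojure :python :prolog}} (set order may vary)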


Only thing I'd add is that homoiconicity is a big selling point that Python doesn't have either. So "macros, a focus on recursion, and homoiconicity" might be the short list of things Lisp has that Python doesn't. (Common Lisp is also truly multi-paradigm, unlike Python.)


I consider homoiconicity to fit into the "macros". Other languages have macros, but they are at maximal power in the Lisps.

On the one hand, it doesn't seem to me to be strictly necessary to be "homoiconic" for macros to work. On the other hand, it seems like non-homoiconic languages always end up mitigating the power somehow. Maybe it's just correlation rather than causation, but unless you're planning to write a new language that doesn't matter much in practice.
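
As a minimal illustration of macros at that power, a hypothetical `unless` in Clojure (the real language ships `when-not`; the name here is just for the sketch). The body is ordinary list data that the macro rearranges before compilation:

    (defmacro unless [test & body]
      `(if ~test nil (do ~@body)))

    (unless false (println "this runs"))    ;; prints "this runs"
    (unless true  (println "this doesn't")) ;; => nil, prints nothing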


>macros, which you really shouldn't make much use of in the more modern understanding

Well, only if you're stuck with a broken macro system (cough cough Common Lisp cough cough). Here in Scheme, we have er-macros and sc-macros, as well as syntax-rules. Hygiene matters, and procedural macros rock, although you want declarative for the simple things. pg really made the wrong call on that, and it will probably haunt Arc for years to come.
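
For a flavor of the capture problem hygiene guards against, here's a Clojure sketch (Clojure's answer is auto-gensym rather than full hygiene): the macro's temporary `tmp#` can never collide with a user's own `tmp`.

    (defmacro swap-pair [a b]
      `(let [tmp# ~a]
         [~b tmp#]))

    (let [tmp 1 other 2]
      (swap-pair tmp other))  ;; => [2 1] -- no accidental capture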


As everyone knows, Scheme is a Lisp 2 out of 7 days of the week + state holidays. Where I am, it's one of those days. YMMV depending on your relationship to the International Date Line.


Scheme is a lisp all 7 days. What makes you think otherwise?


Take it up with these people: https://encrypted.google.com/search?hl=en&q=%22scheme%20is%2...

I haven't got a horse in this race.


Yeah, no. Scheme is as much a lisp as eulisp, interlisp, common lisp, BBN lisp, maclisp, Racket (which isn't the same thing as scheme), or any other incompatible lisp. It has the macros, it has the syntax, it has the data structures. It doesn't have nil/false equivalence, but that's not a core aspect of lisp.


They are/were not all incompatible. Once there was a lot of compatibility: backwards, and via layers/libraries/translators.

    maclisp -> Common Lisp
               -> Eulisp      (Eulisp took a lot of inspiration from Common Lisp)

    BBN Lisp -> Interlisp
                -> Common Lisp

Common Lisp ran inside Interlisp and Common Lisps had Interlisp compatibility packages. If you bought Xerox' Interlisp-D, it came with Common Lisp. The Xerox Interlisp community later contributed to Common Lisp. For example Interlisp LOOPS was developed into Common LOOPS (LOOPS in Common Lisp) and this was then developed into CLOS, the Common Lisp Object System. The Portable Common LOOPS implementation was even the reference implementation for CLOS.

    Scheme -> Racket
           -> Common Lisp

Scheme had some minor influence on Common Lisp, and Common Lisp had some influence on Scheme (numerics, CLOS, ...). Years ago there were Scheme implementations in Common Lisp, compatibility languages, and even a Common Lisp implementation in Scheme. Nowadays almost nobody cares anymore about making applications or libraries that can run in both Scheme and Common Lisp from one code base. Take Common Music: version 2 ran in CL and Scheme; version 3 is Scheme-specific. Actually it is specific to its own Scheme implementation, S7, and a new surface language: SAL.

This compatibility does not exist anymore. Interlisp is gone. Scheme is now its own language universe. PLT Scheme was renamed to Racket to prevent users from thinking Racket cares about compatibility with Scheme.

What was once a language group with dialects of a core language is now no more than a bunch of ideas, where independent language camps pick and mix freely whatever they find important. For all practical work (!) they are now fully incompatible. Some languages still have a common core (Common Lisp, Emacs Lisp, ISLisp, Visual Lisp), while others have new cores: Scheme and Clojure are cores of new languages, with their own syntax, semantics and pragmatics. Neither of those cares about any form of compatibility or code sharing anymore.


> PLT Scheme was renamed to Racket to prevent users from thinking Racket cares about compatibility with Scheme.

No, PLT Scheme was renamed to Racket to reflect the fact that the base language does not (and had not, even prior to the rename) really target compatibility with any of the Scheme standards. Through its module-language approach, though, Racket includes both R5RS and R6RS Scheme implementations, so it's not at all the case that Racket doesn't care about Scheme compatibility; it's just bigger than that.


That's slightly misleading: there are two things, the environment and the base language.

Racket, as an environment, has compatibility due to its module/language approach to a bunch of languages. The older Scheme R5RS / R6RS are amongst those.

http://racket-lang.org

> Racket is a full-spectrum programming language. It goes beyond Lisp and Scheme with dialects that support objects, types, laziness, and more. Racket enables programmers to link components written in different dialects, and it empowers programmers to create new, project-specific dialects. Racket's libraries support applications from web servers and databases to GUIs and charts.

The main language under development isn't one of those; it's Racket:

http://racket-lang.org/new-name.html

> They draw you in with the promise of a simple and polite little Scheme, but soon you'll find yourself using modules, contracts, keyword arguments, classes, static types, and even curly braces.

> programmers can now simply say “Racket” to refer to the specific descendant of Scheme that powers PLT's languages and libraries.

There is a specific new language, a descendant of Scheme, in which this stuff is written, and that is called Racket.

This Racket language and its main libraries are documented here:

https://docs.racket-lang.org/reference/index.html

See also:

> Shriram Krishnamurthi

> Racket is Racket. It's its own language.

> I'm not quite sure what "Lisp family" really means, given that even Common Lispers and Schemers seem to argue more than agree (maybe because they're close family members?). Given that Sussman argues that Scheme is also a descendent of Algol, doesn't that put Racket in the Algol family as well? So Racket is both an Algol and a Lisp; would you change your posting to say “picks Algol to teach son programming”? In short, do these labels tell us anything useful at all?


Scheme is the bastard child of ALGOL and maclisp, with neither the historical significance of the former, nor the PR of the latter. But I find it more elegant than either of them.


Thanks for the history. My point was just that Lisp is not merely common lisp, and that Common Lisp and Scheme aren't the only two camps.


Look at this thread for more details: https://news.ycombinator.com/item?id=6068732


I haven't done anything with Smalltalk, so I can't comment on it. However, learning Lisp (using SICP) and Prolog are both good choices. SICP uses Scheme. It'll teach you how simple concepts can be composed to build any software you want. You'll come to understand how interpreters and execution models work. With Prolog, pattern matching will become a dominant way of thinking. Prolog is also fairly unique in its backtracking execution model: backtracking is "baked into" the language itself.
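
To give a taste of the interpreter thread running through SICP, here's a toy evaluator for arithmetic s-expressions; a minimal sketch in Clojure rather than the book's Scheme, with all names hypothetical:

    ;; Evaluate expressions like (+ 1 (* 2 3)) represented as plain data.
    (defn evaluate [expr]
      (if (number? expr)
        expr                                  ; numbers evaluate to themselves
        (let [[op & args] expr                ; an application: operator + arguments
              f ({'+ + '- - '* * '/ /} op)]   ; look up the operator
          (apply f (map evaluate args)))))    ; evaluate arguments recursively

    (evaluate '(+ 1 (* 2 3)))  ;; => 7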

Later on you can utilize your knowledge of Haskell and Prolog by studying Curry, which (sort of) marries the two together.

Depending on the area of AI you choose, Lisp can be a good choice but Prolog can be as well. The class I took focused on supervised learning with support vector machines, decision trees, and neural networks. These topics use fairly advanced mathematical concepts, so the language of choice is one in which libraries exist already. We used R, which the professor uses for his research. Python is also strong with its library support.

It's a good idea for you to narrow the area of AI you want to study and then find out the language people working in that area use.


If you ever want to get into object-oriented programming, learning Smalltalk is the right way. It is a truly object-oriented language; there are no non-object types. As it is not statically typed, you don't have to build up complex inheritance hierarchies to satisfy a static type system. It is based on message passing; that is, you can send a message to any kind of object, and the object then can understand it or not.


Minor nitpick: Prolog uses unification, not pattern matching; unification is more general because it can handle unbound variables.

Major nitpick: declarative thinking becomes the dominant way of thinking, not pattern matching. When you write Prolog declaratively (without cuts, at the expense of efficiency) your code can answer multiple questions/queries.
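
The distinction is easy to see in core.logic, Clojure's miniKanren; a small sketch, assuming the core.logic dependency is on the classpath:

    (require '[clojure.core.logic :as l])

    ;; The unbound logic variable q sits inside the structure and is
    ;; solved for by unification, not merely matched against.
    (l/run* [q]
      (l/== [1 q 3] [1 2 3]))
    ;; => (2)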


Thanks for the clarifications. I learned Prolog over a few weeks for a class I took a year ago. It left a favorable impression on me.

With respect to answering "multiple questions/queries", is it fair to draw a parallel with SQL?


I don't think so. SQL is declarative, though, and the engine does the imperative work for you. A simple example of multiple uses:

    Simpsons = ['bart', 'lisa', 'homer'], member(X, Simpsons)?

will do a for-each over the list, with X bound to each character in turn;

and the same member definition can also be used to answer

    Simpsons = ['bart', 'lisa', 'homer'], member('brian griffin', Simpsons)?
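
The same one-relation-many-queries idea, sketched with core.logic's membero (again assuming the core.logic dependency):

    (require '[clojure.core.logic :as l])

    ;; Enumerate the members...
    (l/run* [x]
      (l/membero x '("bart" "lisa" "homer")))
    ;; => ("bart" "lisa" "homer")

    ;; ...or check membership with the very same relation.
    (l/run* [q]
      (l/membero "brian griffin" '("bart" "lisa" "homer")))
    ;; => ()  -- no solutions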


Yes, yes it is. In fact, PicoLisp's built-in database uses a built-in Prolog dialect as its query language.


Speaking of SICP, which version of it is the best to dive into? I found the online version of it, but it's kinda clunky to click through. Also pretty dry.


>I found the online version of it, but it's kinda clunky to click through

If you don't have the cash for the very expensive softcover, you can grab the unofficial epub, which is a bit nicer than the online edition.

>Also pretty dry.

yep. I still haven't gotten all the way through the first section. It's a tough read.


After trying to read the online edition, I picked up a soft-cover version. I'll second the sentiment that it's easier (for me) to read it as a continuous read. I can see the online version being great as a reference. But, it didn't work for me.


I think it's totally worth buying the printed version.


I do all of my own projects in Prolog, but I wouldn't immediately recommend it for AI; neither would I recommend Lisp, ML or Haskell for that matter. AI today is machine learning, which means R, Python, Matlab, possibly Mathematica, Lua (for Torch), Julia, etc., or even plain Java (more precisely: Weka).

Prolog and Lisp are interesting if you're into programming language design, as examples of their respective paradigms. They both used to be the "languages of AI" back in the day (as in, ten or so years ago...) before the modern wave of machine learning. Most Prologs, for instance, don't have any machine learning or even tensor manipulation libraries to speak of [1], so if you want to learn about machine learning algorithms you have to do it yourself from scratch. Which can be a great way to learn, of course.

One very big exception to the above: if you want to learn about the original AI project, before the most recent machine learning wave, then to an extent you can't avoid Prolog or Lisp. Many of the GOFAI textbooks are also Lisp and Prolog textbooks (or the other way around) so by reading about the one you necessarily learn about the other.

One excellent such resource is George Luger's book, "AI Algorithms, Data Structures, and Idioms in Prolog, LISP, and Java":

http://wps.aw.com/wps/media/objects/5771/5909832/PDF/Luger_0...

______

[1] Though all it takes is someone with a bit of time in their hands. Watch this space.


"but I wouldn't immediately recommend it for AI- neither would I recommend Lisp, ML or Haskell for that matter. AI today is machine learning which means R, Python, Matlab, possibly Mathematica, Lua (for Torch), Julia etc- or even plain Java (more precisely: Wekka)."

More like machine learning is about machine learning. Remember that AI has always been a bunch of subfields. One or more are extremely popular at any given point. There's still usage of rule-based, agent-based, GA, NN, lots of heuristic planners, and this number-crunching stuff called ML. The old books I started with on it taught LISP, Prolog, Poplog, etc. These days, given all I've seen, I'm strongly against Prolog for real-world AI, as the best approaches use a mix of techniques with often imprecise "facts" in their "heads." The Watson architecture is a good example, although I think they still use Prolog in part of it.

The best approach, born out of the old days, is to use a LISP for the AI but build in Prolog, Standard ML, or others as DSLs for specific problems. Allegro CL, Racket, and sklogic's tool all do that. It gives you maximum flexibility plus the ability to express the solution in the language closest to your mental model of the problem.


>> I'm strongly against Prolog for real-world AI as the best approaches use a mix of techniques with often imprecise "facts" in their "heads." Watson architecture is a good example, although I think they still use Prolog in part of it.

"Imprecise facts"? What do you mean?

Yes, Watson uses Prolog for its pattern matching (unification, i.e. Turing-complete pattern matching). They store facts retrieved from shallow parses over raw text as Prolog predicates, then let Prolog do its thing when it's time to find an answer.

>> More like machine learning is about machine learning.

I'm just finishing a Masters course at the University of Sussex. The subject is AI and most of the curriculum was machine learning. Machine learning for NLP, machine learning for Image Processing, Machine learning straight up, Neural Networks and so on, so forth.

In the UK I believe you'll find the same in any university. Also, if you keep up with the latest research, again it's all about employing statistical algorithms to build models of the data; in other words, machine learning. From the papers I've studied during my course, that's been going on for a couple of decades at least now. For instance, if you pick up one of the staple NLP textbooks (Jurafsky and Martin, Manning and Schütze, Charniak) there's not a single line of Prolog or Lisp in them, or indeed any use for first-order logic except to discuss its possible applications in representing meaning, but then nobody does meaning in NLP (because it's bloody hard).

Also, I should point out that Sussex in particular was one of the centres in Europe where logic programming was developed in the first place. They had the Poplog suite, and it's the alma mater of Christopher Mellish (of Clocksin & Mellish, from "Programming in Prolog"). There was one module in my course that had a bit of logic in it (one lecture for propositional logic, one for predicate logic, one for Bayes, I think it went) but there was absolutely no Prolog to be found anywhere whatsoever.

So it's a big deal that Sussex seems to have completely given up on it. There's probably a bit of a backlash effect, in that they rammed it down the throats of their students for a long time (I've talked with several Sussex alumni who have commented on that, and they don't have happy memories of Prolog). Even so, I think it's indicative of the attitude in the field in general.

I'm not sure what's the situation with Lisp.

But, really- AI is now machine learning. Nobody is willing to try anything else.


""Imprecise facts"? What do you mean?"

That most AI problems deal with uncertainty that's harder to handle in first-order logic. You gave a good example with NLP, where the early stuff I looked at tried logic approaches only to find they fell flat due to context and language's probabilistic nature. All kinds of things in game AI turned out that way. Even some stuff that would seem true or false, like "am I being attacked?", wasn't so clear when bluffing was involved. I couldn't imagine how to handle a poker game in Prolog with any success. That's significant given that the combo of math, human BS, and luck it brings is common in many AI problems. For these reasons, people were moving from first-order logic even in my day to fuzzy logic, probabilistic models, neural networks, and machine learning techniques.

"The subject is AI and most of the curriculum was machine learning."

Now, remember that I said one or more subfields (or trends, more accurately) are usually very popular in AI at any time. Anything now considered machine learning seems to be it these days. One other thing to factor in is the old adage in AI that any AI technique becoming very popular or effective is no longer called AI. So, as we get up to date here, we have to consider that the machine learning and AI labels might have changed in how they're applied, plus AI stuff getting non-AI names.

"Expert systems" were a huge thing back then but today they just call them business logic, business process management, workflow, whatever. Usually lots of If-Thens or Case Statements. Decision-making was AI but now the field is called planning, solving, optimization. Methods were largely search with heuristics and/or constraint solvers. Winners in timetabling at least were the same type last I checked. Game pathfinding another example. Automatic programming was a combo of high-level descriptions (eg 4GL), templates, synthesis, and so on. Still is although they call them IDE's with code generators now. Computer vision and pattern matching were definitely machine learning with statistical models and such. NN's were really different from most things in how they worked, have more resources now, and I usually hear deep learning but maybe they're called machine learning now. Chatterbots were highest performers in conversational style with many methods. Machine-to-machine and some to human communication was (and mostly still is) done with finite-state machines. Hardware synthesis tried all kinds of things but geometic methods with search + heuristics wins to this day. Knowledge storage, querying, and so on mostly went into database tech for best results & performance due to easier model and machine implementation. Some still did logic languages (eg Prolog, Datalog), encoding in neural nets, lots doing RDF/OWL/whatever, and so on.

So, those are, off the top of my head, the different areas I learned about, with the techniques and new labels that appeared. Have any of these that weren't using machine learning techniques gone mostly over to it, in academia and/or commercial implementations, that you're aware of? While we're at it, do you have a link that lists the best AI labs, so I can check on that and do a general update of my knowledge later on?

"So it's a big deal that Sussex seems to have completely given up on it. "

Normally I'd think it was chasing fads, given the bandwagon effect that deep learning in particular is having. This time, though, I saw enough logic programming experts give up on it to know either the hardware or the approach itself isn't up to most AI challenges. Time showed it was probably the approach itself. So, I agree Sussex moving on is very significant.

"But, really- AI is now machine learning. Nobody is willing to try anything else."

Non-ML is dominating in terms of highest performers in planning, synthesis, and databases. Those, I know, are getting tons of research. They just don't call it AI anymore, despite it being artificial tools replacing the intelligence of humans who do such work. Genetic Programming's "Humies" awards probably deserve some mention, given it's not really traditional AI or what you'd think of as machine learning. It's its own thing. There could be others. I agree that machine and deep learning are the big things getting the vast majority of effort, damn near drowning everything else out, but the others are still there. As I already mentioned, the names change, but I think it also depends on your university. I noticed a lot in the past, and a little now, that specific universities go big on specific approaches in general or in subfields. Yours might have gone all in on the trend more than another. A survey would be nice at this point, covering the activities I mentioned above that were called AI in the past.

"I'm not sure what's the situation with Lisp."

Me neither. For AI and other research, most academics write tools in whatever they like best or others are using. For production, they usually code it in a common language for that, like C++, Java, etc. One planner for timetabling college exams, I recall, was designed and prototyped in LISP, then coded in C++ for release. The point of bringing up LISP is that DSLs, either executing directly or generating code for specific languages, let you do every part of something as diverse as Watson in the same common language, with the benefits of the others selectively. For a Prolog example, you might parse in data with FSMs in C generated from a LISP DSL, do simple filtering/preprocessing with a safe 3GL, handle queries Prolog-style, and do user interaction with an event-driven model. All integrated into the same data structures, function calls, whatever, through a common language that's easy to parse and transform. Most apps don't need such power, but a full AI might find it useful. Many machine learning techniques can just use a 3GL, though.

Unrelated, how far along are you on porting your prior work to open-source Prolog?


SICP should be mandatory. It only happens to use Lisp, but is a very fundamental text about programming. So it is an important read, even if you do not plan to use Lisp in your future.


Lisp is relevant to what you might call the "future that used to be" of AI. Projects like SHRDLU. Back in the days when it was thought that better structured representation of symbolic data was the key to AI.

Modern AI is more of a numerical-statistical steamroller, where you're better off learning numpy or R and the like.


That's just one part of AI. If you want to see how 'dumb' the 'modern AI' is, you just have to use stuff like Google Translate. Modern. But dumb. Not AI.


> Lisp (SICP)? Smalltalk? Prolog?

Yes?

For mind expansion, I'd probably add J. For Lisp specifically, I'd look at Racket over SICP, because Racket is an ecosystem where a lot of interesting new work is happening, and SICP is not really a book on Lisp programming (e.g. it does not address macros; the book on Lisp macros is Graham's On Lisp).

The current mainstream of AI is built on statistics and machine learning. Its psychological basis is behaviorism. Lisp (and Prolog) are more likely to be applied in symbolic (psychologically Jungian?/Freudian?/Lacanian?/etc.?) approaches to AI. There are classic AI textbooks that use Common Lisp, and it's also an interesting language (there's a Rails-like convention-over-configuration embedded in it).


Yes, everybody looking for mind-expansion should learn Lisp (and Haskell too).

Just don't box yourself into the Lisp best practices. Those are great for creating successful projects, but not for learning what programming can be. (Nowadays I think this advice will extend to Haskell too, but I'm still not certain.)

I don't think it will be relevant for any modern AI (ditto for Prolog). But knowing it does change the way you program, even more so if you are in some "domain specific language prone" environment, like Haskell.

I'll also agree with weavie that you probably won't ever be "done" learning Haskell, nor done taking lessons from Lisp. I don't think anybody has ever been.


SICP isn't all of lisp. But it teaches you a lot about programming.

Lisp or Scheme are good languages to learn, and I would recommend them.

Smalltalk is perhaps the only "real" object-oriented language. It's a fantastic language to learn, and in some ways very similar to Lisp. While you're at it, learn CLOS or its cousins in Schemeland, which have some similarities. I've heard that some OO fans swear by them.

Prolog is the furthest from conventional programming of the three, and I'm not very familiar with it. I'd definitely recommend learning it, though.


SICP is phenomenal, but I'd recommend "On Lisp" first.


I would rather recommend SICP first. On Lisp is great but not the ideal Lisp starter book. Practical Common Lisp would be that instead, and it's a more recent book with modern topics.


If one isn't set on Common Lisp (assuming not, since SICP is not CL-focused), the Racket universe has amazing resources for beginners. How to Design Programs is wonderful intro material, and the Racket guide and the subsequent recommended reading on the Racket website are all pretty aces.

Racket also has a SICP language, should one want to experience the original text without running into broken parts.


Why is that?


I think once you are done learning Haskell you will know the answers to all these questions. Although one could argue you will never be "done" learning Haskell.


I think you'll get a better return from going deeper into Haskell than from dabbling in other languages. Haskell does enough things right and there's at least a decade's worth of valuable study in using Haskell libraries effectively.


Personally, I like programming languages that force me to think about a problem completely differently than what I'm used to. So for mind-expansion I would suggest Smalltalk and then Prolog, because they are completely different from Haskell, much more different than Lisp/Scheme is.

I'm not an expert on lisp and AI. From what I understand lisp has been the traditional go-to language for AI research, and for "classic" AI you probably cannot go wrong with learning at least the basics of lisp. However, a lot of recent AI research seems to be on neural networks, deep learning, and statistics. I don't know how important lisp is in those fields.


If you use Lisp, you have Prolog and Smalltalk, too. Built-in.


Yes, and while that may be an advantage for actual use of the language in practice, I think it's a disadvantage for learning the concepts behind OOP and logic programming.

Prolog forces you to solve your problem with logic programming. You cannot take the easy way out and solve it (wholly or partially) in some other way that you already know. Likewise, Smalltalk forces you to solve the problem with OOP. Haskell forces you to deal with pure functions and lazy evaluation. APL forces you to solve the problem with vector operations.

Some of the approaches and solutions may be clumsy and worse than what you can do with a mix or some other programming paradigm. But I thought the point of the exercise was to learn the different programming paradigms, and in that context I prefer programming languages that do not offer alternative ways of solving the problem. I prefer a more restrictive approach and lisp is anything but restrictive (CLOS, kanren, etc).

Also I find that a different surface syntax helps me context switch to a different programming paradigm. For example, when I see s-expressions my mind switches to scheme and thinks accordingly, and that would be a distraction to me if I wanted to program imperatively, for example. When I see Prolog, I have a hard time thinking in terms of OOP. Maybe that's just me.

The reason for suggesting Smalltalk first was that it's farther away from Haskell than Prolog (Prolog clauses can appear similar to functional programming). Switching to Prolog after Smalltalk is another big jump. I like big jumps. They help me switch context. The OP may be different, I don't know.


LispWorks Prolog can use Prolog syntax inside Lisp:

    $ lispworks

    LispWorks(R): The Common Lisp Programming Environment
    Copyright (C) 1987-2014 LispWorks Ltd.  All rights reserved.
    Version 7.0.0

    CL-USER 1 > (require "prolog")
    T

    CL-USER 2 > (in-package "CP-USER")
    #<The CP-USER package, 0/16 internal, 0/16 external>

    CP-USER 3 > (erqp)

    ?- consult(appenda).
    YES.
    OK.

    ?- appenda([1,2,3],[14,15,16],L1).

    L1 = [1,2,3,14,15,16]

    OK.


At a slight tangent, does anyone participating in this discussion know of a good resource for explaining the kinds of things that people use kanren for?


> computers of the future will not be able to implement any of our ideas without creating time-travelling algorithms that borrow processing power from other computers that are further into the future.

> This sounds difficult, but in lisp it isn't difficult at all.

> in haskell this is a built-in feature

The creepiest part is that it's almost true: https://hackage.haskell.org/package/tardis-0.4.1.0/docs/Cont...


I still do wonder why reddit was reimplemented in Python. With rewrites in other languages, I think the rewriting itself is what made the code more readable, not the new language. Plus maybe using a web framework instead of doing most of it by hand, because I assume Lisp had even fewer good web frameworks than Python back then.


>I still do wonder why reddit was reimplemented in Python

Because they knew Python better, and/or it has a better and vastly bigger ecosystem and libs, and being in Lisp only gave the very marginal return of being cool to your programming peers.


Python had better libs. And they wrote web.py. Which is just barely less cool than Flask.


That seems like an unnecessary jab.


Not if it's true.

Python does have a better and vastly bigger ecosystem and libs.

And doing something in Lisp doesn't offer any huge benefits over Python -- besides the cool factor.

Sure, Lisp has some tricks Python doesn't have -- macros, homoiconicity, uniform syntax, etc. -- but even with those, in real life, people haven't been able to come up with anything like an order of magnitude (or even several times) better programs or faster development than with something like Python.

Just ask Norvig.


Actually, someone did ask Norvig... ;-)

https://news.ycombinator.com/item?id=1803627


They told us why they reimplemented it in python.

http://webcache.googleusercontent.com/search?q=cache:orxAytX...


For the same reason people speak English and not Latin.

Even if Latin is a better and more expressive language, choosing English will make the project more viable.

A language is a method of communication, so it is a sound choice to pick a language that is understood by many, not the language that is better or more powerful.

Quick thought experiment: how probable is it that a project (open source or not) will die because it uses a language most programmers are not fluent in? How long does it take anybody (meaning you include the not-so-good programmers) to learn Python or Lisp, and effectively be able to use it and solve problems with it?


LISP is powerful because its fundamental syntactic structure and its fundamental data structure are the same: a list whose first element is the function name or keyword. You can basically write programs that extend themselves by writing out new functions and data.
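
A minimal sketch of that point in Clojure: a program fragment is ordinary list data that can be built, edited as data, and then executed.

    (def expr (list '+ 2 3))      ; ordinary data: the list (+ 2 3)

    (eval expr)                   ;; => 5
    (eval (cons '* (rest expr)))  ;; => 6 -- the "program" was edited as a list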

Other modern formats like HTML and JSON have the nested list as their fundamental data structure. But they aren't self-executing like LISP.

But having one main syntactic and data structure, the list, is what makes LISP annoying too. You have to translate other syntax and data structures into lists. It's always possible, but sometimes kludgey and unreadable.

An advantage of LISP in the 1960s was that it was interpreted, i.e. it gave relatively high-speed feedback for short programs, compared to its competitors, which were batch-compiled and might take hours to show how they would run.


"But having one main syntactic and data structure- the list- is what makes LISP annoying too. You have to translate other syntax and data structures into lists. Its always possible, but sometimes kludgey and unreadable."

I like everything but the quotation above. This is a higher-level version of the ubiquitous "the parens are ugly" objection.

This objection misses the point because "Lisp is a building material, not a language." In order to use Lisp effectively you design an intermediate language appropriate for the problem domain. This approach is fundamentally at odds with a language such as Java, whose premise is that it provides all the abstractions you need to solve the problem directly. With Lisp you literally have whatever language you want.

Non-Lisps leak more and more as complexity increases. Two examples that jump to mind right away are lambdas in Python and generics in Java. In both cases the designers plugged holes in abstractions they had chosen years before. Both are kludgey in my eyes, clear afterthoughts. Both features are trivial in Lisp; Lisp is so simple that the limitations never arose in the first place. Your language has made a compromise to simplify these things, but at the expense of flexibility. C++'s accretion disk is a marvel at this point.
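
On the lambda example, a small Clojure sketch of the contrast: an anonymous function is just a function, so it can hold several expressions, unlike Python's single-expression lambda.

    ;; A multi-expression anonymous function: a side effect, then a value.
    (def log-and-inc
      (fn [x]
        (println "got" x)
        (inc x)))

    (mapv log-and-inc [1 2 3])  ;; prints three lines, => [2 3 4]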

You can use macros to streamline elaborate data structure creation by the way.


> But having one main syntactic and data structure- the list- is what makes LISP annoying too. You have to translate other syntax and data structures into lists.

This is what macros (especially reader macros) are for. Just translate it once, save the transformation as a macro, and then write it in the way that's natural for you.

Of course, that's a lot easier to do/maintain when it is just you working on the project.


As an example of a non-list literal data structure (atom), see:

    CL-USER> (ql:quickload :local-time)
    CL-USER> (in-package :local-time)

    LOCAL-TIME> (now)
    => @2016-08-05T17:41:54.492977+02:00

    LOCAL-TIME> (enable-read-macros)
    LOCAL-TIME> (describe @2016-08-05T17:41:54.492977+02:00)

    @2016-08-05T17:41:54.492977+02:00
      [standard-object]

    Slots with :INSTANCE allocation:
      DAY   = 6001
      SEC   = 56514
      NSEC  = 492977000

    LOCAL-TIME> (class-of @2016-08-05T17:41:54.492977+02:00)
    #<STANDARD-CLASS LOCAL-TIME:TIMESTAMP>

And again, how your AST is encoded has nothing to do with the value you manipulate at execution: (make-hash-table) is a list with one symbol, but when evaluated, you obtain a good old hash table.


> Paul Graham originally wrote reddit, in lisp, on the back of a napkin while he was waiting for a coffee. it was so powerful that it had to be rewritten in python just so that ordinary computers could understand it.

Because you need a true Lisp machine... Next thing you know, you'll be waking up in "the matrix".


> Because you need a true Lisp machine... Next thing you know, you'll be waking up in "the matrix".

Hey now, Symbolics machines may have not been the fastest, but I rarely fell asleep in front of them.


And thus the student was enlightened.


I prefer Racket to Lisp. But in a sense it's also Lisp, just not Lisp-2.


I prefer Scheme to Racket. But that's also lisp.

And Scheme and Racket are far closer to each other than either is to Common Lisp.


When I started using Clojure, the first two weeks were a mess. I was trying to fit everything into a procedural way of thinking, and boy was it a mess. I felt really miserable; maybe that is how it felt when I first started to walk. And then suddenly everything clicked and the world was beautiful.

Clojure has a steep learning curve when you start. After about a year and a half there is another, much steeper curve, and if you cross it, I have heard it feels like you have superpowers. I am probably 4/10 on Clojure and 9.5/10 on Java. However, I take around 10% of the time to get better results with Clojure vs Java.

I can try taking a shot at listing why I love Clojure; however, for me it is as difficult as explaining why I love someone:

1. Computers are functional. You give them a massive subroutine and they execute it. All procedural languages have built up semantics to make it supposedly easy for humans to write code, but end up with a lot of hacks. Only when you start writing functionally do you finally feel connected to the computer.

2. Immutable structures: with most languages it's difficult to make assumptions about how your code will work. You pass your data to a function: can it change the data due to a bug or bad design? Do you create a copy before sending it? All those issues go away. Expectations are easier to understand, and stupid errors get out of the way.

3. Reusable code: because it's functional and without side effects, it's much easier to create reusable code as part of the standard library. You have functions like butlast (give me all elements of a sequence except the last), even?, and rest (return all elements except the first); see the REPL sketch after this list. Imagine how much less boilerplate that is. Check out some of the functions here: http://clojure.org/api/cheatsheet

4. Encourages you to write small functions, not 2000-line-long ones. Improves code reusability and testing.

5. Doesn't allow bad design practices like circular dependencies.

6. Because the language is built on very few core constructs, you or anyone else can enhance the language. I have seen situations where someone had a proposal to extend the language and they just released it as a library. If people like it and use it, it can get included as part of the core framework.

7. The quality of available talent: because there is a steep learning curve, almost every Clojure programmer that you meet is likely to be very smart.

8. Has built-in support for magical stuff like macros and STM. You can Google them to find out more.

9. Imagine all the Java libraries that were very complicated to use, with their confusing Javadocs. The Clojure community has built a lot of super-easy-to-use wrappers on top of them, and you could easily write one too.

10. In other languages, if I see a library on GitHub which hasn't been updated for years, it feels like it has been abandoned and I would not consider using it. With Clojure, because it's functional with no side effects, code which works now is likely to keep on working under its current contract. It is not uncommon to see libraries which were built by some of the rockstars in the community within a day and not updated for years, but they still work great.

11. Because it is functional with no side effects, code hot-replacement actually works. Many other languages have attempted it, but it always comes with caveats. With Clojure it just works. My development flow was that I would use an IDE, and while I was writing code it would hot-replace the code in the VM, find the tests that depend on the changed code, and run just those. So I could see the code I was breaking while I was writing it. This is unbelievable. When we had an issue, I used this to connect to a REPL (a command line for the VM), hot-swapped code to add logging, figured out the issue, fixed the code locally, copied the function over to replace the code on the running server, then fixed the code properly and pushed it again. (I wouldn't recommend people do this, but it explains the power at your fingertips.)
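
As promised in point 3, a quick REPL sketch of a few of those standard-library functions:

    (butlast [1 2 3 4])         ;; => (1 2 3)
    (rest    [1 2 3 4])         ;; => (2 3 4)
    (even? 4)                   ;; => true
    (filter even? (range 10))   ;; => (0 2 4 6 8)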

This list could go on and on. In my current job, I am building a big team where any set of people could come together and build a startup. Like Google's Area 120, but started more than a year before that. Unfortunately I am not encouraging people to use Clojure here, because it would set them back by months. I really love my current job; however, I keep getting impulses to quit everything and just code Clojure full time.

I hope people can see that I am smitten. Give it a try and I promise you won't regret it.

edited: for readability


> 1. Computers are functional.

Just have a look at assembly code. It looks procedural to me. FP is an abstraction on top of procedural programming. Computers aren't functional at all. Programs may be functional, not computers.


> You have functions like butlast (give me all elements of a sequence except the last), even?, and rest (return all elements except the first). Imagine how much less boilerplate that is.

As a Python programmer, those sound like a poor tradeoff for slicing syntax. And why do (drop) and (nthrest), which seem to do essentially the same but for lazy/non-lazy, take the arguments in the inverse order?


    > As a Python programmer, those sound like a poor 
    > tradeoff for slicing syntax. 
Because Python has a static syntactic system. To Lispers, it makes no difference whether it's slicing syntax or a function call. Clojure also has a `subvec` function that behaves the way you're getting at.

    > And why do (drop) and (nthrest), which seem to do 
    > essentially the same but for lazy/non-lazy, take the 
    > arguments in the inverse order?
The partial function application for `drop` allows you to create a "dropping" function like `(partial drop 10)` that you can pass about (such a function has a particular use-case w/ transducers), and partially applying `nthrest` lets you create a "to drop" function like `(partial nthrest [1 2 3])` that you can pass about.
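
A short sketch of both argument orders put to work with partial (the names here are just for illustration):

    (def drop-ten (partial drop 10))           ; drop takes the count first
    (drop-ten (range 15))                      ;; => (10 11 12 13 14)

    (def rest-of-xs (partial nthrest [1 2 3])) ; nthrest takes the collection first
    (rest-of-xs 1)                             ;; => (2 3)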


> Because Python has a static syntactic system. To Lispers, it makes no difference whether it's slicing syntax or a function call.

Conversely, as a programmer, I don't care if the syntactic system is static or not :) what I care about is ergonomics.

subvec doesn't seem to support negative indices, so while it sounds superficially similar, it can't replace all those tiny functions (like butlast), while slicing can.


    > subvec doesn't seem to support negative indices
You don't need negative indices to define a function like `butlast` (or any function for that matter):

    (defn my-butlast [vec] 
        (if (empty? vec) nil
            (subvec vec 0 (dec (count vec)))))
If you really care about negative-index support (although you shouldn't, because it is wrong) then you can easily add it. IMO, negative indices hide bugs and obfuscate code. Complaining that a Lisp doesn't support syntax Y is kind of silly because of the extendable syntax of the language: if a language implements Y, then Lisp can implement it as well. Since we have macros, we don't even need to touch the core language, and this is a huge advantage when it comes to language durability. As the Clojure community is very healthy, we have appropriated all sorts of ideas from other languages (most notably, a very "inspired" implementation of channels from another language, and a new spec system that bears a striking resemblance to Racket's contracts...) :) If Python comes out with an earth-shattering language construct, we'll gladly appropriate it and continue along our merry way.


> You don't need negative indices to define a function like `butlast`

No, my point is that if you have negative indices, you don't need butlast and a bunch of other functions; those just add more cognitive burden, in my opinion.

> Complaining that a Lisp doesn't support syntax Y is kind of silly because of the extendable syntax of the language

I wasn't complaining, and this discussion isn't just about the syntax. The top poster was citing those functions as qualities of Clojure and its stdlib, so I was comparing the two. Obviously I can write a subvec with negative indices support, but then I can also easily write a butLast in Python. But those wouldn't be in the stdlib.

I'm not even trying to put down Clojure, I don't even know enough about the language, my reply was specifically about those functions and the praise thereof.


    > those just add more cognitive burden, in my opinion.
Of course that is a valid viewpoint, but remember that using a function (instead of slicing syntax) has the added advantage of being composable and applicable. These two features are crucial when you're dealing with efficient persistent data structures and separating purity from impurity, so in a very real sense you do need these things, as they make life easier in other areas. Python isn't a functional language and doesn't have to concern itself with handling persistent data structures efficiently, so Python programmers don't mind that to achieve a composable, applicable slicing solution they'd have to wrap the slice in an abstraction. That's fine, because if you're writing Python like that, then you're probably writing Python wrong (in the sense that it wouldn't perform or read as well as it would if you had just used a functional language). But if you're writing functionally, you obviously consider that abstraction's standard existence a good thing.

The linguistic philosophies of Clojure and Python are so radically different that these functions are a quality for Clojure, while if you're writing Clojure like you'd write Python you would consider them unnecessary.


Yeah, functions have those advantages. That's why Python also has slice(), which you can then apply to any list, in a composable way. But when you don't need those features, the slicing syntax adds less crud than a bunch of functions with non-obvious names.

Eg. use of slice():

  >>> from operator import getitem
  >>> butLast = slice(-1)
  >>> getitem([7, 8, 9], butLast)
  [7, 8]


    >  But when you don't need those features
Ah, in a purely functional language you always need those features :)


The good thing about dynamic syntax systems is that they can evolve over time and change to best fit a problem domain. That guarantees the efficiency dimensions of ergonomics are always near optimal.

The bad thing about dynamic syntax systems is that they can evolve over time and change to best fit a problem domain. That guarantees the learnability dimensions of ergonomics are always far from optimal.


I would add: higher-order functions like every?, which accepts a predicate and a sequence and checks whether the predicate is true for every item. Just imagine all the for loops, null checks and whatnot that you save in every place where such a check is required. The same goes for sum, filter, map, reduce.
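
For instance, a REPL sketch:

    (every? even? [2 4 6])    ;; => true
    (every? some? [1 nil 3])  ;; => false -- a null check in one expression
    (reduce + [1 2 3 4])      ;; => 10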

Reducing accidental complexity - the core team is working hard to make things even simpler as time goes by - reducers, core.async, transducers, spec ...


I'm also looking to learn a Lisp, maybe Clojure, maybe some Scheme. Clojure for me has a big drawback (and also an advantage): it runs on a highly complex system, at least as I perceive it, as I've never done anything in Java. What does one need to learn from the Java world to program in Clojure?

I understand perfectly what you say about the hacks with procedural languages. That's why I'm going for a functional language soon.

Do you use a common SQL database with Clojure, or did you go the full route with Datomic? Do you use Clojure for desktop apps? For mobile apps? Or just back-end stuff?


You don't need to know Java to use Clojure. Give it a try, you might like it.


> 1. Computers are functional.

From what did you conclude this?


I sense it would be way more entertaining to learn how to make a language with Clang, learning BNF notation and understanding LL and LR parsing, than to learn Lisp or Haskell.

That way you quickly overwhelm zealous and eager students so they can calm down, stick with the languages normal people use, and stop with the usual functional rabbit holes.


Today we could add node.js to the replacements list.


Clojure is currently the sexy lisp du jour.

When I first learned Clojure, to me it was the first time programming truly clicked with me. That first time I ever felt that spark of "oh my god, so that's what programming can be like!"


Clojure never really caught on with me. It's not the ideas, so much as the really complex ecosystem that the JVM mandates. Which is why I tend to avoid the JVM like the plague.

Now Scheme on the other hand...


Familiarity lessens fear. I've been coming to Clojure from Java so to me the JVM and its standard libraries are a feature not a drawback.


The standard library is a benefit, the complexity requirements are the drawback: In java, and the JVM, nothing, no matter how simple, is ever simple.


I really enjoy playing with it, and every now and then, do my 4Clojure challenges.

But the only scenario I could eventually use it for, portable code between Android and UWP, isn't really covered by it.

ClojureCLR is behind Clojure and doesn't run on .NET Core.

Clojure for Android still has lots of performance problems and the project hasn't been updated since mid-2015.


Take a look at ClojureScript (if you haven't already).

It compiles to a subset of JavaScript, which is then compiled and optimized with the Google Closure compiler (the one used for GMail and GMaps) to produce even better JavaScript.

You can target UWP and Android via React Native + a ClojureScript React wrapper. An interesting fact is that the wrappers can be up to twice as fast as normal React, because immutability in the language enables very efficient implementations of shouldComponentUpdate.

https://blogs.windows.com/buildingapps/2016/04/13/react-nati...
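
A sketch of why immutability makes shouldComponentUpdate cheap, in plain Clojure (the names are illustrative): unchanged props are the same object, so "did anything change?" can be a constant-time reference comparison.

    (def props-a {:user "ada" :items (vec (range 1000))})
    (def props-b props-a)                     ; a re-render with unchanged props
    (def props-c (assoc props-a :user "bob")) ; a real change

    (identical? props-a props-b)  ;; => true  -> safe to skip the re-render
    (identical? props-a props-c)  ;; => false -> something changed, re-render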


I don't want to use JavaScript based tools.

Right now I am using C++14.


Isn't that roughly akin to saying, "I don't want to use the combination hammer/screwdriver; I prefer to bang the threaded nails in with flint flakes that I have knapped myself and that are sharp on every side?"


No, C++ is a first class language in iOS, Android and WP SDKs with corresponding tooling, debuggers and IDE support.

JavaScript implies another layer to debug and even more complexity writing FFI code.


Would ClojureScript and React Native be an avenue worth exploring for cross platform development?


No, not a big fan of JavaScript.

Right now I am using C++14, with an eye on eventually moving to Xamarin.


Why exactly are you saying it wouldn't be a good idea just because you don't like JavaScript? React Native is actually loads of fun, even more so if you're doing it in Clojure.


I don't like JavaScript.

Also it isn't a supported language, except on WP.

C++ has first class support on all SDKs in terms of tooling, visual debuggers and IDEs.

Using JavaScript implies yet another layer to debug and two FFI layers between the JavaScript and platform APIs.


Just because you don't like JavaScript doesn't mean the person you're talking to won't. Furthermore I don't know what "first class support" means when React Native has an IDE built on Atom provided by Facebook. It is a bit abstract compared to writing out Java or Objective-C but so is C++. What frameworks allow you to write native UIs in C++ on all platforms?


First class support for a programming language means it is part of the SDK provided by the OS vendor for the said platform.

It also means that the official IDE for the specific platform supports the language and is able to offer mixed debugging across all languages provided by the vendor SDK.

SDL, Cocos2d, openFrameworks, Qt, among many others.

Or just keep the logic in C++ and make use of Objective-C++, C++/CX, SafeJNI for the native UI.

The suggestion was made for me to use ClojureScript, which implies enjoying JavaScript in addition to Clojure. So I am the one that is supposed to enjoy it or not.


I thought the person was asking for themselves, not for you?


Given that it was an answer to my comment about not being happy with the state of Clojure for targeting Android and UWP, I would assume it was a suggestion to me.


What was it about Clojure that made it click?


I like Clojure because it's a Lisp-1, and for the novelty of having MVCC implemented in my language.

With respect to the Lisp 1/2 debate, I hate how this:

    (defun foo (g y)
        (g y))
    (foo #'- 10) ; error
Is an error in Common Lisp. Instead, you have to:

    (defun foo (g y)
        (funcall g y))
    (foo #'- 10) ; works
That's just so inelegant IMO. Scheme does the "right" thing and lets you do:

    (define (foo g y) (g y))
    (foo - 10) ; works
but it lacks a de facto build tool and package management system. (I also really like Scheme's macro system, but not everyone agrees.) Clojure solves the build tool and package management problem, does the "right" thing with higher-order functions, and offers seamless Java(Script) interop, which is why it has enjoyed more "mainstream" success than other Lisps.


But:

    (define (foo lst)
      (list (map + lst)
            (map - lst)))
WTF?

Or:

    (define (foo + - lst)
      (+ (reduce + lst)
         (reduce - lst)))

    (foo - + '(1 2 3))


> I like Clojure because it's a Lisp 1

That's one of the things I hate about Clojure. I _like_ being able to do this:

    (defun foo (list)
      (list "length" (length list)))


That is convenient, but keep in mind that namespaces solve this problem:

    (defn foo [list]
        (clojure.core/list "length" (count list)))
Or, in the case of CL, we could do this:

    (defun foo (list) 
        (cl:list "length" (length list)))
I'd argue that you should do this anyway to clarify what you mean by "list" when you shadow a common name like that.


> I'd argue that you should do this anyway to clarify what you mean by "list" when you shadow a common name like that.

Absolutely not, don't encourage people to do this in CL. Clojure is Clojure: there you have to do this, or if you prefer you can use (refer ... :rename {...}) to reference external names with different local names.

CL packages have their own semantics w.r.t. importing and shadowing symbols. Without context, if I read your example in CL, I start to ask myself whether I am in a package where you defined a "list" function, because that is the unique case that requires me to explicitly qualify cl:list. If there were any confusion in the first place, I could easily select the symbol and ask Lisp what it is and which package it belongs to.


Readability matters. Specifying the namespace of `list` communicates one and only one idea: I am referring to a particular binding of `list`. By leaving out the namespace qualifier when you shadow a symbol you can only add to the cognitive load necessary to understand the meaning of the code. A language's only purpose is to communicate meaning, whether that be to a machine or a person.

    > if I read your example in CL, I am starting to ask myself 
    > if I am in a package where you defined a "list" function
Anytime you use a shadowed symbol, qualify it with a namespace. Just because you can do something does not mean you should. It won't hurt the code and it makes the meaning perfectly clear.


> Readability matters.

That's what I am telling you: in CL, your example does not make sense except in a package "FOO" where FOO:LIST is a function/macro, simply because there is absolutely no ambiguity between a function named LIST and a variable named LIST.

If I am in the CL-USER package and write (LIST (LENGTH LIST)), both occurrences of LIST refer to the same symbol, and there is no "shadowing" in place in the CL sense of the term.

> Specifying the namespace of `list` communicates one and only one idea: I am referring to a particular binding of `list`

Whether or not you prefix the symbol with a package has no impact on the binding, i.e. the (CL) namespace where the symbol is being looked up. As an operator, list is bound in the function namespace (and can't be redefined). Likewise, (find-class 'list) or (gethash 'list my-hash) look for different bindings in different namespaces.

What (CL:LIST LIST) actually communicates is that both symbols are most likely to be different ones, since for some reason the writer felt the need to distinguish them. That example tells me that something unusual is happening.

Assuming we are in package "P":

- You can refer to a symbol from another package like you did:

    (lib:foo bar)
- or, if you don't currently have a FOO function in your package, you can (import 'lib:foo), in which case you can write:

    (foo bar)
You may mix symbols from different packages, as long as they don't conflict, but in some cases that might be difficult to understand. Whether you qualify all of them or only a subset depends on the actual context.

- Finally, if you happen to have already a FOO in your package P but you prefer to use FOO from "lib", you can shadow the first one, and so:

    (foo bar)
... will read as (lib:foo p:bar). But then you can't access p:foo anymore. That's why "Anytime you use a shadowed symbol, qualify it with a namespace" is a strange thing to say under the CL notion of shadowing symbols: by definition, the symbol that was shadowed is uninterned and not accessible anymore, unless you manage to keep a reference to it or store a value that was bound to it in some namespace.
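
To illustrate, here's a minimal sketch of a close variant of that last case, using :shadow at package-definition time (the package P and the :p-list marker are made up for the example):

    (defpackage :p (:use :cl) (:shadow #:list))
    (in-package :p)
    (defun list (&rest args) (cl:list :p-list args))
    (list 1 2)    ; => (:P-LIST (1 2)), our P::LIST
    (cl:list 1 2) ; => (1 2), the standard function is still reachable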


    > simply because there is absolutely no ambiguity between a 
    > function named LIST and a variable named LIST.
To a machine, correct. To a human, this is not always the case.

    > But then you can't access p:foo anymore. That's why 
    > "Anytime you use a shadowed symbol, qualify it with a 
    > namespace" is a strange thing to say under the CL notion 
    > of shadowing symbols, because by definition the symbol 
    > that was shadowed is uninterned and not accessible 
    > anymore, except if you manage to keep a reference to it 
    > or store a value that was bound to it in some namespace.
Of course the machine knows what you mean. I never denied this. But when you're in a massive file fixing a 15-year-old bug, this information becomes less clear, and you're going to appreciate a fully qualified name.

If you have two namespaces providing the same symbol, qualify which one you mean. This is true in just about any language. That way the meaning of the code is painfully obvious to the human who has to maintain it. Even if you can keep track of it without any issues, assume the maintainer is totally inept and can't. Names are important; take the time to get them right.

EDIT:

I should also add that I was using the term `shadowing` somewhat sloppily. Names are important enough that anytime one is being used in more than one way we should take great care.


I don't object using fully qualified symbols to distinguish packages for the benefits of human readers. I care about human readers as much as you, don't worry about that.

I am not talking about different symbols which happen to share the same name (in different packages), but about the different meanings of a single symbol in different contexts. In the example of (CL:LIST LIST), I said that a reader would infer that both symbols are different, whereas you are saying that we should prefix them even if they are, in fact, the same symbol.

I am using the CL definition of namespace, by the way:

    namespace n. 
    1. bindings whose denotations are restricted to a particular kind.
    ``The bindings of names to tags is the tag namespace.''
    2. any mapping whose domain is a set of names. 
    ``A package defines a namespace.''
That means that even though packages define namespaces, not all namespaces are tied to packages. In particular, you cannot distinguish the function namespace from the value namespace with a package prefix.

That's why in Lisp I can clear any confusion by writing either LIST or #'LIST, and that's why I like FUNCALL. Within standard evaluation rules (LIST X) does not use the value of a variable named LIST.
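
A tiny example of that point:

    (let ((list '(1 2 3)))
      (list list)) ; => ((1 2 3)): LIST the operator, LIST the variable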

In all your comments above you use words which have both verb and noun meanings, but you did not need to explain it to me. Did I seem confused? Did I say: "Hey, you wrote 'to shadow ...' but that's wrong, a 'shadow' is the thing I see on the ground when it's sunny"?

What words look like is very distinct from their "role" in a sentence, and this is such a natural thing for humans to do that most mainstream programming languages enable it.

    static void foo (int foo) {
        System.out.println(foo);
    }
So, in the extreme case of:

    (defmethod foo ((foo foo))
      (funcall foo))
... how would you disambiguate things using package prefixes?

    (defmethod fun:foo ((var:foo class:foo))
      (funcall var:foo))
Anybody who introduces made-up packages like this in CL to "clean up the mess" will lose the right to commit code ;-)

> Also adding I was using the term `shadowing` somewhat sloppily. Names are important enough that anytime one is being used in more than one way we should take great precaution.

Yeah, right. I am talking about lisp:namespaces instead of clojure:namespaces. But I can "name things" and "have a name" without having to distinguish both usages.


R7RS has a standard library system. There seems to be some consensus on using Marc Feeley's Snow for distributing R7RS packages. If you're running Racket, there's also PLaneT.


Lisp's propensity for terse, expressive code. Persistent data structures. A novel approach to concurrency by mainstream language standards.


You should add 2006 to the title. The topic is very mid 2000s.


With anything (especially technical) I read online, the first thing I look for is the post/publish date. Some tech content is timeless, but much of it is useless if older than two years.


Absolutely. All of the jokes in this article have been reimplemented in Ruby and then in Javascript since it was published.


Braced myself before reading this, but what a pleasant surprise!

> infinite number of closing parentheses

Nowadays, you can express it with finite parentheses using -> macros popularized by Clojure.
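
For those who haven't seen them, a rough sketch using ->> (the thread-last member of that family):

    ;; nested...
    (reduce + (filter odd? (map inc [1 2 3]))) ; => 3
    ;; ...threaded:
    (->> [1 2 3] (map inc) (filter odd?) (reduce +)) ; => 3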


> Braced myself before reading this

You should have parenthesized yourself.


Reminds me of an old joke:

CEO: Someone stole the last 50 megabytes of our main product's source code

Programmer: The last megabytes you say? No worries, it's written in Lisp.


The threading macro is really a treat and allows for more expressive functional piping of data, a lot like dot chaining in your Pythons/Scalas/Rubys or cascading in JavaScript: you get a lot done with minimal boilerplate.


Clojure even has a .. macro which makes dot chaining even more readable: the resulting code has fewer dots and fewer parentheses than the equivalent Java/Python/Ruby/Scala code!
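
E.g. the example from the Clojure docs:

    (.. System (getProperties) (get "os.name"))
    ;; expands to:
    (. (. System (getProperties)) (get "os.name"))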


Extant lisps rated by their likelihood of changing your life:

Scheme: 7/10
Common Lisp: 3/10
Racket: 8/10
Emacs Lisp: 0/10
Clojure: 9/10
LFE: 9/10

These rankings are scientific fact.


They'll all change your life, but Scheme will change your life more, and better.


That blog post is tagged “functional”, but Lisp isn't a functional language - not even an impure one.

Functional programming is expressing computations as the evaluation of mathematical functions whenever it's possible. Sure, some operations are intrinsically not mathematical functions (like sending data over a network, returning the system date, you name it), but, in general, data structures and algorithms should be implemented as values and mathematical functions on them.

Mathematical functions are mappings from values of a domain to values of a codomain. Naturally, if we want mathematical functions to be a basic building block for programs, we need a rich set of values on which said functions may operate.

Now, here's the thing: Lisp doesn't have compound values. The only values Lisp has are numbers, characters and references to objects. All of them are primitive and indivisible.

“But, catnaroek, what the hell are you smoking? Aren't lists, structs and vectors compound values?”

No, in Lisp, they're not. They're mutable objects whose state at any given point in time may be interpreted as a value in your favorite metalanguage of choice. Maybe if the metalanguage isn't Lisp itself, you can interpret the object's state as a useful value!

In case the above wasn't too clear, let's review the difference between values and objects:

(0) A value is something you can bind to a variable. (At least in a call-by-value language, which Lisp most definitely is.) It doesn't matter, or even make sense, to ask whether “this 2” is different from “that 2”. There is always one number 2, regardless of how many times the number 2 is stored in physical memory. Nor does it make sense to ask whether 2 will suddenly become 3 tomorrow. 2 and 3 are always distinct values.

(1) An object is an abstract memory region. Every object has an identity and a current state. The identity is always a primitive, indecomposable value. The state may be compound, but it isn't always a value. (This depends on the language.) Even if the current state is a value, the state at a later point in time might be a different value. Objects with immutable state are largely impractical - why bother distinguishing the physical identities of entities that never change?

Now it becomes perfectly clear that, while Lisp has compound objects, it doesn't have compound values. Sadly, without compound values, you can't have functions taking compound values as arguments, or returning compound values as result. Since compound entities are unavoidable in practical programming, this means you pretty much have to use objects.
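
To make the distinction concrete, here's a minimal Common Lisp sketch (variable names made up):

    (defvar *xs* (list 1 2 3))
    (defvar *ys* *xs*)     ; binds the same object identity, not a copy
    (setf (first *xs*) 99)
    *ys*                   ; => (99 2 3): the "value" of *ys* changed,
                           ;    because only an identity was ever bound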

“But, catnaroek, what if we encode compound values as Gödel numbers?”

Hah! Have fun with that!

---

As an exercise for the reader, for each of the following languages, determine whether it's functional or not: Scheme, Racket, Clojure, Scala, F#, Erlang, JavaScript. Justify your answers.


>you can't have functions taking compound values as arguments, or returning compound values as result.

sigh. Okay then, wiseguy, what's map doing then? Because that's sure as hell a function, and it's definitely returning and being called with a list.


Returning a compound object, obviously. And Common Lisp's `mapcar` isn't a mathematical function.


>Common Lisp's `mapcar` isn't a mathematical function.

Yes it is. It takes a function and a list as input, and outputs a list of the results of that function applied to each element of the list. It has no side effects, it maps a domain to a range, and it's referentially transparent, so long as the function you passed to it is.


> It has no side effects

It has the side effect of creating new object identities, at least when its “list” argument isn't `nil`. That identity is a completely new value that didn't exist before `mapcar` was called.
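
You can observe this directly, e.g.:

    (defvar *xs* (list 1 2 3))
    (eq *xs* (mapcar #'identity *xs*)) ; => NIL: every call conses
                                       ;    fresh cells, new identities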

> it maps a domain to a range

In mathematics, the domain and codomain of a function are sets. Sets don't exist “in time”, let alone get new elements in time.


Oh for god's sake...

Yes, the value did conceptually exist before the function ran. Or is Haskell's map equally not functional?

And the set for map doesn't grow over time.


> Or is Haskell's map equally not functional?

Haskell's `map` is a function at the time it is called. If it is called a second time, and new objects were allocated in between, then `map` is a different function, but still a function.

> And the set for map doesn't grow over time.

`mapcar` creates new object identities that didn't exist before. It's as effectful as it gets.


Whether or not the object is new in RAM is purely semantic. It's the same values. And whether they are eq is entirely irrelevant to this discussion.


> Whether or not the object is new in RAM is purely semantic.

Well, we're arguing programming language semantics! And what matters isn't whether it's a new memory block in RAM, which is an implementation detail, but rather whether the language will treat the new value or object as equal to anything that previously existed.

> It's the same values.

Can you bind the “value” to a variable?


I can bind the concrete representation of it returned by map to a variable, which is then equal to all other concrete representations of that value. Same as rust, or any other programming language.


> I can bind the concrete representation of it returned by map to a variable, which is then equal to all other concrete representations of that value.

What you're actually binding to a variable is just the identity of the first cons cell in the “list”. That's by no means an actual list value. The difference will be exposed as clearly as daylight if you mutate the cons cell.


Yes, because that's how lists work. Just like in Haskell.


Nope. Haskell doesn't expose the physical identity of anything that isn't a reference cell (STRef, IORef, etc.). It's not even a type error - there's syntactically no way to query it. You just operate on values directly. The same is true in SML. Even objects are manipulated via their identities, which are values.

Because values are a useful abstraction.


That isn't what I meant. What I meant was that lists are kept as a reference to the head cons cell, in every language that implements lists as singly-linked lists. That's just how it works.


> What I meant was that lists are kept as a reference to the head cons cell, in every language that implements lists as singly-linked lists.

That's an implementation detail. Language abstractions are to be judged by their own semantics, not their possible implementations. And, FWIW, when I evaluate ML programs by hand on a piece of paper, lists aren't implemented as references to anything - a list is just a sequence of elements. On the other hand, if I had to evaluate Lisp programs on paper (thank goodness I don't have to!), object identities would still appear all over the place.


>when I evaluate ML programs by hand on a piece of paper, lists aren't implemented as references to anything - a list is just a sequence of elements.

Well, yeah. Just like when I evaluate Lisp. But the idea of a cons cell is inherent to ML's list structure: they call it the cons operator for a reason.


> Well, yeah. Just like when I evaluate Lisp.

Then you're not really evaluating Lisp programs according to Lisp's well-defined semantics. Changing the semantics of a language makes the result a different language.

> But the idea of a cons cell is inherent to ML's list struture: They call it the cons operator for a reason.

Yep, but “cons cell” means completely different things in Lisp and ML. In Lisp, a cons cell is an object. In ML, a cons cell is a value.


>Then you're not really evaluating Lisp programs according to Lisp's well-defined semantics. Changing the semantics of a language makes the result a different language.

So long as the actual semantics (how the language behaves) stays the same, you can interpret it in your head any way you want. You don't think of lists as a string of pointers when evaluating ML, and it's the same thing.

>In Lisp, a cons cell is an object. In ML, a cons cell is a value.

That's not the earthshaking difference you seem to think it is. In fact, values and objects are two aspects of the same thing. At the end of the day, lists are still chains of pointers, mutable or no, "object" or "value" (and I have to put it in quotes because your definition isn't in any way the common one).

I give up. Your mental model seems to be horrifyingly warped and broken, but it seems to accurately model computation, so it really doesn't matter.


> You don't think of lists as a string of pointers when evaluating ML,

No, it's not a string of pointers: The Definition of Standard ML says absolutely nothing about lists or strings being implemented as pointers. Lists and strings are values.

> and it's the same thing.

It's not.

> That's not the earthshaking difference you seem to think it is.

It is. Values have a nontrivial equational theory that enables two syntactically different programs to be deemed equal using local reasoning exclusively. That is, without observing the effect of a computation on its much larger context. This in turn enables much more aggressive optimizations, both by hand and automatic, than are possible if all you have is object identities.

> In fact, values and objects are two aspects of the same thing.

Nope. Values are the more fundamental notion, mathematically. Object identities are but a particular kind of value, and so are object states in any mathematically civilized language.

> At the end of the day, lists are still chains of pointers, mutable or no, "object" or "value" (and I have to put it in quotes because your definition isn't in any way the common one).

Lists aren't chains of pointers when I evaluate them on paper. A list is just a sequence of elements. But even when working on a computer, using actual lists has concrete benefits, for example, the compiler is free to apply optimizations like “gluing” consecutive nodes to improve locality of reference (in the generated program) and ease the job on the garbage collector (assuming the list is used in a single-threaded fashion). In the limit, it's as if I had used a C++-style `std::vector`! When all you have is object identities, this optimization is obviously unsound.


I would like to see a light-syntax dialect of Lisp that recognized semantic white space to get you out of the parentheses maze. (But you could still optionally use parens when you wanted.)

Disclaimer: I've been coding almost exclusively F# and SQL for nearly 3 years, and any kind of ceremony and noise characters in code just throw me off. TypeScript is being hard on me right now.


Reduce/RLISP looks like this:

    symbolic procedure detq u;
       % Top level determinant function.
       begin integer len;
          len := length u;   % Number of rows.
          for each x in u do
            if length x neq len then rederr "Non square matrix";
          if len=1 then return caar u;
          matrix_clrhash();
          u := detq1(u,len,0);
          matrix_clrhash();
          return u
       end;
Which would look approximately like this in Common Lisp:

    (defun detq (u &aux (len (length u)))
      (dolist (x u)
        (if (not (= (length x) len))
            (error "Non Square Matrix")))
      (if (= len 1)
          (return-from detq (caar u)))
      (matrix-clrhash)
      (setf u (detq1 u len 0))
      (matrix-clrhash)
      u)


Actually, that's already been done in Scheme. Three times. Wisp (http://www.draketo.de/proj/wisp/) is probably the best one. It runs as a preprocessor, but if you use a Scheme with the right syntax hooks (Guile, Racket, a few others), you might find a direct implementation. In fact, the main distribution has one for Guile.

There's also something similar for Common Lisp, but I don't know where or what.
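
From what I remember of the wisp docs, it looks roughly like this (indentation supplies the parentheses, and a leading dot keeps a line from being wrapped in parens):

    define : factorial n
      if : zero? n
         . 1
         * n : factorial (- n 1)

    ;; reads as:
    ;; (define (factorial n)
    ;;   (if (zero? n) 1 (* n (factorial (- n 1)))))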


> I would like to see a light-syntax dialect of Lisp that recognized semantic white space to get you out of the parentheses maze.

Take a look at sweet-expressions: http://readable.sourceforge.net/


I'd recommend wisp over readable: it's more readable than readable.


The wisp I found is JavaScript-written-in-S-expressions, not Lisp-written-in-something-other-than-S-expressions. Naturally, I tend to prefer S-expressions, but I was responding to someone who wanted a homoiconic alternative.


Predictably I offended somebody. Discussions of parens in Lisp often turn out this way. I actually think Lisp is a beautiful language, the only language (other than assemblers) that I'm somewhat familiar with that is homoiconic. (There are at least a couple more, according to https://en.wikipedia.org/wiki/Homoiconicity.)



